AI is only as useful as its data set. If the information used to teach the AI is inaccurate, biased or open to manipulation, as was the case with DPD's chatbot last week, the output will be equally flawed.

AI chatbots typically get their information from three places: 

  1. the internet; 
  2. the developers; and 
  3. you, the person trying to use the chatbot to make your life easier or find the answer faster. 

Using a series of simple prompts, a presumably bored or frustrated customer convinced the chatbot on DPD's website to swear and criticise the company. It even wrote a haiku criticising the company. DPD quickly disabled the chatbot and rectified the issue, but what if DPD hadn't been made aware the issue existed? The whole point of AI is that it continuously learns from the information it is given. This means an AI chatbot could start swearing at or misleading all of your customers who interact with it before you know there's an issue to fix. DPD is not the only company to experience this problem: back in December 2023, an AI chatbot on a car dealer's website agreed to sell a car for $1! 

Whilst this example was light-hearted fodder for social media, it is key to understand who is responsible if the AI you're using damages your business's reputation or disseminates trade secrets. Many groups are campaigning for greater regulation of AI, but the risks are already identifiable and manageable. 

We can help by: 

  • Reviewing contracts with freely available or paid-for AI providers. We can advise on whether the risk is apportioned appropriately.
  • Advising on protecting your intellectual property, trade secrets and other confidential information whilst using AI.
  • Preparing workplace policies so your employees know what they can and cannot use AI to assist with.
  • Litigating on your behalf if you believe your work has been uploaded to an AI without your consent and disseminated.