‘De-Risking AI’ white paper

Wisely AI has identified five risks associated with the use of Generative AI in organisations.

  • Anthropomorphising AI chatbots: Projecting human motivations onto their behaviour, thereby leaving ourselves open to manipulation.
  • Training data vulnerabilities: Malicious data sets scooped up in Internet ‘crawls’, together with data sets commercially protected by copyright, have made their way into all publicly available AI chatbots.
  • Hallucinations: Erroneous and sometimes entirely fictional responses generated by AI chatbots, often in response to vague or ambiguous instructions.
  • Privacy, Data Security and Data Sovereignty: Potential inputs to chatbots need to be closely inspected and classified, so that personal, private, commercially sensitive or legally restricted data is never shared with a public service.
  • Prompt attacks: Both ‘prompt subversions’, which can coax an AI chatbot into generating responses its creators have explicitly forbidden, and ‘prompt injections’, which can subvert the goals of a chatbot, secretly turning it into an agent acting against the interests of its user.

We provide guidance on how to mitigate these risks.

Read (or download) our white paper here.
