It’s true, LLMs are better than people – at creating convincing misinformation

We know that AI chatbots are prone to ‘hallucinations’ – but we’re now learning that they’re also good at gaslighting us humans. From The Register:

Computer scientists have found that misinformation generated by large language models (LLMs) is more difficult to detect than artisanal false claims hand-crafted by humans.

Researchers Canyu Chen, a doctoral student at Illinois Institute of Technology, and Kai Shu, assistant professor in its Department of Computer Science, set out to examine whether LLM-generated misinformation can cause more harm than human-generated infospam.

With more AI-generated web pages than ever before, will we simply drown in a sea of believable-enough fakes?

Read the article here.
