It’s true, LLMs are better than people – at creating convincing misinformation
Computer scientists have found that misinformation generated by large language models (LLMs) is harder to detect than artisanal false claims crafted by humans.