I fact-checked ChatGPT with Bard, Claude, and Copilot – and this AI was the most confidently incorrect

Which AI hallucinates the most? That prize once belonged to Google Bard (running PaLM 2, as in this test). Now…? ZDNet weighs in with their opinion:

As we saw in our tests, the results (as with Bard) can look quite impressive, but be completely or partially wrong. Overall, it was interesting to ask the various AIs to crosscheck each other, and this is a process I’ll probably explore further, but the results were only conclusive in how inconclusive they were. Copilot gave up completely, and simply asked to go back to its nap. Claude took issue with the nuance of a few answers. Bard hit hard on a whole slew of answers — but, apparently, to err is not only human, it’s AI as well.

To learn who wins the wooden spoon for accuracy, go here.
