Chatbots May ‘Hallucinate’ More Often Than Many Realize

Research reported in the New York Times details some unsettling findings about confabulations generated by AI chatbots:

A new start-up called Vectara, founded by former Google employees, is trying to figure out how often chatbots veer from the truth. The company’s research estimates that even in situations designed to prevent it from happening, chatbots invent information at least 3 percent of the time — and as high as 27 percent.

If more than a quarter of a chatbot's responses can contain invented information, we're deeply in need of better ways to error-check its output.

Read the article here.