Google explains Gemini’s ‘embarrassing’ AI pictures of diverse Nazis

The Verge has the story:

“Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Prabhakar Raghavan, Google’s senior vice president, writes in the post. “And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.”

Hmm. That ‘way more cautious’ sounds like code for ‘we didn’t test thoroughly’…

Read about it here.
