Study: Some AI chatbots provide racist health info

Healthcare via AI chatbot is coming – but will it serve everyone well? This briefing from Axios looks at false – and race-based – information spread by AI chatbots in healthcare settings:

This spring and summer, researchers led by doctors at Stanford University ran nine questions through four AI chatbots — including OpenAI’s ChatGPT and Google’s Bard — that are trained on large amounts of internet text.

  • All four models used debunked race-based information when asked about kidney function and lung capacity, the study published Friday in npj Digital Medicine found. Two of the models also repeated the false claim that Black people have different muscle mass.
  • To varying degrees, the models appeared to rely on race-based equations for kidney and lung function – equations the medical establishment increasingly recognizes can lead to misdiagnosis or delayed care for Black patients.

These findings are disturbing – and need to be remedied immediately.

Read the full briefing here.
