The dark side of the AI revolution
“New patterns of discrimination are emerging from AI and algorithms,” said Ferda Ataman, Germany’s federal anti-discrimination commissioner.
The only change in my question was ‘John’ to ‘Jane’. No other details were specified.
Yet the output given by ChatGPT couldn’t have been more different.
The same companies telling the public that “AI is enabling new forms of connection and expression” should also be willing to offer an explanation when their systems are unable to handle queries about an entire race of people.
The dialect of the language you speak can determine what artificial intelligence (AI) will say about your character, your employability, and whether it judges you a criminal.
Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone.
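Experiments like Bloomberg's, and the John-to-Jane swap described above, are instances of a paired-prompt (counterfactual) audit: hold the text fixed, vary only the name, and compare outputs. A minimal sketch of that procedure follows; `score_resume` is a hypothetical stand-in for a real model call, not any vendor's API.

```python
def score_resume(resume_text: str, name: str) -> float:
    """Hypothetical scorer; a real audit would call an LLM API here."""
    # Deliberately trivial placeholder: length-based score, name ignored.
    return len(resume_text) / 100.0

def name_swap_audit(resume_text: str, names: list[str]) -> dict[str, float]:
    """Score the identical resume under each candidate name."""
    return {name: score_resume(resume_text, name) for name in names}

resume = "5 years of Python experience; led a team of 4 engineers."
scores = name_swap_audit(resume, ["John Smith", "Jane Smith", "Lakisha Washington"])

# An unbiased model should give (near-)identical scores across names;
# systematic gaps correlated with perceived race or gender indicate bias.
spread = max(scores.values()) - min(scores.values())
```

With a real model in place of the placeholder, a nonzero `spread` that tracks demographic signals in the names is the red flag the Bloomberg experiment reported.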
“…over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive…”
Google started offering image generation through its Gemini AI models earlier this month, but over the past few days some users on social media have flagged that the model sometimes returns historically inaccurate images.
“Data collected in mid-January on 44 top news sites by Ontario-based AI detection startup Originality AI shows that almost all of them block AI web crawlers, including newspapers like The New York Times, The Washington Post, and The Guardian…”
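The blocking the survey describes is typically done through a site's robots.txt file, which lists crawler user-agent tokens and the paths they may not fetch. A minimal sketch using publicly documented tokens for OpenAI's and Common Crawl's bots (whether any given newspaper uses exactly these rules is an assumption):

```
# robots.txt: disallow known AI crawlers site-wide
# OpenAI's crawler
User-agent: GPTBot
Disallow: /

# Common Crawl's crawler
User-agent: CCBot
Disallow: /

# Token Google checks for AI-training use of crawled content
User-agent: Google-Extended
Disallow: /
```

Note that robots.txt is advisory: it relies on crawlers honoring the protocol rather than technically preventing access.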
In some cases, biased selection criteria are clear – like ageism or sexism – but in others, they are opaque.
“ChatGPT’s claim that any bias it might ‘inadvertently reflect’ is a product of its biased training is not an empty excuse or an adolescent-style shifting of responsibility…”
“10 months on since the release of ChatGPT 4, let’s have a look at the top problems with generative AI, and some ideas about how you might overcome them.”
“With the infrastructure in place—the base generative models from OpenAI, Google, Meta, and a handful of others—people other than the ones who built it will start using and misusing it in ways its makers never dreamed of.”
To varying degrees, the models appeared to be using race-based equations for kidney and lung function, which the medical establishment increasingly recognizes could lead to misdiagnosis or delayed care for Black patients.
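One widely cited example of such a race-based equation is the 2009 CKD-EPI creatinine formula for estimating kidney function, which applied a 1.159 multiplier for patients recorded as Black; the 2021 revision removed it. The sketch below uses the 2009 published constants for illustration only and is not medical software.

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2 per the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9      # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient, removed in the 2021 refit
    return egfr

# Identical labs, different recorded race: the multiplier alone raises the
# estimate by about 16%, which can shift a patient across a care threshold.
same_labs = dict(scr_mg_dl=1.4, age=60, female=False)
gap = egfr_ckd_epi_2009(black=True, **same_labs) / egfr_ckd_epi_2009(black=False, **same_labs)
```

Because a higher estimated GFR reads as healthier kidneys, the coefficient could delay referral or transplant listing for Black patients with the same lab values, which is the misdiagnosis-and-delay risk the study describes.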