Even ChatGPT Says ChatGPT Is Racially Biased

Bias is rife in the training data used to create the models that power AI chatbots, and Scientific American has something to say about that:

ChatGPT’s claim that any bias it might “inadvertently reflect” is a product of its biased training is not an empty excuse or an adolescent-style shifting of responsibility in which it says, “Don’t blame me. It’s my trainer’s fault.” ChatGPT’s stories are simply generated from probability tables derived from the sequences of letters, words and phrases that appear in its vast training material of books, magazines, periodicals and Web content.

It’s a great read – and a needed reminder of the bias baked into our tools.
