ChatGPT is bulls**t

“We argue that these falsehoods, and the overall activity of large language models, is better understood as bulls**t in the sense explored by Frankfurt: the models are in an important way indifferent to the truth of their outputs.”

Measuring the Persuasiveness of Language Models

“Developing ways to measure the persuasive capabilities of AI models is important because it serves as a proxy measure of how well AI models can match human skill in an important domain, and because persuasion may ultimately be tied to certain kinds of misuse, such as using AI to generate disinformation, or persuading people to take actions against their own interests.”

Can an A.I. Make Plans?

“How can these powerful systems beat us in chess but falter on basic math? This paradox reflects more than just an idiosyncratic design quirk. It points toward something fundamental about how large language models think.”