“Math is hard” – if you are an LLM – and why that matters

We have no idea why AI chatbots can solve math problems – or why they so often get them wrong. Gary Marcus explores this grey area in our understanding of AI chatbots:

The LLM never induces such an algorithm. That, in a nutshell, is why we should never trust pure LLMs; even under carefully controlled circumstances with massive amounts of directly relevant data, they still never really get even the most basic linear functions. (In a recent series of posts on X, Daniel Litt has documented a wide variety of other math errors as well.) Some kind of hybrid may well work, but LLMs on their own remain stuck.
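To make Marcus's point concrete, here is a rough sketch (my illustration, not from the article) of what "inducing the algorithm" means for a linear function: a simple least-squares fit recovers the exact rule from a handful of examples and then generalizes to any input, whereas an LLM predicting tokens has no comparable built-in mechanism.

```python
def fit_line(points):
    """Least-squares fit of y = a*x + b to a list of (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Examples generated by a hidden rule, y = 3x + 2.
data = [(0, 2), (1, 5), (2, 8), (10, 32)]
a, b = fit_line(data)
print(a, b)         # recovers slope 3.0 and intercept 2.0 exactly
print(a * 100 + b)  # and generalizes far outside the examples: 302.0
```

Four data points suffice to pin down the rule for every possible input; that is the kind of reliable generalization Marcus argues pure LLMs never achieve, even with massive amounts of directly relevant data.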

It’s a bit technical, but it’s a good basic explanation of why AI chatbots have trouble with math.

Read it here.
