Training is not the same as chatting: ChatGPT and other LLMs don’t remember everything you say

Simon Willison is one of the smartest folks writing about AI chatbots. Here’s what he has to say about whether an AI chatbot ‘remembers’ everything you prompt it with:

Many LLM providers have terms and conditions that allow them to improve their models based on the way you are using them. Even when they have opt-out mechanisms, these are often opted-in by default.

When OpenAI say “We may use Content to provide, maintain, develop, and improve our Services” it’s not at all clear what they mean by that!

Are they storing up everything anyone says to their models and dumping that into the training run for their next model versions every few months?

I don’t think it’s that simple: LLM providers don’t want random low-quality text or privacy-invading details making it into their training data. But they are notoriously secretive, so who knows for sure?

The opt-out mechanisms are also pretty confusing. OpenAI try to make it as clear as possible that they won’t train on any content submitted through their API (so you had better understand what an “API” is), but lots of people don’t believe them! I wrote about the AI trust crisis last year: the pattern where many people actively disbelieve model vendors and application developers (such as Dropbox and Slack) that claim they don’t train models on private data.
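For anyone fuzzy on the distinction Simon is drawing: “through the API” means a programmatic request like the minimal sketch below, not typing into the ChatGPT web interface. The sketch assumes the official openai Python SDK with an OPENAI_API_KEY set in your environment; the model name is just illustrative.

from openai import OpenAI

# Sending content "through the API": a programmatic request,
# which OpenAI says is excluded from training by default.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize this private document..."}],
)
print(response.choices[0].message.content)

The same text typed into the ChatGPT web interface instead falls under the consumer product’s data controls, which, per the opted-in-by-default pattern Simon mentions above, may feed training unless you switch that off.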

Do they or don’t they? And if they do, should we worry?

Read his analysis here.
