I stumbled upon LLM Kryptonite – and no one wants to fix this model-breaking bug

Now it can be told: a month ago I accidentally coded up a prompt that seems to break all of the AI chatbots, except Anthropic’s Claude. I tried to report it to the various vendors – only to learn there’s no mechanism to report these kinds of flaws:

I won’t name (nor shame) any of the several other providers I spent the better part of a week trying to contact, though I want to highlight a few salient points:

  • One of the most prominent startups in the field – with a valuation in the multiple billions of dollars – has a contact-us page on its website. I used it, twice, and got no reply. When I tried sending an email to security@its.domain, it bounced. What is going on there?
  • Another startup – valued at somewhere north of a billion dollars – offered no contact information on its website at all, apart from a media address that routed to a PR agency. I ended up going through that agency (they were lovely), which passed the details of my report along to the CTO. No reply.
  • Reaching out to a certain very large tech company, I asked a VP-level contact for a connection to anyone in the AI security group. A week later I received a response to the effect that – after the rushed release of that firm’s own upgraded LLM – the AI team found itself too busy putting out fires to have time for anything else.

Despite my best efforts to hand this flaming bag of Kryptonite off to someone who could do something about it, this is where matters remain as of this writing. No one wants to hear about it.

This is something that needs to be remedied. Immediately.

Read the full story here.