How to weaponize LLMs to auto-hijack websites

A recent paper explores how LLM agents can be used to autonomously hack websites. The Register spoke with one of the paper's authors:

In an interview with The Register, Daniel Kang, assistant professor at UIUC, emphasized that he and his co-authors did not actually let their malicious LLM agents loose on the world. The tests, he said, were done on real websites in a sandboxed environment to ensure no harm would be done and no personal information would be compromised.

It’s good to know they sandboxed everything – and bad to know it can be done.

Read the report here.
