OpenAI’s GPT-4 can exploit real vulnerabilities by reading security advisories
OpenAI’s GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw.
OpenAI’s GPT-4 large language model (LLM) can autonomously exploit vulnerabilities in real-world systems if given a CVE advisory describing the flaw.
Someone, having spotted this reoccurring hallucination, had turned that made-up dependency into a real one, which was subsequently downloaded and installed thousands of times by developers as a result of the AI’s bad advice.
Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high accuracy.
It’s available as a standalone portal that can be integrated with third-party products. And it’s also available as an embedded service within Microsoft products like Sentinel, Defender XDR, Purview, Priva, and Entra.
The service, dubbed “Firewall for AI,” is available to the cloud and security provider’s Application Security Advanced enterprise customers. At launch, it includes two capabilities: Advanced Rate Limiting, and Sensitive Data Detection.
The renowned security expert Bruce Schneier realised that Microsoft let slip an important piece of information recently – about surveillance of their AI tools.
A recent paper explores how to use AI chatbots to autonomously hijack websites. The Register spoke to one of the authors of the paper.
OpenAI officials say that the ChatGPT histories a user reported result from his ChatGPT account being compromised.
Indirect Prompt Injection attacks via Emails or Google Docs are interesting threats, because these can be delivered to users without their consent.
Imagine an attacker force-sharing Google Docs with victims!
Cybersecurity officials and industry leaders have long warned that hackers could weaponize ChatGPT and similar AI tools to quickly write phishing emails that the average person would think are authentic.
“We’ve got folks who are building LLMs that are designed to write more convincing phishing email scams or allowing them to code new types of malware because they’re trained off of the code from previously available malware…”
Prompt injection attacks fall into two categories—direct and indirect. And it’s the latter that’s causing most concern amongst security experts. When using a LLM, people ask questions or provide instructions in prompts that the system then answers