GPT-4 developer tool can be exploited for misuse with no easy fix

New Scientist reports that the GPT-4 API can be misused by bypassing its ‘guardrails’:

It is surprisingly easy to remove the safety measures intended to prevent AI chatbots from giving harmful responses that could aid would-be terrorists or mass shooters. The discovery seems to be prompting companies, including OpenAI, to develop strategies to solve the problem. But research suggests their efforts have been met with only limited success so far.

This is a continuing, well-known problem – yet one without an obvious solution.

Read their report here.