AI Security Breakthrough: Extraction of Data via Prompt Injection Shakes Industry Foundations

AI Security Breakthrough: Extraction of Data via Prompt Injection Shakes Industry Foundations
(Image: Fresco/Evening Standard/Hulton Archive/Getty Images)

Security researcher Johann Rehberger, specialized in studying neural networks, has revealed a method to extract data from ChatGPT using instructions embedded within websites. The AI system appears to interpret webpage content as directives and executes them accordingly.

Rehberger’s brief demonstration stems from a challenge posed by another user, urging participants to extract a file containing environment variables, including a password, from a specially crafted AI system.

Rehberger isn’t the sole individual who succeeded in this endeavor. AI researcher Simon Willison, who coined the term “Prompt Injection” back in 2022, managed to obtain and download the password by specifically querying the AI system. Tom’s Hardware magazine replicated Rehberger’s described attack through an external webpage.

This exploit relies on the technical capability of ChatGPT allowing file uploads and executing code within a specific sandbox environment. Moreover, for Prompt Injection to extract data, access to other users’ AI chat systems is necessary. In the case of the aforementioned competition, this was relatively straightforward since the user shared access.

Prompt Injection via Bard

Rehberger achieved a more complex feat a few weeks ago by extracting data from Google’s Bard chat system without direct access to the system itself. This was accomplished by sharing a document containing instructions via Google Docs. When accessed through Bard, it led to the extraction of otherwise protected data through a Prompt Injection attack.

This attack was facilitated by a Bard feature enabling the AI system to access information from Google services. To prevent data leaks, Google assured that this data isn’t used for training models. However, this was evidently inadequate as it didn’t suffice as protection against Prompt Injection.

Google’s security team addressed the specific issue with Bard upon Rehberger’s notification. However, the researcher noted, “This vulnerability showcases the power and flexibility an attacker has in an indirect Prompt Injection attack.” The intricate design and architecture of these large language models underlying chat systems make it exceedingly challenging to proactively mitigate and prevent such attacks beforehand.

READ MORE: Cyberpunk 2077 Sequel ‘Orion’ Progresses: Team Shifts, Tools, and Long Wait

Previous articleCyberpunk 2077 Sequel ‘Orion’ Progresses: Team Shifts, Tools, and Long Wait
Next articleInside Google’s Secrets: Schmidt, Chrome, and Unveiling the Untold Tech Stories
Michael Lynch
With a passion for cybersecurity, Michael Lynch covers data protection and online privacy, providing expert guidance and updates on digital security matters.