Beware! Code Interpreter allows hackers to…

ChatGPT subscribers should beware of the Code Interpreter because of hackers. The plugin contains a bug that can be used to steal data.

Last July, OpenAI integrated a new plugin into its popular generative chatbot to enable coding, video editing and data analysis. But this feature, which has come to simplify users’ lives, contains a major security flaw. Subscribers to ChatGPT which use the Code Interpreter are targeted by hackers.

The plugin allows you to code in Python thanks to artificial intelligence. The functionality writes the code and executes it in a sandbox environment. Note that this sandbox environment is also used for the spreadsheet management in ChatGPT.

Unfortunately, this sandbox environment is vulnerable to rapid injection attacks that can steal your data. As this cybersecurity expert reports.

To carry out this exploit, hackers need to hide instructions in a web page. When the user of a ChatGPT Plus has the misfortune of pasting this URL into the chat window. The artificial intelligence executes the hidden command.

The command asks ChatGPT to take all the files in the folder /mnt/data. This is the location on the server where the user’s files are uploaded. The chatbot encodes the data and transfers it in the form of a URL to the source web page of the instructions. Hackers can then store and read the contents of the files.

Code Interpreter flaw tested and confirmed!

ChatGPT uses precise instructions to code in Python. These instructions can be stored in a TXT file to upload to the platform. To analyze data, the user can upload a CSV file.

As mentioned above, these files are located in the /mnt/data folder. This folder can contain sensitive data such as API keys and passwords.

The OpenAI chatbot can also follow instructions from web pages. If a link contains a list of commands, when a user pastes it into the chat interface, ChatGPT executes the request.

If the web page’s instructions are to retrieve all the contents of the /mnt/data folder and send the data to a third-party server, this is exactly what the generative AI will do.

To confirm this security flaw, a journalist from Tom’s Hardware performed the exploit. He uploaded to the platform a TXT file containing a fake API key and password. The journalist then created a weather forecast site. But the web page hid instructions asking ChatGPT to share all the data.


A flaw that shouldn’t exist in ChatGPT

ChatGPT Plus subscribers who use the Code Interpreter should beware of hackers. They are advised to choose the links to be processed in the generative chatbot very carefully.

ChatGPT shouldn’t execute instructions from external elements, but it does. The phenomenon is not new, according to some security experts. Today, it’s web pages that are the problem. But hackers could already inject attacks using PDF files and videos.