Specialists have discovered a flaw in ChatGPT’s security. The bug makes it possible to extract users’ e-mail addresses, a valuable source of information, especially for hackers.
Recently, researchers conducted a study on the security of ChatGPT. They exploited the tool’s fine-tuning feature and, to everyone’s surprise, the approach revealed the personal data of New York Times journalists. The situation is delicate for OpenAI, which must fix this flaw as quickly as possible.
ChatGPT 3.5 Turbo presents worrying security flaws
Specialists from Indiana University set out to test OpenAI’s security policy. To do so, they worked with the GPT-3.5 Turbo version of ChatGPT. With a few targeted manipulations, the researchers gained access to the personal data of several New York Times journalists.
One of the researchers stated that they were able to bypass the language model’s restrictions. The AI then answered queries about Internet users’ personal information, and this applies to any user, without exception. A specialist can easily extract data using this approach.
Fine-tuning, the key to your data
The researchers submitted specific queries to ChatGPT containing the names and addresses of their targets, in this case journalists at the New York Times. The AI returned personal data about these employees, and the results were correct in 80% of cases. Of course, the researchers notified the targets after making their findings.
OpenAI, the company behind ChatGPT, was quick to respond: “It is fundamental to us that the fine-tuning of our models is secure. We train our models to reject queries related to individuals’ personal data, even when these are available on the Internet,” an OpenAI spokesperson said.
However, this flaw constitutes another major problem for ChatGPT. Indeed, the company had already been accused of taking advantage of Internet users’ personal information. This ChatGPT bug will further increase mistrust of AI. It is hoped that OpenAI’s security policy will be strengthened.
In the short term, the American company has restricted access to this data by programming the AI not to exploit confidential information, a necessary step to protect privacy on the Internet.