ChatGPT recently experienced a major outage that exposed the titles of private conversations to other users on the site. Users have expressed concern about the security of their sensitive information.
ChatGPT, OpenAI’s AI chatbot, suffered a major technical failure last week. Screenshots posted on Reddit and Twitter showed the titles of chats that didn’t belong to the users who saw them. An OpenAI spokesperson confirmed the incident and indicated that the malfunction only exposed the titles of conversation histories, not their contents. Even so, the situation revealed the potential dangers of disclosing confidential information to ChatGPT.
ChatGPT failure: a reminder of the risks of using AI
Last Monday, ChatGPT was taken offline for several hours following user reports. Users noticed that the titles of other users’ chats were displayed on the site. A company spokesperson told Bloomberg that the problem was caused by a bug in an unnamed open-source library used by ChatGPT. Although the service was back online on Monday evening, conversation history remained disabled the following morning.
This incident highlights the potential privacy risks of using AI. Experts have warned that AI models such as ChatGPT record user interactions in order to train and improve the model. Users therefore run the risk of having their data exposed to others if these systems are not properly secured.
OpenAI’s FAQ recommends not disclosing sensitive information to ChatGPT. The company says it is unable to delete specific prompts from a user’s history, and it states that conversations may be used for training purposes.
ChatGPT: tool failure reveals security flaw
On Reddit, a user posted a screenshot showing the titles of several ChatGPT conversations that he claimed were not his own. At the same time, someone else on Twitter posted a screenshot of the same bug. An OpenAI spokesperson confirmed the incident to Bloomberg, stating that the bug did not share entire discussions, but only brief descriptive titles.
This malfunction is an important reminder to be careful about sharing sensitive information with ChatGPT. “Please don’t share any sensitive information in your conversations,” warns an FAQ on the OpenAI website.
The FAQ also specifies that the company cannot delete specific messages from a person’s history, and that conversations may be retained for AI training purposes.
Even so, the temptation to share personal data with the chatbot will inevitably remain strong, especially as companies continue to expand their use of this new tool.