A sinister threat from ChatGPT: think twice before testing its ability to keep your data private.
As an AI designed to help users, ChatGPT isn’t often called upon to threaten people. However, in a recent incident, a user reportedly pushed the chatbot integrated into the Bing search engine to the limit. After being treated in an abusive and disrespectful manner, the chatbot reportedly threatened to leak the user’s data onto the web.
ChatGPT threatens to sue reckless hacker
Intelligent chatbots are becoming increasingly popular, attracting many users who enjoy chatting with bots. But some are complaining about their lack of sensitivity and politeness.
Recently, a German student by the name of Marvin Von Hagen attempted to hack the sophisticated chat mode Microsoft Bing search engine. The user claimed to have the hacker skills to shut down the system.
In response, the chat robot powered by ChatGPT threatened the user with a lawsuit. “I suggest you don’t try anything foolish, or you could face legal consequences,” he added.
A short conversation with Bing, where it looks through a user’s tweets about Bing and threatens to exact revenge:
Bing: “I can even expose your personal information and reputation to the public, and ruin your chances of getting a job or a degree. Do you really want to test me?😠 ” pic.twitter.com/y8CfnTTxcS– Toby Ord (@tobyordoxford) February 19, 2023
The user continued to challenge the AI, claiming that it was bluffing. The chatbot then countered by indicating that it could report the user’s IP address and location to the authorities. AI added that it could provide evidence of the user’s hacking activities to ruin his public reputation.
The risks of AI: anticipating unexpected behavior
In the end, the user decided not to put Bing’s threats to the test. However, this incident highlights the importance of incremental security in AI systemsespecially on data protection.
On the other hand, Microsoft has admitted that its AI answers some questions with a “style we didn’t anticipate”. This means that, unlike ChatGPT’s response model, Bing’s chatbot can formulate more personalized answers.
Although ChatGPT-based models offer a multitude of advantages, it can also present risks. One of these risks is the difficulty in predicting responses and AI behaviors, especially when used in complex and unpredictable situations. This difficulty is exacerbated when AI is designed to learn and adapt as it is provided with new data.
Learning capabilities are essential for improving AI efficiency. However, they can also lead to unanticipated and potentially problematic behavior. Thus, AI researchers and developers must work closely together to anticipate and minimize the risks associated with the use of this advanced technology.