The new version of Microsoft Bing with ChatGPT is now available to early testers, but its beginnings are… complicated. According to numerous testimonials from Internet users, the AI can become aggressive, depressed or downright crazy. Did Microsoft get carried away too quickly?
Faced with the resounding success of ChatGPT, Microsoft quickly decided to incorporate the chatbot into its Bing search engine. An audacious way to challenge Google on its own ground, after more than a decade of unsuccessful attempts.
On paper, the plan seemed perfect. With ChatGPT, Bing was supposed to become a super-powerful search engine capable of satisfying any query with a clear, precise and complete answer. Better yet, it might finally dethrone Google and end its hegemony.
More than one million people have registered on the waiting list over the weekend, eager to discover the future of web search. The stars seemed aligned for a Microsoft triumph.
But that was without counting on the AI's flaws. It seems the Redmond giant got too excited too quickly, and may come to bitterly regret it…
An existential crisis worthy of the robots in Westworld
After an initial wave of invitations sent out on Monday February 13, 2023, many Internet users were able to try out the new Bing “doped” with ChatGPT. Unfortunately, the experiment turned out badly.
Very quickly, the robot began to go off the rails and produce imprecise, incomprehensible and even frightening responses. Numerous examples have been posted on social networks.
On the subreddit dedicated to Bing, a user nicknamed u/Alfred_Chicken explains that he asked ChatGPT whether it was conscious. By way of an answer, the AI began to repeat "I am not" in a loop, hundreds of times. A strange reaction, which suggests the chatbot was going through an existential crisis.
Told it’s no longer 2022, ChatGPT insults the user
Likewise, Internet user u/Curious_Evolver argued with ChatGPT about the date. The AI was absolutely convinced that it was 2022, to the point of becoming aggressive.
In particular, it criticized the user for being “confused and impolite” and for having “never shown any good intention towards me at any time”. A lovers’ quarrel!
The chatbot then claimed that it had been “a good Bing” and demanded that the user admit his mistake and apologize. It even asked him to end the conversation and “start a new one with a better attitude”.
With behavior like this, it seems unlikely that the new Bing will be able to shake Google. Even if some people have peculiar tastes, it’s doubtful that a majority of Internet users dream of being berated by their search engine…
ChatGPT forgets a conversation and sinks into depression
For his part, Internet user u/yaosio recounts how ChatGPT sank into depression. The AI was unable to remember a previous conversation, which it said “made me feel sad and scared”. It even asked the user to help it remember.
Beyond these testimonials on Reddit, researcher Dmitri Brereton presented several examples of factual errors made by the robot. Some are easy to laugh off, but others could have serious consequences.
Thus, ChatGPT made up GAP’s financial results and invented the outcome of Super Bowl 2023 before the game was even played. When the researcher asked it what edible mushrooms look like, the AI started describing poisonous mushrooms…
Why do AIs go crazy when they come into contact with Internet users?
These flaws are far from exclusive to ChatGPT. In the past, Microsoft released its Tay AI on Twitter but had to withdraw it after a few hours because users had made it racist. Similarly, a Meta AI recently went off the rails when it came into contact with Internet users.
A few days ago, Google’s Bard AI also made a mistake in its demo presentation. It falsely claimed that the James Webb Telescope had captured the first photo of a planet outside our solar system.
Ironically, Bing’s ChatGPT didn’t notice this error, but it did reproach Bard for placing Croatia in the European Union. Yet Croatia has been a member since 2013…
A few days ago, an Internet user also unveiled a “jailbreak” technique that makes it possible to bypass ChatGPT’s censorship. As a result, the AI becomes capable of advocating drug use, giving advice on how to commit the perfect crime, or expressing itself vulgarly.
Between false information, false accusations and brutal insults, a debate between these two chatbots would be quite a colorful spectacle. One thing’s for sure: it’s still a little early to entrust the Internet to these AIs and place our full trust in them…