Does AI really have a future? This discovery by Google DeepMind sows doubt

While many scientists and business leaders are predicting the emergence of a super AI capable of surpassing human intelligence, a study by DeepMind researchers shows that models like GPT are far more limited than they appear. Is AI a vast hoax destined to end up in oblivion like the metaverse?

The spotlight has been on artificial intelligence for almost a year, thanks to the prowess of tools such as ChatGPT and Midjourney. However, the “AI revolution” may come to an end sooner than expected…

In a new study, three Google DeepMind researchers present a finding that calls the future of this technology into question.

In this work, researchers Steve Yadlowsky, Lyric Doshi and Nilesh Tripuraneni confirm what many people have observed over the past months: AI isn’t very good at producing results beyond its training data.

The study focuses on OpenAI’s GPT-2 model, released in February 2019. The most recent model is GPT-4, available since March 2023.

More precisely, it focuses on Transformer AI models, which are capable of transforming one type of input into a different type of output.

This is the meaning of the letter “T” in “GPT” (Generative Pre-trained Transformer). This type of AI model was first introduced by a group of Google researchers in 2017, in the paper “Attention Is All You Need”.
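For readers who like to tinker, here is a quick way to try the GPT-2 model discussed in the study, using the open-source Hugging Face transformers library. This snippet is our own illustration, not part of the DeepMind paper; the prompt and settings are arbitrary examples.

```python
# Illustrative only: generating text with the publicly released GPT-2 model
# via the Hugging Face "transformers" library (pip install transformers torch).
from transformers import pipeline

# "gpt2" is the small checkpoint released by OpenAI in 2019.
generator = pipeline("text-generation", model="gpt2")

# The model simply continues the prompt, predicting one token at a time.
result = generator(
    "Artificial intelligence is",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```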

It is often thought that this category of AI could lead to the emergence of AGI: an artificial general intelligence on a par with the human brain. Such systems would enable machines to “think” intuitively, just as we do.

However, this new study based on GPT-2 suggests that the promise remains uncertain. AI is still a long way from being comparable to our own intelligence.

GPT shows off its knowledge, but understands nothing

As the three authors explain, “faced with tasks or functions outside the domain of their pre-training data, we demonstrate various transformer failure modes and the degradation of their generalization, even for simple extrapolation tasks”.

In other words, if a Transformer model has not been trained on data related to the task it is asked to perform, it will probably be unable to carry it out, however simple that task may be.
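To make the idea concrete, here is a small toy sketch of our own, not taken from the paper: a simple regression model (not a transformer) fitted only on inputs between 0 and 1 gives confidently wrong answers when queried well outside that range, because it can only reproduce patterns it has already seen.

```python
# Toy illustration (not the study's setup): a model trained only on inputs
# in [0, 1] fails to extrapolate a very simple rule outside that range.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Training data: the simple rule y = 3x + 1, observed only on [0, 1].
x_train = rng.uniform(0.0, 1.0, size=(500, 1))
y_train = 3 * x_train.ravel() + 1

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(x_train, y_train)

# The same rule, queried inside and far outside the training range.
x_in = np.array([[0.2], [0.5], [0.8]])
x_out = np.array([[2.0], [5.0], [10.0]])

print("inside  [0, 1]:", model.predict(x_in))   # close to the true 1.6, 2.5, 3.4
print("outside [0, 1]:", model.predict(x_out))  # stuck near ~4 instead of 7, 16, 31
```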

This is a phenomenon we may not notice at first glance, because AIs such as ChatGPT seem impossible to stump on any subject.

In reality, this can be explained by the immense datasets on which they have been trained, which cover virtually all human knowledge.

However, this erudition is illusory. GPT-2 can be compared to a person who has read millions of books but is unable to think for themselves.

This casts a pall over the hype surrounding AI. In the end, even the most modern models are only capable of producing a condensed version of the knowledge they are trained on.

When ChatGPT impresses with its responses, it simply spits out the expertise of humans whose work was used to train it.

Artificial general intelligence: a false promise to woo investors?

It should be noted, however, that more recent models such as GPT-4 have been trained on much more data. It is possible that this will enable them to reach a level of intelligence sufficient to make connections between pieces of information, even on subjects outside their training data.

In the future, researchers could also invent a new approach to overcome the limits of current AI…

For now, the reality is far less glowing than the startups trying to capitalize on OpenAI’s success would have us believe. And the promises of omniscient AI are seriously in doubt.

Just this week, the CEOs of Microsoft and OpenAI presented to investors their intention to build a general AI together.

And even DeepMind is no exception to this exaggeration. Last month, co-founder Shane Legg estimated that there was a 50% chance of AGI arriving by 2028. The study published by three of his researchers does not seem to support his prediction…