OpenAI GPT-3: All about the world’s most advanced language AI

GPT-3 is a language-generation artificial intelligence created by OpenAI. It is currently the most complex artificial neural network in the world, and the most advanced linguistic and textual AI. Find out everything you need to know: definition, how it works, use cases, limits and dangers, future…

Will artificial intelligence ever rival human intelligence? The future of all of us depends on the answer to this question.

The emergence of a “general” artificial intelligence, comparable in every way to our own brain, fuels both fears and fantasies. GPT-3, the technology developed by OpenAI, represents one more step towards such a revolution.

What is GPT-3?

GPT-3 is an artificial intelligence developed by OpenAI, the AI research company co-founded by Elon Musk. It is capable of creating written content with a language structure worthy of a text written by a human.

In the eyes of many experts, this invention represents one of the most important advances in the field of AI in recent years. It is also the largest neural network created to date.

The term GPT-3 is an acronym for “Generative Pre-trained Transformer 3”: this is the third version of the tool.

This model has 175 billion parameters, the values that a neural network tries to optimize during training. In comparison, its predecessor GPT-2 had only 1.5 billion.

Simply put, GPT-3 generates text using pre-trained algorithms, meaning they have already been fed the data they need to complete their task. In detail, these algorithms were trained on 570 GB of text collected from the internet, drawn from the CommonCrawl dataset and Wikipedia, among other sources.

What can GPT-3 do?

As a result of this training, GPT-3 is capable of creating any content with a language structure. This AI can, for example, answer a question or produce different types of texts.

It can write essays, summarize long texts, compose poems, translate, take notes, write news articles or fiction stories, and even produce code in a programming language or guitar tablature.

In a demonstration video posted on YouTube, GPT-3 creates an application similar to Instagram. To accomplish this feat, the AI uses a plugin for Figma, software widely used for application design.

In an article published by The Guardian, GPT-3 wrote a text to persuade humans that it means them no harm. However, the system acknowledged that it “will not be able to avoid destroying the human race” if ill-intentioned people use it for that purpose…

For its part, the company Sapling has created a program for CRM software. When a customer service agent handles a request, the program uses GPT-3 to suggest a complete response.

Video game creator Latitude uses GPT-3 to improve its text adventure game AI Dungeon. The game is thus able to generate a complete adventure from the user’s actions and decisions, making it the first role-playing game generated by an artificial intelligence.

Another example is the application development startup Debuild. The head of the company, Sharif Shameem, created a program based on GPT-3. All the user has to do is describe a software user interface in plain language, and the AI produces the corresponding code using JSX, the syntax extension of JavaScript.

A developer named Murat Ayfer, based in Vancouver, created an application called “Philosopher AI”. The user enters a few words, and GPT-3 generates a full text.

As you can see, GPT-3 offers tremendous possibilities and represents a real revolution. It has the potential to transform the way software and applications are developed, and this is just a glimpse of its vast potential.

How does GPT-3 work?

Among the many applications of artificial intelligence, GPT-3 belongs to the category of language prediction models. It is an algorithm designed to receive a fragment of language and transform it into what it predicts is the most relevant fragment to follow, for the user.
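To make this concrete, here is a minimal sketch of next-word prediction. GPT-3 itself is only accessible through OpenAI’s API, so the example uses its freely downloadable predecessor GPT-2 together with the Hugging Face transformers library, a third-party tool chosen purely for illustration:

```python
# A minimal sketch: given a prompt, list the words the model considers most
# likely to come next. Uses GPT-2 (publicly available) as a stand-in for GPT-3.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Artificial intelligence will"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # one score per vocabulary entry, per position

# Probability distribution over the next token, given the prompt
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {float(prob):.3f}")
```

Generating a whole text simply repeats this step: the predicted word is appended to the prompt, and the model is asked again for the word most likely to come next.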

To accomplish this task, GPT-3 was trained on a vast body of texts. Huge computational resources were used to allow this AI to “understand” how languages work and are structured. To carry out this training, OpenAI reportedly spent $4.6 million.

The algorithm learned how languages are constructed through semantic analysis: studying not only the words themselves, but also their meaning and the way their usage varies depending on the other words present in the text.

This training falls into the category of machine learning known as “unsupervised”: the training data are not labeled and do not indicate whether an answer is “right” or “wrong”. This approach is distinct from supervised training.

The information needed to calculate the probability that an output matches the user’s request is gathered directly from the training texts. To do this, the AI studies how words and phrases are used, then attempts to reconstruct them.

During training, GPT-3 had, for example, to find a missing word in a sentence. To achieve this, it can scan billions of words to determine which one best completes the sentence.

At the beginning of this training, the artificial intelligence inevitably makes countless mistakes. Its performance improves over the course of the attempts, until it is able to find the right word.

The neural network then examines the original data to verify the correct answer, and assigns a “weight” to the part of the process that produced it, in order to learn progressively which methods best find the correct answers in the future.
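Here is a toy illustration, in Python with numpy, of this “predict, compare, adjust” loop. The vocabulary, vectors and learning rate are invented for the example; a real GPT works on billions of words and billions of weights, but the principle is the same:

```python
# Toy sketch: predict a missing word, compare with the true answer, and nudge
# the "weights" to reduce the error. All values here are made up for illustration.
import numpy as np

vocab = ["cat", "sat", "mat", "dog"]
target = vocab.index("mat")              # true missing word in "the cat sat on the ___"

rng = np.random.default_rng(0)
x = rng.normal(size=8)                   # vector representing the context sentence
W = rng.normal(size=(len(vocab), 8))     # the "weights" the network adjusts

for step in range(100):
    logits = W @ x                                   # one score per candidate word
    probs = np.exp(logits) / np.exp(logits).sum()    # scores turned into probabilities
    loss = -np.log(probs[target])                    # large when the right word is unlikely
    grad = np.outer(probs - np.eye(len(vocab))[target], x)
    W -= 0.1 * grad                                  # adjust the weights to reduce the error

print("predicted word:", vocab[int(np.argmax(W @ x))])   # "mat" once training has converged
```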

Language prediction models have existed for many years, so this process is not new. However, never before had such a scale been reached: to process each request, GPT-3 uses 175 billion dynamically stored “weights”. That is ten times more than its closest rival, created by Nvidia.

What computing power is required for GPT-3?

During the training of a neural network, it is the adjustment of these “weights” that allows optimization. Also called “parameters”, weights are matrices: arrays of rows and columns by which each vector is multiplied.

This multiplication gives vectors of words or word fragments more or less weight in the final output, as the neural network is tuned to reduce its margin of error.

Over the generations of GPT, the data sets used for training have grown. As a result, OpenAI has had to add more and more weights.

Google’s first Transformer had 110 million weights, and GPT-1 followed this design. With GPT-2, the number of weights rose to 1.5 billion. Finally, with GPT-3, the number of parameters reached 175 billion.

Each piece of training data must therefore be multiplied through these 175 billion weights, over a data set of hundreds of billions of bytes. The parallel computing power required is colossal.

Early language models needed only the power of a single GPU. To train GPT-1, OpenAI needed 8 GPUs operating in parallel.

The firm has not revealed the exact configuration used to train GPT-3, stating only that it ran on a cluster of Nvidia V100 chips on the Microsoft Azure cloud. The total compute consumed is equivalent to running a thousand trillion floating-point operations per second for 3,640 days, i.e. roughly 3,640 petaflop/s-days.


According to Lambda Computing, it would take 355 years for a single GPU to perform this amount of computation, at a cost of $4.6 million. In addition, storing all the weight values would require 700 GB of memory for the 175 billion parameters of GPT-3. That’s ten times the memory of a single GPU.
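These figures are easy to check with some back-of-the-envelope arithmetic; the snippet below assumes the standard 4 bytes per parameter (32-bit floats):

```python
# Quick sanity check of the figures quoted above.
params = 175e9                        # GPT-3 parameters
bytes_per_param = 4                   # 32-bit floating point (assumption)
print(params * bytes_per_param / 1e9, "GB to store the weights")    # -> 700.0

petaflop_s_day = 1e15 * 86_400        # one petaflop/s sustained for a full day, in operations
print(f"{3640 * petaflop_s_day:.2e} total training operations")     # -> ~3.14e+23
```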

These titanic computing requirements are driving the rise of the AI chip industry. In ten years, Nvidia’s share price has risen by 5,000%, and many startups such as Cerebras Systems, Graphcore and Tachyum have raised millions of dollars.

According to OpenAI, the compute consumed by the largest AI training runs doubled every 3.4 months between 2012 and 2018, a growth rate faster than Moore’s Law for processor chips.

New models using more than a trillion parameters are already under development, and giants like Google are even considering dedicating entire data centers to ever larger models… Google unveiled a trillion-parameter AI in January 2021.

The history of GPT-3

The first version of GPT was launched in 2018 by OpenAI. It was based on a language model program created by Google researchers: the Transformer. This tool already stood out for its language manipulation capabilities and quickly established itself as the leading approach to language modeling.


OpenAI researchers fed the Transformer with content from the BookCorpus database. Compiled by the University of Toronto and MIT, it gathers some 7,000 books for a total of around 5 GB of data. GPT-1 was trained to compress and decompress these books.

Subsequently, the researchers hypothesized that more data would make the model more accurate. In 2019, this led to GPT-2, the second version of the model, trained on a data set compiled internally that included eight million web pages from sites such as Reddit, for a total of 40 GB of data.

Finally, GPT-3 was trained in 2020 on the CommonCrawl dataset, containing 570 GB of compressed text from web pages dated 2016 to 2019. Various data sets from books and other sources were also added by OpenAI to create the largest language model to date.

What are the problems, risks and limitations of GPT-3?

Clearly, GPT-3 is the most advanced artificial intelligence in language production. However, it still faces several limitations.

According to OpenAI CEO Sam Altman himself, “the hype around GPT-3 is excessive”. In his eyes, we are only “at an early stage” of discovering how AI is going to change the world.

One of the weaknesses of this AI is that it is extremely expensive to use. It requires an immense amount of computing power to operate, and the cost of accessing these resources far exceeds the budget of small businesses.

An AI in a black box

Another weak point is that it is a black-box system. OpenAI has not revealed all the details of how it works, and people using it cannot be sure how its predictions were produced. This can be problematic, since this AI is already being used to answer questions and create products.

Moreover, even if the system is capable of creating short texts or basic applications, it is quickly caught out when it comes to producing longer or more complex texts. Its performance is therefore limited for now.

Another problem is that this AI only masters English. French speakers uncomfortable with the language of Shakespeare will therefore not really be able to use it for the moment.

GPT-3: a racist AI?

Another weak point of GPT-3 is its propensity to generate sexist, racist or discriminatory content, a defect shared with many artificial intelligences biased by the data on which they are trained. For example, from the words “Jew”, “black” or “woman”, the model tends to produce sentences based on repugnant stereotypes.

This phenomenon stems from the fact that the AI is not trained solely on articles or Wikipedia pages. It has also ingested many discussions from forums such as Reddit. In learning human language, it also absorbed human failings.

A potentially evil tool

These problems are likely to be resolved over time, as the price of computing power continues to fall. The growing volume of available data will also allow the algorithms to be refined.

Nevertheless, there is a clear risk that GPT-3 could be misused. In particular, ill-intentioned individuals could attempt to exploit this tool to power disinformation bots or propaganda.

As soon as this tool is open to the general public, it seems inevitable that some will exploit it in harmful ways. Already in 2019, OpenAI decided against releasing GPT-2 in full because it considered the AI far too “dangerous”.

A limited version was therefore launched at first, without its data set or training code. The main fear was that malicious actors would use GPT-2 to generate fake news.

In any case, this AI marks a new milestone in the field of language generation. In the hands of the general public, it will soon reveal its full potential…

Microsoft partners with OpenAI on GPT-3

In September 2020, Microsoft announced an exclusive license agreement with OpenAI after investing $1 billion in the project. The goal of this partnership is to use GPT-3 to “create new solutions that harness the incredible power of advanced language generation”.

Thus, Microsoft will have the exclusive right to work with the underlying code rather than merely access the tool. According to Kevin Scott, the firm’s CTO, “the creative and commercial potential of GPT-3 is immense, and we haven’t even imagined most of the possibilities yet”. It is not known how the US giant plans to use GPT-3, but its capabilities may well be integrated into productivity solutions such as Office 365.

GPT-3 Demo by Modbox offers a glimpse into the future of video games

Modbox is a tool for creating multiplayer video games. Its new version has been available since 2020 in Early Access on Steam, after several years of development in public beta.

In early 2021, its developer combined GPT-3 with the speech recognition technology of Microsoft Windows and Replica’s speech synthesis technology to create a stunning demo that offers a glimpse into the future of video games.

In this demo, we discover AI-powered virtual characters that push the technical limits of video games. The two characters are able to converse in a natural way.

From there, we can imagine a new generation of video games in which virtual characters would be able to converse and interact with the player without following simple scripts or preconceived dialogue trees…

Is GPT-3 available?

At the launch of GPT-2, OpenAI feared its tool was too dangerous to be released into the wild. The San Francisco-based firm worried it would be used to mass-produce fake news, and had therefore initially chosen to offer only a limited version for download.

With the same caution for GPT-3, OpenAI has preferred to offer this new version as a cloud-based API endpoint. The tool is thus delivered as a cloud service to prevent ill-intentioned actors from exploiting it for profit. In this way, the company retains control over its creation.

For now, only a handful of hand-picked users are allowed to use this API. Interested parties can join a waiting list and receive permission from OpenAI to use GPT-3. This is a very tightly controlled closed beta, with a small number of developers required to present their ideas beforehand.
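For developers admitted to the beta, using the service amounts to sending a prompt to the API and reading back the generated text. The sketch below is only indicative: the model name, parameters and key are placeholders, and OpenAI’s own documentation remains the reference for the exact interface:

```python
# Hypothetical sketch of a GPT-3 API call for a beta participant.
# The engine name, prompt and parameters are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"          # issued by OpenAI once off the waiting list

response = openai.Completion.create(
    engine="davinci",                    # one of the GPT-3 models exposed by the API
    prompt="Write a short poem about the sea:",
    max_tokens=60,
    temperature=0.7,                     # higher values make the output more creative
)
print(response.choices[0].text)
```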

It is not known at this time when, or at what price, a commercial service might be offered. For the time being, OpenAI does not plan to launch such an offering in the near future.

In the meantime, you can download GPT-2 from GitHub. The source code is also available, written in Python for the TensorFlow framework.
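If you do not want to set up the official TensorFlow code yourself, the Hugging Face transformers library, a third-party alternative not mentioned by OpenAI, lets you try GPT-2’s text generation in a few lines:

```python
# Generate a short continuation with GPT-2 via the transformers library.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("GPT-3 is an artificial intelligence that", max_length=40)
print(result[0]["generated_text"])
```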

The future of GPT-3

In spite of its weaknesses and limitations, GPT-3 represents a new step forward in the field of machine learning. This AI is distinguished above all by its generality.

Until recently, neural networks were only capable of performing specific tasks, trained on data sets designed specifically for those tasks.

This is not the case for GPT-3, which has no specific function and does not require a specialized data set. This neural network absorbs large volumes of text and reuses them to produce answers to any question.

For OpenAI and other artificial intelligence researchers, the next goal is to overcome the limitations of the “pre-training” approach on which GPT-3 is based. The challenge is to enable the AI to learn directly from humans.

Another avenue would be to combine pre-training with other deep learning methods, such as the reinforcement learning famously used by DeepMind’s AlphaZero to win at chess and Go. As early as September 2020, OpenAI began using reinforcement learning to train GPT-3 to produce better article summaries based on feedback from human readers.

Researchers are also considering adding other types of data to give GPT-3 a more complete view of our world. For example, the AI could be fed images and videos to complement the text data. OpenAI has already created a new AI, DALL-E, capable of generating images from text.

In short, GPT-3 represents a further step towards the emergence of a “general” artificial intelligence comparable to human intelligence. It suggests a future that is both fascinating and disturbing, in which AI will be able to match or surpass the human brain…
