In a scientific study, researchers discovered that AI shares a vital need with humans: sleep. Sleep turns out to be essential for keeping neural networks from forgetting previously acquired knowledge when they learn new tasks…
Neural networks can outperform humans on many tasks. However, AI has a weakness: it can suddenly forget everything it has learned when asked to take in new knowledge.
This “catastrophic forgetting” is one of the main challenges in the field of deep learning. When an AI learns a new task, it tends to abruptly lose all previously accumulated knowledge. In other words, the AI overwrites its past data with new information.
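To make the phenomenon concrete, here is a minimal, purely illustrative sketch (not the study’s model): a one-weight model is fit with gradient descent on one task, then retrained on a conflicting task, and its error on the first task is measured again.

```python
# Toy illustration of catastrophic forgetting (hypothetical example):
# a single weight w in the model y_hat = w * x is trained on task A,
# then on task B; training on B alone overwrites what was learned for A.

def train(w, data, lr=0.1, epochs=100):
    """Plain SGD on squared error for the model y_hat = w * x."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

def loss(w, data):
    """Mean squared error of y_hat = w * x over a dataset."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(1.0, 2.0), (2.0, 4.0)]    # task A: y = 2x
task_b = [(1.0, -2.0), (2.0, -4.0)]  # task B: y = -2x (conflicts with A)

w = train(0.0, task_a)
loss_a_before = loss(w, task_a)      # near zero: task A is learned

w = train(w, task_b)                 # now learn task B alone
loss_a_after = loss(w, task_a)       # large: task A has been forgotten

print(loss_a_before, loss_a_after)
```

Running it shows the error on task A jumping from essentially zero to a large value once task B has been learned: the new training overwrote the old weight.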
By contrast, the human brain can keep learning new tasks throughout its life without losing the ability to apply old knowledge.
This looks like a fundamental weakness of artificial intelligence compared with human intelligence. Yet it need not be: scientists have discovered a technique to prevent machines from suffering such amnesia…
And ironically, this technique is the same as that used by humans: sleeping. Indeed, science has shown that the brain learns better when learning sessions are interspersed with periods of sleep. This helps to incorporate recent experiences into the pool of long-term memories.
According to Erik Delanois, a neuroscientist at the University of California, San Diego, “memory reorganization could be one of the main reasons why organisms need to go through a resting stage.”
An old and erroneous approach
Previously, other researchers attempted to solve the problem of catastrophic forgetting by letting the AI mimic sleep. For example, “interleaved training” consists of feeding a machine new data mixed with previously assimilated old data.
This approach allows the neural network to learn a new task, while preserving its past knowledge. The original idea was to imitate the way the brain works during sleep, by reliving old memories.
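As a hedged toy (the tasks and numbers here are invented for illustration, not taken from any paper), interleaved training can be sketched with a two-weight linear model: sequential training on a new task erases the old one, while mixing both tasks in every epoch preserves them.

```python
# Toy comparison of sequential vs. interleaved training for a linear model
# y_hat = w . x with two shared weights (illustrative assumption).

def sgd_step(w, x, y, lr=0.2):
    """One squared-error SGD step for the linear model y_hat = w . x."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [wi - lr * 2 * err * xi for wi, xi in zip(w, x)]

def loss(w, data):
    """Mean squared error over a dataset of (x, y) pairs."""
    return sum((sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2
               for x, y in data) / len(data)

task_a = [((1.0, 0.5), 1.0)]   # old task, stands in for prior knowledge
task_b = [((0.5, 1.0), -1.0)]  # new task, learned afterwards

# Sequential training: task A alone, then task B alone -> A is forgotten.
w = [0.0, 0.0]
for _ in range(2000):
    for x, y in task_a:
        w = sgd_step(w, x, y)
for _ in range(2000):
    for x, y in task_b:
        w = sgd_step(w, x, y)
seq_loss_a = loss(w, task_a)

# Interleaved training: every epoch replays old data alongside the new.
w = [0.0, 0.0]
for _ in range(2000):
    for x, y in task_a + task_b:
        w = sgd_step(w, x, y)
inter_loss_a = loss(w, task_a)
inter_loss_b = loss(w, task_b)

print(seq_loss_a, inter_loss_a, inter_loss_b)
```

The drawback the article describes is visible in the code: the interleaved loop must keep touching all of `task_a` forever, which is exactly the data- and time-hungry requirement the new study avoids.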
However, interleaved training requires feeding the neural network all the data used to learn its old skills, and doing so every time it must be taught something new.
The technique therefore demands a great deal of time and data. What’s more, it doesn’t really reflect what the brain does during sleep: the brain doesn’t store all the old data, and would never have time to replay it.
A neural network closer to the human brain
The new study by Erik Delanois and his team made it possible to better analyze the mechanisms of catastrophic forgetting, and the role of sleep in preventing it.
Rather than an ordinary neural network, the researchers used a spiking neural network, which behaves more like the human brain.
To understand this better, we need to go back to how neural networks work. As a reminder, components called neurons are fed with data and cooperate to solve a problem such as face recognition.
The network adjusts its synapses many times and checks whether each configuration brings it closer to solving the problem. Over time, it discovers which synapse pattern performs best and adopts it by default. This process mimics how the human brain learns.
In most neural networks, each neuron outputs a number that varies continuously with the incoming data. This value is analogous to the firing rate of a biological neuron, i.e. the number of signals it can emit over a period of time.
A spiking neural network works a little differently. Each neuron generates a signal only after receiving a certain volume of input signals over a period of time, which is precisely how biological neurons behave.
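A common textbook abstraction of such a neuron is the leaky integrate-and-fire model; the sketch below is a generic illustration of that idea (parameters are invented), not the paper’s network.

```python
# Minimal leaky integrate-and-fire neuron (textbook abstraction, illustrative
# parameters): input accumulates on a leaky membrane, and a spike is emitted
# only when enough input arrives within a short window.

def lif_run(inputs, threshold=1.0, leak=0.5):
    """Simulate one neuron over discrete time steps; return its spike train."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i          # leaky integration of incoming signal
        if v >= threshold:        # enough input arrived recently
            spikes.append(1)      # emit a spike...
            v = 0.0               # ...and reset the membrane
        else:
            spikes.append(0)
    return spikes

weak   = lif_run([0.2] * 10)   # weak drive: leak wins, the neuron stays silent
strong = lif_run([0.6] * 10)   # strong drive: the neuron fires periodically
print(weak, strong)            # weak is all zeros; strong spikes every 3 steps
```

The sparsity is visible in the output: most time steps carry no signal at all, which is why spiking networks transmit far less than networks whose neurons emit a value at every step.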
Consequently, spiking neural networks send far fewer signals and handle less data than conventional neural networks, so they require less computing power and bandwidth.
A good nap between two lessons
In the course of the study, the spiking neural network had to learn to recognize vertical pairs of particles on a grid, having already learned to detect horizontal pairs. On the first attempt, it suffered the famous catastrophic forgetting.
The researchers then added intervals between learning sessions during which the neurons involved in learning the first task were reactivated. This process is closer to the modern scientific understanding of biological sleep.
According to the study’s second co-author, Pavel Sanda, a neuroscientist at the Institute of Computer Science of the Czech Academy of Sciences, “the advantage is that we don’t store data explicitly associated with early memories to artificially replay them during sleep to avoid forgetting.”
Indeed, the researchers found that their strategy helps prevent catastrophic forgetting: after the sleep phases, the spiking neural network was able to perform both tasks.
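The replay idea can be sketched with a toy linear model, using a pseudo-rehearsal-style stand-in for the study’s spiking mechanism (the setup, names, and numbers are illustrative assumptions): during “sleep”, the old input pattern is reactivated and the network’s own earlier response serves as the target, so no old labels are stored.

```python
# Hedged sketch of sleep-like replay for y_hat = w . x (illustrative, not the
# paper's model): bouts of new-task training alternate with replay intervals
# that reactivate the old input and use the network's OWN recorded response
# as the target. No old labels are kept.

def sgd_step(w, x, y, lr=0.2):
    """One squared-error SGD step for the linear model y_hat = w . x."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [wi - lr * 2 * err * xi for wi, xi in zip(w, x)]

def predict(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

x_a, y_a = (1.0, 0.5), 1.0    # old task ("horizontal pairs")
x_b, y_b = (0.5, 1.0), -1.0   # new task ("vertical pairs")

# 1) Learn the old task to convergence.
w = [0.0, 0.0]
for _ in range(2000):
    w = sgd_step(w, x_a, y_a)

# 2) Record the network's own response to the old input: the replay target.
replay_target = predict(w, x_a)

# Baseline: learn the new task with no sleep phases -> old task is lost.
w_awake = list(w)
for _ in range(2000):
    w_awake = sgd_step(w_awake, x_b, y_b)
loss_a_awake = (predict(w_awake, x_a) - y_a) ** 2

# 3) With sleep: alternate short new-task bouts with replay intervals.
w_sleep = list(w)
for _ in range(200):
    for _ in range(10):                          # waking: learn the new task
        w_sleep = sgd_step(w_sleep, x_b, y_b)
    for _ in range(10):                          # "sleep": replay the old one
        w_sleep = sgd_step(w_sleep, x_a, replay_target)
loss_a_sleep = (predict(w_sleep, x_a) - y_a) ** 2
loss_b_sleep = (predict(w_sleep, x_b) - y_b) ** 2

print(loss_a_awake, loss_a_sleep, loss_b_sleep)
```

With the sleep intervals, the errors on both tasks stay near zero, while without them the old task’s error explodes; the key design point is that only the self-generated `replay_target` is kept, not the original training data.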
Thus, the scientists suggest that their strategy helped preserve the synapse patterns associated with both the old and the new task. As Delanois points out, “our work highlights the value of developing biologically inspired solutions.”
The researchers also note that their findings are not limited to spiking neural networks. Sleep-simulating phases could also help overcome catastrophic forgetting in standard neural networks.
Are human beings programmed to recreate themselves?
The study is published in the journal PLOS Computational Biology. The project highlights how we can draw inspiration from biology and our own organism to overcome the obstacles on the road to a general AI comparable to the human brain.
However, this similarity also raises deeper questions. Why have human beings always wanted to create artificial intelligence and robots? Is it a way of recreating ourselves?
And what if, without knowing it, we ourselves were mere robots created by an advanced civilization? This discovery lends credence to the theory, championed by Elon Musk among others, that our world is merely a simulation…