The new ‘liquid’ AI continuously learns from its experience of the world

Despite all the comparisons with the human brain, AI still doesn’t look much like us. Maybe that’s fine. In the animal kingdom, brains come in all shapes and sizes. So, in a new machine learning approach, engineers set aside the human brain and all its beautiful complexity – turning to the brain of a humble worm for inspiration.

It turns out that simplicity has its benefits. The resulting neural network is efficient and transparent, and here’s the kicker: it’s a lifelong learner.

Whereas most machine learning algorithms can’t hone their skills beyond an initial training period, the researchers say the new approach, called a liquid neural network, has a kind of built-in “neuroplasticity.” That is, as it goes about its work – say, in the future, maybe driving a car or directing a robot – it can learn from experience and adjust its connections on the fly.

In a noisy and chaotic world, this adaptability is essential.

Worm-Brained Driver

The architecture of the algorithm was inspired by the mere 302 neurons that make up the nervous system of C. elegans, a small nematode (or worm).

In work published last year, the group, which includes researchers from MIT and the Institute of Science and Technology Austria, said that despite its simplicity, C. elegans is capable of surprisingly interesting and varied behavior. They developed equations to mathematically model the worm’s neurons and then built them into a neural network.

Their worm-brained algorithm was much simpler than other state-of-the-art machine learning algorithms, and yet it could accomplish similar tasks, such as keeping a car in its lane.

“Today, deep learning models with many millions of parameters are often used to learn complex tasks, such as autonomous driving,” said Mathias Lechner, a doctoral student at the Institute of Science and Technology Austria and an author of the study. “However, our new approach enables us to reduce the size of the networks by two orders of magnitude. Our systems only use 75,000 trainable parameters.”

Now, in a new article, the group takes its worm-inspired system further, adding a whole new feature.

Old worm, new tricks

The output of a neural network – turning the steering wheel to the right, for example – depends on a set of weighted connections between the network’s “neurons”.

It’s the same in our brains. Each brain cell is connected to many other cells. Whether a particular cell fires or not depends on the sum of the signals it’s receiving. Above some threshold – or weight – the cell fires a signal to its own network of downstream connections.

In a neural network, these weights are called parameters. As the system feeds data through the network during training, its parameters converge on the configuration that produces the best results.
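The weighted-sum-and-threshold idea described above can be sketched in a few lines of Python. This is a toy illustration of a single artificial neuron, not the researchers’ actual model; the input values, weights, and bias here are made up.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs passed
    through a sigmoid, which squashes the result toward 0 (quiet)
    or 1 (firing) depending on whether the sum clears the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Two inputs with illustrative weights "learned" during training.
output = neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1)
```

In a real network, many such neurons are stacked in layers, and training nudges each weight until the whole network’s outputs match the desired results.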

Normally, a neural network’s parameters are locked in after training, and the algorithm is put to work. But in the real world, this can make it a bit brittle – show an algorithm something that deviates too much from its training and it will fail. Not an ideal outcome.

In a liquid neural network, by contrast, the parameters can keep changing over time and with experience. The AI learns on the job.
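The idea behind a single liquid neuron can be loosely sketched as a differential equation whose effective time constant depends on the input, so the neuron’s dynamics shift as the data changes. The sketch below is a rough illustration of that concept, not the authors’ implementation; the nonlinearity, parameter values, and input signal are all illustrative assumptions.

```python
import math

def gate(x, u, w=0.5, b=0.3):
    """Illustrative input-dependent nonlinearity: tanh of a weighted
    mix of the external input u and the neuron's own state x."""
    return math.tanh(w * u + b * x)

def liquid_step(x, u, dt=0.01, tau=1.0, A=1.0):
    """One Euler integration step of a liquid-style neuron.
    The gate term changes the effective decay rate (1/tau + gate)
    at every step, so the neuron's time constant is input-dependent."""
    g = gate(x, u)
    dx = -(1.0 / tau + g) * x + g * A
    return x + dt * dx

# Drive the neuron with a changing input signal.
x = 0.0
for t in range(100):
    u = math.sin(t * 0.1)  # a toy stream of sensory input
    x = liquid_step(x, u)
```

The point of the sketch is the structure, not the numbers: because the input flows into the state equation itself rather than only into a fixed weighted sum, the cell’s response keeps adapting as conditions change.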

This adaptability means the algorithm is less likely to break as the world throws new or noisy information its way – like when rain obscures an autonomous car’s camera. Also, in contrast to bigger algorithms, whose inner workings are largely inscrutable, the algorithm’s simple architecture allows researchers to peer inside and audit its decision-making.

Neither its new ability nor its diminutive size seemed to hold the AI back. The algorithm performed as well as or better than other state-of-the-art time-sequence algorithms at predicting the next steps in a series of events.

“Everyone talks about scaling up their network,” said Ramin Hasani, the study’s lead author. “We want to scale down, to have fewer but richer nodes.”

An adaptive algorithm that consumes relatively little computing power would make an ideal robot brain. Hasani believes the approach could also be useful in other applications that involve real-time analysis of new data, such as video processing or financial analysis.

He plans to continue dialing in the approach to make it practical.

“We have a provably more expressive neural network that is inspired by nature. But this is just the beginning of the process,” Hasani said. “The obvious question is how do you extend this? We think this kind of network could be a key element of future intelligence systems.”

Is bigger better?

At a time when big players like OpenAI and Google are regularly making headlines with gargantuan machine learning algorithms, it’s a fascinating example of an alternative approach headed in the opposite direction.

OpenAI’s GPT-3 algorithm made jaws drop collectively last year, both for its size – a then-record 175 billion parameters – and for its abilities. A recent Google algorithm topped the charts at over a trillion parameters.

Critics, however, worry that the drive toward ever-bigger AI is wasteful and expensive, and that it consolidates research in the hands of a few companies with the cash to fund large-scale models. Further, these huge models are “black boxes,” their actions largely impenetrable. This can be especially problematic when unsupervised models are trained on the unfiltered internet. There’s no telling (or perhaps controlling) what bad habits they’ll pick up.

Increasingly, academic researchers are aiming to address some of these issues. As companies like OpenAI, Google, and Microsoft push to prove the bigger-is-better hypothesis, it’s possible that serious innovations in AI efficiency will emerge elsewhere – not despite a lack of resources, but because of it. As they say, necessity is the mother of invention.

Image credit: benjamin henon / Unsplash
