Simon Thorpe, neuroscientist and computer scientist

There is a paradox in artificial intelligence (AI). The technology is already very powerful, and most people agree that it will transform every industry and every aspect of our lives. Yet deployment of AI in industry seems to be proceeding more slowly than expected. One explanation is that CEOs and CTOs are understandably nervous about deploying systems that behave unpredictably. They are even more nervous about systems that make mistakes; yet unless such systems make mistakes, they cannot learn.

Fast progress in AI research

Whatever the hold-ups in industry, AI is making great strides in the lab. Researchers are surprised, and some of them are becoming concerned. Simon Thorpe is one researcher who believes that AI is progressing much faster than almost anyone recognises, and that much more attention should be paid to it. He has spent 40 years researching how the human brain works, and what we can learn about it from computers. For much of that time he has been a research director at France’s CNRS, the Centre National de la Recherche Scientifique (National Centre for Scientific Research). He was the guest on the latest episode of the London Futurist Podcast.

Simon has some eyebrow-raising predictions about how fast AI is advancing, but before we get to those, we need to look at his ideas about why human brains are so much more energy-efficient than machines.

The human visual system

The visual perception system in the human brain is basically a feed-forward system, meaning that it can make high-level decisions without the need for feedback. This makes it fast and efficient: we can recognise important images – like the faces of family members – very quickly, even when there is noise in the signal, such as an out-of-focus photo. This feature of our visual perception system has drawbacks: when we are looking out for something specific, we are very likely to miss something else that is even more important. Simon observes that security guards looking for guns are surprisingly prone to failing to notice a hand grenade. This also explains how conjuring tricks work.

This failure to notice the unexpected is a form of bias known as “inattentional blindness”, and a well-known demonstration can be found in this video of a game of basketball: http://bit.ly/1gXmThe. If you haven’t seen it, take a look now, and then come back. I can almost guarantee it will be the most surprising thing you see today.
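Returning to the feed-forward idea: a minimal sketch in Python may make it concrete. The layer sizes and random weights below are purely illustrative, not a model of the visual cortex; the point is simply that the signal sweeps through the layers once, and a decision emerges without any feedback loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: input "pixels" -> features -> output classes.
layer_sizes = [256, 128, 64, 10]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def feed_forward(x):
    """A single forward sweep: no recurrence, no feedback, one decision."""
    for W in weights:
        x = np.maximum(0.0, x @ W)   # simple ReLU nonlinearity
    return int(np.argmax(x))         # the "recognised" class

print("decision:", feed_forward(rng.standard_normal(256)))
```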

Energy-efficient brains

Human brains are currently far more energy-efficient than AI systems. The human brain consumes around 20 watts, about the same as a light bulb. GPT-3 and other large natural language processing models consume thousands of times more power.

Indeed, if you wanted to simulate the 86 billion neurons in the human brain using the sort of model found in today’s Deep Learning neural networks, you would need an enormous amount of computation – around 500 petaflops, or 500,000,000,000,000,000 floating-point operations per second. That is half an exaflop, a scale the largest supercomputers in the world have only just reached. Those supercomputers draw around 30 megawatts of power – over a million times more than the brain.
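The power comparison is easy to verify with back-of-the-envelope arithmetic, using the figures quoted above:

```python
brain_watts = 20            # quoted power draw of the human brain
supercomputer_watts = 30e6  # ~30 megawatts for an exascale machine

print(f"ratio: {supercomputer_watts / brain_watts:,.0f}x")  # 1,500,000x
```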

Sparse and spiking

Simon is convinced that to make machines as efficient as brains, they need to employ “sparse networks”. The basic idea is that when humans recognise an image, only the neurons trained to respond to the components of that image need to fire. In conventional artificial networks, by contrast, the state of every neuron is taken into account in every computation.
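One way to picture the saving is event-driven computation, where only the neurons that actually fire contribute any work. The sketch below (with illustrative sizes and activity levels, not Simon’s actual design) shows that when 0.5% of the inputs are active, a sparse pass visits a correspondingly tiny fraction of the weights yet produces exactly the same result as the dense pass:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 10_000, 1_000
W = rng.standard_normal((n_in, n_out))

activity = np.zeros(n_in)
active = rng.choice(n_in, size=50, replace=False)  # ~0.5% of neurons fire
activity[active] = 1.0

dense_out = activity @ W              # touches all 10 million weights
sparse_out = W[active].sum(axis=0)    # touches only 50 rows (50,000 weights)

assert np.allclose(dense_out, sparse_out)
print(f"work saved: {n_in / len(active):.0f}x fewer rows visited")
```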

Furthermore, machines also need to adopt a spiking model. Artificial neurons have continuously varying activation values, encoded as floating-point numbers that are computationally expensive to process. Biological neurons, by contrast, send information as electrical pulses, or spikes, which can be very sparse. When you have lots of neurons, the order in which they fire can convey information very efficiently, using very few spikes. Simon argues that the great majority of AI researchers are simply ignoring this crucial fact, and that as a result they are going to be very surprised by the speed of some imminent developments.
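This order-based scheme is known as rank-order coding, an idea Simon has long championed. A toy version is easy to write down (a sketch assuming one spike per neuron, with an illustrative attenuation factor): each neuron fires earlier the more strongly it is driven, and a decoder that only sees the firing order can still recover the ranking of the inputs.

```python
import numpy as np

rng = np.random.default_rng(2)
drive = rng.random(8)              # how strongly each neuron is stimulated

# One spike per neuron: the stronger the drive, the earlier the spike.
spike_order = np.argsort(-drive)   # neuron indices, earliest spike first

# A toy rank-order decoder: earlier spikes are weighted more heavily.
decay = 0.7                        # illustrative attenuation per rank
reconstruction = np.zeros_like(drive)
reconstruction[spike_order] = decay ** np.arange(len(drive))

# The decoder never saw the drive values, only the firing order,
# yet it recovers the same ranking of the inputs.
assert np.array_equal(np.argsort(-reconstruction), spike_order)
print("firing order:", spike_order)
```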

Terabrain

Simon is impressed by the power of Apple’s latest proprietary chips, found in its most recent laptop computers. He believes that by employing the ideas described above, he can design AI systems that run on these machines with billions of neurons and hundreds of billions of connections. He calls this his Terabrain project, and he plans to open-source the designs, making them freely available to researchers everywhere. Remarkably, he believes that such designs might make it possible to create something similar to artificial general intelligence (AGI) before the end of 2023. Superintelligence, he says, may not be far behind. If he is right, the world is about to change completely.
