What is neuromorphic computing?

As the name suggests, neuromorphic computing uses a model that's inspired by the workings of the brain.

The brain makes a really appealing model for computing: unlike most supercomputers, which fill rooms, the brain is compact, fitting neatly in something the size of, well... your head. 

Brains also need far less energy than most supercomputers: your brain uses about 20 watts, whereas the Fugaku supercomputer[1] needs 28 megawatts -- or to put it another way, a brain needs about 0.00007% of Fugaku's power supply. And while supercomputers need elaborate cooling systems, the brain sits in its bony housing, ticking along at around 37°C with no dedicated cooling at all.
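That percentage is easy to verify; here's the arithmetic as a quick sketch, using the round figures quoted above:

    brain_watts = 20            # rough estimate for the human brain
    fugaku_watts = 28_000_000   # 28 MW, Fugaku's reported power draw

    ratio_percent = brain_watts / fugaku_watts * 100
    print(f"{ratio_percent:.5f}%")  # prints 0.00007%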

True, supercomputers make specific calculations at great speed, but the brain wins on adaptability. It can write poetry, pick a familiar face out of a crowd in a flash, drive a car, learn a new language, and make decisions good and bad, among much else. And with traditional models of computing struggling to keep improving, harnessing the techniques our brains use could be the key to vastly more powerful computers in the future.

Why do we need neuromorphic systems?

Most hardware today is based on the von Neumann architecture, which separates memory from processing. Because von Neumann chips have to shuttle information back and forth between memory and the CPU, they waste both time (computations are held back by the speed of the bus between compute and memory) and energy -- a problem known as the von Neumann bottleneck.
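To see why the bus, rather than the processor, sets the pace, consider a toy model of the bottleneck (a sketch with invented but plausible latencies, not measurements from any real chip):

    # Toy model of the von Neumann bottleneck.
    ALU_ADD_NS = 1        # assumed time for one addition inside the CPU
    BUS_TRANSFER_NS = 10  # assumed time to move one word over the memory bus

    def elementwise_add_time(n):
        # Each addition fetches two operands and stores one result, so
        # three bus transfers accompany every nanosecond of useful compute.
        compute_ns = n * ALU_ADD_NS
        traffic_ns = n * 3 * BUS_TRANSFER_NS
        return compute_ns, traffic_ns

    compute_ns, traffic_ns = elementwise_add_time(1_000_000)
    print(f"compute: {compute_ns/1e6:.1f} ms, traffic: {traffic_ns/1e6:.1f} ms")
    # prints -> compute: 1.0 ms, traffic: 30.0 ms

In this model, a faster ALU barely helps: the data shuttling dominates. Putting compute and memory together, as the brain does with its co-located neurons and synapses, is what pays off.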

By cramming ever more transistors onto these von Neumann processors, chipmakers have long been able to keep increasing the amount of computing power on a chip, following Moore's Law -- the observation that transistor counts double roughly every two years.
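As a rough illustration of that exponential (assuming a clean two-year doubling and taking the widely cited ~2,300 transistors of the 1971 Intel 4004 as a starting point):

    # Rough Moore's Law projection: counts double every ~2 years.
    start_year, start_count = 1971, 2_300  # Intel 4004 (widely cited figure)

    def projected_transistors(year):
        doublings = (year - start_year) / 2
        return start_count * 2 ** doublings

    print(f"{projected_transistors(2021):,.0f}")
    # prints about 77,000,000,000 -- tens of billions, roughly where the
    # biggest real chips landed before the pace began to falter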
