Intel unveils Loihi neuromorphic chip, chases IBM in artificial brains

Intel has announced what it calls the first-of-its-kind self-learning neuromorphic chip, named Loihi, which it says will get smarter over time and enable extremely power-efficient designs. It follows IBM’s work on neuromorphic processors, which draw inspiration from the human brain and are poised to heavily disrupt any application that needs processing power in a mobile device.

[Neuromorphic computing is a concept developed by scientist and engineer Carver Mead in the late 1980s. It describes the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system.]

Following the Loihi announcement, Intel also unveiled a new quantum computing chip, developed over the past 18 months in collaboration with QuTech (a Dutch quantum research institute that secured $50m of Intel investment in 2015), which houses 17 qubits. That’s a very different kettle of fish from the machine-learning elements in Loihi, but Intel is pouring money into its R&D divisions, hoping to stay ahead of the curve and prevent GPU and ARM-based incursions on its core CPU markets.

[A qubit is a quantum bit, the quantum analogue of the classical bit. Quantum mechanics allows a qubit to be in a superposition of two states, such as vertical and horizontal polarization, at the same time, a property that is fundamental to quantum computing.]

Intel has also unveiled new Field Programmable Gate Array (FPGA) offerings, using its Arria 10 GX FPGA PCIe server cards to expand the processing power of a Xeon-based server, thanks to a new Acceleration Stack for those Xeons that plays nicely with the FPGA card. Intel spent $16.7bn acquiring FPGA specialist Altera back in 2015, and because FPGAs can be reprogrammed to suit specific workloads, they can be bent to perform exceptionally well in dedicated applications, without having to design an individual specialist chip for each one.

But Intel is far from the only neuromorphic player. IBM’s TrueNorth processors are the most advanced neuromorphic proposition on the market, and IBM already has contracts with laboratories in the USA, including the Lawrence Livermore National Laboratory, which is using the chips to run simulations that evaluate the safety of the US nuclear arsenal. IBM also has a development partnership with Samsung, which sees Samsung use IBM’s newer SyNAPSE design in its Dynamic Vision Sensor, a part that provides a 2,000fps view of the world in a 300mW power package.

Qualcomm was also a player in neuromorphic designs, but seems to have cooled on the technology – publicly, at least. Back in 2013, it was working on Zeroth processors, actual silicon hardware, before morphing the Zeroth project into a more software-based proposition that should eventually find its way into Qualcomm’s core SoC offerings as a software stack, rather than as a standalone processor.

The promise of neuromorphic chips is their potentially incredible power efficiency. The human brain is a remarkably efficient processing engine, and chips built to mimic its design appear to reap the rewards. Intel claims Loihi is up to 1,000x more energy efficient than the general-purpose computing typically required to train neural networks that rival Loihi’s performance.

In theory, this means that chips like Loihi can be much more quickly turned to tasks that use pattern recognition and intuition, which currently rely on vast banks of CPUs and GPUs to achieve results. One chip could replace all the hard work of a traditional machine-learning instance, as well as powering a device out in the wild that can carry out advanced pattern recognition – thanks to having both training and inference on the same silicon.

Intel says that this dual system will allow machines to operate independently of a cloud connection, and says that its researchers have demonstrated a million-fold improvement in learning rate over typical spiking neural networks on MNIST handwritten-digit recognition problems. Intel says this is much more efficient than using convolutional neural networks (CNNs) or deep-learning neural networks (DLNNs).

“The brain’s neural networks relay information with pulses or spikes, modulate the synaptic strengths or weight of the interconnections based on timing of these spikes, and store these changes locally at the interconnections. Intelligent behaviors emerge from the cooperative and competitive interactions between multiple regions within the brain’s neural networks and its environment,” notes Michael Mayberry, Corporate VP and MD of Intel Labs.
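
Reduced to code, that spike-timing idea looks roughly like the sketch below: a pair-based spike-timing-dependent plasticity (STDP) rule in Python, where a synapse is strengthened when the pre-synaptic neuron fires just before the post-synaptic one and weakened when it fires just after, with the change stored at the synapse itself. The constants are illustrative assumptions, not Loihi’s actual parameters.

```python
import math

# Minimal pair-based STDP sketch (illustrative parameters, not Loihi's actual rule):
# a synapse is strengthened when the pre-synaptic spike precedes the post-synaptic
# spike, and weakened when it follows it, with an exponential dependence on timing.

A_PLUS = 0.01     # potentiation step (assumed)
A_MINUS = 0.012   # depression step (assumed)
TAU_MS = 20.0     # plasticity time constant in milliseconds (assumed)

def stdp_delta(pre_spike_ms: float, post_spike_ms: float) -> float:
    """Weight change for one pre/post spike pair, based only on their relative timing."""
    dt = post_spike_ms - pre_spike_ms
    if dt >= 0:
        # pre fired before post: strengthen the connection
        return A_PLUS * math.exp(-dt / TAU_MS)
    # pre fired after post: weaken the connection
    return -A_MINUS * math.exp(dt / TAU_MS)

# The change is applied and stored locally at the synapse itself.
weight = 0.5
for pre_ms, post_ms in [(10.0, 15.0), (40.0, 38.0), (60.0, 61.0)]:
    weight += stdp_delta(pre_ms, post_ms)
    weight = min(max(weight, 0.0), 1.0)  # keep the weight bounded
    print(f"pre={pre_ms} post={post_ms} -> weight={weight:.3f}")
```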

The reason Intel in particular is so keen to be at the heart of these new brain-chips is their ability to generalize, something that the current model of training doesn’t do well. Currently, a machine-learning process could be trained to expertly identify cats, but it would be awful at spotting dogs, even though the two share many characteristics.

This is because the training data set and model do not account for dogs, so a completely different model would need to be trained to spot them. With a chip that can self-learn without the need for a new training data set, it should be much easier to tweak the system to spot smaller generalized differences in data or events. In this very simplified example, it would be much easier for the chip to begin spotting dogs, through some smaller software tweaks, as it has already learned how to do most of the task at hand. You wouldn’t need to go almost all the way back to the drawing board – in theory.

Intel points to a system for monitoring a person’s heartbeat, taking readings after events such as exercise or eating, which uses the neuromorphic chip to normalize the data and work out the ‘normal’ heartbeat. It can then spot abnormalities, but also adapt to any new events or conditions that the user is subject to – in theory, meaning that you don’t have to create a training data set that covers all bases. That should mean shorter development times and better performance in real-world use. Again, these are the promises – we’re still a long way from benchmarking these things in the wild.
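
To make that workflow concrete, here is a deliberately simple Python sketch of the same idea, expressed without any neuromorphic hardware: keep a running estimate of the ‘normal’ heart rate and flag readings that drift well outside it. The smoothing factor and threshold are arbitrary assumptions for illustration, not anything Intel has published.

```python
# Toy illustration of the heartbeat-monitoring idea in plain Python (not a
# spiking network): maintain a running estimate of the "normal" rate and flag
# readings that sit far outside it. All constants are arbitrary for illustration.

ALPHA = 0.1        # smoothing factor for the running baseline (assumed)
THRESHOLD = 25.0   # deviation in bpm treated as abnormal (assumed)

def monitor(readings_bpm):
    baseline = readings_bpm[0]
    for bpm in readings_bpm:
        deviation = abs(bpm - baseline)
        if deviation > THRESHOLD:
            print(f"abnormal reading: {bpm} bpm (baseline {baseline:.1f})")
        else:
            # only fold "normal" readings into the baseline, so the estimate
            # adapts to the user without being dragged around by outliers
            baseline = (1 - ALPHA) * baseline + ALPHA * bpm

monitor([72, 74, 75, 73, 120, 76, 74, 71, 130, 75])
```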

Loihi will be released to research institutions and universities in the first half of 2018, which will get to test its fully asynchronous neuromorphic many-core mesh – a design that Intel says supports a wide range of neural network topologies and allows each neuron to communicate with the other on-chip neurons (hence mesh). Each of the neuromorphic cores has a programmable learning engine, which developers can use to adapt the neural network parameters to support the different ‘learning paradigms’: mainly supervised, unsupervised, and reinforcement learning.
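
As a rough mental model of that arrangement, the sketch below shows per-core learning configuration in Python. Every name here is hypothetical and does not correspond to Intel’s actual Loihi tooling or APIs; only the publicly stated core count is taken from the announcement.

```python
from dataclasses import dataclass
from enum import Enum

# Purely hypothetical sketch of what per-core configuration could look like;
# none of these names correspond to Intel's actual Loihi SDK or APIs.

class LearningParadigm(Enum):
    SUPERVISED = "supervised"
    UNSUPERVISED = "unsupervised"
    REINFORCEMENT = "reinforcement"

@dataclass
class CoreLearningConfig:
    core_id: int
    paradigm: LearningParadigm
    learning_rate: float          # how aggressively the on-core engine adapts weights
    weight_bits: int = 8          # synaptic weight precision (assumed)

# A mesh is then just a collection of cores, each with its own programmable rule.
mesh = [
    CoreLearningConfig(core_id=i,
                       paradigm=LearningParadigm.UNSUPERVISED,
                       learning_rate=0.01)
    for i in range(128)           # Loihi's publicly stated core count
]

print(len(mesh), mesh[0].paradigm.value)
```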

The first iteration of the Loihi chip was made using Intel’s 14nm fabrication process, and houses 128 neuromorphic cores, each implementing 1,024 artificial neurons, for a total of roughly 130,000 neurons and 130m synapses. That is still a rather long way from the human brain, with its tens of billions of neurons and on the order of 100 trillion synapses, and behind IBM’s TrueNorth, which packs around 256m synapses across 4,096 cores. Intel is getting more synapses out of far fewer cores, but there’s no practical way of benchmarking the two chips, so it’s not feasible to declare either approach superior at this stage.
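
As a quick back-of-the-envelope check of those figures, using only the publicly quoted core, neuron, and synapse counts:

```python
# Back-of-the-envelope check of the figures quoted above.
loihi_cores = 128
neurons_per_core = 1024
loihi_synapses = 130_000_000
truenorth_cores, truenorth_synapses = 4096, 256_000_000

print(loihi_cores * neurons_per_core)          # 131,072, i.e. roughly 130,000 neurons
print(loihi_synapses // loihi_cores)           # ~1.0m synapses per Loihi core
print(truenorth_synapses // truenorth_cores)   # 62,500 synapses per TrueNorth core
```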

As the neurons spike and communicate with other neurons while processing information, the connections between them are strengthened, which is what leads to improved performance (the learning element). That learning takes place on the chip and doesn’t require enormous training data sets, but it still requires knowing what the question is, and how to gauge a correct answer.

Source: Rethink Research