In the quest for ever faster and more efficient computing, researchers and manufacturers are busy exploring novel processing architectures. Among these, neuromorphic computing—the emulation of brain function inside computer chips—is showing particular promise for applications involving deep learning, an increasingly common form of artificial intelligence (AI) that uses neural networks inspired by brains to uncover patterns in large datasets.

In traditional machine learning based on conventional computer hardware, the memory and processing nodes are physically separated. In contrast, neuromorphic computer hardware mimics neurons and places both functions in the same spot. By eliminating the need to transfer data back and forth between processing and storage sites, this architecture can substantially reduce computing time and power requirements for certain learning tasks such as pattern recognition and classification.
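
As a rough software analogy only (no real chip works like this Python sketch), the data-movement difference can be pictured as follows: a conventional pattern fetches weights from a separate store before every compute step and writes results back afterward, while a neuromorphic-style node keeps its stored weights and its update rule in one object. The toy update rule is invented purely for illustration.

```python
import numpy as np

# Conventional pattern: weights live in a separate "memory" and must be
# fetched before every compute step, then written back after learning.
memory = {"weights": np.zeros(4)}

def conventional_step(x, lr=0.1):
    w = memory["weights"]            # fetch from separate storage
    y = float(w @ x)                 # compute
    memory["weights"] = w + lr * x   # ship the updated weights back
    return y

# Neuromorphic-style pattern: each node keeps its weights and its
# learning rule in the same place, so nothing is shuttled around.
class Node:
    def __init__(self, n_inputs):
        self.w = np.zeros(n_inputs)  # memory co-located with compute

    def step(self, x, lr=0.1):
        y = float(self.w @ x)        # compute locally...
        self.w += lr * x             # ...and update locally
        return y
```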

While the concept of neuromorphic computing originated in the late 1980s, its trajectory has been hampered by the slow pace of algorithm development, the need for novel materials with which to build the joint memory/processing nodes, and challenges in scaling up. Early neuromorphic neural networks had no “plasticity,” said Thomas Cleland, a professor of psychology at Cornell University in Ithaca, NY, USA; once they were set up and trained to do a particular task, that was it—to do something different they needed to be rebuilt and retrained. That constraint was “extremely limiting,” said Cleland.

Technical advances have now largely overcome this constraint. “One of the fundamental advances in AI over the last decade is coming up with faster and better ways to do learning,” said Gabriel Kreiman, a professor of ophthalmology and associate director of the Center for Brains, Minds and Machines at Harvard Medical School in Cambridge, MA, USA. “Implanting plasticity directly on the hardware so it can be retrained without starting from scratch can be quite transformative.”

Two new applications of neuromorphic computing showcase the potential of this kind of design to efficiently solve a wide array of problems with great speed and minimal power expenditure: an electronic nose that can learn the scent of a chemical after just one exposure [1] and a machine-vision device with an image sensor that doubles as an artificial neural network and can process images thousands of times faster than conventional technology [2,3].

The electronic nose is a “one-shot learning” olfaction system Cleland built with Nabil Imam, an engineer at Intel’s Neuromorphic Computing Laboratory in Santa Clara, CA, USA. The system is powered by Intel’s fifth-generation neuromorphic chip (Fig. 1 [1]), Loihi, which contains 128 core processing units, each with a built-in learning module, and more than 130 000 computational “neurons” linked to thousands of their neighbors [4].

《Fig. 1》

Fig. 1. Cornell University and Intel researchers built their electronic nose, which can learn the scent of a chemical after just one exposure, on top of Loihi, Intel’s fifth-generation research chip for neuromorphic computing [1]. The chip, shown here, places memory and processing nodes within individual modules to enable super-efficient detection of odors and other patterned stimuli [4]. Credit: Tim Herman/Intel Corporation.

Cleland and Imam evaluated their system by pitting it against a traditional neural network in a smell test of ten odors wafting through a wind tunnel outfitted with 72 metal-oxide gas sensors (data derived from a publicly available dataset [5]). Training for the neuromorphic system involved a single exposure to each odor, while hundreds of trials went into training the traditional AI. Every learned smell made up only 20%–80% of the overall tested aroma, reflecting real-world conditions in which numerous odors blend with one another. The neuromorphic AI identified the target odor 92% of the time, compared to 52% of the time for the traditional AI [1].
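
The published system runs an algorithm modeled on the mammalian olfactory bulb on Loihi [1]; that algorithm is not reproduced here. As a far simpler stand-in that conveys the shape of the benchmark, the sketch below “trains” a nearest-prototype classifier on a single clean reading per odor and then tests it on mixtures in which the target contributes only 20%–80% of the signal. The sensor signatures are random placeholders, so the printed accuracy is illustrative rather than the 92% reported in Ref. [1].

```python
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS, N_ODORS = 72, 10          # sizes from the wind-tunnel dataset [5]

# One-shot "training": store a single clean 72-sensor reading per odor.
clean = rng.random((N_ODORS, N_SENSORS))   # placeholder clean signatures
prototypes = clean.copy()                  # one exposure each, no iteration

def classify(sample):
    # Nearest prototype by cosine similarity: a crude stand-in for the
    # olfactory-bulb-inspired algorithm actually used in Ref. [1].
    sims = prototypes @ sample / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(sample) + 1e-12)
    return int(np.argmax(sims))

# Test: the target odor makes up only 20%-80% of the sensed mixture,
# with the remainder drawn from a random background.
trials, correct = 1000, 0
for _ in range(trials):
    target = rng.integers(N_ODORS)
    frac = rng.uniform(0.2, 0.8)
    sample = frac * clean[target] + (1 - frac) * rng.random(N_SENSORS)
    correct += classify(sample) == target
print(f"accuracy on mixtures: {correct / trials:.0%}")
```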

"We can train our algorithm once on a clean odor, like orange or amyl acetate [a banana-like scent], and present that odor against many different backgrounds,” Cleland said. "You could test it in a bakery, a garbage dump, or a swamp, and it would be able to recognize that odor.”

Training of standard AI, in addition to being time-consuming and power-hungry, has to start from scratch every time a new smell is added. The neuromorphic AI, on the other hand, can keep learning new scents simply by adding new “neurons” to the network. Cleland is now trying to adapt the system to work in autonomous robots. “We would like to be able to train it within seconds, and have it accurately detect odors, even if they are deeply obscured by uncontrolled contaminants,” he said. “We do not want to have to say, ‘Oh yeah, it does not work when things are acidic or when it is too humid or whatever.’”
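
In a prototype-style toy like the sketch above, that kind of continual learning reduces to appending a new row of stored weights without touching the old ones. The real system adds spiking neurons with their own local learning rules on Loihi [1,4]; the class below, with hypothetical names throughout, is only a minimal illustration of the grow-rather-than-retrain idea.

```python
import numpy as np

class OneShotNose:
    """Toy continual learner: one stored signature ("neuron") per scent."""

    def __init__(self, n_sensors):
        self.protos = np.empty((0, n_sensors))   # no scents learned yet

    def learn(self, clean_reading):
        # A new scent appends a row; existing rows are untouched, so
        # previously learned scents never need to be retrained.
        self.protos = np.vstack([self.protos, clean_reading])
        return len(self.protos) - 1              # index of the new scent

    def classify(self, sample):
        return int(np.argmax(self.protos @ sample))

nose = OneShotNose(n_sensors=72)
orange = nose.learn(np.random.rand(72))   # one exposure to "orange"
banana = nose.learn(np.random.rand(72))   # later: a new "neuron", no retraining
```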

Potential applications for the system include air quality monitoring, toxic waste identification, land mine detection, trace drug detection, and medical diagnoses. However, the algorithm is not limited to chemosensation, Cleland said. He and his team have used it to classify ground cover types from hyperspectral satellite images and differentiate frog calls in South American jungles [6]. “We can work with anything where we have a sufficient number of sensors,” he said. “The one caveat is the sensors need to be good enough to detect the things you want to detect.”

While Cleland and Imam leveraged Intel’s Loihi chip, researchers at Vienna University of Technology (TU Wien) have designed their own neuromorphic chip that enables incredibly fast image processing (Fig. 2 [2,3]). Machine vision technology typically involves cameras scanning image pixels row by row, converting video frames to digital signals, then transmitting the data to off-board computers for analysis—all of which cause significant delays. The TU Wien group sought to speed up this process by developing an image sensor that itself functions as an artificial neural network capable of simultaneously acquiring and analyzing images. “Combining sensing with computing in one step really opens up a whole new direction for image interpretation,” said Lukas Mennel, a graduate student at the TU Wien Photonics Institute in Austria.

《Fig. 2》

Fig. 2. (a) The image sensor chip developed by TU Wien researchers doubles as an artificial neural network that processes images thousands of times faster than conventional techniques [2,3]. (b) The artificial neural network auto-encodes noise-free images projected onto the sensor into a current code, which is converted into a binary activation code and finally reconstructed into an image by the decoder [2,3]. Once trained, the auto-encoder can take noisy inputs and reconstruct the projected images. Credit: TU Wien, with permission.

The new sensor consists of a three-by-three array of pixels, each of which represents a neuron [2]. Each pixel in turn consists of three photodiodes, each of which represents a synapse. Each photodiode is made from three-atom-thick sheets of tungsten diselenide, a semiconductor with a tunable response to light. Such tunability allows the photodiodes to remember and respond to light in a programmable way.
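
In effect, the array performs a matrix-vector multiplication with light: each photodiode’s tunable responsivity acts as a stored weight, and the summed photocurrents form the network’s output signals [2,3]. The sketch below mirrors only that arithmetic, with the array size taken from the article and made-up responsivity values; the real device computes the sum in analog as the photocurrents flow, which is why there is no separate readout-then-compute step.

```python
import numpy as np

N_PIXELS, N_OUTPUTS = 9, 3   # 3 x 3 pixel array; three photodiodes per pixel

# One weight per (output channel, pixel): tuning a photodiode's
# responsivity (its current per unit of light) sets that weight.
R = np.random.uniform(-1.0, 1.0, size=(N_OUTPUTS, N_PIXELS))  # placeholder weights

def sense_and_compute(image):
    """One "forward pass": summed photocurrents become output currents."""
    p = image.reshape(N_PIXELS)      # optical power landing on each pixel
    return R @ p                     # in the device, physics does this sum

letter = np.random.rand(3, 3)        # stand-in for a projected letter
scores = sense_and_compute(letter)
predicted = int(np.argmax(scores))   # e.g., telling "n", "v", "z" apart [3]
```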

To test their system, the TU Wien researchers used lasers to project the letters “n,” “v,” and “z” onto the neural network image sensor [3]. The sensor correctly processed the images of the letters at the equivalent of 20 million frames per second (fps). In contrast, conventional machine vision technology could process the images at no more than about 1000 fps.

Mennel said the sensor’s speed is limited only by the speed of the electrons in the circuits and that, theoretically, the system could operate a few orders of magnitude faster than what they have reported. In addition to the ultra-fast processing, the image sensor does not consume any electrical power when in operation. Rather, the sensed photons themselves provide the necessary electric current to power the sensor.

The TU Wien image sensor technology has a variety of high-speed applications, including fracture mechanics—determining which direction cracks propagate from—and particle detection—figuring out which of several possible particles has just passed by. While in theory the system could handle complex tasks such as guiding autonomous vehicles, it would need to be scaled up significantly, Mennel said. “So, the obvious next step is scaling up, which should be fairly easy since people are now able to build sensors with millions of pixels.”

Based on these results, it looks like neuromorphic computing could become an important part of the digital future. “The amount of power consumed by current machine-learning approaches is enormous, often prohibitively so,” Kreiman said. “Neuromorphic computing shows potential to revolutionize the way we think about computation, in terms of enabling certain approaches that are currently not feasible, and at a fraction of the cost.”