
Intel's Hala Point, the world's largest neuromorphic computer, has 1.15 billion neurons

April 18, 2024 | Hi-network.com
[Image: Intel's Hala Point system case being lifted into place. Credit: Intel]

Three years after introducing its second-generation "neuromorphic" computer chip, Intel on Wednesday announced the company has assembled 1,152 of the parts into a single, parallel-processing system called Hala Point, in partnership with the US Department of Energy's Sandia National Laboratories.

The Hala Point system's 1,152 Loihi 2 chips provide a total of 1.15 billion artificial neurons, Intel said, "and 128 billion synapses distributed over 140,544 neuromorphic processing cores." That is an increase over Intel's previous multi-chip Loihi system, Pohoiki Springs, which debuted in 2020 and used just 768 first-generation Loihi chips.

Sandia Labs intends to use the system for what it calls "brain-scale computing research," to solve problems in areas of device physics, computer architecture, computer science, and informatics. 

Also: Intel rolls out second-gen Loihi neuromorphic chip with big results in optimization problems

"For the first time we're showing standard deep neural networks being mapped and transformed into a form that can run at this kind of scale in a neuromorphic system," Mike Davies, the head of Intel's Neuromorphic Computing Lab, told . "That is a first for anybody, to show that standard deep neural networks can actually, with some caveats, run with the competitive efficiency on par with the very best GPUs and, ASICs [application-specific integrated circuits] that are being produced now."

Neuromorphic computing is an umbrella term given to a variety of efforts to build computation that resembles some aspect of the way the brain is formed. The term goes back to early 1980s work by legendary computing pioneer Carver Mead, who was interested in how the increasingly dense collections of transistors on a chip could best communicate. Mead's insight was that the wires between transistors would have to achieve some of the efficiency of the brain's neural wiring.

There have been many projects since then, including work by Winfried Wilcke of IBM's Almaden Research Center in San Jose, the TrueNorth chip effort at IBM, and Intel's Loihi project. ZDNet's Scott Fulton III has a great roundup of some of the most interesting developments in neuromorphic computing.

Also: Intel, partners make new strides in Loihi neuromorphic computing chip development

The premise of most neuromorphic chips is that replicating the asynchronous "spikes" of the brain's neurons is a more efficient approach than using billions of neural net "weights," or "parameters," that repetitively transform every data point.
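
To make that contrast concrete, a conventional network multiplies every input by every weight on every step, while a spiking design only does work when an input fires. The sketch below is an illustrative Python comparison, not Intel's Loihi programming model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_neurons = 1000, 100
weights = rng.normal(size=(n_neurons, n_inputs))

# Conventional deep-learning step: every weight multiplies every input,
# whether or not that input carries new information.
dense_input = rng.normal(size=n_inputs)
dense_output = weights @ dense_input              # n_neurons * n_inputs multiply-adds

# Event-driven (spiking) step: only the inputs that actually spiked
# contribute, so the work scales with activity, not with model size.
spikes = rng.random(n_inputs) < 0.02              # ~2% of inputs are active
sparse_output = weights[:, spikes].sum(axis=1)    # work proportional to spike count

print("dense multiply-adds:", n_neurons * n_inputs)
print("event-driven adds:  ", n_neurons * int(spikes.sum()))
```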

Intel's focus with neuromorphic computing has been mostly in "edge" computing devices, such as slimmed-down server computers with embedded processors rather than Xeon-class machines.

[Image: Hala Point specifications. Credit: Intel]

The Hala Point machine is an effort by Davies and team to explore how far neuromorphic computing can scale. 

"There is a really compelling long-term vision for scaling up on the basic science level," Davies said. "We all have the human brain scale in mind; it'd be great to build a system that large and show it doing something even close to what a human brain can achieve." The human brain is believed to have a trillion neurons, though not necessarily all functioning at the same time.

Increasing scale may be important for revealing what neuromorphic computing is capable of. Just as large language models such as OpenAI's ChatGPT gain what are called "emergent" capabilities as neural net models and compute budgets grow, "we believe that the same scaling advantages and trends we'll see with neuromorphic systems as well," Davies said. 

(An "AI model," or "neural net model," is the part of an AI program that contains numerous neural net parameters and activation functions that are the key elements for how an AI program functions.)

Also: Why neuromorphic engineering triggered an analog revolution

Hala Point is capable of 20 quadrillion operations per second at an efficiency of 15 trillion operations per second per watt, using 8-bit math. That energy efficiency is superior to what GPU chips and CPUs achieve, Intel claims.
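
Taken at face value, those two figures also pin down the system's rough power draw. The quick back-of-envelope check below uses only the numbers Intel quotes:

```python
total_ops_per_s = 20e15        # 20 quadrillion operations per second
ops_per_s_per_watt = 15e12     # 15 trillion operations per second per watt

implied_power_watts = total_ops_per_s / ops_per_s_per_watt
print(f"Implied power draw: {implied_power_watts:,.0f} W")   # roughly 1,333 W
```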

Beyond these metrics, Intel is still learning about the kinds of productivity gains that can come from such a scaled-up neuromorphic system. To prove the worth of Hala Point, Intel is focused on how the system performs on hard optimization problems, such as those that arise in drug development. At a small scale, on a per-chip basis, the Loihi 2 parts can be up to 50 times faster than conventional chips, Davies said.

"We're very excited by those kinds of speed-ups we've observed," Davies said, "and a hundred to a thousand times energy savings at that scaled-up level in Hala Point could be tremendous savings, and really valuable for scientific problems."

Although it is a research system, Hala Point can help to reveal neuromorphic advantages that could be implemented in the next version of Loihi, or in smaller edge configurations, Davies said.

Also: Intel Labs searches for chip giant's next act in quantum, neuromorphic advances

"If we find something at a very large scale that really performs well, we then can think about ways to specialize the architecture so that we could shrink that down to a scale that could fit within a smaller edge form factor," he said. 

Neuromorphic is "never going to replace GPUs or today's deep learning accelerators for the types of workloads that run well," Davies said. However, recent research suggests there are areas of high-performance computing where it can have an edge. 

At a conference in South Korea this week, Intel scientists Sumit Bam Shrestha and team are presenting findings of a research paper comparing the Loihi 2 chip to Nvidia's Jetson edge-computing platform and to an embedded Intel Core i9 processor. 

The applications include PilotNet, a deep neural network that calculates "the steering angle of a car based on the input from a dashboard RGB camera." The network processes frames of video with a "convolutional neural network," or CNN, an architecture that has been used extensively in AI for decades.
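
In schematic terms, that task boils down to a small convolutional network mapping a camera frame to a single steering value. The sketch below is an illustrative stand-in with made-up layer sizes, not NVIDIA's actual PilotNet architecture or Intel's converted version:

```python
import torch
import torch.nn as nn

# Illustrative frame-to-steering-angle CNN; layer sizes are hypothetical.
class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # one output: the steering angle

    def forward(self, frame):
        x = self.features(frame).flatten(1)
        return self.head(x)

# One 66x200 RGB dashboard frame (batch of 1).
frame = torch.rand(1, 3, 66, 200)
angle = SteeringNet()(frame)
print(angle.shape)   # torch.Size([1, 1])
```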

Also: Neuromorphic computing finds new life in machine learning

"We find that these new neuromorphic approaches can provide orders of magnitude gains in combined efficiency and latency (energy-delay-product) for feed-forward and convolutional neural networks applied to video, audio denoising, and spectral transforms compared to state-of-the-art solutions," Shrestha and the team wrote.

Because the Loihi part uses asynchronous spiking functions, the chip only computes when the data changes. That saves compute energy when data is redundant, as is often the case for video or images in which many pixels don't change from frame to frame.

"If there's a signal where there's temporal continuity in the input stream, the architecture can take advantage of the fact that there sometimes is no change, a given pixel doesn't change, so therefore it doesn't need to recompute the entire frame," Davies explained.

The "next priority" for Intel is commercialization of neuromorphic computing, Davies said. "I would say we're a couple years away from commercialization, but no more than that," Davies said. Intel is not in any rush to be the first to commercialize, he said. "Our interest has been in making sure that when we do commercialize, we're giving the biggest factor of gains that we can, to provide the greatest value differentiated from the existing technologies, existing architectures."

