Neuromorphic Computing Solves the Energy Problem of Tomorrow

Scalable, Small, Fast and Incredibly Energy Efficient.

Technology Article

7 min read
May 31, 2022

The Human Brain as Benchmark

In terms of energy efficiency, the human brain outperforms any artificial intelligence by roughly five orders of magnitude. Language understanding and image recognition are two areas in which the gap between the brain and conventional computers has been studied particularly closely. Depending on the setup, for example, a trained AI seldom achieves an image recognition accuracy above 90 percent.

This is a challenge for AI developers, and it is compounded by a serious energy problem. Training a large-scale speech recognition AI generates over 300 tonnes of CO2, roughly the amount an average American emits over 17 years.

Neuromorphic computing refers to processing data in a manner similar to the human brain: the behavior of neurons is imitated by microelectronic components, making enormous computing power and speed possible.

In an interview with RFID & Wireless IoT Global, Dr. Thomas Kämpfe, Group Leader Integrated RF & AI at the Fraunhofer Institute for Photonic Microsystems IPMS, explains why the future of computing no longer lies in Central Processing Units (CPUs) but in Neural Processing Units (NPUs).

Edge Computing

Data Processing with High Energy Consumption and Low Computing Power

The Internet of Things (IoT) is expected to grow by more than 25 percent per year globally through 2028. Home offices, smart homes, smart health, cloud computing, e-commerce – the number of data-generating applications woven into the everyday lives of ever larger population groups is rising significantly. Industry and industry-related research add supercomputers, autonomous driving and novel AI algorithms. Global data volumes are growing by around 40 percent annually, and in 2020, 97 percent of all network-based global data traffic was already processed via data centers.

Data centers are particularly important in the field of machine learning. Training an algorithm for speech or image recognition is very complex and can take months of uninterrupted computing; without such algorithms, large data sets are far harder to analyze. Yet in the face of growing data streams, data centers are not as powerful as desired: most of the time only 45 percent of their computing capacity is used, and 100 percent utilization rarely if ever occurs. For real-time applications, on the other hand, conventional computer architectures are simply not fast enough.

Dr. Thomas Kämpfe’s work sits at this intersection between the pursuit of energy-efficient computing and the demand for high-performance components. “Solving the energy problem of computing by increasing efficiency – this is a main focus of current semiconductor industry research, and this is also where the contribution of Fraunhofer’s lead project NeurOSmart, of which my research is a part, stems from,” explains Thomas Kämpfe. With his team, he is developing microelectronic chips modeled on human neural networks in terms of performance and efficiency. Their energy consumption is extremely low.

The Energy Problem of Computing

The carbon footprint of all data centers worldwide is about 100 megatons of CO2 per year, comparable to the annual footprint of American commercial aviation before the COVID-19 pandemic. Projections suggest this footprint will increase two- to ninefold over the next decade. By 2030, electricity consumption by data centers alone may reach 974 TWh; for comparison, Germany consumed 565 TWh of electricity in 2021. The high power consumption is due to the von Neumann architecture, the construction principle underlying conventional computers.

Essentially, all communication in such a computer runs via the bus: the input is passed to the bus, which forwards it to the arithmetic unit and the memory, and finally to output devices or other connected peripherals. Because everything passes through the bus, there are four points in this split architecture where a data transfer must occur, and every transmission requires power.
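How much do these transfers cost? The following back-of-the-envelope sketch in Python uses rough per-operation energy figures from the semiconductor literature (on the order of 1 pJ for a 32-bit floating-point add versus hundreds of pJ for fetching a 32-bit word from off-chip DRAM); the exact values are illustrative assumptions, not measurements from the article.

```python
# Back-of-the-envelope model: energy of an n-element dot product when every
# operand must cross the bus to DRAM vs. when operands stay in on-chip SRAM.
# Per-operation energies are rough literature estimates (illustrative only).
PJ_FP32_ADD = 0.9      # ~energy of one 32-bit floating-point add
PJ_FP32_MUL = 3.7      # ~energy of one 32-bit floating-point multiply
PJ_DRAM_WORD = 640.0   # ~energy to move one 32-bit word to or from DRAM
PJ_SRAM_WORD = 10.0    # ~energy to read one 32-bit word from small on-chip SRAM

def dot_product_energy_pj(n: int, word_cost_pj: float) -> float:
    """n multiplies, n - 1 adds, and 2n operand fetches at the given cost."""
    compute = n * PJ_FP32_MUL + (n - 1) * PJ_FP32_ADD
    movement = 2 * n * word_cost_pj
    return compute + movement

n = 1_000_000
via_dram = dot_product_energy_pj(n, PJ_DRAM_WORD)
via_sram = dot_product_energy_pj(n, PJ_SRAM_WORD)
print(f"via DRAM: {via_dram / 1e6:.0f} µJ, via SRAM: {via_sram / 1e6:.0f} µJ")
print(f"fetching from DRAM makes the job ~{via_dram / via_sram:.0f}x more expensive")
```

Even in this crude model, moving the operands around costs far more than computing with them, which is the heart of the von Neumann bottleneck.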


The Efficiency of Computer and Server Architectures

This also explains the low efficiency of conventional computers. In the von Neumann architecture, only one component can write to the bus at a time. Simultaneous write operations are not possible, which limits the computing speed of such a system and makes latency unavoidable. The situation is similar for data centers: data is transferred to them, processed, and transferred back again, latency included. To increase computing speed, however, computing services can be provided closer to where the data originates, without a round trip to a server.

This is the principle of edge computing: instead of transferring all data to a central server and back again in an energy- and time-consuming manner, computation is performed at the edge of the network. This minimizes latency and enables real-time services. Edge computing in this sense is also referred to as “near-memory computing”: data processing close to the memory.

In-Memory Computing

Our Declared Goal is to Reduce Power Consumption in Data Centers

Thomas Kämpfe, Group Leader Integrated RF & AI at Fraunhofer IPMS, researches microelectronic components with particularly low energy consumption. His research focuses on in-memory computing. “With in-memory computing, data is stored in the computing unit. Everything is processed in one place. Intermediate storage is avoided wherever possible, which is why less energy is needed,” explains Thomas Kämpfe.

The miniaturization of components is a focus of industry-oriented research at Fraunhofer IPMS. Work has long been underway there to bring these processes down to chip scale. Various types of memory are being tested in combination with different transistors. The goal is to develop a memory that is small yet performant.
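One common way to picture in-memory computing, described widely in the literature (the article does not specify which memory technology Fraunhofer IPMS uses, so this is a generic, hedged sketch): weights are stored as conductances in a memory crossbar, input voltages are applied to the rows, and by Ohm's and Kirchhoff's laws the column currents deliver the matrix-vector product right where the weights are stored, with no weight ever crossing a bus.

```python
import numpy as np

# Minimal simulation of an analog memory crossbar: weights live in the array
# as conductances, and the matrix-vector product appears as column currents.
rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1.0, size=(4, 3))  # conductances = stored weights (4 rows, 3 columns)
V = np.array([0.2, 0.5, 0.0, 0.8])      # input voltages applied to the 4 row lines

# Ohm's law: each cell contributes G[i, j] * V[i] to its column.
# Kirchhoff's current law: each column wire sums its cells' currents.
I = G.T @ V                              # column currents = matrix-vector product

print("column currents (the computed weighted sums):", I)
```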

Dr. Thomas Kämpfe, Group Leader Integrated RF & AI, Fraunhofer IPMS

“Our declared goal is to reduce power consumption in data centers. With in-memory computing, data is stored in the computing unit. Everything is processed in one place. Intermediate storage is avoided wherever possible, which is why less energy is needed.”

– Dr. Thomas Kämpfe, Group Leader Integrated RF & AI at Fraunhofer IPMS

Pilot Project with Industrial Robots

In an ongoing pilot project, the cooperation between humans and robots is captured via LiDAR. The emphasis is primarily on AI-based detection of humans in the close range of a robot. The robot perceives its environment accurately and three-dimensionally using LiDAR sensors.

The goal of the project is to adapt the robot’s motion to the human’s motion. This motion adaptation must happen very quickly: latency must not open up any safety-relevant gap. For applications of this kind, a new generation of accelerators for AI algorithms is being developed at Fraunhofer IPMS to close this gap.
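To illustrate why latency is safety-critical here, the following is a minimal, hypothetical sketch of such a control loop; all function names, thresholds, and values are invented for illustration and are not taken from the Fraunhofer project.

```python
import numpy as np

# Hypothetical safety loop: slow the robot as a person gets closer.
# Thresholds and names are illustrative assumptions, not project values.
STOP_DISTANCE_M = 0.5   # inside this radius the robot must stop
SLOW_DISTANCE_M = 2.0   # inside this radius the robot slows proportionally

def speed_factor(person_points: np.ndarray, robot_pos: np.ndarray) -> float:
    """Return a speed scaling factor in [0, 1] from the nearest person point."""
    if person_points.size == 0:
        return 1.0  # nobody detected: full speed
    nearest = np.min(np.linalg.norm(person_points - robot_pos, axis=1))
    if nearest <= STOP_DISTANCE_M:
        return 0.0  # emergency stop
    if nearest >= SLOW_DISTANCE_M:
        return 1.0
    # linear ramp between the stop radius and the slow radius
    return (nearest - STOP_DISTANCE_M) / (SLOW_DISTANCE_M - STOP_DISTANCE_M)

# Example: one detected person 1.25 m away, so the robot runs at half speed.
points = np.array([[1.0, 0.75, 0.0]])
print(speed_factor(points, robot_pos=np.zeros(3)))  # 0.5
```

Every millisecond spent in `speed_factor` and in the AI-based person detection feeding it adds to the robot’s reaction time, which is why the inference itself must be accelerated.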

Robot systems are equipped with a complex LiDAR system developed at Fraunhofer so that they can recognize their surroundings with the help of detailed proximity information even under poor weather conditions and over a wide range of distances.

AI and Complex Algorithms

Merely processing large amounts of data in (near) real time is often already referred to as an AI application.

“This is very far from cognitive artificial intelligence, which would have consciousness, for example,” explains Thomas Kämpfe. “AI, in today’s parlance, starts where you develop an algorithm that can process something you could not process before, and that may even be predictive. Some of that, however, is better classified as machine learning. That is the development step before AI, where complex regression analyses are run over large amounts of data to generate results. An AI, by contrast, does not simply run analyses: ultimately, with an AI, you have no way of knowing exactly how it is evaluating the data and why the algorithm chose the solution variant that it did.”
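To make the distinction concrete, here is a minimal example of the kind of regression analysis Kämpfe places on the machine-learning side; the data is synthetic and purely illustrative.

```python
import numpy as np

# Machine learning in its simplest form: fit a linear model to data by least
# squares. Every coefficient is inspectable, so the result can be explained.
rng = np.random.default_rng(42)

x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 5.0 + rng.normal(0.0, 1.0, size=200)  # synthetic data: y ≈ 3x + 5

A = np.column_stack([x, np.ones_like(x)])           # design matrix [x, 1]
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

print(f"recovered model: y = {slope:.2f} * x + {intercept:.2f}")
```

Unlike a deep network, every coefficient of this model can be read off and checked, which is exactly the transparency Kämpfe says an AI lacks.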


Neuromorphic Computing

“In-memory computing tries to speed up or simplify operations that AI algorithms need constantly,” explains Thomas Kämpfe. Neuromorphic computing is the next step in this direction. Building on in-memory computing, it involves developing networks of electronic neurons that enable real-time computing performance. Biological neurons are stimulated by an electrical charge transmitted via a nerve impulse and in turn pass information on to other neurons via synapses. It is the same with artificial neurons in neuromorphic chips: the individual neurons are connected to each other by synapses.

Not every neuron is needed for every computational operation; only some are involved. Each connection that participates in a given operation carries a weight, and how a neuron responds can be calculated as the weighted sum of all its inputs. A neuromorphic chip organizes and regulates itself much like the human brain. Data processing and data storage merge within it, and it adapts plastically to the required performance depending on the task at hand. In other words, a neuromorphic chip can handle tasks for which it was not explicitly programmed.
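A minimal sketch of the weighted sum just described, with made-up weights for illustration: an artificial neuron scales each input by its synaptic weight, sums the results, and fires once the sum crosses a threshold.

```python
import numpy as np

# One artificial neuron: inputs arrive over weighted synapses; the neuron
# fires (emits a spike) only if the weighted sum crosses its threshold.
weights = np.array([0.8, -0.4, 0.3])   # synaptic weights (illustrative values)
threshold = 0.5

def neuron(inputs: np.ndarray) -> bool:
    return float(weights @ inputs) >= threshold

print(neuron(np.array([1.0, 0.0, 0.0])))  # weighted sum 0.8,  neuron fires
print(neuron(np.array([0.5, 1.0, 0.5])))  # weighted sum 0.15, neuron stays silent
```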

After completing its tasks, it reconfigures itself without drawing further power. For this reason alone, neuromorphic chips require very little energy. Furthermore, there is no need for a constantly applied energy pulse: an artificial neuron starts its work the moment a stimulus arrives and finishes it fractions of a second later.
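This event-driven behavior can be sketched with a leaky integrate-and-fire neuron, a standard model in the neuromorphic literature (the article does not name the neuron model used at Fraunhofer IPMS, so the model and all constants below are assumptions): the neuron’s state is updated only when an input event arrives; between events, nothing is computed.

```python
import math

# Leaky integrate-and-fire neuron, updated only when input events arrive.
# Between events the membrane potential simply decays; no computation runs.
TAU = 0.020       # membrane time constant in seconds (illustrative)
THRESHOLD = 1.0   # firing threshold (illustrative)

potential, last_t = 0.0, 0.0
events = [(0.001, 0.6), (0.003, 0.5), (0.030, 0.4)]  # (time in s, input charge)

for t, charge in events:
    potential *= math.exp(-(t - last_t) / TAU)  # decay since the last event
    potential += charge                          # integrate the new stimulus
    last_t = t
    if potential >= THRESHOLD:
        print(f"spike at t={t * 1000:.0f} ms")
        potential = 0.0                          # reset after firing
```

Here only the second event pushes the potential over the threshold and produces a spike; the rest of the time the neuron consumes nothing.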

Neuromorphic Semiconductor Chip from Fraunhofer IPMS
