New algorithm predicts CPU power consumption over a trillion times per second while requiring little power or circuitry of its own
Computer engineers at Duke University have developed a new AI method that accurately predicts the power consumption of any type of computer processor more than a trillion times per second while using barely any computing power of its own. Dubbed APOLLO, the technique has been validated on real-world high-performance microprocessors and could help improve processor efficiency and inform the development of new microprocessors.
The approach is detailed in a paper presented at MICRO-54, the 54th Annual IEEE/ACM International Symposium on Microarchitecture, one of the leading conferences in computer architecture, where it was selected as the conference's best publication.
“This is an intensely studied problem that has traditionally relied on additional circuitry to solve,” said Zhiyao Xie, the paper’s first author and a doctoral student in the lab of Yiran Chen, professor of electrical and computer engineering at Duke. “But our approach runs directly on the chip in the background, which opens up a lot of new opportunities. I think that’s why people are so excited about it.”
In modern computer processors, computations occur on the order of three trillion times per second. Keeping track of the power consumed by such rapid transitions is important to maintaining the performance and efficiency of the entire chip. If a processor draws too much power, it can overheat and be damaged, and sudden swings in power demand can cause internal electromagnetic complications that slow the entire processor down.
By implementing software that can predict and prevent these unwanted extremes, computer engineers can protect their hardware and improve its performance. But such safeguards come at a cost. Keeping pace with modern microprocessors typically requires valuable additional hardware and computing power.
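As a toy illustration of the kind of software safeguard described above (not APOLLO itself), a guard loop might compare each predicted power value against a budget and throttle preemptively. The budget, the units, and the `set_frequency` throttle interface here are invented for the sketch:

```python
# Hypothetical guard: if the predicted per-cycle power exceeds a budget,
# back off the clock before the spike actually lands. All names and the
# 95 W figure are illustrative placeholders.
POWER_BUDGET_W = 95.0

def guard(predicted_power_w, set_frequency):
    """Clamp frequency when the power prediction exceeds the budget.

    Returns True if the guard throttled, False otherwise.
    """
    if predicted_power_w > POWER_BUDGET_W:
        set_frequency("low")   # throttle before the spike materializes
        return True
    set_frequency("high")      # otherwise run at full speed
    return False
```

The value of a fast predictor is precisely that a check this cheap can run every cycle; a slow or power-hungry predictor would defeat the purpose.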
“APOLLO approaches an ideal power estimation algorithm that is both precise and fast and can easily be integrated into a processing core at low energy cost,” said Xie. “And because it can be used in any type of processing unit, it could become a common component in future chip design.”
The secret of APOLLO’s power comes from artificial intelligence. The algorithm developed by Xie and Chen uses AI to identify and select only 100 of a processor’s millions of signals that most closely match its power consumption. It then builds a power consumption model from those 100 signals and monitors them to predict the performance of the entire chip in real time.
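The pipeline described above (prune millions of candidate on-chip signals down to about 100, then build a lightweight power model from them) can be sketched in plain Python. Everything below is illustrative, not APOLLO's actual training procedure: the synthetic toggle data, the correlation-based selection, and the gradient-descent linear fit are assumptions standing in for the paper's automated, data-driven method.

```python
import random

random.seed(0)

# Synthetic stand-in: n_signals candidate on-chip signals (toggle bits per
# cycle) and a per-cycle power value that secretly depends on a few of them.
n_cycles, n_signals, k = 2000, 500, 10
true_idx = random.sample(range(n_signals), 5)
signals = [[random.randint(0, 1) for _ in range(n_signals)]
           for _ in range(n_cycles)]
power = [sum(signals[t][i] * (i % 7 + 1) for i in true_idx) + random.random()
         for t in range(n_cycles)]

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / ((vx * vy) ** 0.5) if vx and vy else 0.0

# Step 1: keep only the k signals most correlated with measured power.
scores = [(abs(pearson([row[i] for row in signals], power)), i)
          for i in range(n_signals)]
selected = sorted(i for _, i in sorted(scores, reverse=True)[:k])

# Step 2: fit per-signal weights with stochastic gradient descent (a toy
# stand-in for the regularized regression a real method would use).
w, b, lr = [0.0] * k, 0.0, 0.01
for _ in range(150):
    for t in range(n_cycles):
        x = [signals[t][i] for i in selected]
        err = b + sum(wi * xi for wi, xi in zip(w, x)) - power[t]
        b -= lr * err
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]

# Step 3: per-cycle prediction is now a k-term dot product -- cheap enough
# to evaluate alongside the running workload.
def predict(cycle_signals):
    return b + sum(wi * cycle_signals[i] for wi, i in zip(w, selected))
```

The point of the sketch is the shape of the result: once the handful of informative signals is identified, each prediction costs only a short dot product, which is why the monitor itself needs so little power.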
Because this learning process is autonomous and data-driven, it can be implemented on most computer processor architectures, even those that have not yet been invented. And while it doesn’t require any human designer expertise to do its job, the algorithm could help human designers do theirs.
“Once the AI has selected its 100 signals, you can look at the algorithm and see what they are,” Xie said. “A lot of the selections make intuitive sense, but even if they don’t, they can provide feedback to designers by informing them of the processes most strongly correlated with power consumption and performance.”
The work is part of a collaboration with Arm Research, a computer engineering research organization that analyzes disruptions affecting the industry and creates advanced solutions years before deployment. With Arm Research's help, APOLLO has already been validated on some of today's highest-performing processors. But, the researchers say, the algorithm still needs testing and comprehensive evaluation on many more platforms before it can be adopted by commercial chipmakers.
“Arm Research works with and receives funding from some of the biggest names in the industry, like Intel and IBM, and forecasting power consumption is one of their top priorities,” Chen added. “Projects like this give our students the opportunity to work with these industry leaders, and these are the types of results that make them want to continue working with and hiring Duke graduates.”
This work was conducted as part of Arm Research’s AClass High Performance Processor Research Program and was partially supported by the National Science Foundation (NSF-2106828, NSF-2112562) and the Semiconductor Research Corporation (SRC).