Energy-Efficient NPU Technology Cuts AI Power Use by 44%


Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed energy-efficient NPU technology that demonstrates substantial performance improvements in laboratory testing.

Their specialised AI chip ran AI models 60% faster while using 44% less electricity than the graphics cards currently powering most AI systems, based on results from controlled experiments.

Put simply, the research, led by Professor Jongse Park from KAIST's School of Computing in collaboration with HyperAccel Inc., addresses one of the most pressing challenges in modern AI infrastructure: the enormous energy and hardware requirements of large-scale generative AI models.

Current systems such as OpenAI's ChatGPT-4 and Google's Gemini 2.5 demand not only high memory bandwidth but also substantial memory capacity, driving companies like Microsoft and Google to purchase hundreds of thousands of NVIDIA GPUs.

The memory bottleneck challenge

The core innovation lies in the team's approach to solving the memory bottleneck issues that plague existing AI infrastructure. Their energy-efficient NPU technology focuses on "lightweighting" the inference process while minimising accuracy loss, a critical balance that has proven difficult for previous solutions.

PhD student Minsu Kim and Dr Seongmin Hong from HyperAccel Inc., serving as co-first authors, presented their findings at the 2025 International Symposium on Computer Architecture (ISCA 2025) in Tokyo. The research paper, titled "Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization," details their comprehensive approach to the problem.

The technology centres on KV cache quantisation, which the researchers identify as accounting for most memory usage in generative AI systems. By optimising this component, the team enables the same level of AI infrastructure performance using fewer NPU devices compared to traditional GPU-based systems.
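
To see why the KV cache dominates memory, here is a rough back-of-envelope sketch. The model dimensions below are illustrative assumptions, not figures from the paper; the point is simply that quantising cached keys and values from 16-bit to 4-bit cuts their footprint by roughly four times.

```python
# Rough estimate of KV cache size for a transformer decoder.
# All model dimensions here are illustrative assumptions, not values
# from the Oaken paper or any specific deployed system.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, batch, bytes_per_value):
    # 2x for keys and values, stored for every layer, head, and token.
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_value

# Hypothetical 70B-class model serving a batch of 32 requests at 8k context.
fp16 = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=8192,
                      batch=32, bytes_per_value=2)
int4 = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128, seq_len=8192,
                      batch=32, bytes_per_value=0.5)

print(f"FP16 KV cache: {fp16 / 2**30:.1f} GiB")   # 80.0 GiB
print(f"INT4 KV cache: {int4 / 2**30:.1f} GiB")   # 20.0 GiB
```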

Technical innovation and architecture

The KAIST team's energy-efficient NPU technology employs a three-pronged quantisation algorithm: threshold-based online-offline hybrid quantisation, group-shift quantisation, and fused dense-and-sparse encoding. This approach allows the system to integrate with existing memory interfaces without requiring changes to the operational logic of current NPU architectures.
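
The general idea behind threshold-based, dense-and-sparse quantisation can be sketched in a few lines: values inside an offline-chosen threshold are quantised to a low bit width per group, while the rare outliers are stored separately at full precision. The thresholds, group size, and bit width below are assumptions for clarity and do not reproduce the authors' exact algorithm.

```python
import numpy as np

# Illustrative sketch of threshold-based dense-and-sparse KV quantisation.
# Parameters are assumed for demonstration, not taken from the Oaken paper.

def quantise_kv(x, threshold, group_size=64, bits=4):
    outlier_mask = np.abs(x) > threshold
    inliers = np.where(outlier_mask, 0.0, x)

    qmax = 2 ** (bits - 1) - 1                      # symmetric int4 range
    groups = inliers.reshape(-1, group_size)
    scales = np.abs(groups).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0
    q = np.clip(np.round(groups / scales), -qmax - 1, qmax).astype(np.int8)

    # Sparse side: keep only outlier positions and their original values.
    outliers = (np.flatnonzero(outlier_mask), x.ravel()[outlier_mask.ravel()])
    return q, scales, outliers

def dequantise_kv(q, scales, outliers, shape):
    x = (q.astype(np.float32) * scales).reshape(shape)
    idx, vals = outliers
    x.ravel()[idx] = vals                           # restore exact outliers
    return x

kv = np.random.randn(4, 64).astype(np.float32)      # toy KV tile
kv[0, 3] = 8.0                                       # inject an outlier
q, s, out = quantise_kv(kv, threshold=4.0)
print(np.abs(dequantise_kv(q, s, out, kv.shape) - kv).max())  # small error
```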

The hardware architecture incorporates page-level memory management techniques for efficient utilisation of limited memory bandwidth and capacity. Additionally, the team introduced new encoding techniques specifically optimised for the quantised KV cache, addressing the unique requirements of their approach.
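
Page-level management of a KV cache is conceptually similar to paged virtual memory: token positions map to fixed-size physical pages allocated on demand, so capacity is consumed in small chunks rather than one large contiguous reservation per request. The sketch below is a generic software illustration of that concept, not the memory controller described in the paper.

```python
# Generic sketch of page-level KV cache management (illustrative only).
PAGE_TOKENS = 16          # tokens per physical page (assumed value)

class PagedKVCache:
    def __init__(self, num_pages):
        self.free_pages = list(range(num_pages))
        self.block_tables = {}                     # sequence id -> [page ids]

    def append_token(self, seq_id, token_pos):
        table = self.block_tables.setdefault(seq_id, [])
        if token_pos // PAGE_TOKENS >= len(table):
            table.append(self.free_pages.pop())    # allocate a page on demand
        page = table[token_pos // PAGE_TOKENS]
        return page, token_pos % PAGE_TOKENS       # physical slot for this KV entry

    def release(self, seq_id):
        self.free_pages.extend(self.block_tables.pop(seq_id, []))

cache = PagedKVCache(num_pages=1024)
for pos in range(40):                              # 40 tokens span 3 pages
    cache.append_token("req-0", pos)
print(len(cache.block_tables["req-0"]))            # 3
```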

"This research, through joint work with HyperAccel Inc., found a solution in generative AI inference lightweighting algorithms and succeeded in developing a core NPU technology that can solve the memory problem," Professor Park explained.

"Through this technology, we implemented an NPU with over 60% improved performance compared to the latest GPUs by combining quantisation techniques that reduce memory requirements while maintaining inference accuracy."

Sustainability implications

The environmental impact of AI infrastructure has become a growing concern as generative AI adoption accelerates. The energy-efficient NPU technology developed by KAIST offers a potential path towards more sustainable AI operations.

With 44% lower power consumption compared to current GPU solutions, widespread adoption could significantly reduce the carbon footprint of AI cloud services. However, the technology's real-world impact will depend on several factors, including manufacturing scalability, cost-effectiveness, and industry adoption rates.

The researchers acknowledge that their solution represents a significant step forward, but widespread implementation will require continued development and industry collaboration.

Industry context and future outlook

The timing of this energy-efficient NPU breakthrough is particularly relevant as AI companies face increasing pressure to balance performance with sustainability. The current GPU-dominated market has created supply chain constraints and elevated costs, making alternative solutions increasingly attractive.

Professor Park noted that the technology "has demonstrated the potential of implementing high-performance, low-power infrastructure specialised for generative AI, and is expected to play a key role not only in AI cloud data centres but also in the AI transformation (AX) environment represented by dynamic, executable AI such as agentic AI."

The research represents a significant step towards more sustainable AI infrastructure, but its ultimate impact will be determined by how effectively it can be scaled and deployed in commercial environments. As the AI industry continues to grapple with energy consumption concerns, innovations like KAIST's energy-efficient NPU technology offer hope for a more sustainable future in artificial intelligence computing.

(Photo by Korea Advanced Institute of Science and Technology)

See also: The 6 practices that ensure more sustainable data centre operations

Want to learn more about cybersecurity and the cloud from industry leaders? Check out Cyber Security & Cloud Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.
