According to the study, the new technique has the potential to reduce energy usage by up to 95%. The algorithm, referred to as "Linear-Complexity Multiplication" (L-Mul), is based on integer addition, which requires far less energy than the floating-point multiplication typically used in AI workloads, as reported by TechSpot.

Floating-point numbers are currently essential in AI computations because they can represent extremely large and small values with the precision needed for accurate, complex calculations. That precision, however, comes at a high energy cost, a growing concern given how much electricity some AI models require. Running ChatGPT, for instance, consumes enough electricity to power 18,000 U.S. households, roughly 564 MWh per day. Analysts from the Cambridge Centre for Alternative Finance predict that by 2027 the AI industry could consume between 85 and 134 TWh annually.

The L-Mul algorithm tackles this by replacing complex floating-point multiplications with simpler integer additions. In testing, AI models maintained their accuracy while energy use dropped by 95% for tensor operations and 80% for scalar operations.
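To make the idea concrete, here is a minimal NumPy sketch of the core trick, assuming the standard decomposition of a float into sign, exponent, and mantissa: the exponents are summed and the expensive mantissa product is replaced with additions plus a small constant correction. The function name `l_mul` and the correction constant are illustrative choices for this sketch, not the paper's exact specification.

```python
import numpy as np

def l_mul(x, y, correction=2.0 ** -4):
    """Approximate x * y using additions on the exponent/mantissa decomposition.

    A float can be written as sign * (1 + m) * 2**e with m in [0, 1).
    The exact product is (1 + mx + my + mx*my) * 2**(ex + ey); an L-Mul-style
    multiplication drops the costly mx*my term and adds a small constant
    correction instead, so only additions remain. The correction value here
    is an assumption; the paper derives its own offset.
    """
    sign = np.sign(x) * np.sign(y)
    ax, ay = np.abs(x), np.abs(y)

    # np.frexp returns a mantissa in [0.5, 1); rescale to the (1 + m) * 2**e form.
    fx, ex = np.frexp(ax)
    fy, ey = np.frexp(ay)
    mx, my = 2.0 * fx - 1.0, 2.0 * fy - 1.0
    ex, ey = ex - 1, ey - 1

    # Additions only: exponents are summed, the mantissa product becomes a constant.
    # The final multiply by 2**(ex + ey) stands in for what hardware would do by
    # writing the exponent field of the result directly.
    return sign * (1.0 + mx + my + correction) * np.exp2(ex + ey)
```

As a quick illustration, `l_mul(6.0, 5.0)` returns 29.0 rather than 30.0: each individual product is approximate, and the claim is that across large tensor operations the accumulated results stay close enough to preserve model accuracy.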

L-Mul not only cuts energy consumption but also improves performance. It outperforms current 8-bit computation standards, delivering higher precision with fewer bit-level operations. Across various AI tasks such as natural language processing and computer vision, performance degradation was a mere 0.07%, a negligible trade-off considering the massive energy savings.
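One way to sanity-check the precision trade-off on synthetic data is to compare the `l_mul` sketch above against exact element-wise multiplication over random tensors. The data, sizes, and error metric below are arbitrary choices for illustration, not the paper's benchmark setup.

```python
rng = np.random.default_rng(0)
a = rng.standard_normal((1024, 1024)).astype(np.float32)
b = rng.standard_normal((1024, 1024)).astype(np.float32)

exact = a * b            # element-wise reference products
approx = l_mul(a, b)     # addition-based approximation from the sketch above
rel_err = np.abs(approx - exact) / (np.abs(exact) + 1e-12)
print(f"mean relative error: {rel_err.mean():.4f}")
```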

Transformer-based models, like GPT, stand to benefit the most from L-Mul, as the algorithm integrates readily into the attention mechanism, one of their most multiplication-heavy components. Tests on popular AI models such as Llama and Mistral even showed improved accuracy on some tasks.
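To show where the substitution would land in a transformer, the sketch below builds a matrix multiply out of the `l_mul` function above and uses it for the query-key score computation in a toy attention head. The names `l_mul_matmul` and `attention_scores` are hypothetical; only the replacement of scalar products with the addition-based approximation is the point, and this is an illustrative reconstruction rather than the paper's implementation.

```python
def l_mul_matmul(A, B):
    """Matrix product where each scalar product uses the L-Mul-style approximation.

    Per-element products are approximated with additions; the reduction over
    the inner dimension stays exact addition, as in a normal matmul.
    """
    # A: (m, k), B: (k, n) -> broadcast to (m, k, n), then reduce over k.
    return l_mul(A[:, :, None], B[None, :, :]).sum(axis=1)

def attention_scores(Q, K):
    """Scaled dot-product attention weights built on the approximate matmul."""
    d = Q.shape[-1]
    scores = l_mul_matmul(Q, K.T) / np.sqrt(d)
    # Numerically stable softmax over the key dimension (unchanged by L-Mul).
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)
```

In a full model, the same substitution would apply to the attention-times-value product and the linear-layer matmuls, while additions, softmax, and normalization are left untouched.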

However, the downside is that L-Mul requires specialized hardware, and current AI accelerators are not yet optimized for this method. The good news is that efforts to develop such hardware and APIs are already underway.

One potential obstacle is resistance from major chip manufacturers like Nvidia, which could slow adoption of the new technology. As the dominant producer of AI hardware, Nvidia may be reluctant to cede ground to more energy-efficient solutions.