Large Language Models No Longer Require Powerful Servers

Scientists from Yandex, HSE University, MIT, KAUST, and ISTA have made a breakthrough in optimising LLMs. Yandex Research, in collaboration with leading science and technology universities, has developed a method for rapidly compressing large language models (LLMs) without compromising quality. Now, a smartphone or laptop is enough to work with LLMs—there's no need for expensive servers or high-powered GPUs.

This method enables faster testing and more efficient implementation of new neural network-based solutions, reducing both development time and costs. As a result, LLMs are more accessible not only to large corporations, but also to smaller companies, non-profit laboratories and institutes, as well as individual developers and researchers.

Previously, running a language model on a smartphone or laptop meant first quantising it on an expensive server, a process that could take anywhere from a few hours to several weeks. With the new method, quantisation can be performed directly on a smartphone or laptop in just a few minutes.

Challenges in implementing LLMs

The main obstacle to using LLMs is that they require considerable computational power. This applies to open-source models as well. For example, the popular DeepSeek-R1 is too large to run even on high-end servers built for AI and machine learning workloads, meaning that very few companies can effectively use LLMs, even if the model itself is publicly available.

The new method reduces the model's size while maintaining its quality, making it possible to run LLMs on more accessible devices. It can compress even very large models, such as DeepSeek-R1 (671 billion parameters) and Llama 4 Maverick (400 billion parameters), which until now could only be quantised with basic methods that caused significant quality loss.

The new quantisation method opens up more opportunities to use LLMs across various fields, particularly in resource-limited sectors such as education and the social sphere. Startups and independent developers can now implement compressed models to create innovative products and services without the need for costly hardware investments. Yandex is already applying the new method for prototyping—creating working versions of products and quickly validating ideas. Testing compressed models takes less time than testing the original versions.

Key details of the new method

The new quantisation method is named HIGGS (Hadamard Incoherence with Gaussian MSE-Optimal GridS). It enables the compression of neural networks without the need for additional data or computationally intensive parameter optimisation. This is especially useful in situations where there is not enough relevant data available to train the model. HIGGS strikes a balance between the quality, size, and complexity of the quantised models, making them suitable for use on a variety of devices.
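The press release does not include implementation details, but the two ingredients named in the acronym can be illustrated in miniature: an orthogonal Hadamard rotation spreads the weights so they behave like i.i.d. Gaussian values ("Hadamard incoherence"), and a quantisation grid fitted to minimise mean squared error on Gaussian data plays the role of the "Gaussian MSE-optimal grid". The sketch below is a rough illustration under those assumptions only; the matrix size, bit width, single-group quantisation, and Lloyd's-algorithm grid fitting are all simplifications of the author's choosing, not the actual HIGGS procedure, which is specified in the arXiv paper.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an orthogonal Hadamard matrix; n must be a
    # power of two. Rotating weights by H makes them "incoherent", i.e.
    # closer to i.i.d. Gaussian, which is what the fixed grid is tuned for.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def mse_optimal_grid(bits, n_samples=50_000, iters=30, seed=0):
    # Fit a 2**bits-point grid that approximately minimises MSE for
    # standard-Gaussian inputs, using Lloyd's algorithm on samples.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    grid = np.quantile(x, (np.arange(2**bits) + 0.5) / 2**bits)
    for _ in range(iters):
        idx = np.argmin(np.abs(x[:, None] - grid[None, :]), axis=1)
        grid = np.array([x[idx == k].mean() if np.any(idx == k) else grid[k]
                         for k in range(len(grid))])
    return grid

def quantize(w, grid):
    # Scale to unit RMS, then snap each value to the nearest grid point
    # (nearest-neighbour assignment is MSE-optimal for a fixed grid).
    scale = np.sqrt(np.mean(w**2)) + 1e-12
    idx = np.argmin(np.abs(w[:, None] / scale - grid[None, :]), axis=1)
    return grid[idx] * scale

# Toy example: quantise a random 64x64 "weight matrix" to 3 bits.
# (A real method would quantise in small groups; one group keeps this short.)
rng = np.random.default_rng(1)
W = rng.standard_normal((64, 64))
H = hadamard(64)
W_rot = H @ W                              # rotate into the incoherent basis
grid = mse_optimal_grid(bits=3)
W_deq = H.T @ quantize(W_rot.ravel(), grid).reshape(64, 64)  # undo rotation
err = np.linalg.norm(W - W_deq) / np.linalg.norm(W)
print(f"relative reconstruction error at 3 bits: {err:.3f}")
```

Because the Hadamard matrix is orthogonal, the rotation costs nothing in reconstruction error and needs no calibration data, which is the appeal of data-free schemes like this one.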

The method has already been validated on the widely used Llama 3 and Qwen2.5 models. Experiments have shown that HIGGS outperforms all existing data-free quantisation methods, including NF4 (4-bit NormalFloat) and HQQ (Half-Quadratic Quantisation), in terms of both quality and model size.

Scientists from HSE University, the Massachusetts Institute of Technology (MIT), the Institute of Science and Technology Austria (ISTA), and King Abdullah University of Science and Technology (KAUST, Saudi Arabia) all contributed to the development of the method.

The HIGGS method is already accessible to developers and researchers on Hugging Face and GitHub, with a research paper available on arXiv.

Response from the academic community, and other methods

The paper describing the new method has been accepted for presentation at the conference of the North American Chapter of the Association for Computational Linguistics (NAACL), one of the world's leading venues in natural language processing and AI. The conference will be held from April 29 to May 4, 2025, in Albuquerque, New Mexico, USA, and Yandex will attend alongside other companies and universities such as Google, Microsoft Research, and Harvard University. The paper has been cited by Red Hat AI, an American software company, as well as by Peking University, Hong Kong University of Science and Technology, Fudan University, and others.

Previously, scientists from Yandex have presented 12 studies focused on LLM quantisation. The company aims to make the application of LLMs more efficient, less energy-intensive, and accessible to all developers and researchers. For example, the Yandex Research team has previously developed compression methods that reduce computational costs by nearly eight times without significantly compromising the quality of the neural network's responses. The team has also developed a solution that allows a model with 8 billion parameters to run on a regular computer or smartphone through a browser interface, even without major computational power.

See also:

Scientists Develop Effective Microlasers as Small as a Speck of Dust

Researchers at HSE University–St Petersburg have discovered a way to create effective microlasers with diameters as small as 5 to 8 micrometres. They operate at room temperature, require no cooling, and can be integrated into microchips. The scientists relied on the whispering gallery effect to trap light and used buffer layers to reduce energy leakage and stress. This approach holds promise for integrating lasers into microchips, sensors, and quantum technologies. The study has been published in Technical Physics Letters.

HSE Scientists Test New Method to Investigate Mechanisms of New Word Acquisition

Researchers at the HSE Centre for Language and Brain were among the first to use transcranial alternating current stimulation to investigate whether it can influence the acquisition of new words. Although the authors of the experiment have not yet found a link between brain stimulation and word acquisition, they believe that adjusting the stimulation parameters may yield different results in the future. The study has been published in Language, Cognition and Neuroscience.

Twenty vs Ten: HSE Researcher Examines Origins of Numeral System in Lezgic Languages

It is commonly believed that the Lezgic languages spoken in Dagestan and Azerbaijan originally used a vigesimal numeral system, with the decimal system emerging later. However, a recent analysis of numerals in various dialects, conducted by linguist Maksim Melenchenko from HSE University, suggests that the opposite may be true: the decimal system was used originally, with the vigesimal system developing later. The study has been published in Folia Linguistica.

Scientists Rank Russian Regions by Climate Risk Levels

Researchers from HSE University and the Russian Academy of Sciences have assessed the levels of climate risks across Russian regions. Using five key climate risks—heatwaves, water stress, wildfires, extreme precipitation, and permafrost degradation—the scientists ranked the country’s regions according to their need for adaptation to climate change. Krasnoyarsk Krai, Irkutsk Region, and Sverdlovsk Region rank among the highest for four of the five climate risks considered. The study has been published in Science of the Total Environment.

HSE Researchers Teach Neural Network to Distinguish Origins from Genetically Similar Populations

Researchers from the AI and Digital Science Institute, HSE Faculty of Computer Science, have proposed a new approach based on advanced machine learning techniques to determine a person’s genetic origin with high accuracy. This method uses graph neural networks, which make it possible to distinguish even very closely related populations.

HSE Economists Reveal the Secret to Strong Families

Researchers from the HSE Faculty of Economic Sciences have examined the key factors behind lasting marriages. The findings show that having children is the primary factor contributing to marital stability, while for couples without children, a greater income gap between spouses is associated with a stronger union. This is the conclusion reported in Applied Econometrics.

Fifteen Minutes on Foot: How Post-Soviet Cities Manage Access to Essential Services

Researchers from HSE University and the Institute of Geography of the Russian Academy of Sciences analysed three major Russian cities to assess their alignment with the '15-minute city' concept—an urban design that ensures residents can easily access essential services and facilities within walking distance. Naberezhnye Chelny, where most residents live in Soviet-era microdistricts, demonstrated the highest levels of accessibility. In Krasnodar, fewer than half of residents can easily reach essential facilities on foot, and in Saratov, just over a third can. The article has been published in Regional Research of Russia.

HSE Researchers Find Counter-Strike Skins Outperform Bitcoin and Gold as Alternative Investments

Virtual knives, custom-painted machine guns, and gloves are common collectible items in video games. A new study by scientists from HSE University suggests that digital skins from the popular video game Counter-Strike: Global Offensive (CS:GO) rank among the most profitable types of alternative investments, with average annual returns exceeding 40%. The study has been published in the Social Science Research Network (SSRN), a free-access online repository.

HSE Neurolinguists Reveal What Makes Apps Effective for Aphasia Rehabilitation

Scientists at the HSE Centre for Language and Brain have identified key factors that increase the effectiveness of mobile and computer-based applications for aphasia rehabilitation. These key factors include automated feedback, a variety of tasks within the application, extended treatment duration, and ongoing interaction between the user and the clinician. The article has been published in NeuroRehabilitation.

'Our Goal Is Not to Determine Which Version Is Correct but to Explore the Variability'

The International Linguistic Convergence Laboratory at the HSE Faculty of Humanities studies the processes of convergence among languages spoken in regions with mixed, multiethnic populations. Research conducted by linguists at HSE University contributes to understanding the history of language development and explores how languages are perceived and used in multilingual environments. George Moroz, head of the laboratory, shares more details in an interview with the HSE News Service.