
Large Language Models No Longer Require Powerful Servers

Scientists from Yandex, HSE University, MIT, KAUST, and ISTA have made a breakthrough in optimising LLMs. Yandex Research, in collaboration with leading science and technology universities, has developed a method for rapidly compressing large language models (LLMs) without compromising quality. Now, a smartphone or laptop is enough to work with LLMs—there's no need for expensive servers or high-powered GPUs.

This method enables faster testing and more efficient implementation of new neural network-based solutions, reducing both development time and costs. As a result, LLMs are more accessible not only to large corporations, but also to smaller companies, non-profit laboratories and institutes, as well as individual developers and researchers.

Previously, running a language model on a smartphone or laptop required quantising it on an expensive server—a process that could take anywhere from a few hours to several weeks. With the new method, quantisation can be performed directly on a smartphone or laptop in just a few minutes.

Challenges in implementing LLMs

The main obstacle to using LLMs is that they require considerable computational power. This applies to open-source models as well. For example, the popular DeepSeek-R1 is too large to run even on high-end servers built for AI and machine learning workloads, meaning that very few companies can effectively use LLMs, even if the model itself is publicly available.

The new method reduces the model's size while maintaining its quality, making it possible to run LLMs on more accessible devices. It can compress even very large models, such as DeepSeek-R1 with 671 billion parameters and Llama 4 Maverick with 400 billion parameters, which until now could only be quantised with basic methods that caused significant quality loss.
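A back-of-the-envelope memory estimate shows why compression matters at this scale. The sketch below is a simplified calculation that counts weight storage only, assuming 16-bit base weights and ignoring activations and the KV cache:

```python
def model_memory_gb(num_params: float, bits_per_weight: float) -> float:
    """Approximate weight-storage footprint in gigabytes:
    parameters x bits per weight / 8 bits per byte / 1e9 bytes per GB."""
    return num_params * bits_per_weight / 8 / 1e9

# DeepSeek-R1 has 671 billion parameters
fp16_gb = model_memory_gb(671e9, 16)  # 16-bit baseline: 1342 GB
int4_gb = model_memory_gb(671e9, 4)   # 4-bit quantised: ~336 GB

print(f"FP16: {fp16_gb:.0f} GB, 4-bit: {int4_gb:.0f} GB")
```

Even at 4 bits, a 671-billion-parameter model remains far too large for consumer hardware; the same arithmetic shows that an 8-billion-parameter model at 4 bits (roughly 4 GB of weights) does fit on a laptop or a high-end smartphone.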

The new quantisation method opens up more opportunities to use LLMs across various fields, particularly in resource-limited sectors such as education and the social sphere. Startups and independent developers can now implement compressed models to create innovative products and services without the need for costly hardware investments. Yandex is already applying the new method for prototyping—creating working versions of products and quickly validating ideas. Testing compressed models takes less time than testing the original versions.

Key details of the new method

The new quantisation method is named HIGGS (Hadamard Incoherence with Gaussian MSE-Optimal GridS). It enables the compression of neural networks without the need for additional data or computationally intensive parameter optimisation. This is especially useful in situations where there is not enough relevant data available to train the model. HIGGS strikes a balance between the quality, size, and complexity of the quantised models, making them suitable for use on a variety of devices.
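The two ingredients named in the acronym can be illustrated with a conceptual sketch—this is a simplified reconstruction under stated assumptions, not the authors' implementation. A Hadamard-type rotation makes weight blocks behave approximately like Gaussian noise ("Hadamard incoherence"), after which each value can be snapped to a small fixed grid designed for Gaussian inputs ("MSE-optimal grids"); the grid below is a hypothetical 3-bit example, not the actual HIGGS grid:

```python
import numpy as np

def fwht(x: np.ndarray) -> np.ndarray:
    """Normalised fast Walsh-Hadamard transform along the last axis
    (length must be a power of two); it is orthogonal and self-inverse."""
    x = x.astype(np.float64).copy()
    n = x.shape[-1]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b
            x[..., i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(n)

def quantize_block(w: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """Rotate a weight block, scale it to unit variance, snap each value
    to its nearest grid point, then undo the rotation. Returns the
    dequantised approximation of the block."""
    rotated = fwht(w)                       # near-Gaussian after rotation
    scale = rotated.std() + 1e-12
    idx = np.abs(rotated[..., None] / scale - grid).argmin(axis=-1)
    approx = grid[idx] * scale              # dequantise on the grid
    return fwht(approx)                     # fwht is self-inverse

# Hypothetical 8-level (3-bit) grid for a standard Gaussian; the actual
# HIGGS grids are optimised for minimal mean-squared error and differ.
grid = np.array([-2.0, -1.2, -0.6, -0.2, 0.2, 0.6, 1.2, 2.0])
w = np.random.default_rng(0).normal(size=(4, 64))
w_hat = quantize_block(w, grid)
rel_err = np.mean((w - w_hat) ** 2) / np.mean(w ** 2)  # relative MSE
```

Because the rotation is orthogonal, the quantisation error introduced in the rotated domain carries over unchanged to the original weights, which is what makes a grid tuned for a single Gaussian distribution usable across many different weight blocks—and why no calibration data is needed.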

The method has already been validated on the widely used Llama 3 and Qwen2.5 models. Experiments have shown that HIGGS outperforms all existing data-free quantisation methods, including NF4 (4-bit NormalFloat) and HQQ (Half-Quadratic Quantisation), in terms of both quality and model size.

Scientists from HSE University, the Massachusetts Institute of Technology (MIT), the Institute of Science and Technology Austria (ISTA), and King Abdullah University of Science and Technology (KAUST, Saudi Arabia) all contributed to the development of the method.

The HIGGS method is already accessible to developers and researchers on Hugging Face and GitHub, with a research paper available on arXiv.

Response from the academic community and other methods

The paper describing the new method has been accepted for presentation at one of the largest AI conferences in the world: the Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). The conference will be held from April 29 to May 4, 2025, in Albuquerque, New Mexico, USA. Yandex will be among the attendees, alongside other companies and universities such as Google, Microsoft Research, and Harvard University. The paper has been cited by Red Hat AI, an American software company, as well as by researchers at Peking University, Hong Kong University of Science and Technology, Fudan University, and others.

Previously, scientists from Yandex presented 12 studies focused on LLM quantisation. The company aims to make the application of LLMs more efficient, less energy-consuming, and accessible to all developers and researchers. For example, the Yandex Research team has previously developed methods for compressing LLMs that reduce computational costs nearly eightfold without significantly compromising the quality of the neural network's responses. The team has also developed a solution that allows a model with 8 billion parameters to run on a regular computer or smartphone through a browser interface, even without major computational power.

See also:

AI to Enable Accurate Modelling of Data Storage System Performance

Researchers at the HSE Faculty of Computer Science have developed a new approach to modelling data storage systems based on generative machine learning models. This approach makes it possible to accurately predict the key performance characteristics of such systems under various conditions. Results have been published in the IEEE Access journal.

Researchers Present the Rating of Ideal Life Partner Traits

An international research team surveyed over 10,000 respondents across 43 countries to examine how closely the ideal image of a romantic partner aligns with the actual partners people choose, and how this alignment shapes their romantic satisfaction. Based on the survey, the researchers compiled two ratings—qualities of an ideal life partner and the most valued traits in actual partners. The results have been published in the Journal of Personality and Social Psychology.

Trend-Watching: Radical Innovations in Creative Industries and Artistic Practices

The rapid development of technology, the adaptation of business processes to new economic realities, and changing audience demands require professionals in the creative industries to keep up with current trends and be flexible in their approach to projects. Between April and May 2025, the Institute for Creative Industries Development (ICID) at the HSE Faculty of Creative Industries conducted a trend study within the creative sector.

From Neural Networks to Stock Markets: Advancing Computer Science Research at HSE University in Nizhny Novgorod

The International Laboratory of Algorithms and Technologies for Network Analysis (LATNA), established in 2011 at HSE University in Nizhny Novgorod, conducts a wide range of fundamental and applied research, including joint projects with large companies: Sberbank, Yandex, and other leaders of the IT industry. The methods developed by the university's researchers not only enrich science, but also make it possible to improve the work of transport companies and conduct medical and genetic research more successfully. HSE News Service discussed the work of the laboratory with its head, Professor Valery Kalyagin.

Children with Autism Process Sounds Differently

For the first time, an international team of researchers—including scientists from the HSE Centre for Language and Brain—combined magnetoencephalography and morphometric analysis in a single experiment to study children with Autism Spectrum Disorder (ASD). The study found that children with autism have more difficulty filtering and processing sounds, particularly in the brain region typically responsible for language comprehension. The study has been published in Cerebral Cortex.

HSE Scientists Discover Method to Convert CO₂ into Fuel Without Expensive Reagents

Researchers at HSE MIEM, in collaboration with Chinese scientists, have developed a catalyst that efficiently converts CO₂ into formic acid. Thanks to carbon coating, it remains stable in acidic environments and functions with minimal potassium, contrary to previous beliefs that high concentrations were necessary. This could lower the cost of CO₂ processing and simplify its industrial application, e.g. in producing fuel for environmentally friendly transportation. The study has been published in Nature Communications.

HSE Scientists Reveal How Staying at Alma Mater Can Affect Early-Career Researchers

Many early-career scientists continue their academic careers at the same university where they studied, a practice known as academic inbreeding. A researcher at the HSE Institute of Education analysed the impact of academic inbreeding on publication activity in the natural sciences and mathematics. The study found that the impact is ambiguous and depends on various factors, including the university's geographical location, its financial resources, and the state of the regional academic employment market. A paper with the study findings has been published in Research Policy.

Group and Shuffle: Researchers at HSE University and AIRI Accelerate Neural Network Fine-Tuning

Researchers at HSE University and the AIRI Institute have proposed a method for quickly fine-tuning neural networks. Their approach involves processing data in groups and then optimally shuffling these groups to improve their interactions. The method outperforms alternatives in image generation and analysis, as well as in fine-tuning text models, all while requiring less memory and training time. The results have been presented at the NeurIPS 2024 Conference.

When Thoughts Become Movement: How Brain–Computer Interfaces Are Transforming Medicine and Daily Life

At the dawn of the 21st century, humans are increasingly becoming not just observers, but active participants in the technological revolution. Among the breakthroughs with the potential to change the lives of millions, brain–computer interfaces (BCIs)—systems that connect the brain to external devices—hold a special place. These technologies were the focal point of the spring International School ‘A New Generation of Neurointerfaces,’ which took place at HSE University.

New Clustering Method Simplifies Analysis of Large Data Sets

Researchers from HSE University and the Institute of Control Sciences of the Russian Academy of Sciences have proposed a new method of data analysis: tunnel clustering. It allows for the rapid identification of groups of similar objects and requires fewer computational resources than traditional methods. Depending on the data configuration, the algorithm can operate dozens of times faster than its counterparts. The study was published in the journal Doklady Rossijskoj Akademii Nauk. Mathematika, Informatika, Processy Upravlenia.