Rethinking Federated Learning: GLAI as a low-cost, high-accuracy alternative

In a world where data is more fragmented and confidential than ever, training AI systems without compromising security or privacy has become one of the greatest challenges of our time. Federated learning has emerged as a promising solution, but existing techniques carry their own trade-offs in terms of accuracy, communication overhead, computational cost, and security.

At Qsimov, we’ve been working on an enhanced approach that addresses these trade-offs head-on by replacing neural networks with our advanced AI system: GreenLightningAI (GLAI).

At a recent conference, held during the presentation of the UCLM-JCCM Institutional Chair of Cybersecurity, José Duato, CTO of Qsimov and member of the Real Academia de Ciencias, explored how GLAI represents a turning point in federated learning and sustainable AI design.

The challenge: Privacy vs. Accuracy vs. Efficiency

When dealing with private, proprietary, or sensitive datasets, transmitting information over networks is not a viable option. Yet these same datasets are often critical to training useful and accurate AI models, especially in industries and public services such as healthcare, finance, or legal tech.

Traditional federated learning techniques enable decentralized model training across devices, but often at the cost of:

  • High communication load
  • Heavy encryption requirements
  • Precision loss when models are averaged infrequently

Moreover, repeated model updates are necessary to adapt to evolving data, which increases energy consumption—a growing concern for the AI community.
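
To make these costs concrete, the following minimal sketch shows the iterative averaging loop at the core of conventional federated learning. The helper local_train and the data structures are illustrative placeholders, not the API of any specific framework.

```python
import numpy as np

def federated_round(global_weights, local_datasets, local_train):
    """One round of conventional iterative federated averaging:
    each device downloads the global weights, trains on its own data,
    and uploads the result so the server can average all the copies."""
    updated = []
    for dataset in local_datasets:
        device_weights = [w.copy() for w in global_weights]   # download
        updated.append(local_train(device_weights, dataset))  # train, upload
    # The server averages the uploaded models layer by layer.
    return [np.mean([dev[i] for dev in updated], axis=0)
            for i in range(len(global_weights))]

# Many such rounds are needed before the model converges, so weights
# repeatedly cross the network: this is the communication (and encryption)
# overhead that the approach described below aims to avoid.
```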

The proposal: a linearly structured system

What if we could directly eliminate many of these drawbacks? The solution, Duato argues, lies in replacing traditional neural networks with an alternative system that is:

  • Globally linear, yet able to behave like a non-linear model by activating a different subset of parameters for each individual sample (see the sketch after this list).
  • Capable of decoupling the calculation of activation patterns from parameter training.
  • Flexible enough to match the accuracy of a deep neural network using a smart configuration of linear estimators.
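
As a point of reference for the first property, recall that even a standard ReLU network becomes a plain linear map once the activation pattern of a given sample is fixed; all of its non-linearity lies in deciding which units fire. The small numerical check below illustrates this well-known fact, not GLAI itself.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # hidden layer of a toy ReLU network
W2 = rng.standard_normal((1, 4))   # output layer

def relu_net(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def linear_piece(x):
    # Once the activation pattern s of this sample is known, the network
    # collapses to the linear map (W2 * s) @ W1. The only non-linear part
    # is the choice of which units (which "paths") are active.
    s = (W1 @ x > 0).astype(float)
    return (W2 * s) @ (W1 @ x)

x = rng.standard_normal(3)
assert np.allclose(relu_net(x), linear_piece(x))
```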

This theoretical proposal has been realized through GLAI.

How GLAI solves the problem

GLAI introduces a paradigm shift by decoupling model component activation from actual computations in deep neural network architectures. Its structure combines a Path Selector and an Estimator to deliver high accuracy with dramatically reduced computational costs and, when applied to federated learning, also with dramatically reduced communication costs.

The Path Selector is a small neural network module trained using standard federated learning techniques. Its purpose is to determine the optimal paths, that is, the subset of parameters to activate for a given sample, enabling selective computation rather than full-model activation.

The Estimator is a linear system that carries out the actual prediction. Its simplicity and efficiency are what allow GLAI to operate with minimal overhead. And its linearity enables fast and accurate model merging.
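
This split can be pictured with the hypothetical sketch below. The class, the shapes, and the top-k selection rule are assumptions made purely for illustration; they are not Qsimov's implementation, only a way to show a small non-linear selector steering a purely linear estimator.

```python
import numpy as np

class TwoPartModel:
    """Illustrative path-selector / estimator split: a small non-linear
    module scores candidate paths, and a purely linear estimator computes
    the prediction using only the selected subset of its parameters."""

    def __init__(self, selector_weights, estimator_weights, k_active):
        self.selector_weights = selector_weights    # small network, here one layer
        self.estimator_weights = estimator_weights  # shape: (num_paths, input_dim)
        self.k_active = k_active                    # paths activated per sample

    def select_paths(self, x):
        # Non-linear scoring decides which paths matter for this sample.
        scores = np.tanh(self.selector_weights @ x)
        return np.argsort(scores)[-self.k_active:]

    def predict(self, x):
        # Given the selection, the computation is strictly linear in x,
        # and only the active rows are touched (selective computation).
        active = self.select_paths(x)
        return self.estimator_weights[active] @ x

# Different samples activate different rows of the estimator, so the
# overall input-output behaviour is non-linear even though every
# individual prediction is a linear computation.
```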

In the context of federated training, this architecture provides several strategic advantages:

  • Once the path selector has been initialized and distributed among the participating devices, the estimators can be fully trained using only the local datasets, without requiring any information exchange with other devices. This local training is followed by a single global averaging step, significantly cutting down on communication demands and encryption requirements compared to iterative exchanges in other federated learning techniques.
  • Furthermore, when new data becomes available, only the estimator needs to be retrained, not the entire system. Moreover, only the new samples need to be used for retraining, not the historical dataset, allowing for extremely efficient, incremental updates without degrading accuracy (see the sketch below).
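
Why linearity buys both the single merging step and the cheap incremental update can be illustrated with a least-squares estimator summarized by small sufficient statistics. This is a sketch under that assumption, not necessarily how GLAI trains or merges its estimator; it only shows the kind of exact, one-shot aggregation that linear models make possible.

```python
import numpy as np

def local_statistics(X, y):
    """Each device summarizes its private data as two small matrices.
    The raw samples never leave the device."""
    return X.T @ X, X.T @ y

def merge_and_solve(all_stats, ridge=1e-6):
    """A single aggregation step: add the statistics from every device
    and solve once for the shared linear estimator. No iterative
    exchange of model updates is required."""
    A = sum(s[0] for s in all_stats)
    b = sum(s[1] for s in all_stats)
    return np.linalg.solve(A + ridge * np.eye(A.shape[0]), b)

def incremental_update(A, b, X_new, y_new, ridge=1e-6):
    """Retraining with fresh data touches only the new samples: the old
    statistics are reused and the historical dataset is never revisited."""
    A = A + X_new.T @ X_new
    b = b + X_new.T @ y_new
    w = np.linalg.solve(A + ridge * np.eye(A.shape[0]), b)
    return w, A, b
```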

By intelligently separating model functions and leveraging a globally linear structure, GLAI offers a robust, scalable alternative to conventional federated learning approaches, ensuring accuracy, privacy, and computational efficiency in one integrated solution.

The results: high accuracy, low cost

Accuracy: GLAI achieves accuracy levels similar to those of deep neural networks with the same number of parameters.

Efficiency: The linearity of the model structure allows for energy-efficient training and retraining. Moreover, only a single averaging step is required at the end of training, thus minimizing communication overhead.

Security: Since data never leaves the local device, and only small models are shared, GLAI naturally supports privacy-first AI.

GLAI is more than a technological innovation; it's a vision for responsible and efficient AI in real-world environments. As federated learning becomes more relevant, especially with tightening regulations and growing privacy concerns, solutions like GLAI will be key to unlocking the full potential of decentralized data in a sustainable manner.

Interested in how GLAI could improve your AI infrastructure?

Contact Us

Fill out the form, and we’ll get in touch with you as soon as possible.