In a world where data is more fragmented and confidential than ever, training AI systems without compromising security or privacy has become one of the greatest challenges of our time. Federated learning has emerged as a promising solution, but existing techniques carry their own trade-offs in terms of accuracy, communication overhead, computational cost, and security.
At Qsimov, we’ve been working on an enhanced approach that addresses these trade-offs head-on by replacing neural networks with our advanced AI system: GreenLightningAI (GLAI).
At a recent conference marking the presentation of the UCLM-JCCM Institutional Chair of Cybersecurity, José Duato, CTO of Qsimov and member of the Real Academia de Ciencias, explored how GLAI represents a turning point in federated learning and sustainable AI design.
When dealing with private, proprietary, or sensitive datasets, transmitting information over networks is not a viable option. Yet these same datasets are often critical to training useful and accurate AI models, especially in industries and public services like healthcare, finance, and legal tech.
Traditional federated learning techniques enable decentralized model training across devices, but often at the cost of:

- Reduced model accuracy
- High communication overhead
- High computational cost
- Residual security and privacy risks

Moreover, repeated model updates are necessary to adapt to evolving data, which increases energy consumption, a growing concern for the AI community.
What if we could directly eliminate many of these drawbacks? The solution, Duato argues, lies in replacing traditional neural networks with an alternative system that is:

- Accurate, matching deep neural networks of comparable size
- Energy-efficient to train and retrain
- Privacy-preserving by design
This theoretical proposal has been realized through GLAI.
GLAI introduces a paradigm shift by decoupling model component activation from actual computations in deep neural network architectures. Its structure combines a Path Selector and an Estimator to deliver high accuracy with dramatically reduced computational costs and, when applied to federated learning, also with dramatically reduced communication costs.
The Path Selector is a small neural network module trained using standard federated learning techniques. Its purpose is to determine the optimal paths, i.e., the subset of parameters to activate for a given sample, enabling selective computation rather than full-model activation.
The Estimator is a linear system that carries out the actual prediction. Its simplicity and efficiency are what allow GLAI to operate with minimal overhead. And its linearity enables fast and accurate model merging.
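To make the division of labor concrete, here is a minimal sketch of the idea in NumPy. All names, dimensions, and the top-k selection rule are our own illustrative assumptions, not Qsimov's actual GLAI implementation: a tiny selector network scores candidate paths, and a purely linear estimator produces the prediction from only the activated subset.

```python
# Illustrative sketch only -- not Qsimov's GLAI code. A small "path
# selector" network scores candidate paths; a linear "estimator" then
# predicts using only the top-k activated paths.
import numpy as np

rng = np.random.default_rng(0)

D, H, P = 8, 16, 32  # input dim, selector hidden dim, number of candidate paths

# Path Selector: a tiny two-layer network producing one score per path.
W1, b1 = rng.normal(size=(D, H)) * 0.1, np.zeros(H)
W2, b2 = rng.normal(size=(H, P)) * 0.1, np.zeros(P)

# Estimator: a purely linear map over path features (one weight per path).
W_est = rng.normal(size=P) * 0.1

def predict(x, k=8):
    """Activate only the k highest-scoring paths, then apply the linear estimator."""
    scores = np.maximum(x @ W1 + b1, 0) @ W2 + b2  # selector forward pass
    mask = np.zeros(P)
    mask[np.argsort(scores)[-k:]] = 1.0            # keep top-k paths, zero the rest
    # The estimator is linear in its parameters W_est, which is what
    # later makes model merging a simple averaging step.
    return float((scores * mask) @ W_est)

x = rng.normal(size=D)
y = predict(x)
```

Because only k of the P paths contribute per sample, most of the estimator's parameters are skipped at inference time, which is the source of the reduced computational cost.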
In the context of federated training, this architecture provides several strategic advantages:
- Accuracy: GLAI achieves accuracy levels similar to deep neural networks with the same number of parameters.
- Efficiency: The linearity of the model structure allows for energy-efficient training and retraining. Moreover, only a single averaging step at the end of training is required, minimizing communication overhead.
- Security: Since data never leaves the local device, and only small models are shared, GLAI naturally supports privacy-first AI.

By intelligently separating model functions and leveraging a globally linear structure, GLAI offers a robust, scalable alternative to conventional federated learning approaches, ensuring accuracy, privacy, and computational efficiency in one integrated solution.
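The single-averaging-step claim follows directly from linearity, and a short sketch makes it tangible. The weights below are invented for illustration, not real client models: for a linear estimator, averaging the clients' parameters yields exactly the same prediction as averaging the clients' individual predictions, so one communication round suffices.

```python
# Illustration of why linearity makes merging exact (invented weights,
# not real client models): averaging the parameters of linear models is
# equivalent to averaging their predictions on any input.
import numpy as np

rng = np.random.default_rng(1)
D = 5

# Linear estimators trained locally on three clients (weights invented here).
client_weights = [rng.normal(size=D) for _ in range(3)]

# Single communication round: average the parameters once.
merged = np.mean(client_weights, axis=0)

x = rng.normal(size=D)
avg_of_preds = np.mean([w @ x for w in client_weights])
pred_of_merged = merged @ x

# Linearity of the dot product makes the merge exact.
assert np.isclose(avg_of_preds, pred_of_merged)
```

A nonlinear network offers no such guarantee: averaging its weights generally does not reproduce the average of its predictions, which is why conventional federated learning needs many update rounds rather than one.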
GLAI is more than a technological innovation: it's a vision for responsible and efficient AI in real-world environments. As federated learning becomes more relevant, especially with tightening regulations and growing privacy concerns, solutions like GLAI will be key to unlocking the full potential of decentralized data in a sustainable manner.
Interested in how GLAI could improve your AI infrastructure?