Fed-FUEL Strengthens Fairness in Federated Learning 

L3S Best Publication Award Q3+Q4/2025 
Category: Federated Learning 

Fed-FUEL: fairness and utility enhancing agnostic federated learning framework 

Authors: M Badar, R Younis, S Sikdar, W Nejdl, M Fisichella 

Published in Data Mining and Knowledge Discovery

The paper in a nutshell

Artificial intelligence is increasingly used in sensitive areas such as healthcare, finance, and public administration. In many cases, data cannot be shared centrally due to privacy regulations, so organizations train AI systems collaboratively using a technique called federated learning. However, these systems can unintentionally become unfair, especially when different institutions have different data distributions. 

This research introduces Fed-FUEL, a new method that helps federated AI systems remain accurate, fair, and private. The approach reduces discrimination against protected groups, ensures that rare but important cases are not overlooked, and preserves individuals' privacy rights. The result is a more trustworthy AI framework that supports responsible decision-making in real-world applications. 

Which problem does the research solve? 

AI systems trained across multiple organizations may unintentionally disadvantage certain demographic groups, particularly when data is unevenly distributed or imbalanced. This research provides a practical way to reduce such bias while maintaining strong predictive performance — without requiring changes to the underlying AI models. 

What is the potential impact? 

Fed-FUEL can help organizations deploy federated AI systems that are both fair and practically useful, reducing the risk that automated decisions systematically disadvantage protected groups while still maintaining strong predictive quality. This is relevant for high-impact domains such as credit risk, recruitment, and healthcare, where privacy constraints often prevent centralizing data and where fairness is a key societal concern. 

What is new and why does it matter? 

1. Model-agnostic fairness in federated learning: A pre-processing method that does not require changing the learning algorithm, making it easier to use across different models. 

2. Fairness + utility + privacy together: A novel adaptive data manipulation (SMOTE-based) approach that targets discrimination while also addressing class imbalance and privacy, and highlights why balanced accuracy is essential for credible evaluation. 

3. Supports statistical and causal fairness notions: The framework is designed to work with multiple fairness notions, including a causal notion (FACE), and is validated experimentally. 
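To give a flavour of the SMOTE-style pre-processing idea mentioned in point 2, the sketch below shows how a client could locally oversample an underrepresented protected group by interpolating between nearby samples before federated training begins. This is an illustrative, simplified stand-in written for this summary, not Fed-FUEL's actual algorithm: the function name, data, and parameters are assumptions, and the paper's adaptive variant targets fairness and imbalance jointly.

```python
import numpy as np

def smote_like_oversample(X, n_new, k=3, seed=None):
    """Illustrative SMOTE-style oversampling (not the Fed-FUEL algorithm):
    pick a random sample, pick one of its k nearest neighbours, and create
    a synthetic point at a random fraction along the line between them."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        # distances from sample i to every sample (including itself)
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip the sample itself
        j = rng.choice(neighbours)
        lam = rng.random()  # interpolation fraction in [0, 1)
        synthetic.append(X[i] + lam * (X[j] - X[i]))
    return np.vstack(synthetic)

# Toy client data: the protected minority group has far fewer rows.
majority = np.random.default_rng(0).normal(0.0, 1.0, size=(40, 2))
minority = np.random.default_rng(1).normal(3.0, 1.0, size=(6, 2))

# Oversample the minority group locally, before any model updates are
# shared, so the federated model sees a balanced view of both groups.
new_points = smote_like_oversample(
    minority, n_new=len(majority) - len(minority), seed=2
)
balanced_minority = np.vstack([minority, new_points])
print(balanced_minority.shape)  # (40, 2)
```

Because the raw data never leaves the client, this kind of local rebalancing fits naturally with the privacy constraints of federated learning, which is part of why a pre-processing (model-agnostic) approach is attractive.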

Link to the paper: https://link.springer.com/article/10.1007/s10618-025-01152-0