Adaptive Federated Learning with Client Profiling for Efficient and Fair Model Training under Heterogeneous Conditions

Jieming Bian
Yuanzhe Peng

Abstract


Federated Learning (FL) enables decentralized model training across distributed clients without sharing raw data, preserving user privacy and data security. Despite its advantages, FL faces major challenges due to heterogeneity in client data distributions (non-IID data) and system capabilities (compute power, availability, bandwidth). These imbalances lead to inefficient training, slower convergence, and unequal contributions across clients. To address these issues, we propose Adaptive Federated Learning with Client Profiling (AFL-CP), a lightweight framework that dynamically assesses clients based on data utility, training reliability, and computational efficiency. Using these profiles, AFL-CP adjusts both client participation and aggregation weights to improve convergence and client representation. We evaluate AFL-CP on CIFAR-10 and FEMNIST under non-IID settings, demonstrating up to 45% faster convergence, a 1.5–2% improvement in test accuracy, and significantly improved fairness as measured by the Gini coefficient. Unlike prior approaches, AFL-CP preserves inclusivity by avoiding the exclusion of low-resource clients while still favoring high-quality updates. Our results suggest that AFL-CP offers a scalable, practical enhancement to traditional federated learning, supporting more efficient and equitable model training in real-world deployments.
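Since the abstract describes the mechanism only at a high level, the sketch below illustrates one way the profile-weighted aggregation could look in Python. The scoring rule, the mixing coefficients (alpha, beta, gamma), the participation floor, and the function names (profile_score, afl_cp_aggregate, gini) are all illustrative assumptions, not the paper's actual formulation.

import numpy as np

def profile_score(data_utility, reliability, efficiency,
                  alpha=0.5, beta=0.3, gamma=0.2):
    # Convex combination of the three profiling signals named in the
    # abstract (data utility, training reliability, computational
    # efficiency). The coefficients are placeholders; alpha + beta +
    # gamma = 1 so the score stays in [0, 1] for inputs in [0, 1].
    return alpha * data_utility + beta * reliability + gamma * efficiency

def afl_cp_aggregate(client_updates, scores, floor=0.05):
    # Profile-weighted aggregation of client model updates. `floor`
    # keeps every weight strictly positive (it requires
    # floor < 1 / num_clients), so low-resource clients are
    # down-weighted rather than excluded, mirroring the inclusivity
    # property claimed in the abstract. Weights sum to 1.
    scores = np.asarray(scores, dtype=float)
    weights = floor + (1.0 - floor * len(scores)) * scores / scores.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

def gini(contributions):
    # Gini coefficient of per-client contributions
    # (0 = perfectly equal, 1 = maximally unequal).
    x = np.sort(np.asarray(contributions, dtype=float))
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

# Toy usage: three clients with synthetic updates and profile signals.
updates = [np.array([1.0, 2.0]), np.array([0.5, 1.5]), np.array([2.0, 0.0])]
scores = [profile_score(0.9, 0.8, 0.7),
          profile_score(0.4, 0.9, 0.3),
          profile_score(0.2, 0.5, 0.9)]
global_update = afl_cp_aggregate(updates, scores)

The floor-plus-proportional weighting is one simple way to reconcile the two goals the abstract names: favoring high-quality updates while never fully excluding weaker clients.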


Article Details

Section
Original Research
