Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness
Federated learning (FL) has emerged as a prominent distributed machine learning paradigm that facilitates collaborative model training across multiple clients while ensuring data privacy. Despite its growing adoption in practical applications, performance degradation caused by data heterogeneity—commonly referred to as the non-independent and identically distributed (non-IID) nature of client data—remains a fundamental challenge.
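The non-IID experimental settings this record describes (Dirichlet-based heterogeneity with alpha = 0.1 and alpha = 0.5) follow the standard label-skew partitioning recipe. Below is a minimal sketch of that recipe using NumPy; the function name and parameters are chosen for illustration and are not from the paper itself:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split dataset indices across clients with Dirichlet(alpha) label skew.

    Smaller alpha -> more skewed class proportions per client (more non-IID);
    larger alpha -> closer to a uniform (IID) split.
    """
    rng = np.random.default_rng(seed)
    n_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(n_clients)]
    for c in range(n_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Draw the proportion of class c assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]
```

Every sample index is assigned to exactly one client, so the partition covers the dataset regardless of alpha; only the per-client class balance changes.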
| Main Authors: | Junhui Song, Zhangqi Zheng, Afei Li, Zhixin Xia, Yongshan Liu |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-07-01 |
| Series: | Applied Sciences |
| Subjects: | federated learning; data heterogeneity; heterogeneity-aware; weighted aggregation; multi-loss function |
| Online Access: | https://www.mdpi.com/2076-3417/15/14/7843 |
| Tags: | |
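The abstract's first innovation is a heterogeneity-aware weighted aggregation: a per-client discrepancy score guides how client updates are averaged. The paper's exact statistic and weighting rule are not given in this record, so the sketch below is one plausible reading (distance of each client's mean feature vector from the global mean, with an exponential down-weighting); all names and the `beta` parameter are illustrative assumptions:

```python
import numpy as np

def heterogeneity_scores(client_feature_means, global_mean):
    """Hypothetical discrepancy measure: L2 distance between each client's
    mean feature vector and the global mean feature vector."""
    return np.array([np.linalg.norm(m - global_mean) for m in client_feature_means])

def aggregate(client_weights, scores, sizes, beta=1.0):
    """Heterogeneity-aware weighted averaging of client model parameters.

    Clients whose data diverge more (higher score) are down-weighted,
    while dataset size still contributes, as in FedAvg.
    """
    w = sizes * np.exp(-beta * scores)  # assumed weighting rule
    w = w / w.sum()
    return sum(wi * cw for wi, cw in zip(w, client_weights))
```

With equal scores and sizes this reduces to plain FedAvg averaging, which is a useful sanity check for any such weighting scheme.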
| _version_ | 1849246567604682752 |
|---|---|
| author | Junhui Song; Zhangqi Zheng; Afei Li; Zhixin Xia; Yongshan Liu |
| author_facet | Junhui Song; Zhangqi Zheng; Afei Li; Zhixin Xia; Yongshan Liu |
| author_sort | Junhui Song |
| collection | DOAJ |
| description | Federated learning (FL) has emerged as a prominent distributed machine learning paradigm that facilitates collaborative model training across multiple clients while ensuring data privacy. Despite its growing adoption in practical applications, performance degradation caused by data heterogeneity—commonly referred to as the non-independent and identically distributed (non-IID) nature of client data—remains a fundamental challenge. To mitigate this issue, a heterogeneity-aware and robust FL framework is proposed to enhance model generalization and stability under non-IID conditions. The proposed approach introduces two key innovations. First, a heterogeneity quantification mechanism is designed based on statistical feature distributions, enabling the effective measurement of inter-client data discrepancies. This metric is further employed to guide the model aggregation process through a heterogeneity-aware weighted strategy. Second, a multi-loss optimization scheme is formulated, integrating classification loss, heterogeneity loss, feature center alignment, and L2 regularization for improved robustness against distributional shifts during local training. Comprehensive experiments are conducted on four benchmark datasets, including CIFAR-10, SVHN, MNIST, and NotMNIST under Dirichlet-based heterogeneity settings (alpha = 0.1 and alpha = 0.5). The results demonstrate that the proposed method consistently outperforms baseline approaches such as FedAvg, FedProx, FedSAM, and FedMOON. Notably, an accuracy improvement of approximately 4.19% over FedSAM is observed on CIFAR-10 (alpha = 0.5), and a 1.82% gain over FedMOON on SVHN (alpha = 0.1), along with stable enhancements on MNIST and NotMNIST. Furthermore, ablation studies confirm the contribution and necessity of each component in addressing data heterogeneity. |
| format | Article |
| id | doaj-art-8d508948f4e8445b96fc275db72cc7c9 |
| institution | Kabale University |
| issn | 2076-3417 |
| language | English |
| publishDate | 2025-07-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Applied Sciences |
| spelling | doaj-art-8d508948f4e8445b96fc275db72cc7c9; 2025-08-20T03:58:27Z; eng; MDPI AG; Applied Sciences; 2076-3417; 2025-07-01; vol. 15, iss. 14, art. 7843; doi:10.3390/app15147843; Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness; Junhui Song, Afei Li, Zhixin Xia, Yongshan Liu (School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China); Zhangqi Zheng (School of Mathematics and Information Technology, Hebei Normal University of Science & Technology, Qinhuangdao 066004, China); abstract as in the description field; https://www.mdpi.com/2076-3417/15/14/7843; federated learning; data heterogeneity; heterogeneity-aware; weighted aggregation; multi-loss function |
| spellingShingle | Junhui Song; Zhangqi Zheng; Afei Li; Zhixin Xia; Yongshan Liu; Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness; Applied Sciences; federated learning; data heterogeneity; heterogeneity-aware; weighted aggregation; multi-loss function |
| title | Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness |
| title_full | Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness |
| title_fullStr | Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness |
| title_full_unstemmed | Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness |
| title_short | Research into Robust Federated Learning Methods Driven by Heterogeneity Awareness |
| title_sort | research into robust federated learning methods driven by heterogeneity awareness |
| topic | federated learning; data heterogeneity; heterogeneity-aware; weighted aggregation; multi-loss function |
| url | https://www.mdpi.com/2076-3417/15/14/7843 |
| work_keys_str_mv | AT junhuisong researchintorobustfederatedlearningmethodsdrivenbyheterogeneityawareness AT zhangqizheng researchintorobustfederatedlearningmethodsdrivenbyheterogeneityawareness AT afeili researchintorobustfederatedlearningmethodsdrivenbyheterogeneityawareness AT zhixinxia researchintorobustfederatedlearningmethodsdrivenbyheterogeneityawareness AT yongshanliu researchintorobustfederatedlearningmethodsdrivenbyheterogeneityawareness |
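The abstract's second innovation is a multi-loss local objective combining classification loss, a heterogeneity loss, feature-center alignment, and L2 regularization. The record does not specify the functional form of each term, so the following is a minimal NumPy sketch under stated assumptions: the heterogeneity term is taken as a precomputed scalar per client, the alignment term pulls sample features toward their class centers, and the lambda coefficients are hypothetical:

```python
import numpy as np

def cross_entropy(probs, y):
    """Mean negative log-likelihood of the true classes."""
    return -np.mean(np.log(probs[np.arange(len(y)), y] + 1e-12))

def multi_loss(probs, y, features, class_centers, params,
               lam_het=0.1, lam_align=0.1, lam_l2=1e-4, het_score=0.0):
    """Sketch of the four-term local objective described in the abstract.

    L = L_cls + lam_het * L_het + lam_align * L_align + lam_l2 * ||theta||^2
    The exact heterogeneity term is an assumption (a scalar score here).
    """
    l_cls = cross_entropy(probs, y)
    l_het = het_score  # hypothetical: the client's scalar heterogeneity measure
    # Feature-center alignment: squared distance to each sample's class center.
    l_align = np.mean(np.sum((features - class_centers[y]) ** 2, axis=1))
    # L2 regularization over all parameter arrays.
    l_l2 = sum(np.sum(p ** 2) for p in params)
    return l_cls + lam_het * l_het + lam_align * l_align + lam_l2 * l_l2
```

When features already sit on their class centers, the heterogeneity score is zero, and the parameters are zero, the objective reduces to the plain classification loss, which makes the relative contribution of each added term easy to probe in an ablation.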