ADF-SL: An Adaptive and Fair Scheme for Smart Learning Task Distribution
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/11073189/ |
| Summary: | Split Learning (SL) is an emerging decentralized paradigm that enables numerous participants to train a deep neural network without disclosing sensitive information, such as patient data in fields like healthcare. In healthcare, SL enables distributed training across a variety of medical devices, hospitals, and organizations, improving model robustness while maintaining patient confidentiality. However, training models within SL is affected by data heterogeneity and sensitivity, and often requires more computational resources than an individual data provider can afford. Differences in data distributions across clients can therefore cause significant model divergence and degraded performance. To address this issue, we propose ADF-SL, a framework that integrates fairness and adaptivity considerations. In particular, ADF-SL dynamically adjusts the total number of clients involved in model training and the number of iterations required to achieve convergence, without compromising participant privacy. To evaluate performance, we compare ADF-SL against the naive (Vanilla) SL approach, SplitFed, and FairFed. Extensive experiments on time-series electrocardiogram (ECG) databases (MITDB, SVDB, and INCARTDB) show that ADF-SL significantly outperforms all three baselines, accelerating model training on clients by up to 22.7%, 10.4%, and 5.8% over Vanilla SL, SplitFed, and FairFed, respectively, while maintaining model convergence and accuracy. Furthermore, an ablation study confirmed the importance of ADF-SL's decay enrichment, which outperformed non-decay ADF-SL on each dataset by up to 15.8%, 43.9%, and 7.6%, respectively. |
| ISSN: | 2169-3536 |
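
The abstract does not include ADF-SL's algorithmic details, but the split-learning training step it builds on can be illustrated with a minimal PyTorch sketch. Everything here is an assumption for illustration: the 1-D CNN layers, the split point, and the batch shapes are hypothetical stand-ins for an ECG model, not the authors' architecture. The key property shown is that only the "smashed" activations and their gradients cross the client/server boundary, never the raw patient signal.

```python
import torch
import torch.nn as nn

# Hypothetical split of a small 1-D CNN for ECG windows (illustrative only).
client_net = nn.Sequential(                     # runs on the data provider
    nn.Conv1d(1, 16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.MaxPool1d(2),
)
server_net = nn.Sequential(                     # runs on the server
    nn.Flatten(),
    nn.Linear(16 * 64, 5),                      # e.g. 5 heartbeat classes
)
opt_c = torch.optim.SGD(client_net.parameters(), lr=0.01)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 128)                      # a batch of raw ECG windows
y = torch.randint(0, 5, (8,))

# Client forward pass: only the smashed activations leave the client.
smashed = client_net(x)
detached = smashed.detach().requires_grad_()    # what is sent over the wire

# Server completes the forward pass and updates its own layers.
loss = loss_fn(server_net(detached), y)
opt_s.zero_grad()
loss.backward()
opt_s.step()

# The gradient w.r.t. the smashed activations is returned to the client,
# which finishes backpropagation through its local layers.
opt_c.zero_grad()
smashed.backward(detached.grad)
opt_c.step()
```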
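Likewise, the "decay enrichment" highlighted in the ablation study suggests a schedule that shrinks participation as training converges. The abstract gives no formula, so the following is a purely illustrative sketch assuming an exponential decay with hypothetical parameters (`n_clients`, `n_min`, `decay`); the same idea could apply to the number of local iterations per round.

```python
import math

def participants(round_idx, n_clients=10, n_min=2, decay=0.05):
    """Hypothetical decay schedule: the number of active clients shrinks
    exponentially over training rounds, floored at n_min. Illustrative
    only; ADF-SL's actual rule is not stated in the abstract."""
    n = n_clients * math.exp(-decay * round_idx)
    return max(n_min, round(n))

for r in (0, 10, 30, 60):
    print(r, participants(r))   # -> 10, 6, 2, 2
```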