FedSVD: Asynchronous Federated Learning With Stale Weight Vector Decomposition

Federated learning (FL) has emerged as a collaborative learning framework that addresses the critical needs for privacy preservation and communication efficiency. In synchronous FL, each client waits for the global model, which is aggregated from the trained models of all participating clients. To alleviate the downtime associated with waiting for all clients to complete training, asynchronous FL enables independent aggregation of client models. In asynchronous FL, the global model is continuously updated during client training, so the updates that clients return to the server are inevitably stale. These outdated updates hinder the convergence of the global model during aggregation. To address this staleness problem, we propose FedSVD, a method that leverages vector decomposition of stale weights. FedSVD evaluates each client’s trained weights in terms of their staleness relative to the current global model and decomposes them into two vectors: one pointing in the direction of the previous global model update, and another orthogonal to it. The global model is then updated using only the orthogonal vector, as the parallel vector is considered already accounted for in the current global model. Experimental results show that FedSVD outperforms existing baseline methods on benchmark datasets under various client conditions and data distributions.


Bibliographic Details
Main Authors: Giwon Sur, Hyejin Kim, Seunghyun Yoon, Hyuk Lim
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Access
Subjects: Federated learning; machine learning; data privacy; data heterogeneity; staleness
Online Access:https://ieeexplore.ieee.org/document/11015799/
author Giwon Sur
Hyejin Kim
Seunghyun Yoon
Hyuk Lim
collection DOAJ
description Federated learning (FL) has emerged as a collaborative learning framework that addresses the critical needs for privacy preservation and communication efficiency. In synchronous FL, each client waits for the global model, which is aggregated from the trained models of all participating clients. To alleviate the downtime associated with waiting for all clients to complete training, asynchronous FL enables independent aggregation of client models. In asynchronous FL, the global model is continuously updated during client training, so the updates that clients return to the server are inevitably stale. These outdated updates hinder the convergence of the global model during aggregation. To address this staleness problem, we propose FedSVD, a method that leverages vector decomposition of stale weights. FedSVD evaluates each client’s trained weights in terms of their staleness relative to the current global model and decomposes them into two vectors: one pointing in the direction of the previous global model update, and another orthogonal to it. The global model is then updated using only the orthogonal vector, as the parallel vector is considered already accounted for in the current global model. Experimental results show that FedSVD outperforms existing baseline methods on benchmark datasets under various client conditions and data distributions.
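The decomposition described in this abstract reduces to an orthogonal projection of the stale client update onto the previous global model update direction. The sketch below is a minimal NumPy illustration of that reading, not the authors' implementation; the function name decompose_stale_update, the flattened weight vectors, and the server step size lr are hypothetical choices made for this example.

import numpy as np

def decompose_stale_update(w_client, w_global, prev_global_update):
    # Stale client update relative to the current global model.
    delta = w_client - w_global
    d = prev_global_update
    denom = np.dot(d, d)
    if denom == 0.0:
        # No previous update direction: the whole update is treated as orthogonal.
        return np.zeros_like(delta), delta
    # Component along the previous global update direction (projection onto d).
    parallel = (np.dot(delta, d) / denom) * d
    # Component orthogonal to the previous global update direction.
    orthogonal = delta - parallel
    return parallel, orthogonal

# Toy usage with flattened weight vectors (illustrative values only).
w_global = np.array([0.5, -0.2, 1.0])
prev_global_update = np.array([0.1, 0.0, -0.1])
w_client = np.array([0.7, -0.1, 0.9])  # stale client weights

_, orthogonal = decompose_stale_update(w_client, w_global, prev_global_update)
lr = 0.5  # hypothetical server step size
w_global_new = w_global + lr * orthogonal  # aggregate using only the orthogonal component
print(w_global_new)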
format Article
id doaj-art-c22d99113c59404a8069877d7d8381cb
institution Kabale University
issn 2169-3536
language English
publishDate 2025-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj-art-c22d99113c59404a8069877d7d8381cb (indexed 2025-08-20T03:24:35Z)
Published: IEEE, IEEE Access (ISSN 2169-3536), 2025-01-01, vol. 13, pp. 94834-94845
DOI: 10.1109/ACCESS.2025.3573806 (IEEE document 11015799)
Authors: Giwon Sur (https://orcid.org/0000-0001-8085-3174), Hyejin Kim (https://orcid.org/0000-0001-8448-0168), Seunghyun Yoon (https://orcid.org/0000-0001-6264-976X), Hyuk Lim (https://orcid.org/0000-0002-9926-3913), all with Korea Institute of Energy Technology (KENTECH), Naju, Republic of Korea
Title: FedSVD: Asynchronous Federated Learning With Stale Weight Vector Decomposition
Keywords: Federated learning; machine learning; data privacy; data heterogeneity; staleness
Online access: https://ieeexplore.ieee.org/document/11015799/
title FedSVD: Asynchronous Federated Learning With Stale Weight Vector Decomposition
topic Federated learning
machine learning
data privacy
data heterogeneity
staleness
url https://ieeexplore.ieee.org/document/11015799/