VFL-Cafe: Communication-Efficient Vertical Federated Learning via Dynamic Caching and Feature Selection

Vertical Federated Learning (VFL) is a promising category of Federated Learning that enables collaborative model training among distributed parties while protecting data privacy. Due to its unique training architecture, a key challenge of VFL is the high communication cost of transmitting intermediate results between the Active Party and Passive Parties. Existing communication-efficient VFL methods reuse stale results without careful selection, which can impair model accuracy, particularly in noisy data environments. To address these limitations, this work proposes VFL-Cafe, a new VFL training method that leverages dynamic caching and feature selection to boost communication efficiency and model accuracy. In each communication round, the caching scheme allows multiple batches of intermediate results to be cached and strategically reused by different parties, reducing communication overhead while maintaining model accuracy. Additionally, a feature selection strategy is integrated into each round of local updates to eliminate the negative impact of noisy features, which would otherwise undermine the effectiveness of reusing stale results and cause significant model degradation. A theoretical analysis then provides guidance on cache configuration to optimize performance. Finally, extensive experimental results validate VFL-Cafe's efficacy, demonstrating remarkable improvements in communication efficiency and model accuracy.
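The abstract describes the core idea at a high level: Passive Parties transmit intermediate results to the Active Party, and cached (stale) results can be reused across rounds to cut communication. Below is a minimal, self-contained sketch of that generic cache-and-reuse pattern under stated assumptions; it is not the paper's actual VFL-Cafe algorithm, and all names (PassiveParty, train_round, reuse_limit) and the placeholder linear scorer are illustrative.

```python
# Illustrative sketch only: a toy cache-and-reuse loop in the spirit of the abstract,
# not the authors' VFL-Cafe method. Names and the reuse policy are assumptions.
import numpy as np

rng = np.random.default_rng(0)

class PassiveParty:
    """Holds one vertical slice of the features and produces intermediate results."""
    def __init__(self, features):
        self.features = features                      # local feature partition (n_samples x d_local)
        self.w = rng.normal(size=features.shape[1])   # placeholder local model

    def intermediate(self, batch_idx):
        # Embedding/logit contribution for one mini-batch; in plain VFL this is
        # transmitted to the Active Party in every communication round.
        return self.features[batch_idx] @ self.w


def train_round(parties, cache, batch_idx, batch_key, reuse_limit=3):
    """Active-Party step: reuse cached (stale) intermediate results when fresh enough."""
    messages_sent = 0
    contributions = []
    for pid, party in enumerate(parties):
        entry = cache.get((pid, batch_key))
        if entry is not None and entry["age"] < reuse_limit:
            entry["age"] += 1                          # reuse stale result: no transmission
            contributions.append(entry["value"])
        else:
            value = party.intermediate(batch_idx)      # fresh result: one transmission
            cache[(pid, batch_key)] = {"value": value, "age": 0}
            contributions.append(value)
            messages_sent += 1
    fused = np.sum(contributions, axis=0)              # Active Party fuses partial results
    return fused, messages_sent


# Tiny demo: two Passive Parties, one cached batch, five rounds.
X = rng.normal(size=(8, 6))
parties = [PassiveParty(X[:, :3]), PassiveParty(X[:, 3:])]
cache, batch_idx = {}, np.arange(8)
for r in range(5):
    _, sent = train_round(parties, cache, batch_idx, batch_key=0)
    print(f"round {r}: transmissions = {sent}")
```

With reuse_limit=3, the demo transmits fresh results only in rounds 0 and 4 and reuses cached ones in between, which is the communication saving the abstract refers to; the paper's contribution lies in choosing the cache configuration and filtering noisy features, which this sketch does not attempt.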

Bibliographic Details
Main Authors: Jiahui Zhou, Han Liang, Tian Wu, Xiaoxi Zhang, Yu Jiang, Chee Wei Tan
Author Affiliations: Jiahui Zhou, Han Liang, Tian Wu, and Xiaoxi Zhang: School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510275, China; Yu Jiang and Chee Wei Tan: College of Computing and Data Science, Nanyang Technological University, Singapore 639798, Singapore
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Entropy, Vol. 27, No. 1, Article 66
ISSN: 1099-4300
DOI: 10.3390/e27010066
Subjects: vertical federated learning; communication efficient; feature selection; dynamic caching
Online Access: https://www.mdpi.com/1099-4300/27/1/66