Balancing Privacy and Utility in Split Learning: An Adversarial Channel Pruning-Based Approach

Bibliographic Details
Main Authors: Afnan Alhindi, Saad Al-Ahmadi, Mohamed Maher Ben Ismail
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects: Adversarial learning; channel pruning; distributed collaborative machine learning; privacy-preserving split learning; split learning
Online Access: https://ieeexplore.ieee.org/document/10838505/
Collection: DOAJ
Description: Machine Learning (ML) has been applied across diverse fields with significant success. However, deploying ML models on resource-constrained devices, such as edge devices, remains challenging due to their limited computing resources. Moreover, training such models on private data carries serious privacy risks arising from the inadvertent disclosure of sensitive information. Split Learning (SL) has emerged as a promising technique to mitigate these risks by partitioning a neural network into client and server subnets. Although only the extracted features are transmitted to the server, sensitive information can still be unwittingly revealed. Existing approaches to this privacy concern in SL struggle to balance privacy and utility. This research introduces a novel privacy-preserving split learning approach that integrates: 1) adversarial learning and 2) network channel pruning. Specifically, adversarial learning minimizes the risk of sensitive data leakage while maximizing the performance of the target prediction task. Furthermore, channel pruning performed jointly with the adversarial training allows the model to dynamically adjust and reactivate pruned channels. Combining these two techniques makes the intermediate representations (features) exchanged between the client and server models less informative and more robust against data reconstruction attacks. Accordingly, the proposed approach enhances data privacy without sacrificing the model's performance on the intended utility task. The contributions of this research were validated and assessed on benchmark datasets. The experiments demonstrated that the proposed approach defends against data reconstruction attacks better than relevant state-of-the-art approaches. In particular, with the proposed approach the Structural Similarity Index (SSIM) between the original data and the data reconstructed by the attacker decreased significantly, by 57%. In summary, the quantitative and qualitative results proved the efficiency of the proposed approach in balancing privacy and utility for typical split learning frameworks.
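The mechanism described in the abstract (a client subnet whose cut-layer features are masked by channel pruning, with the client trained against an adversarial reconstructor) could be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: all names, dimensions, the fixed binary mask, and the loss weighting `lam` are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not from the paper)
d_in, d_feat = 8, 6

# Client subnet: one linear layer + ReLU producing the "cut layer" features
W_client = rng.normal(size=(d_feat, d_in))

# Channel-pruning mask over the cut-layer channels. In the paper's scheme the
# mask is adjusted jointly with adversarial training and pruned channels can
# be reactivated; here it is simply a fixed binary vector.
mask = np.array([1, 0, 1, 1, 0, 1], dtype=float)

def client_forward(x):
    """Features sent to the server: pruned channels are zeroed out."""
    h = np.maximum(W_client @ x, 0.0)  # ReLU activation
    return h * mask                    # pruned channels carry no information

x = rng.normal(size=d_in)
z = client_forward(x)
assert np.all(z[mask == 0] == 0.0)     # masked channels leak nothing

# Hypothetical combined objective: the client minimizes the task (utility)
# loss while *maximizing* the adversary's reconstruction error, i.e. the
# privacy term enters with a negative sign, weighted by a trade-off factor.
x_hat = rng.normal(size=d_in)              # stand-in for the adversary's reconstruction
utility_loss = 0.3                         # stand-in for the server-side task loss
recon_error = float(np.mean((x - x_hat) ** 2))
lam = 0.5                                  # privacy/utility trade-off weight
client_objective = utility_loss - lam * recon_error
print(client_objective)
```

The key design point the sketch mirrors is that privacy enters the client's objective adversarially (subtracting the reconstruction error) rather than via noise injection, so the cut-layer features are shaped to be useful to the server's task but uninformative to a reconstructor.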
ISSN: 2169-3536
Institution: Kabale University
Record ID: doaj-art-bd275e3ac27d4f54ae8e7d78683d3b2e
DOI: 10.1109/ACCESS.2025.3528575
Published in: IEEE Access, vol. 13, pp. 10094-10110, 2025-01-01 (IEEE document 10838505)
Author affiliations: Afnan Alhindi (ORCID: 0009-0002-6675-9346), Saad Al-Ahmadi (ORCID: 0000-0001-9406-6809), and Mohamed Maher Ben Ismail (ORCID: 0000-0001-7770-5752), all with the Computer Science Department, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia