TrapMI: A Data Protection Method to Resist Model Inversion Attacks in Split Learning
Split learning is a neural network training approach that can overcome the limitations of traditional deep neural networks in edge artificial intelligence environments. It offers the advantage of privacy protection because it transmits intermediate features that are calculated via the client-side mo...
| Main Authors: | Hyunsik Na, Daeseon Choi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10902388/ |
Similar Items
- Survey of split learning data privacy
  by: QIN Yiqun, et al.
  Published: (2024-06-01)
- Cyber attacks and privacy protection distributed consensus algorithm for multi-agent systems
  by: Ming XU, et al.
  Published: (2023-03-01)
- Model split-based data privacy protection method for federated learning
  by: CHEN Ka
  Published: (2024-09-01)
- Privacy Auditing of Lithium-Ion Battery Ageing Model by Recovering Time-Series Data Using Gradient Inversion Attack in Federated Learning
  by: Kaspars Sudars, et al.
  Published: (2025-05-01)