Sensitivity-Aware Differential Privacy for Federated Medical Imaging
Federated learning (FL) enables collaborative model training across multiple institutions without sharing raw patient data, making it particularly suitable for smart healthcare applications. However, recent studies have revealed that merely sharing gradients provides a false sense of security, as private information can still be inferred through gradient inversion attacks (GIAs). While differential privacy (DP) provides provable privacy guarantees, traditional DP methods apply uniform protection, leading to excessive protection for low-sensitivity data and insufficient protection for high-sensitivity data, which degrades model performance and increases privacy risks. This paper proposes a new privacy notion, sensitivity-aware differential privacy, to better balance model performance and privacy protection. The idea is that the sensitivity of each data sample can be objectively measured using real-world attacks. To implement this notion, the authors develop a defense mechanism that adjusts privacy protection levels according to the variation in privacy leakage risk under gradient inversion attacks, and the method extends naturally to multi-attack scenarios. Extensive experiments on real-world medical imaging datasets demonstrate that, under equivalent privacy risk, the method achieves an average performance improvement of 13.5% over state-of-the-art methods.
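The abstract's core mechanism — calibrating per-sample noise to an attack-measured sensitivity score instead of adding uniform DP noise — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the risk scores, the risk-to-noise mapping, and the function name are hypothetical assumptions; only the clip-then-noise structure of standard DP-SGD is taken as given.

```python
import numpy as np

def sensitivity_aware_noise(grads, risk_scores, base_sigma=1.0, clip=1.0, seed=0):
    """Add per-sample Gaussian noise scaled by an attack-derived risk score.

    grads:       (n_samples, dim) per-sample gradients.
    risk_scores: values in [0, 1], e.g. how well a gradient inversion
                 attack reconstructs each sample (hypothetical scoring).
    Higher risk -> proportionally more noise, unlike the uniform noise
    multiplier of standard DP-SGD.
    """
    grads = np.asarray(grads, dtype=float)
    # Clip each per-sample gradient norm to bound sensitivity (as in DP-SGD).
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Hypothetical mapping: scale the noise multiplier by the measured risk.
    sigmas = base_sigma * (0.5 + np.asarray(risk_scores, dtype=float))
    noise = np.random.default_rng(seed).normal(size=grads.shape) * sigmas[:, None]
    # Return the noisy aggregate update that would be shared with the server.
    return (grads + noise).mean(axis=0)

update = sensitivity_aware_noise(
    grads=[[3.0, 4.0], [0.3, 0.4]],  # gradient norms 5.0 and 0.5
    risk_scores=[0.9, 0.1],          # high-risk vs. low-risk sample
)
```

The high-risk sample receives roughly three times the noise standard deviation of the low-risk one, which is the imbalance-correcting behavior the abstract attributes to sensitivity-aware DP.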
| Main Authors: | Lele Zheng, Yang Cao, Masatoshi Yoshikawa, Yulong Shen, Essam A. Rashed, Kenjiro Taura, Shouhei Hanaoka, Tao Zhang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Sensors |
| Subjects: | smart healthcare; differential privacy; gradient inversion attacks; federated learning |
| Online Access: | https://www.mdpi.com/1424-8220/25/9/2847 |
| collection | DOAJ |
|---|---|
| id | doaj-art-003eff9a4d2a4cfea1fb891ae5c7fd3d |
| institution | OA Journals |
| issn | 1424-8220 |
| doi | 10.3390/s25092847 |
| authors | Lele Zheng (School of Computer Science and Technology, Xidian University, Xi’an 710126, China); Yang Cao (Department of Computer Science, Institute of Science Tokyo, Tokyo 152-8550, Japan); Masatoshi Yoshikawa (Faculty of Data Science, Osaka Seikei University, Osaka 533-0007, Japan); Yulong Shen (School of Computer Science and Technology, Xidian University, Xi’an 710126, China); Essam A. Rashed (Graduate School of Information Science, University of Hyogo, Hyogo 670-0092, Japan); Kenjiro Taura (Graduate School of Information Science and Technology, University of Tokyo, Tokyo 113-0033, Japan); Shouhei Hanaoka (Graduate School of Medicine, University of Tokyo, Tokyo 113-0033, Japan); Tao Zhang (School of Computer Science and Technology, Xidian University, Xi’an 710126, China) |
| title | Sensitivity-Aware Differential Privacy for Federated Medical Imaging |
| topic | smart healthcare; differential privacy; gradient inversion attacks; federated learning |