Sensitivity-Aware Differential Privacy for Federated Medical Imaging

Bibliographic Details
Main Authors: Lele Zheng, Yang Cao, Masatoshi Yoshikawa, Yulong Shen, Essam A. Rashed, Kenjiro Taura, Shouhei Hanaoka, Tao Zhang
Format: Article
Language: English
Published: MDPI AG 2025-04-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/25/9/2847
Description
Summary: Federated learning (FL) enables collaborative model training across multiple institutions without sharing raw patient data, making it particularly suitable for smart healthcare applications. However, recent studies have revealed that merely sharing gradients provides a false sense of security, as private information can still be inferred through gradient inversion attacks (GIAs). While differential privacy (DP) provides provable privacy guarantees, traditional DP methods apply uniform protection, leading to excessive protection for low-sensitivity data and insufficient protection for high-sensitivity data, which degrades model performance and increases privacy risk. This paper proposes a new privacy notion, sensitivity-aware differential privacy, to better balance model performance and privacy protection. The key idea is that the sensitivity of each data sample can be objectively measured using real-world attacks. To implement this notion, we develop a corresponding defense mechanism that adjusts privacy protection levels according to the privacy leakage risk measured under gradient inversion attacks. Furthermore, the method extends naturally to multi-attack scenarios. Extensive experiments on real-world medical imaging datasets demonstrate that, under equivalent privacy risk, our method achieves an average performance improvement of 13.5% over state-of-the-art methods.
ISSN: 1424-8220
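
Illustrative sketch (not from the article): the summary above describes measuring each sample's privacy leakage risk with a real gradient inversion attack and then scaling the differential-privacy noise to that risk, rather than applying one uniform noise level. The Python fragment below is a minimal sketch of how such a risk-to-noise calibration could look. The function names (estimate_leakage_risk, aggregate_risks, risk_to_noise, privatize_gradient), the cosine-similarity risk proxy, and the linear noise schedule are assumptions made for illustration, not the authors' actual mechanism.

# Minimal, assumption-laden sketch of sensitivity-aware noise calibration.
# Names and the risk-to-noise schedule are illustrative, not the paper's method.
import numpy as np

def estimate_leakage_risk(gradient, reconstruction):
    # Risk proxy (assumed): cosine similarity between the true gradient and the
    # gradient recovered by a simulated gradient inversion attack, clipped to [0, 1].
    g, r = gradient.ravel(), reconstruction.ravel()
    cos = np.dot(g, r) / (np.linalg.norm(g) * np.linalg.norm(r) + 1e-12)
    return float(np.clip(cos, 0.0, 1.0))

def aggregate_risks(risks):
    # Multi-attack case: be conservative and keep the worst (largest) risk score.
    return max(risks)

def risk_to_noise(risk, sigma_min=0.5, sigma_max=4.0):
    # Map a risk score in [0, 1] to a Gaussian noise multiplier:
    # higher measured risk -> stronger noise (illustrative linear schedule).
    return sigma_min + float(np.clip(risk, 0.0, 1.0)) * (sigma_max - sigma_min)

def privatize_gradient(gradient, risk, clip_norm=1.0, rng=None):
    # Clip the per-sample gradient, then add Gaussian noise whose scale depends
    # on that sample's estimated sensitivity instead of a uniform level.
    if rng is None:
        rng = np.random.default_rng()
    scale = min(1.0, clip_norm / (np.linalg.norm(gradient) + 1e-12))
    sigma = risk_to_noise(risk)
    noise = rng.normal(0.0, sigma * clip_norm, size=gradient.shape)
    return gradient * scale + noise

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grad = rng.normal(size=128)
    # Pretend two attacks reconstructed the gradient with different fidelity.
    recon_strong = grad + rng.normal(scale=0.1, size=128)
    recon_weak = rng.normal(size=128)
    risk = aggregate_risks([estimate_leakage_risk(grad, recon_strong),
                            estimate_leakage_risk(grad, recon_weak)])
    noisy = privatize_gradient(grad, risk, rng=rng)
    print(f"risk={risk:.2f}, noise multiplier={risk_to_noise(risk):.2f}")

In this sketch, a sample whose gradient a simulated attack reconstructs closely receives a larger noise multiplier, while a low-risk sample keeps a smaller one, which is the general trade-off the abstract describes.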