Boosting adversarial transferability in vision-language models via multimodal feature heterogeneity
Abstract: Vision-language pre-training (VLP) models have achieved significant success in medical imaging but remain vulnerable to adversarial examples. Although adversarial attacks are harmful, they are valuable for revealing the weaknesses of VLP models and enhancing their robustness...
| Main Authors: | Long Chen, Yuling Chen, Zhi Ouyang, Hui Dou, Yangwen Zhang, Haiwei Sang |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-03-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-91802-6 |
Similar Items
- Boosting the Transferability of Ensemble Adversarial Attack via Stochastic Average Variance Descent
  by: Lei Zhao, et al. Published: (2024-01-01)
- Video Abnormal Action Recognition Based on Multimodal Heterogeneous Transfer Learning
  by: Hong-Bo Huang, et al. Published: (2024-01-01)
- Multimodal data fusion for Alzheimer's disease based on dynamic heterogeneous graph convolutional neural network and generative adversarial network
  by: Xiaoyu Chen, et al. Published: (2025-07-01)
- Patch is enough: naturalistic adversarial patch against vision-language pre-training models
  by: Dehong Kong, et al. Published: (2024-12-01)
- MAS-PD: Transferable Adversarial Attack Against Vision-Transformers-Based SAR Image Classification Task
  by: Boshi Zheng, et al. Published: (2025-01-01)