Boosting adversarial transferability in vision-language models via multimodal feature heterogeneity

Bibliographic Details
Main Authors: Long Chen, Yuling Chen, Zhi Ouyang, Hui Dou, Yangwen Zhang, Haiwei Sang
Format: Article
Language: English
Published: Nature Portfolio, 2025-03-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-91802-6
Description
Summary: Vision-language pre-training (VLP) models have achieved significant success in medical imaging but are vulnerable to adversarial examples. Although adversarial attacks are harmful, they are valuable for revealing the weaknesses of VLP models and enhancing their robustness. However, because existing methods under-utilize both the differences and the consistent features between modalities, the attack effectiveness and transferability of their adversarial examples remain unsatisfactory. To address this, we propose the Multimodal Feature Heterogeneous Attack (MFHA) framework. To enhance adversarial capability, we propose a feature heterogenization method based on triplet contrastive learning, combining data augmentation with cross-modal global contrastive learning, intra-modal contrastive learning, and cross-modal global-local mutual-information contrastive learning; this heterogenizes the consistent features shared between modalities into distinct features, thereby improving adversarial capability. To improve transferability, we propose a multi-domain feature perturbation method based on cross-modal variance aggregation, which uses text-guided image attacks to perturb consistent spatial- and frequency-domain features while incorporating previous gradient momentum. Extensive experiments demonstrate MFHA's significant advantage in transferable attack capability, with an average improvement of 16.05%, and strong attack performance against multimodal large language models such as MiniGPT4 and LLaVA. Our code is open-sourced on GitHub: https://github.com/doyoudooo/MFHA
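The summary describes two mechanisms at a high level: a contrastive objective that drives an adversarial image's features away from the consistent cross-modal features it shares with its matched text, and a variance-aggregated momentum update that stabilizes gradients for better transferability. The PyTorch sketch below illustrates both ideas only in spirit; the CLIP-style encoder names, the simplified alignment loss, and all hyperparameters are illustrative assumptions rather than the paper's exact formulation (see the linked GitHub repository for the authors' implementation).

```python
# Illustrative sketch only: `image_encoder` / `text_encoder` are assumed
# CLIP-style encoders; losses and hyperparameters are NOT MFHA's exact method.
import torch
import torch.nn.functional as F

def heterogenization_loss(image_encoder, text_encoder,
                          adv_image, clean_image, text_tokens):
    """Alignment between the adversarial image and (a) its matched text
    and (b) its clean counterpart. Minimizing this pushes the modalities'
    consistent features apart ("heterogenizes" them)."""
    z_adv = F.normalize(image_encoder(adv_image), dim=-1)
    z_img = F.normalize(image_encoder(clean_image), dim=-1)
    z_txt = F.normalize(text_encoder(text_tokens), dim=-1)
    cross_modal = (z_adv * z_txt).sum(-1).mean()  # image-text similarity
    intra_modal = (z_adv * z_img).sum(-1).mean()  # adv-clean similarity
    return cross_modal + intra_modal

def variance_momentum_step(loss_fn, x_adv, momentum, variance,
                           alpha=2 / 255, mu=1.0, beta=1.5 * 8 / 255, n=5):
    """One attack iteration combining gradient momentum with variance
    aggregation (in the spirit of VMI-FGSM): gradients at random
    neighbours of x_adv are averaged to reduce gradient variance."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(x_adv), x_adv)[0]

    # Aggregate neighbour gradients for the *next* step's variance term.
    neighbour_grad = torch.zeros_like(grad)
    for _ in range(n):
        x_near = (x_adv.detach()
                  + torch.empty_like(x_adv).uniform_(-beta, beta)
                  ).requires_grad_(True)
        neighbour_grad += torch.autograd.grad(loss_fn(x_near), x_near)[0]
    new_variance = neighbour_grad / n - grad

    # Momentum update on the variance-corrected gradient; descend the
    # alignment loss with a sign step (we *minimize* alignment).
    g = grad + variance
    momentum = mu * momentum + g / g.abs().mean(dim=(1, 2, 3), keepdim=True)
    x_next = (x_adv.detach() - alpha * momentum.sign()).clamp(0, 1)
    return x_next, momentum, new_variance
```

In use, `loss_fn` would close over the encoders and the clean pair, e.g. `loss_fn = lambda x: heterogenization_loss(image_encoder, text_encoder, x, clean_image, text_tokens)`; projection of `x_next` back into an epsilon-ball around the clean image is omitted here for brevity.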
ISSN: 2045-2322