Transferability analysis of adversarial attacks on gender classification to face recognition: Fixed and variable attack perturbation
Abstract

Most deep learning-based image classification models are vulnerable to adversarial attacks that introduce imperceptible changes to the input images for the purpose of model misclassification. It has been demonstrated that these attacks, targeting a specific model, are transferable among mod...
| Main Authors: | Zohra Rezgui, Amina Bassit, Raymond Veldhuis |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2022-09-01 |
| Series: | IET Biometrics |
| Online Access: | https://doi.org/10.1049/bme2.12082 |
Similar Items

- Learning deep forest for face anti-spoofing: An alternative to the neural network against adversarial attacks, by Rizhao Cai, et al. (2024-10-01)
- An Adversarial Attack via Penalty Method, by Jiyuan Sun, et al. (2025-01-01)
- Mape: defending against transferable adversarial attacks using multi-source adversarial perturbations elimination, by Xinlei Liu, et al. (2025-01-01)
- Adversarial Robust Modulation Recognition Guided by Attention Mechanisms, by Quanhai Zhan, et al. (2025-01-01)
- APDL: an adaptive step size method for white-box adversarial attacks, by Jiale Hu, et al. (2025-01-01)