Perceptual Carlini-Wagner Attack: A Robust and Imperceptible Adversarial Attack Using LPIPS
Adversarial attacks on deep neural networks (DNNs) present significant challenges by exploiting model vulnerabilities using perturbations that are often imperceptible to human observers. Traditional approaches typically constrain perturbations using p-norms, which do not effectively capture human pe...
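The abstract's premise — that ℓp-norms poorly capture human perception — can be illustrated with a toy NumPy sketch. LPIPS itself is a learned deep-feature distance and is not reproduced here; the `contrast_energy` function below is a crude illustrative stand-in (a high-pass energy measure), used only to show that two perturbations with identical ℓ2 norm can differ sharply in visual salience.

```python
import numpy as np

# Two perturbations of a 32x32 image with the SAME L2 norm:
# one concentrated in a small patch, one spread uniformly.
concentrated = np.zeros((32, 32))
concentrated[:4, :4] = 0.5                 # visible bright patch
spread = np.full((32, 32), 0.5 * 4 / 32)   # chosen so the L2 norms match

l2_conc = np.linalg.norm(concentrated)
l2_spread = np.linalg.norm(spread)

def contrast_energy(delta):
    """Toy perceptual proxy (NOT LPIPS): total gradient magnitude.

    Sharp local edges, which the eye notices, score high; a flat
    uniform shift scores zero.
    """
    gx = np.abs(np.diff(delta, axis=0)).sum()
    gy = np.abs(np.diff(delta, axis=1)).sum()
    return gx + gy

# Equal under the L2 norm, yet very different to a human observer:
print(l2_conc, l2_spread)                        # both 2.0
print(contrast_energy(concentrated))             # > 0 (visible patch edges)
print(contrast_energy(spread))                   # 0.0 (uniform shift)
```

This is the gap a perceptual constraint such as LPIPS targets: an ℓp budget treats both perturbations as equally "small," while a perceptual distance penalizes the visibly structured one.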
| Main Authors: | Liming Fan, Anis Salwa Mohd Khairuddin, Haichuan Liu, Khairunnisa Binti Hasikin |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/11078278/ |
Similar Items
- VariGAN: Enhancing Image Style Transfer via UNet Generator, Depthwise Discriminator, and LPIPS Loss in Adversarial Learning Framework
  by: Dawei Guan, et al.
  Published: (2025-04-01)
- Investigating imperceptibility of adversarial attacks on tabular data: An empirical analysis
  by: Zhipeng He, et al.
  Published: (2025-03-01)
- Generative Adversarial Network-Based Distortion Reduction Adapted to Peak Signal-to-Noise Ratio Parameters in VVC
  by: Weihao Deng, et al.
  Published: (2024-12-01)
- Localizing Adversarial Attacks To Produces More Imperceptible Noise
  by: Pavan Reddy, et al.
  Published: (2025-05-01)
- Point Cloud Adversarial Perturbation Generation for Adversarial Attacks
  by: Fengmei He, et al.
  Published: (2023-01-01)