Dual-Mode Method for Generating Adversarial Examples to Attack Deep Neural Networks
Deep neural networks achieve strong performance in text, image, and speech classification. However, these networks are vulnerable to adversarial examples. An adversarial example is a sample generated by inserting a small amount of noise into an original sample, with minimal distortion, such that it...
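The abstract excerpt stops mid-sentence, but the core idea, perturbing an input with small, near-imperceptible noise so a classifier mislabels it, can be illustrated. The sketch below is a generic FGSM-style attack in PyTorch, not the paper's dual-mode method (which this record does not detail); the toy model, `epsilon` value, and input shapes are illustrative assumptions.

```python
# Minimal FGSM-style sketch (PyTorch): a generic illustration of adversarial
# example generation, NOT the dual-mode method proposed in the article.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Perturb x by epsilon in the direction of the loss-gradient sign."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # A small, sign-based step keeps the distortion minimal while
    # pushing the sample toward misclassification.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Hypothetical toy classifier and data, for illustration only.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)   # stand-in "original sample"
    y = torch.tensor([3])          # stand-in true label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())  # distortion is bounded by epsilon
```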
| Main Authors: | Hyun Kwon, Sunghwan Kim |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10046665/ |
Similar Items
- Tailoring adversarial attacks on deep neural networks for targeted class manipulation using DeepFool algorithm
  by: S. M. Fazle Rabby Labib, et al.
  Published: (2025-03-01)
- Increasing Neural-Based Pedestrian Detectors’ Robustness to Adversarial Patch Attacks Using Anomaly Localization
  by: Olga Ilina, et al.
  Published: (2025-01-01)
- Label flipping adversarial attack on graph neural network
  by: Yiteng WU, et al.
  Published: (2021-09-01)
- DDoS Attacks Detection With Deep Learning Approach Using Convolutional Neural Network
  by: Rafiq Amalul Widodo, et al.
  Published: (2024-08-01)
- Clock Glitch Fault Attacks on Deep Neural Networks and Their Countermeasures
  by: Sangwon Lee, et al.
  Published: (2025-04-01)