A decade of adversarial examples: a survey on the nature and understanding of neural network non-robustness
Adversarial examples, in the context of computer vision, are inputs deliberately crafted to deceive or mislead artificial neural networks. These examples exploit vulnerabilities in neural networks through minimal alterations to the original input that are imperceptible to humans but can signif...
| Main Authors: | A.V. Trusov, E.E. Limonova, V.V. Arlazarov |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Samara National Research University, 2025-04-01 |
| Series: | Компьютерная оптика |
| Subjects: | |
| Online Access: | https://computeroptics.ru/KO/Annot/KO49-2/490209.html |
Similar Items
- A Gradual Adversarial Training Method for Semantic Segmentation
  by: Yinkai Zan, et al.
  Published: (2024-11-01)
- MeetSafe: enhancing robustness against white-box adversarial examples
  by: Ruben Stenhuis, et al.
  Published: (2025-08-01)
- Rectifying Adversarial Examples Using Their Vulnerabilities
  by: Fumiya Morimoto, et al.
  Published: (2025-01-01)
- An Adversarial Attack via Penalty Method
  by: Jiyuan Sun, et al.
  Published: (2025-01-01)
- ActiveGuard: An active intellectual property protection technique for deep neural networks by leveraging adversarial examples as users' fingerprints
  by: Mingfu Xue, et al.
  Published: (2023-07-01)