Defending against and generating adversarial examples together with generative adversarial networks
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-04-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-024-83444-x |
| Summary: | Abstract Although deep neural networks have achieved great success in many tasks, they face security threats and are often fooled by adversarial examples, which are created by making slight modifications to pixel values. To address these problems, a novel DG-GAN framework is proposed, integrating a generator, an encoder, and a discriminator, to defend against and generate adversarial examples with generative adversarial networks. Under the DG-GAN framework, we establish the relationship between defending against and generating adversarial examples through a bidirectional mapping between images and adversarial examples: the generator can be used to defend against adversarial examples, while the encoder can generate adversarial examples without gradient information. Moreover, the proposed DG-GAN can be used with any classification model and modifies neither the classifier structure nor its training procedure. We design a series of experiments to validate the DG-GAN framework. According to the results, as a defense method, DG-GAN effectively defends against different attacks and improves on existing defense strategies. On the other hand, DG-GAN also serves as a black-box attack, achieving attack performance comparable to existing attack methods. |
|---|---|
| ISSN: | 2045-2322 |
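The abstract describes a bidirectional mapping: an encoder E maps clean images to adversarial examples (a gradient-free attack), and a generator G maps adversarial examples back toward clean images (a defense). The toy sketch below illustrates only that mapping structure; the `encoder`, `generator`, and `EPS` here are hypothetical stand-ins (a random bounded perturbation and a smoothing filter), not the learned networks or settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
EPS = 8 / 255  # assumed L-infinity perturbation budget, common in the literature

def encoder(x):
    """E: clean image -> adversarial example (no gradients needed).
    A random signed perturbation stands in for the learned encoder."""
    noise = rng.uniform(-1.0, 1.0, size=x.shape)
    return np.clip(x + EPS * np.sign(noise), 0.0, 1.0)

def generator(x_adv):
    """G: adversarial example -> purified image (the defense direction).
    A simple moving-average filter stands in for the learned generator."""
    kernel = np.ones(3) / 3
    return np.convolve(x_adv, kernel, mode="same")

x = rng.uniform(0.2, 0.8, size=32)   # a toy 1-D "image" with pixel values in [0, 1]
x_adv = encoder(x)                   # attack: bounded perturbation, gradient-free
x_purified = generator(x_adv)        # defense: map adversarial input back toward x

# The perturbation respects the budget, mirroring the slight pixel
# modifications the abstract describes.
assert np.max(np.abs(x_adv - x)) <= EPS + 1e-9
```

Because neither direction touches the downstream classifier, this mirrors the abstract's claim that DG-GAN works with any classification model without modifying its structure or training.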