A comprehensive survey of robust deep learning in computer vision
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | KeAi Communications Co., Ltd., 2023-11-01 |
| Series: | Journal of Automation and Intelligence |
| Subjects: | |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S294985542300045X |
| Summary: | Deep learning has made remarkable progress on a wide range of tasks. Despite this excellent performance, deep learning models are still not robust, especially to well-designed adversarial examples, which limits their deployment in security-critical applications. Improving the robustness of deep learning has therefore attracted increasing attention from researchers. This paper surveys the threats to deep learning and the techniques that can enhance model robustness in computer vision. Unlike previous survey papers, which summarize adversarial attacks and defense technologies, this paper also provides an overview of the general robustness of deep learning. In addition, it elaborates on current robustness evaluation approaches, which require further exploration. It also reviews recent literature on making deep learning models resistant to adversarial examples from an architectural perspective, a topic rarely covered in previous surveys. Finally, promising directions for future research are listed based on the reviewed literature. We hope this survey will serve as a basis for future research in this topical field. |
| ISSN: | 2949-8554 |