Multiple Adversarial Domains Adaptation Approach for Mitigating Adversarial Attacks Effects

Although neural networks are near achieving performance similar to humans in many tasks, they are susceptible to adversarial attacks in the form of a small, intentionally designed perturbation, which could lead to misclassifications. The best defense against these attacks, so far, is adversarial training (AT), which improves a model’s robustness by augmenting the training data with adversarial examples. However, AT usually decreases the model’s accuracy on clean samples and could overfit to a specific attack, inhibiting its ability to generalize to new attacks. In this paper, we investigate the usage of domain adaptation to enhance AT’s performance. We propose a novel multiple adversarial domain adaptation (MADA) method, which looks at this problem as a domain adaptation task to discover robust features. Specifically, we use adversarial learning to learn features that are domain-invariant between multiple adversarial domains and the clean domain. We evaluated MADA on MNIST and CIFAR-10 datasets with multiple adversarial attacks during training and testing. The results of our experiments show that MADA is superior to AT on adversarial samples by about 4% on average and on clean samples by about 1% on average.

Bibliographic Details
Main Authors: Bader Rasheed, Adil Khan, Muhammad Ahmad, Manuel Mazzara, S. M. Ahsan Kazmi
Format: Article
Language: English
Published: Wiley 2022-01-01
Series: International Transactions on Electrical Energy Systems
Online Access: http://dx.doi.org/10.1155/2022/2890761
_version_ 1832549720104370176
author Bader Rasheed
Adil Khan
Muhammad Ahmad
Manuel Mazzara
S. M. Ahsan Kazmi
author_facet Bader Rasheed
Adil Khan
Muhammad Ahmad
Manuel Mazzara
S. M. Ahsan Kazmi
author_sort Bader Rasheed
collection DOAJ
description Although neural networks are near achieving performance similar to humans in many tasks, they are susceptible to adversarial attacks in the form of a small, intentionally designed perturbation, which could lead to misclassifications. The best defense against these attacks, so far, is adversarial training (AT), which improves a model’s robustness by augmenting the training data with adversarial examples. However, AT usually decreases the model’s accuracy on clean samples and could overfit to a specific attack, inhibiting its ability to generalize to new attacks. In this paper, we investigate the usage of domain adaptation to enhance AT’s performance. We propose a novel multiple adversarial domain adaptation (MADA) method, which looks at this problem as a domain adaptation task to discover robust features. Specifically, we use adversarial learning to learn features that are domain-invariant between multiple adversarial domains and the clean domain. We evaluated MADA on MNIST and CIFAR-10 datasets with multiple adversarial attacks during training and testing. The results of our experiments show that MADA is superior to AT on adversarial samples by about 4% on average and on clean samples by about 1% on average.
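The abstract builds on adversarial training (AT): augmenting training data with small, loss-increasing perturbations. As a minimal sketch of that ingredient (not the authors' implementation — the paper's attacks and architecture are not specified in this record), the classic one-step FGSM perturbation can be shown on a toy logistic-regression "model"; the function name `fgsm_perturb` and the toy weights are hypothetical:

```python
# Minimal FGSM sketch: perturb the input in the direction that
# increases the binary cross-entropy loss, bounded by eps (L_inf).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM on a logistic-regression model."""
    p = sigmoid(x @ w + b)            # predicted probability of class 1
    grad_x = (p - y) * w              # d(loss)/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)  # each coordinate moves by exactly eps

# Toy example: a clean point the model classifies correctly (y = 1).
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.5]); y = 1.0
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
```

AT would retrain on such `x_adv` with the original labels; MADA, per the abstract, additionally treats each attack's outputs as a separate domain and learns features invariant across those domains and the clean one.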
format Article
id doaj-art-6b53e280f12743058d37e3a2db5d1937
institution Kabale University
issn 2050-7038
language English
publishDate 2022-01-01
publisher Wiley
record_format Article
series International Transactions on Electrical Energy Systems
spelling doaj-art-6b53e280f12743058d37e3a2db5d1937 2025-02-03T06:08:43Z
Published in: International Transactions on Electrical Energy Systems (Wiley, eng), ISSN 2050-7038, 2022-01-01, doi:10.1155/2022/2890761
Title: Multiple Adversarial Domains Adaptation Approach for Mitigating Adversarial Attacks Effects
Authors and affiliations: Bader Rasheed (Institute of Data Science and Artificial Intelligence); Adil Khan (Institute of Data Science and Artificial Intelligence); Muhammad Ahmad (Department of Computer Science); Manuel Mazzara (Institute of Software Development and Engineering); S. M. Ahsan Kazmi (Faculty of Computer Science and Creative Technologies)
Online access: http://dx.doi.org/10.1155/2022/2890761
spellingShingle Bader Rasheed
Adil Khan
Muhammad Ahmad
Manuel Mazzara
S. M. Ahsan Kazmi
Multiple Adversarial Domains Adaptation Approach for Mitigating Adversarial Attacks Effects
International Transactions on Electrical Energy Systems
title Multiple Adversarial Domains Adaptation Approach for Mitigating Adversarial Attacks Effects
title_full Multiple Adversarial Domains Adaptation Approach for Mitigating Adversarial Attacks Effects
title_fullStr Multiple Adversarial Domains Adaptation Approach for Mitigating Adversarial Attacks Effects
title_full_unstemmed Multiple Adversarial Domains Adaptation Approach for Mitigating Adversarial Attacks Effects
title_short Multiple Adversarial Domains Adaptation Approach for Mitigating Adversarial Attacks Effects
title_sort multiple adversarial domains adaptation approach for mitigating adversarial attacks effects
url http://dx.doi.org/10.1155/2022/2890761
work_keys_str_mv AT baderrasheed multipleadversarialdomainsadaptationapproachformitigatingadversarialattackseffects
AT adilkhan multipleadversarialdomainsadaptationapproachformitigatingadversarialattackseffects
AT muhammadahmad multipleadversarialdomainsadaptationapproachformitigatingadversarialattackseffects
AT manuelmazzara multipleadversarialdomainsadaptationapproachformitigatingadversarialattackseffects
AT smahsankazmi multipleadversarialdomainsadaptationapproachformitigatingadversarialattackseffects