A Robust Method to Protect Text Classification Models against Adversarial Attacks

Bibliographic Details
Main Authors: Bala Mallikarjunarao Garlapati, Ajeet Kumar Singh, Srinivasa Rao Chalamala
Format: Article
Language: English
Published: LibraryPress@UF 2022-05-01
Series: Proceedings of the International Florida Artificial Intelligence Research Society Conference
Online Access: https://journals.flvc.org/FLAIRS/article/view/130706
Description
Summary: Text classification is one of the main tasks in natural language processing. Recently, adversarial attacks have shown a substantial negative impact on neural network-based text classification models. Few defenses exist to strengthen model predictions against adversarial attacks; popular among them are adversarial training and spelling correction. While adversarial training augments the training data with synonyms, spelling correction methods defend against character-level variations within words. The diversity and sparseness of the adversarial perturbations produced by different attack methods challenge both approaches. This paper proposes an approach that corrects adversarial samples for text classification tasks by combining grammar correction and spelling correction: it uses Gramformer for grammar correction and TextBlob for spelling correction. The approach is generic and can be applied to any text classification model without retraining. We evaluated it against two state-of-the-art attacks, DeepWordBug and TextBugger, on three open-source datasets: IMDB, CoLA, and AGNews. The experimental results show that our approach effectively counters adversarial attacks on text classification models while maintaining classification performance on original clean data.
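
The summary describes the defense only at a pipeline level. Below is a minimal sketch of that sanitize-then-classify idea, assuming spelling correction runs before grammar correction (the ordering is not stated in the abstract) and using the public APIs of the TextBlob and Gramformer packages; the `predict` callable standing in for the downstream classifier is hypothetical.

    # Minimal sketch (not the authors' implementation): correct a possibly
    # adversarial input, then hand it to an unmodified classifier.
    from textblob import TextBlob
    from gramformer import Gramformer

    gf = Gramformer(models=1, use_gpu=False)  # models=1 selects the corrector model

    def correct_adversarial_text(text: str) -> str:
        # Word-level spelling correction counters character perturbations
        # such as those produced by DeepWordBug and TextBugger.
        spelled = str(TextBlob(text).correct())
        # Grammar correction then repairs word-level perturbations; running
        # it second is an assumption of this sketch, not stated in the paper.
        candidates = gf.correct(spelled, max_candidates=1)
        return next(iter(candidates), spelled)

    def classify(text: str, predict):
        # `predict` is any pretrained text classifier (hypothetical stand-in);
        # no retraining is needed because inputs are corrected beforehand.
        return predict(correct_adversarial_text(text))

Because the correction runs purely at inference time, it can wrap any existing model, which matches the abstract's claim that the method needs no retraining.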
ISSN: 2334-0754, 2334-0762