A Robust Method to Protect Text Classification Models against Adversarial Attacks
Text classification is one of the main tasks in natural language processing. Recently, adversarial attacks have shown a substantial negative impact on neural network-based text classification models. There are few defenses to strengthen model predictions against adversarial attacks; popular among th...
| Main Authors: | Bala Mallikarjunarao Garlapati, Ajeet Kumar Singh, Srinivasa Rao Chalamala |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | LibraryPress@UF, 2022-05-01 |
| Series: | Proceedings of the International Florida Artificial Intelligence Research Society Conference |
| Online Access: | https://journals.flvc.org/FLAIRS/article/view/130706 |
Similar Items
- Enhancing Robustness Against Adversarial Attacks in Multimodal Emotion Recognition With Spiking Transformers
  by: Guoming Chen, et al.
  Published: (2025-01-01)
- Perceptual Carlini-Wagner Attack: A Robust and Imperceptible Adversarial Attack Using LPIPS
  by: Liming Fan, et al.
  Published: (2025-01-01)
- Moving target defense against adversarial attacks
  by: Bin WANG, et al.
  Published: (2021-02-01)
- A multi-layered defense against adversarial attacks in brain tumor classification using ensemble adversarial training and feature squeezing
  by: Ahmeed Yinusa, et al.
  Published: (2025-05-01)
- Evaluating Pretrained Deep Learning Models for Image Classification Against Individual and Ensemble Adversarial Attacks
  by: Mafizur Rahman, et al.
  Published: (2025-01-01)