LoRA-Adv: Boosting Text Classification in Large Language Models Through Adversarial Low-Rank Adaptations

Bibliographic Details
Main Authors: Hong Ye, Xialin Xie, Fenlong Xie, Jun Zuo, Chunyan Bu
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/11036123/
Description
Summary: Low-rank adaptation (LoRA), a paradigm that bridges the gap between large language models and fine-tuning, has demonstrated effectiveness across various natural language processing tasks. The LoRA algorithm updates only a small number of model parameters, significantly reducing the consumption of computational resources. Although LoRA achieves competitive performance, it is sensitive to the composition of its training samples. To enhance the stability of the LoRA algorithm and improve the performance of classification models, this paper introduces a novel adversarially enhanced LoRA algorithm, named LoRA-Adv. Specifically, the LoRA-Adv algorithm leverages carefully designed adversarial perturbations to expand the diversity of training samples; minimizing the training loss over both normal and adversarial samples significantly reinforces the model's robustness. To verify the performance of the LoRA-Adv algorithm, multiple state-of-the-art large language models were employed, and the experimental results confirm that LoRA-Adv significantly enhances the models' classification performance. The LoRA-Adv algorithm therefore represents a significant advance in fine-tuning large language models, improving both their stability and classification accuracy.
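
The abstract describes minimizing a combined loss over clean and adversarially perturbed samples during LoRA fine-tuning, but does not reproduce the exact perturbation design. The sketch below is a minimal PyTorch/PEFT illustration of that idea, assuming an FGSM-style perturbation on the input embeddings; the model choice (roberta-base), LoRA rank, epsilon, and loss weighting are illustrative assumptions, not the authors' settings.

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative setup; none of these hyperparameters come from the paper.
model_name = "roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
base = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
lora_cfg = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16,
                      target_modules=["query", "value"])
model = get_peft_model(base, lora_cfg)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)

def lora_adv_step(input_ids, attention_mask, labels, epsilon=1e-2, adv_weight=1.0):
    """One adversarially enhanced update: clean loss plus loss on perturbed
    input embeddings (FGSM-style direction, an assumed perturbation design)."""
    model.train()
    # Embed the tokens and detach so gradients are taken w.r.t. the embeddings.
    embeds = model.get_input_embeddings()(input_ids).detach()
    embeds.requires_grad_(True)

    # Clean forward pass; retain the graph so both losses share one backward.
    clean_loss = model(inputs_embeds=embeds, attention_mask=attention_mask,
                       labels=labels).loss

    # Gradient of the clean loss w.r.t. the embeddings gives the attack direction.
    (grad,) = torch.autograd.grad(clean_loss, embeds, retain_graph=True)
    delta = epsilon * grad.sign()

    # Adversarial forward pass on the perturbed embeddings.
    adv_loss = model(inputs_embeds=embeds + delta, attention_mask=attention_mask,
                     labels=labels).loss

    # Minimize the combined loss over normal and adversarial samples.
    loss = clean_loss + adv_weight * adv_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return clean_loss.item(), adv_loss.item()

# Hypothetical usage with a toy batch:
batch = tokenizer(["a gripping film", "a dull film"], return_tensors="pt", padding=True)
labels = torch.tensor([1, 0])
clean, adv = lora_adv_step(batch["input_ids"], batch["attention_mask"], labels)

Calling lora_adv_step once per batch minimizes the combined objective the abstract describes; retain_graph=True lets the clean and adversarial losses share a single backward pass through the LoRA parameters.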
ISSN:2169-3536