An Experimental Study on Dynamic Lifelong Learning With GPT for Mitigating Catastrophic Forgetting in Aspect-Based Sentiment Analysis
GPT is widely recognized as one of the most versatile and powerful large language models, excelling across diverse domains. However, its significant computational demands often render it economically unfeasible for individuals and small businesses, underscoring the need for efficient, domain-specific alternatives.
Saved in:
| Main Authors: | Huang Huang, Mumtaz Begum Mustafa, Adeleh Asemi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | Aspect-based sentiment analysis; GPT; lifelong learning; language modeling |
| Online Access: | https://ieeexplore.ieee.org/document/10909082/ |
| _version_ | 1849762371404300288 |
|---|---|
| author | Huang Huang; Mumtaz Begum Mustafa; Adeleh Asemi |
| author_facet | Huang Huang; Mumtaz Begum Mustafa; Adeleh Asemi |
| author_sort | Huang Huang |
| collection | DOAJ |
| description | GPT is widely recognized as one of the most versatile and powerful large language models, excelling across diverse domains. However, its significant computational demands often render it economically unfeasible for individuals and small businesses, underscoring the need for efficient, domain-specific alternatives. In Aspect-Based Sentiment Analysis (ABSA), existing models are typically optimized for single domains, facing challenges in performing effectively across multiple domains. A key issue, known as “catastrophic forgetting,” arises when models trained on one domain lose previously learned knowledge upon exposure to new domain data. This leads to two significant problems: limited cross-domain generalization and difficulty in retaining prior knowledge while learning domain-specific information. To address these challenges, we introduce the Dynamic Lifelong Learning Aspect-Based Sentiment Analysis GPT (DllaGPT), a model designed to handle multiple domains while mitigating catastrophic forgetting. Leveraging datasets from four ABSA domains—Laptops, Restaurants, Tweets, and Finance—this study fine-tunes a pretrained GPT model from HuggingFace sequentially across domains. DllaGPT employs a mechanism to retain real data from earlier domains during new domain training, effectively preserving prior knowledge. Experimental results highlight that DllaGPT achieves an average accuracy of 0.85 and a Backward Transfer (BWT) score of -0.09 across the four domains, showcasing its high accuracy and robust lifelong learning capabilities. |
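The two lifelong-learning metrics quoted in the abstract can be made concrete. A minimal sketch, assuming the standard definitions of average accuracy and Backward Transfer (BWT) over an accuracy matrix `R`, where `R[i][j]` is the test accuracy on domain `j` after sequentially training through domain `i`; the numeric values below are illustrative placeholders, not results from the paper:

```python
# Hypothetical accuracy matrix for 4 domains trained in order
# (Laptops, Restaurants, Tweets, Finance). R[i][j] = accuracy on
# domain j after finishing training stage i; None = not yet seen.
R = [
    [0.90, None, None, None],
    [0.84, 0.88, None, None],
    [0.80, 0.85, 0.87, None],
    [0.79, 0.82, 0.84, 0.89],
]

T = len(R)  # number of domains / training stages

# Average accuracy: mean test accuracy over all domains after the
# final training stage.
avg_acc = sum(R[T - 1][j] for j in range(T)) / T

# Backward Transfer: average change in accuracy on earlier domains
# between right after they were learned and the end of the sequence.
# Negative BWT indicates catastrophic forgetting.
bwt = sum(R[T - 1][j] - R[j][j] for j in range(T - 1)) / (T - 1)

print(f"average accuracy = {avg_acc:.3f}")  # 0.835 for these values
print(f"BWT = {bwt:.3f}")                   # -0.067 for these values
```

With these placeholder accuracies the sketch yields an average accuracy of 0.835 and a BWT of about -0.067; the paper's reported 0.85 and -0.09 would arise from the actual per-domain accuracies measured for DllaGPT.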
| format | Article |
| id | doaj-art-ff0e08c131c14651bb89fbeef8ca330c |
| institution | DOAJ |
| issn | 2169-3536 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| spelling | doaj-art-ff0e08c131c14651bb89fbeef8ca330c; indexed 2025-08-20T03:05:45Z; eng; IEEE; IEEE Access; ISSN 2169-3536; published 2025-01-01; vol. 13, pp. 90316-90332; DOI 10.1109/ACCESS.2025.3547359; IEEE document 10909082; "An Experimental Study on Dynamic Lifelong Learning With GPT for Mitigating Catastrophic Forgetting in Aspect-Based Sentiment Analysis"; Huang Huang (ORCID 0009-0006-8282-9446), Mumtaz Begum Mustafa, Adeleh Asemi (ORCID 0000-0002-9193-2430), all with the Department of Software Engineering, Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia; abstract as in the description field; https://ieeexplore.ieee.org/document/10909082/; Aspect-based sentiment analysis; GPT; lifelong learning; language modeling |
| spellingShingle | Huang Huang; Mumtaz Begum Mustafa; Adeleh Asemi; An Experimental Study on Dynamic Lifelong Learning With GPT for Mitigating Catastrophic Forgetting in Aspect-Based Sentiment Analysis; IEEE Access; Aspect-based sentiment analysis; GPT; lifelong learning; language modeling |
| title | An Experimental Study on Dynamic Lifelong Learning With GPT for Mitigating Catastrophic Forgetting in Aspect-Based Sentiment Analysis |
| title_sort | experimental study on dynamic lifelong learning with gpt for mitigating catastrophic forgetting in aspect based sentiment analysis |
| topic | Aspect-based sentiment analysis; GPT; lifelong learning; language modeling |
| url | https://ieeexplore.ieee.org/document/10909082/ |