Review of Enhancement Research for Closed-Source Large Language Models
With the rapid development of large language models in natural language processing, enhancing the performance of closed-source large language models, represented by the GPT family, has become a challenge. Because the parameter weights inside these models are inaccessible, traditional trainin...
| Main Author: | LIU Hualing, ZHANG Zilong, PENG Hongshuai |
|---|---|
| Format: | Article |
| Language: | zho |
| Published: | Journal of Computer Engineering and Applications Beijing Co., Ltd., Science Press, 2025-05-01 |
| Series: | Jisuanji kexue yu tansuo |
| Subjects: | |
| Online Access: | http://fcst.ceaj.org/fileup/1673-9418/PDF/2407021.pdf |
Similar Items
- Empowering medical systematic reviews with large language models: methods, development directions, and applications
  by: Yannan HUANG, et al.
  Published: (2025-06-01)
- Streamlining systematic reviews with large language models using prompt engineering and retrieval augmented generation
  by: Fouad Trad, et al.
  Published: (2025-05-01)
- Entropy-Optimized Dynamic Text Segmentation and RAG-Enhanced LLMs for Construction Engineering Knowledge Base
  by: Haiyuan Wang, et al.
  Published: (2025-03-01)
- Fine-Tuning Retrieval-Augmented Generation with an Auto-Regressive Language Model for Sentiment Analysis in Financial Reviews
  by: Miehleketo Mathebula, et al.
  Published: (2024-11-01)
- Enhancing the Precision and Interpretability of Retrieval-Augmented Generation (RAG) in Legal Technology: A Survey
  by: Mahd Hindi, et al.
  Published: (2025-01-01)