T-LLaMA: a Tibetan large language model based on LLaMA2
Abstract: The advent of ChatGPT and GPT-4 has generated substantial interest in large language model (LLM) research, showcasing remarkable performance in various applications such as conversation systems, machine translation, and research paper summarization. However, their efficacy diminishes when a...
Main Authors: Hui Lv, Chi Pu, La Duo, Yan Li, Qingguo Zhou, Jun Shen
Format: Article
Language: English
Published: Springer, 2024-12-01
Series: Complex & Intelligent Systems
Online Access: https://doi.org/10.1007/s40747-024-01641-7
Similar Items
- Advancing Computational Humor: LLaMa-3 Based Generation with DistilBert Evaluation Framework
  by: He Jinliang, et al.
  Published: (2025-01-01)
- Sentiment Analysis of Product Reviews Using Fine-Tuned LLaMa-3 Model: Evaluation with Comprehensive Benchmark Metrics
  by: Wang Yili
  Published: (2025-01-01)
- BanglaBlend: A large-scale novel dataset of Bangla sentences categorized by saint and common form of Bangla language
  by: Umme Ayman, et al.
  Published: (2025-02-01)
- Human-interpretable clustering of short text using large language models
  by: Justin K. Miller, et al.
  Published: (2025-01-01)
- Causality Extraction from Medical Text Using Large Language Models (LLMs)
  by: Seethalakshmi Gopalakrishnan, et al.
  Published: (2024-12-01)