Human-interpretable clustering of short text using large language models
Clustering short text is a difficult problem, owing to the low word co-occurrence between short text documents. This work shows that large language models (LLMs) can overcome the limitations of traditional clustering approaches by generating embeddings that capture the semantic nuances of short text...
Main Authors: Justin K. Miller, Tristram J. Alexander
Format: Article
Language: English
Published: The Royal Society, 2025-01-01
Series: Royal Society Open Science
Online Access: https://royalsocietypublishing.org/doi/10.1098/rsos.241692
Similar Items
- LLM-Guided Crowdsourced Test Report Clustering
  by: Ying Li, et al.
  Published: (2025-01-01)
- Cluster validity indices for automatic clustering: A comprehensive review
  by: Abiodun M. Ikotun, et al.
  Published: (2025-01-01)
- Causality Extraction from Medical Text Using Large Language Models (LLMs)
  by: Seethalakshmi Gopalakrishnan, et al.
  Published: (2024-12-01)
- Artificial Intelligence vs. Human: Decoding Text Authenticity with Transformers
  by: Daniela Gifu, et al.
  Published: (2025-01-01)
- T-LLaMA: a Tibetan large language model based on LLaMA2
  by: Hui Lv, et al.
  Published: (2024-12-01)