Comparing large language models for supervised analysis of students’ lab notes
Recent advancements in large language models (LLMs) hold significant promise for improving physics education research that uses machine learning. In this study, we compare the application of various models for conducting a large-scale analysis of written text grounded in a physics education research...
| Main Authors: | Rebeckah K. Fussell, Megan Flynn, Anil Damle, Michael F. J. Fox, N. G. Holmes |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | American Physical Society, 2025-03-01 |
| Series: | Physical Review Physics Education Research |
| Online Access: | http://doi.org/10.1103/PhysRevPhysEducRes.21.010128 |
Similar Items
- Improving Large Language Models’ Summarization Accuracy by Adding Highlights to Discharge Notes: Comparative Evaluation
  by: Mahshad Koohi Habibi Dehkordi, et al.
  Published: (2025-07-01)
- A CURE lab in an introductory biology course has minimal impact on student outcomes, self-confidence, and preferences compared to a traditional lab
  by: Andrew F. Mashintonio, et al.
  Published: (2025-04-01)
- Technical Note: Rapid Species Barcoding Using Bento Lab Mobile Laboratory
  by: Karolina Mahlerová, et al.
  Published: (2024-10-01)
- A Temporal Knowledge Graph Generation Dataset Supervised Distantly by Large Language Models
  by: Jun Zhu, et al.
  Published: (2025-05-01)
- Supervised Natural Language Processing Classification of Violent Death Narratives: Development and Assessment of a Compact Large Language Model
  by: Susan T Parker
  Published: (2025-06-01)