Leveraging Large Language Models and Agent-Based Systems for Scientific Data Analysis: Validation Study


Bibliographic Details
Main Authors: Dale Peasley, Rayus Kuplicki, Sandip Sen, Martin Paulus
Format: Article
Language:English
Published: JMIR Publications 2025-02-01
Series:JMIR Mental Health
Online Access:https://mental.jmir.org/2025/1/e68135
Description
Summary:
Background: Large language models have shown promise in transforming how complex scientific data are analyzed and communicated, yet their application to scientific domains remains challenged by issues of factual accuracy and domain-specific precision. The Laureate Institute for Brain Research–Tulsa University (LIBR-TU) Research Agent (LITURAt) leverages a sophisticated agent-based architecture to mitigate these limitations, using external data retrieval and analysis tools to ensure reliable, context-aware outputs that make scientific information accessible to both experts and nonexperts.
Objective: The objective of this study was to develop and evaluate LITURAt to enable efficient analysis and contextualization of complex scientific datasets for diverse user expertise levels.
Methods: An agent-based system based on large language models was designed to analyze and contextualize complex scientific datasets using a “plan-and-solve” framework. The system dynamically retrieves local data and relevant PubMed literature, performs statistical analyses, and generates comprehensive, context-aware summaries to answer user queries with high accuracy and consistency.
Results: Our experiments demonstrated that LITURAt achieved an internal consistency rate of 94.8% and an external consistency rate of 91.9% across repeated and rephrased queries. Additionally, GPT-4 evaluations rated 80.3% (171/213) of the system’s answers as accurate and comprehensive, with 23.5% (50/213) receiving the highest rating of 5 for completeness and precision.
Conclusions: These findings highlight the potential of LITURAt to significantly enhance the accessibility and accuracy of scientific data analysis, achieving high consistency and strong performance in complex query resolution. Despite existing limitations, such as model stability for highly variable queries, LITURAt demonstrates promise as a robust tool for democratizing data-driven insights across diverse scientific domains.
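The "plan-and-solve" loop and the consistency metric described in the abstract can be illustrated with a minimal sketch. Note this is an illustrative assumption, not the authors' actual implementation: all function and tool names (`plan`, `solve`, `retrieve_local`, `search_pubmed`, `run_statistics`) are hypothetical stand-ins, the planner is a fixed stub where a real system would call an LLM, and the consistency score shown is one plausible way (modal-answer agreement) to measure consistency across repeated queries.

```python
# Hypothetical sketch of a "plan-and-solve" agent loop: a planner decomposes
# the user query into tool steps (local data retrieval, PubMed search,
# statistics), each step is executed, and the collected context would then be
# summarized by an LLM. All names here are illustrative, not from the paper.
from collections import Counter
from dataclasses import dataclass
from typing import Callable


@dataclass
class Step:
    tool: str   # which tool to invoke
    query: str  # tool-specific input


def plan(user_query: str) -> list[Step]:
    """Stand-in planner: a real system would ask an LLM to decompose the
    query; here we return a fixed three-step plan for illustration."""
    return [
        Step("retrieve_local", user_query),
        Step("search_pubmed", user_query),
        Step("run_statistics", user_query),
    ]


def solve(steps: list[Step], tools: dict[str, Callable[[str], str]]) -> str:
    """Execute each planned step and join the tool outputs into a context
    string that a summarizer model would turn into the final answer."""
    return " | ".join(tools[s.tool](s.query) for s in steps)


def consistency_rate(answers: list[str]) -> float:
    """Fraction of answers agreeing with the modal answer -- one simple way
    to score consistency across repeated or rephrased queries."""
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / len(answers)


# Toy tools standing in for real retrieval and analysis backends.
tools = {
    "retrieve_local": lambda q: f"local rows matching '{q}'",
    "search_pubmed": lambda q: f"PubMed abstracts for '{q}'",
    "run_statistics": lambda q: f"t-test summary for '{q}'",
}

answer_context = solve(plan("anxiety vs sleep quality"), tools)
```

Separating planning from execution, as above, is what lets such a system ground its summaries in retrieved data rather than relying on the model's parametric knowledge alone.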
ISSN:2368-7959