Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop
Background: Digital mental health interventions, such as artificial intelligence (AI) conversational agents, hold promise for improving access to care by innovating therapy and supporting delivery. However, little research exists on patient perspectives regarding AI conversational agents, which is crucial for their successful implementation. This study explored patients’ perceptions and acceptability of AI conversational agents in mental healthcare.
Main Authors: Hyein S. Lee, Colton Wright, Julia Ferranto, Jessica Buttimer, Clare E. Palmer, Andrew Welchman, Kathleen M. Mazor, Kimberly A. Fisher, David Smelson, Laurel O’Connor, Nisha Fahey, Apurv Soni
Format: Article
Language: English
Published: Frontiers Media S.A., 2025-01-01
Series: Frontiers in Psychiatry
Subjects: artificial intelligence; chatbots; conversational agents; patient perspectives; qualitative; mental health
Online Access: https://www.frontiersin.org/articles/10.3389/fpsyt.2024.1505024/full
author | Hyein S. Lee, Colton Wright, Julia Ferranto, Jessica Buttimer, Clare E. Palmer, Andrew Welchman, Kathleen M. Mazor, Kimberly A. Fisher, David Smelson, Laurel O’Connor, Nisha Fahey, Apurv Soni |
author_sort | Hyein S. Lee |
collection | DOAJ |
description | Background: Digital mental health interventions, such as artificial intelligence (AI) conversational agents, hold promise for improving access to care by innovating therapy and supporting delivery. However, little research exists on patient perspectives regarding AI conversational agents, which is crucial for their successful implementation. This study aimed to fill that gap by exploring patients’ perceptions and acceptability of AI conversational agents in mental healthcare. Methods: Adults with self-reported mild to moderate anxiety were recruited from the UMass Memorial Health system. Participants engaged in semi-structured interviews to discuss their experiences, perceptions, and acceptability of AI conversational agents in mental healthcare. Anxiety levels were assessed using the Generalized Anxiety Disorder scale. Data were collected from December 2022 to February 2023, and three researchers conducted rapid qualitative analysis to identify and synthesize themes. Results: The sample included 29 adults (ages 19-66), predominantly under age 35, non-Hispanic, White, and female. Participants reported a range of positive and negative experiences with AI conversational agents. Most held positive attitudes towards AI conversational agents, appreciating their utility and potential to increase access to care, yet some also expressed cautious optimism. About half endorsed negative opinions, citing AI’s lack of empathy, technical limitations in addressing complex mental health situations, and data privacy concerns. Most participants desired some human involvement in AI-driven therapy and expressed concern about the risk of AI conversational agents being seen as replacements for therapy. A subgroup preferred AI conversational agents for administrative tasks rather than care provision. Conclusions: AI conversational agents were perceived as useful and beneficial for increasing access to care, but concerns about AI’s empathy, capabilities, safety, and human involvement in mental healthcare were prevalent. Future implementation and integration of AI conversational agents should consider patient perspectives to enhance their acceptability and effectiveness. |
format | Article |
id | doaj-art-f059dd0a3886468f829c40154d7d45b5 |
institution | Kabale University |
issn | 1664-0640 |
language | English |
publishDate | 2025-01-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Psychiatry |
doi | 10.3389/fpsyt.2024.1505024 |
affiliations | Hyein S. Lee: Program in Digital Medicine, Department of Medicine, and Department of Population and Quantitative Health Sciences, University of Massachusetts Chan Medical School, Worcester, MA, United States. Colton Wright and Julia Ferranto: Program in Digital Medicine, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States. Jessica Buttimer, Clare E. Palmer, and Andrew Welchman: Ieso Digital Health, Cambridge, United Kingdom. Kathleen M. Mazor, Kimberly A. Fisher, and David Smelson: Division of Health System Science, Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States. Laurel O’Connor: Program in Digital Medicine and Department of Emergency Medicine, University of Massachusetts Chan Medical School, Worcester, MA, United States. Nisha Fahey: Program in Digital Medicine and Department of Pediatrics, University of Massachusetts Chan Medical School, Worcester, MA, United States. Apurv Soni: Program in Digital Medicine, Department of Population and Quantitative Health Sciences, and Division of Health System Science, University of Massachusetts Chan Medical School, Worcester, MA, United States. |
title | Artificial intelligence conversational agents in mental health: Patients see potential, but prefer humans in the loop |
topic | artificial intelligence; chatbots; conversational agents; patient perspectives; qualitative; mental health |
url | https://www.frontiersin.org/articles/10.3389/fpsyt.2024.1505024/full |