Incorporating Multimodal Directional Interpersonal Synchrony into Empathetic Response Generation
This study investigates how interpersonal (speaker–partner) synchrony contributes to empathetic response generation in communication scenarios. To perform this investigation, we propose a model that incorporates multimodal directional (positive and negative) interpersonal synchrony, operationalized using the cosine similarity measure, into empathetic response generation. We evaluate how incorporating specific synchrony affects the generated responses at the language and empathy levels. Based on comparison experiments, models with multimodal synchrony generate responses that are closer to ground truth responses and more diverse than models without synchrony. This demonstrates that these features are successfully integrated into the models. Additionally, we find that positive synchrony is linked to enhanced emotional reactions, reduced exploration, and improved interpretation. Negative synchrony is associated with reduced exploration and increased interpretation. These findings shed light on the connections between multimodal directional interpersonal synchrony and empathy’s emotional and cognitive aspects in artificial intelligence applications.
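
The abstract names the operationalization (cosine similarity, split into positive and negative directions) but not the implementation. The following minimal sketch illustrates one plausible reading; the function name `directional_synchrony`, the (T, D) frame-by-feature layout, and the sign-based split of the cosine score are assumptions introduced here for illustration, not the authors' code.

```python
import numpy as np

def directional_synchrony(speaker_feats: np.ndarray,
                          partner_feats: np.ndarray,
                          eps: float = 1e-8):
    """Per-frame cosine similarity between time-aligned speaker and
    partner multimodal features, split by sign into positive and
    negative synchrony signals (hypothetical sketch).

    Both inputs are assumed to be (T, D): T aligned time steps,
    D fused multimodal feature dimensions.
    """
    dot = np.sum(speaker_feats * partner_feats, axis=1)
    norms = (np.linalg.norm(speaker_feats, axis=1)
             * np.linalg.norm(partner_feats, axis=1))
    cos = dot / (norms + eps)            # cosine similarity in [-1, 1]
    pos_sync = np.clip(cos, 0.0, None)   # positive (aligned) synchrony
    neg_sync = np.clip(-cos, 0.0, None)  # negative (opposed) synchrony
    return pos_sync, neg_sync

# Toy usage: 100 aligned frames of 64-dimensional fused features.
rng = np.random.default_rng(0)
pos, neg = directional_synchrony(rng.standard_normal((100, 64)),
                                 rng.standard_normal((100, 64)))
print(pos.mean(), neg.mean())
```

Signals of this form could then be fed as additional conditioning features into a response generator; how the paper's model actually consumes them is not specified in this record.
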
Main Authors: | Jingyu Quan, Yoshihiro Miyake, Takayuki Nozawa |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Sensors |
Subjects: | affective computing; multimodal learning; empathetic response generation |
Online Access: | https://www.mdpi.com/1424-8220/25/2/434 |

author | Jingyu Quan; Yoshihiro Miyake; Takayuki Nozawa |
---|---|
collection | DOAJ |
description | This study investigates how interpersonal (speaker–partner) synchrony contributes to empathetic response generation in communication scenarios. To perform this investigation, we propose a model that incorporates multimodal directional (positive and negative) interpersonal synchrony, operationalized using the cosine similarity measure, into empathetic response generation. We evaluate how incorporating specific synchrony affects the generated responses at the language and empathy levels. Based on comparison experiments, models with multimodal synchrony generate responses that are closer to ground truth responses and more diverse than models without synchrony. This demonstrates that these features are successfully integrated into the models. Additionally, we find that positive synchrony is linked to enhanced emotional reactions, reduced exploration, and improved interpretation. Negative synchrony is associated with reduced exploration and increased interpretation. These findings shed light on the connections between multimodal directional interpersonal synchrony and empathy’s emotional and cognitive aspects in artificial intelligence applications. |
format | Article |
id | doaj-art-8f2ec71f4ea84953b56b9d0a26637409 |
institution | Kabale University |
issn | 1424-8220 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | DOI: 10.3390/s25020434. Sensors, vol. 25, no. 2, art. 434, published 2025-01-01 by MDPI AG. Affiliations: Jingyu Quan and Yoshihiro Miyake, Department of Computer Science, Institute of Science Tokyo, Yokohama 226-8502, Japan; Takayuki Nozawa, Faculty of Engineering, University of Toyama, Toyama 930-8555, Japan. |
title | Incorporating Multimodal Directional Interpersonal Synchrony into Empathetic Response Generation |
topic | affective computing; multimodal learning; empathetic response generation |
url | https://www.mdpi.com/1424-8220/25/2/434 |