Cross-modal gated feature enhancement for multimodal emotion recognition in conversations
Abstract: Emotion recognition in conversations (ERC), which involves identifying the emotional state of each utterance within a dialogue, plays a vital role in developing empathetic artificial intelligence systems. In practical applications, such as video-based recruitment interviews, customer servic...
Saved in:
| Main Authors: | Shiyun Zhao, Jinchang Ren, Xiaojuan Zhou |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | Scientific Reports |
| Subjects: | |
| Online Access: | https://doi.org/10.1038/s41598-025-11989-6 |
Similar Items
- Hybrid Multi-Attention Network for Audio–Visual Emotion Recognition Through Multimodal Feature Fusion
  by: Sathishkumar Moorthy, et al.
  Published: (2025-03-01)
- MemoCMT: multimodal emotion recognition using cross-modal transformer-based feature fusion
  by: Mustaqeem Khan, et al.
  Published: (2025-02-01)
- Dual-stage gated segmented multimodal emotion recognition method
  by: MA Fei, et al.
  Published: (2025-06-01)
- Graph attention based on contextual reasoning and emotion-shift awareness for emotion recognition in conversations
  by: Juan Yang, et al.
  Published: (2025-05-01)
- Modality-Guided Refinement Learning for Multimodal Emotion Recognition
  by: Sunyoung Cho
  Published: (2025-01-01)