EEG analysis of speaking and quiet states during different emotional music stimuli
Main Authors: | Xianwei Lin, Xinyue Wu, Zefeng Wang, Zhengting Cai, Zihan Zhang, Guangdong Xie, Lianxin Hu, Laurent Peyrodie |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2025-02-01 |
Series: | Frontiers in Neuroscience |
Subjects: | music; speak; emotion; EEG; deep learning |
Online Access: | https://www.frontiersin.org/articles/10.3389/fnins.2025.1461654/full |
_version_ | 1832548191696846848 |
---|---|
author | Xianwei Lin, Xinyue Wu, Zefeng Wang, Zhengting Cai, Zihan Zhang, Guangdong Xie, Lianxin Hu, Laurent Peyrodie |
author_facet | Xianwei Lin, Xinyue Wu, Zefeng Wang, Zhengting Cai, Zihan Zhang, Guangdong Xie, Lianxin Hu, Laurent Peyrodie |
author_sort | Xianwei Lin |
collection | DOAJ |
description | Introduction: Music has a profound impact on human emotions, capable of eliciting a wide range of emotional responses, a phenomenon that has been effectively harnessed in the field of music therapy. Given the close relationship between music and language, researchers have begun to explore how music influences brain activity and cognitive processes by integrating artificial intelligence with advancements in neuroscience. Methods: In this study, a total of 120 subjects were recruited, all of whom were students aged between 19 and 26 years. Each subject was required to listen to six 1-minute music segments expressing different emotions and to speak at the 40-second mark. In constructing the classification model, this study compares the classification performance of deep neural networks with that of other machine learning algorithms. Results: The differences in EEG signals between emotions are more pronounced during speech than in the quiet state. In classifying EEG signals from the speaking and quiet states, deep neural network algorithms achieve accuracies of 95.84% and 96.55%, respectively. Discussion: Under the stimulation of music expressing different emotions, EEG differs to some extent between the speaking and resting states. In constructing EEG classification models, the classification performance of deep neural network algorithms is superior to that of other machine learning algorithms. |
format | Article |
id | doaj-art-5aaf8b0beb8f4f308d446d9898a34a7d |
institution | Kabale University |
issn | 1662-453X |
language | English |
publishDate | 2025-02-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Neuroscience |
spelling | doaj-art-5aaf8b0beb8f4f308d446d9898a34a7d | 2025-02-03T06:33:45Z | eng | Frontiers Media S.A. | Frontiers in Neuroscience | 1662-453X | 2025-02-01 | 19 | 10.3389/fnins.2025.1461654 | 1461654 | EEG analysis of speaking and quiet states during different emotional music stimuli | Xianwei Lin (College of Information Engineering, Huzhou University, Huzhou, China); Xinyue Wu (School of Life Sciences, Beijing University of Chinese Medicine, Beijing, China); Zefeng Wang (College of Information Engineering, Huzhou University, Huzhou, China); Zhengting Cai (College of Information Engineering, Huzhou University, Huzhou, China); Zihan Zhang (College of Information Engineering, Huzhou University, Huzhou, China); Guangdong Xie (College of Information Engineering, Huzhou University, Huzhou, China); Lianxin Hu (College of Information Engineering, Huzhou University, Huzhou, China); Laurent Peyrodie (ICL, Junia, Université Catholique de Lille, Lille, France) | Introduction: Music has a profound impact on human emotions, capable of eliciting a wide range of emotional responses, a phenomenon that has been effectively harnessed in the field of music therapy. Given the close relationship between music and language, researchers have begun to explore how music influences brain activity and cognitive processes by integrating artificial intelligence with advancements in neuroscience. Methods: In this study, a total of 120 subjects were recruited, all of whom were students aged between 19 and 26 years. Each subject was required to listen to six 1-minute music segments expressing different emotions and to speak at the 40-second mark. In constructing the classification model, this study compares the classification performance of deep neural networks with that of other machine learning algorithms. Results: The differences in EEG signals between emotions are more pronounced during speech than in the quiet state. In classifying EEG signals from the speaking and quiet states, deep neural network algorithms achieve accuracies of 95.84% and 96.55%, respectively. Discussion: Under the stimulation of music expressing different emotions, EEG differs to some extent between the speaking and resting states. In constructing EEG classification models, the classification performance of deep neural network algorithms is superior to that of other machine learning algorithms. | https://www.frontiersin.org/articles/10.3389/fnins.2025.1461654/full | music; speak; emotion; EEG; deep learning |
spellingShingle | Xianwei Lin; Xinyue Wu; Zefeng Wang; Zhengting Cai; Zihan Zhang; Guangdong Xie; Lianxin Hu; Laurent Peyrodie | EEG analysis of speaking and quiet states during different emotional music stimuli | Frontiers in Neuroscience | music; speak; emotion; EEG; deep learning |
title | EEG analysis of speaking and quiet states during different emotional music stimuli |
title_full | EEG analysis of speaking and quiet states during different emotional music stimuli |
title_fullStr | EEG analysis of speaking and quiet states during different emotional music stimuli |
title_full_unstemmed | EEG analysis of speaking and quiet states during different emotional music stimuli |
title_short | EEG analysis of speaking and quiet states during different emotional music stimuli |
title_sort | eeg analysis of speaking and quiet states during different emotional music stimuli |
topic | music; speak; emotion; EEG; deep learning |
url | https://www.frontiersin.org/articles/10.3389/fnins.2025.1461654/full |
work_keys_str_mv | AT xianweilin eeganalysisofspeakingandquietstatesduringdifferentemotionalmusicstimuli AT xinyuewu eeganalysisofspeakingandquietstatesduringdifferentemotionalmusicstimuli AT zefengwang eeganalysisofspeakingandquietstatesduringdifferentemotionalmusicstimuli AT zhengtingcai eeganalysisofspeakingandquietstatesduringdifferentemotionalmusicstimuli AT zihanzhang eeganalysisofspeakingandquietstatesduringdifferentemotionalmusicstimuli AT guangdongxie eeganalysisofspeakingandquietstatesduringdifferentemotionalmusicstimuli AT lianxinhu eeganalysisofspeakingandquietstatesduringdifferentemotionalmusicstimuli AT laurentpeyrodie eeganalysisofspeakingandquietstatesduringdifferentemotionalmusicstimuli |
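The Methods and Results summarized in the description above center on comparing a deep neural network against conventional machine-learning classifiers for EEG classification. The snippet below is a minimal, illustrative sketch of that kind of comparison using scikit-learn; the synthetic feature matrix, feature dimensionality, class count, and model settings are assumptions for illustration only, not the authors' actual data or pipeline.

```python
# Minimal sketch (not the authors' code): comparing a small neural network
# with conventional classifiers on EEG-like feature vectors, in the spirit
# of the comparison described in the record's abstract. All data below are
# synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Placeholder stand-in for per-trial EEG features (e.g. band powers per channel):
# 600 trials x 160 features, labelled with one of six emotion classes.
X = rng.normal(size=(600, 160))
y = rng.integers(0, 6, size=600)

models = {
    "deep neural network (MLP)": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0),
    ),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# 5-fold cross-validated accuracy for each classifier.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

On real recordings one would replace the random matrix with per-trial EEG features (for example, band-power estimates per channel) and tune each model; the printout then gives a cross-validated accuracy per classifier, mirroring the accuracy comparison reported in the abstract.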