Real-Time Facial Expression Recognition Based on Image Processing in Virtual Reality
Abstract: Virtual reality (VR) scenarios have become increasingly prevalent in recent years. As more people adopt VR, objective physiological measures that automatically assess a user's emotional state are becoming more important. Individuals' emotional states influence their behaviour, opinions, and decisions, and such measures can be used to analyze VR experiences and to make systems react to and engage with the user's emotions. However, VR environments require users to wear head-mounted displays (HMDs), which occlude the upper face and severely limit the usefulness of traditional Facial Expression Recognition (FER) approaches. A Deep Learning (DL) solution combined with image processing is therefore used to classify the universal emotions: sadness, happiness, disgust, anger, fear and surprise. This paper proposes the Deep Automatic Facial Expression Recognition Model (DAFERM) for interactive VR applications such as intelligent education, social networks, and virtual training. The system comprises two main parts: one that uses deep neural networks (DNNs) for facial emotion identification and another that automatically tracks and segments the face. The system first tracks a marker on the front of the HMD; from the retrieved spatial data, the position and rotation of the face are estimated in order to segment the mouth. Finally, the segmented mouth pixels are passed to the DNN, and facial expression results are obtained in real time using an adaptive, histogram-based mouth segmentation method.
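The abstract outlines a two-stage pipeline: the mouth region is located from the HMD marker pose, segmented with an adaptive histogram-based method, and the segmented pixels are classified by a DNN into six universal emotions. The following is a minimal illustrative sketch of the segmentation-and-classification step only, not the authors' implementation: it uses OpenCV and Keras, substitutes Otsu thresholding for the paper's adaptive histogram scheme, and the ROI, input size, label order, and model file are assumptions made for the example.

```python
# Illustrative sketch only (not the article's implementation): histogram-based
# mouth segmentation followed by DNN classification, loosely mirroring the
# pipeline described in the abstract. ROI, 64x64 input size, label order, and
# the model file are assumptions.
import cv2
import numpy as np
from tensorflow import keras

EMOTIONS = ["sadness", "happiness", "disgust", "anger", "fear", "surprise"]

def segment_mouth(frame_bgr, mouth_roi):
    """Crop the estimated mouth region (derived from the HMD marker pose)
    and binarize it with an intensity-histogram threshold; Otsu's method
    stands in here for the paper's adaptive histogram-based scheme."""
    x, y, w, h = mouth_roi
    patch = frame_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # normalize illumination before thresholding
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.bitwise_and(patch, patch, mask=mask)

def classify_expression(mouth_pixels, model):
    """Resize the segmented mouth patch and run it through a trained DNN
    that outputs one probability per emotion."""
    x = cv2.resize(mouth_pixels, (64, 64)).astype(np.float32) / 255.0
    probs = model.predict(x[np.newaxis, ...], verbose=0)[0]
    return EMOTIONS[int(np.argmax(probs))]

if __name__ == "__main__":
    # Hypothetical pre-trained classifier and placeholder mouth ROI.
    model = keras.models.load_model("mouth_expression_cnn.h5")
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        mouth = segment_mouth(frame, mouth_roi=(200, 300, 160, 80))
        print(classify_expression(mouth, model))
    cap.release()
```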
Main Authors: Qingzhen Gong, Xuefang Liu, Yongqiang Ma
Format: Article
Language: English
Published: Springer, 2025-01-01
Series: International Journal of Computational Intelligence Systems
Subjects: Facial expression recognition; Deep learning; Virtual reality; Image processing; Deep neural network
Online Access: https://doi.org/10.1007/s44196-024-00729-9
collection | DOAJ |
id | doaj-art-ad304a752ade46db95ce57368c0f2816 |
institution | Kabale University |
issn | 1875-6883 |
affiliations | Qingzhen Gong: School of Physical and Electronic Information Engineering, Jining Normal University; Xuefang Liu: School of Information Engineering, Jingdezhen University; Yongqiang Ma: School of Computer and Big Data, Jining Normal University
volume | 18
issue | 1
pages | 1-16