Robust CNN for facial emotion recognition and real-time GUI


Saved in:
Bibliographic Details
Main Authors: Imad Ali, Faisal Ghaffar
Format: Article
Language:English
Published: AIMS Press 2024-05-01
Series:AIMS Electronics and Electrical Engineering
Subjects: facial emotions; prediction; recognition; preprocessing; CNN; GUI
Online Access:https://www.aimspress.com/article/doi/10.3934/electreng.2024010
collection DOAJ
description Computer vision is witnessing a surge of interest in machines accurately recognizing and interpreting human emotions through facial expression analysis. However, variations in image properties such as brightness, contrast, and resolution make it harder for models to predict the underlying emotion accurately. Using a robust convolutional neural network (CNN) architecture, we designed an effective framework for facial emotion recognition that predicts emotions and assigns corresponding probabilities to each fundamental human emotion. Each image is processed with several pre-processing steps before being input to the CNN, enhancing the visibility and clarity of facial features and enabling the CNN to learn more effectively from the data. Because CNNs require a large amount of training data, we used data augmentation, which improves the model's generalization and enables it to handle previously unseen data effectively. To train the model, we combined two datasets, JAFFE and KDEF, allocating 90% of the data for training and reserving the remaining 10% for testing. The CNN framework achieved a peak accuracy of 78.1% on the joint dataset, indicating that the model recognizes facial emotions with a promising level of performance. Additionally, we developed an application with a graphical user interface for real-time facial emotion classification. This application allows users to classify emotions from still images and live video feeds, making it practical and user-friendly, and further demonstrates the system's potential for real-world applications involving facial emotion analysis.
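The pre-processing, augmentation, and 90/10 train/test split described in the abstract can be sketched as below. The record does not specify the paper's exact transforms, so the min-max brightness normalization, horizontal-flip augmentation, 48x48 image size, and function names (`preprocess`, `augment`, `split_90_10`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def preprocess(img):
    """Illustrative stand-in for the paper's unspecified pre-processing:
    min-max normalize a grayscale face image to [0, 1] to reduce
    brightness/contrast variation."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def augment(images):
    """One common augmentation: horizontal flips double the training set.
    The paper's exact augmentation transforms are not listed here."""
    return images + [np.fliplr(im) for im in images]

def split_90_10(samples, seed=0):
    """Shuffle, then allocate 90% for training and 10% for testing,
    matching the split described in the abstract."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))
    cut = int(0.9 * len(samples))
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]

# Toy stand-ins for combined JAFFE + KDEF face crops (48x48 grayscale).
data = [np.random.randint(0, 256, (48, 48)) for _ in range(100)]
data = augment([preprocess(im) for im in data])
train, test = split_90_10(data)
print(len(train), len(test))  # 180 20
```

Augmenting before splitting, as above, keeps the sketch short; in practice augmentation is usually applied only to the training partition to avoid leaking near-duplicates into the test set.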
id doaj-art-880cc94eb7764dda9da22916f5962652
institution Kabale University
issn 2578-1588
spelling Imad Ali (Department of Computer Science, University of Swat, Swat, KP, Pakistan); Faisal Ghaffar (System Design Engineering Department, University of Waterloo, Waterloo, Canada). Robust CNN for facial emotion recognition and real-time GUI. AIMS Electronics and Electrical Engineering, vol. 8, no. 2, pp. 217-236, 2024-05-01. AIMS Press. ISSN 2578-1588. doi:10.3934/electreng.2024010. https://www.aimspress.com/article/doi/10.3934/electreng.2024010
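The abstract notes that the framework assigns a probability to each fundamental human emotion. A minimal sketch of how raw network outputs could map to per-emotion probabilities via softmax follows; the seven-class label set is the usual convention for basic emotions and is an assumption here, since the record does not list the paper's exact classes.

```python
import numpy as np

# Assumed label set; the paper's exact emotion classes are not given in this record.
EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

def emotion_probabilities(logits):
    """Softmax over the network's raw outputs yields one probability per
    emotion, summing to 1, as the framework's per-emotion scores would."""
    z = np.asarray(logits, dtype=np.float64)
    z -= z.max()              # subtract max for numerical stability
    p = np.exp(z)
    return dict(zip(EMOTIONS, p / p.sum()))

probs = emotion_probabilities([2.0, -1.0, 0.0, 3.5, 0.5, -0.5, 1.0])
print(max(probs, key=probs.get))  # happiness
```

The predicted emotion is simply the class with the highest probability, while the full distribution gives the per-emotion confidences the GUI could display.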
topic facial emotions
prediction
recognition
preprocessing
cnn
gui