IoT-Based Multisensors Fusion for Activity Recognition via Key Features and Hybrid Transfer Learning

Human activity recognition (HAR) has attracted significant attention in various fields, including healthcare, smart homes, and human-computer interaction. Accurate HAR can enhance user experience, provide critical health insights, and enable sophisticated context-aware applications. This paper presents a comprehensive system for HAR utilizing both RGB videos and inertial measurement unit (IMU) sensor data. The system employs a multi-stage processing pipeline involving preprocessing, segmentation, feature extraction, and classification to achieve high accuracy in activity recognition. In the preprocessing stage, frames are extracted from RGB videos, and IMU sensor data undergoes denoising. The segmentation phase applies Naive Bayes segmentation for video frames and Hamming windows for sensor data to prepare them for feature extraction. Key features are extracted using techniques such as ORB (Oriented FAST and Rotated BRIEF), MSER (Maximally Stable Extremal Regions), DFT (Discrete Fourier Transform), and KAZE for image data, and LPCC (Linear Predictive Cepstral Coefficients), PSD (Power Spectral Density), AR Coefficient, and entropy for sensor data. Feature fusion is performed using Linear Discriminant Analysis (LDA) to create a unified feature set, which is then classified using ResNet50 (Residual Neural Network) to recognize activities such as using a smartphone, cooking, and reading a newspaper. The system was evaluated using the LARa and HWU-USP datasets, achieving classification accuracies of 92% and 93%, respectively. These results demonstrate the robustness and effectiveness of the proposed HAR system in diverse scenarios.
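
The sensor branch of the pipeline is concrete enough to sketch. Below is a minimal, illustrative Python example of the Hamming-window segmentation plus two of the named sensor features (Welch PSD and spectral entropy). The sampling rate, window length, and overlap are assumptions, not values from the paper, and the LPCC and AR-coefficient features are omitted for brevity.

```python
import numpy as np
from scipy.signal import welch

FS = 100    # assumed sampling rate (Hz); the abstract does not state one
WIN = 256   # assumed window length in samples
STEP = 128  # assumed hop size (50% overlap)

def segment_windows(signal, win=WIN, step=STEP):
    """Slice a 1-D sensor stream into Hamming-weighted windows."""
    taper = np.hamming(win)
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] * taper for s in starts])

def window_features(window, fs=FS):
    """PSD summary statistics and spectral entropy for one window."""
    # 'boxcar' because the Hamming taper was already applied at segmentation
    freqs, psd = welch(window, fs=fs, window="boxcar", nperseg=len(window))
    p = psd / psd.sum()                        # normalize PSD to a distribution
    entropy = -np.sum(p * np.log2(p + 1e-12))  # spectral entropy
    return np.array([psd.mean(), psd.max(), freqs[psd.argmax()], entropy])

# Example on synthetic data standing in for one accelerometer axis
accel_x = np.random.randn(10_000)
feats = np.stack([window_features(w) for w in segment_windows(accel_x)])
print(feats.shape)  # (n_windows, 4)
```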
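
The image branch can be sketched the same way, assuming opencv-python for the ORB, MSER, and KAZE detectors named in the abstract. Mean-pooling the variable-length descriptor sets into fixed-length vectors is an illustrative aggregation choice, not the paper's documented scheme.

```python
import cv2
import numpy as np

def image_features(frame_bgr):
    """Extract the abstract's named image descriptors from one frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # ORB: binary descriptors (32 bytes each) at FAST corners
    _, orb_desc = cv2.ORB_create(nfeatures=500).detectAndCompute(gray, None)

    # KAZE: 64-dim float descriptors from a nonlinear scale space
    _, kaze_desc = cv2.KAZE_create().detectAndCompute(gray, None)

    # MSER: region count as a simple summary of stable extremal regions
    regions, _ = cv2.MSER_create().detectRegions(gray)

    # DFT: mean log-magnitude of the frame's 2-D spectrum
    spectrum = np.abs(np.fft.fft2(gray.astype(np.float32)))
    dft_stat = np.log1p(spectrum).mean()

    # Pool variable-length descriptor sets to fixed-length means
    # (illustrative aggregation; the paper's scheme may differ)
    def pooled(desc, dim):
        return desc.mean(axis=0) if desc is not None else np.zeros(dim)

    return np.concatenate([pooled(orb_desc, 32), pooled(kaze_desc, 64),
                           [len(regions), dft_stat]])

# Example on a synthetic frame
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
print(image_features(frame).shape)  # (98,)
```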
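
Finally, a hedged sketch of the fusion stage using scikit-learn: LDA projects the concatenated image and sensor features down to at most (number of classes - 1) dimensions. A logistic-regression stand-in replaces the paper's ResNet50 classifier to keep the example self-contained; all shapes and the random data are hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical pre-extracted features, already aligned per sample:
rng = np.random.default_rng(0)
X_img = rng.normal(size=(600, 98))   # e.g., pooled image_features() vectors
X_imu = rng.normal(size=(600, 12))   # e.g., window_features() over 3 axes
y = rng.integers(0, 6, size=600)     # 6 activity classes (illustrative)

X = np.hstack([X_img, X_imu])        # concatenate modalities before fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# LDA fusion: supervised projection to n_classes - 1 dimensions
lda = LinearDiscriminantAnalysis(n_components=5)
Z_tr = lda.fit_transform(X_tr, y_tr)
Z_te = lda.transform(X_te)

# Stand-in classifier on the fused features (the paper uses ResNet50 here)
clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("held-out accuracy:", clf.score(Z_te, y_te))
```

Using LDA as the fusion step, as the abstract describes, makes the reduced representation class-aware, which is why it precedes rather than follows the classifier in this pipeline.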


Bibliographic Details
Main Authors: Ahmad Jalal, Danyal Khan, Touseef Sadiq, Moneerah Alotaibi, Sultan Refa Alotaibi, Hanan Aljuaid, Hameedur Rahman
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access, Vol. 13, pp. 14727–14742
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3524431
Subjects: Human activity recognition (HAR); RGB videos; IMU sensor data; Naive Bayes segmentation; recurrent neural network (RNN); Shapley additive explanations (SHAP)
Online Access:https://ieeexplore.ieee.org/document/10818665/
Collection: DOAJ
Author Affiliations:
Ahmad Jalal (https://orcid.org/0009-0000-8421-8477): Faculty of Computing and AI, Air University, Islamabad, Pakistan
Danyal Khan: Faculty of Computing and AI, Air University, Islamabad, Pakistan
Touseef Sadiq (https://orcid.org/0000-0001-6603-3639): Department of Information and Communication Technology, Centre for Artificial Intelligence Research, University of Agder, Grimstad, Norway
Moneerah Alotaibi (https://orcid.org/0000-0002-0074-8153): Department of Computer Science, College of Science and Humanities Dawadmi, Shaqra University, Shaqra, Saudi Arabia
Sultan Refa Alotaibi: Department of Computer Science, College of Science and Humanities Dawadmi, Shaqra University, Shaqra, Saudi Arabia
Hanan Aljuaid: Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, Saudi Arabia
Hameedur Rahman (https://orcid.org/0000-0001-8892-9911): Faculty of Computing and AI, Air University, Islamabad, Pakistan