In Shift and In Variance: Assessing the Robustness of HAR Deep Learning Models Against Variability
Deep learning (DL)-based Human Activity Recognition (HAR) using wearable inertial measurement unit (IMU) sensors can revolutionize continuous health monitoring and early disease prediction. However, most DL HAR models are untested in their robustness to real-world variability, as they are trained on...
Main Authors: Azhar Ali Khaked, Nobuyuki Oishi, Daniel Roggen, Paula Lago
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Sensors
Subjects: human activity recognition; wearable sensors; deep learning; distribution shift; real world variability; data heterogeneity
Online Access: https://www.mdpi.com/1424-8220/25/2/430
_version_ | 1832587481569034240 |
author | Azhar Ali Khaked; Nobuyuki Oishi; Daniel Roggen; Paula Lago |
author_facet | Azhar Ali Khaked; Nobuyuki Oishi; Daniel Roggen; Paula Lago |
author_sort | Azhar Ali Khaked |
collection | DOAJ |
description | Deep learning (DL)-based Human Activity Recognition (HAR) using wearable inertial measurement unit (IMU) sensors can revolutionize continuous health monitoring and early disease prediction. However, most DL HAR models are untested in their robustness to real-world variability, as they are trained on limited lab-controlled data. In this study, we isolated and analyzed the effects of the subject, device, position, and orientation variabilities on DL HAR models using the HARVAR and REALDISP datasets. The Maximum Mean Discrepancy (MMD) was used to quantify shifts in the data distribution caused by these variabilities, and the relationship between the distribution shifts and model performance was drawn. Our HARVAR results show that different types of variability significantly degraded the DL model performance, with an inverse relationship between the data distribution shifts and performance. The compounding effect of multiple variabilities studied using REALDISP further underscores the challenges of generalizing DL HAR models to real-world conditions. Analyzing these impacts highlights the need for more robust models that generalize effectively to real-world settings. The MMD proved valuable for explaining the performance drops, emphasizing its utility in evaluating distribution shifts in HAR data. |
format | Article |
id | doaj-art-20ceb2e7b80e40fb8ba1efea7c57bee7 |
institution | Kabale University |
issn | 1424-8220 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj-art-20ceb2e7b80e40fb8ba1efea7c57bee7; 2025-01-24T13:48:54Z; eng; MDPI AG; Sensors; ISSN 1424-8220; 2025-01-01; 25(2): 430; doi:10.3390/s25020430; In Shift and In Variance: Assessing the Robustness of HAR Deep Learning Models Against Variability; Azhar Ali Khaked (Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada); Nobuyuki Oishi (School of Engineering and Informatics, University of Sussex, Brighton BN1 9PS, UK); Daniel Roggen (School of Engineering and Informatics, University of Sussex, Brighton BN1 9PS, UK); Paula Lago (Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada); https://www.mdpi.com/1424-8220/25/2/430; human activity recognition; wearable sensors; deep learning; distribution shift; real world variability; data heterogeneity |
spellingShingle | Azhar Ali Khaked; Nobuyuki Oishi; Daniel Roggen; Paula Lago; In Shift and In Variance: Assessing the Robustness of HAR Deep Learning Models Against Variability; Sensors; human activity recognition; wearable sensors; deep learning; distribution shift; real world variability; data heterogeneity |
title | In Shift and In Variance: Assessing the Robustness of HAR Deep Learning Models Against Variability |
title_full | In Shift and In Variance: Assessing the Robustness of HAR Deep Learning Models Against Variability |
title_fullStr | In Shift and In Variance: Assessing the Robustness of HAR Deep Learning Models Against Variability |
title_full_unstemmed | In Shift and In Variance: Assessing the Robustness of HAR Deep Learning Models Against Variability |
title_short | In Shift and In Variance: Assessing the Robustness of HAR Deep Learning Models Against Variability |
title_sort | in shift and in variance assessing the robustness of har deep learning models against variability |
topic | human activity recognition; wearable sensors; deep learning; distribution shift; real world variability; data heterogeneity |
url | https://www.mdpi.com/1424-8220/25/2/430 |
work_keys_str_mv | AT azharalikhaked inshiftandinvarianceassessingtherobustnessofhardeeplearningmodelsagainstvariability AT nobuyukioishi inshiftandinvarianceassessingtherobustnessofhardeeplearningmodelsagainstvariability AT danielroggen inshiftandinvarianceassessingtherobustnessofhardeeplearningmodelsagainstvariability AT paulalago inshiftandinvarianceassessingtherobustnessofhardeeplearningmodelsagainstvariability |
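The description above reports that the Maximum Mean Discrepancy (MMD) was used to quantify the distribution shifts that subject, device, position, and orientation variability introduce into IMU data. As an illustration only, and not the authors' implementation, the following minimal sketch estimates a biased RBF-kernel MMD² between two hypothetical batches of flattened sensor windows; all function names, array shapes, and the kernel bandwidth are assumptions.

```python
# Illustrative sketch only: RBF-kernel MMD^2 between two sets of feature
# vectors (e.g., windowed IMU data from two subjects or sensor positions).
# Names, shapes, and gamma are hypothetical, not taken from the paper.
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float) -> np.ndarray:
    # Pairwise squared Euclidean distances, then the Gaussian kernel.
    sq_dists = (
        np.sum(a ** 2, axis=1)[:, None]
        + np.sum(b ** 2, axis=1)[None, :]
        - 2.0 * a @ b.T
    )
    return np.exp(-gamma * sq_dists)

def mmd_squared(x: np.ndarray, y: np.ndarray, gamma: float = 1e-2) -> float:
    # Biased estimator: mean within-set kernel values minus twice the cross term.
    k_xx = rbf_kernel(x, x, gamma).mean()
    k_yy = rbf_kernel(y, y, gamma).mean()
    k_xy = rbf_kernel(x, y, gamma).mean()
    return float(k_xx + k_yy - 2.0 * k_xy)

# Toy usage: two batches of 128 windows, each flattened to 300 features
# (e.g., 100 samples x 3 accelerometer axes), drawn from shifted distributions.
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(128, 300))   # e.g., training condition
target = rng.normal(0.3, 1.2, size=(128, 300))   # e.g., unseen subject/position
print(f"MMD^2 estimate: {mmd_squared(source, target):.4f}")
```

In the study, larger shifts between training and evaluation distributions were associated with larger drops in model performance; a sketch like this only reproduces the measurement step, not that analysis.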