Research on Gait Recognition Based on GaitSet and Multimodal Fusion
With continuous technological progress, especially developments in biometrics, gait recognition has shown broad application prospects in healthcare (e.g., health monitoring), security (e.g., assisted identity verification), and human-computer interaction. However, individual differences, such as changes in physical condition, and environmental variability, such as differences in lighting, can impact its accuracy. Drawing on the temporal and spatial information contained in the gait contour sequence during walking, this study proposes an improved gait recognition method based on the GaitSet model, which improves video-based gait recognition performance by combining gait energy images and silhouette images into a multimodal representation. The experimental results showed a significant performance improvement over the original model, especially for subjects carrying bags. Large-sample training experiments on the CASIA-B database yielded recognition rates of 95.8%, 89.3%, and 72.5% in the Normal (NM), Bag (BG), and Coat (CL) states, respectively, with the CL state achieving a significant improvement of 3.3%.
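The multimodal representation described in the abstract, pairing per-frame silhouettes with a sequence-level gait energy image (GEI), can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the 64x44 frame size, the two-channel stacking scheme, and all function names are assumptions.

```python
# Sketch: build a gait energy image (GEI) from a silhouette sequence and
# stack it with the per-frame silhouettes as a two-channel input, one
# plausible form of the GEI + silhouette fusion described in the abstract.
import numpy as np

def gait_energy_image(silhouettes: np.ndarray) -> np.ndarray:
    """Average a (T, H, W) stack of binary silhouettes into one (H, W) GEI."""
    return silhouettes.astype(np.float32).mean(axis=0)

def multimodal_frames(silhouettes: np.ndarray) -> np.ndarray:
    """Pair every silhouette frame with the sequence-level GEI as a second
    channel, giving a (T, 2, H, W) input for a GaitSet-style network."""
    gei = gait_energy_image(silhouettes)                   # (H, W)
    gei_channel = np.broadcast_to(gei, silhouettes.shape)  # repeated to (T, H, W)
    return np.stack([silhouettes.astype(np.float32), gei_channel], axis=1)

# Toy example: 30 binary frames at 64x44, a common silhouette size in
# CASIA-B preprocessing pipelines.
frames = (np.random.rand(30, 64, 44) > 0.5).astype(np.uint8)
fused = multimodal_frames(frames)
print(fused.shape)  # (30, 2, 64, 44)
```

Averaging silhouettes into a GEI captures the temporal envelope of the walking cycle, while the raw silhouettes retain per-frame spatial detail; stacking both as channels is one simple way to feed the two modalities to a single convolutional backbone.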
Main Authors: | Xiling Shi, Wenqiang Zhao, Huandou Pei, Hongru Zhai, Yongxia Gao |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2025-01-01 |
Series: | IEEE Access |
Subjects: | Deep learning; gait recognition; interpolation; multimodal attention mechanism |
Online Access: | https://ieeexplore.ieee.org/document/10852208/ |
author | Xiling Shi; Wenqiang Zhao; Huandou Pei; Hongru Zhai; Yongxia Gao |
author_sort | Xiling Shi |
collection | DOAJ |
description | With continuous technological progress, especially developments in biometrics, gait recognition has shown broad application prospects in healthcare (e.g., health monitoring), security (e.g., assisted identity verification), and human-computer interaction. However, individual differences, such as changes in physical condition, and environmental variability, such as differences in lighting, can impact its accuracy. Drawing on the temporal and spatial information contained in the gait contour sequence during walking, this study proposes an improved gait recognition method based on the GaitSet model, which improves video-based gait recognition performance by combining gait energy images and silhouette images into a multimodal representation. The experimental results showed a significant performance improvement over the original model, especially for subjects carrying bags. Large-sample training experiments on the CASIA-B database yielded recognition rates of 95.8%, 89.3%, and 72.5% in the Normal (NM), Bag (BG), and Coat (CL) states, respectively, with the CL state achieving a significant improvement of 3.3%. |
format | Article |
id | doaj-art-970d3caee12f4b63899d7e3d5fda2851 |
institution | Kabale University |
issn | 2169-3536 |
language | English |
publishDate | 2025-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
doi | 10.1109/ACCESS.2025.3533571 |
citation | IEEE Access, vol. 13, pp. 20017-20024, 2025-01-01, article 10852208 |
author_orcid | Xiling Shi (https://orcid.org/0009-0006-0221-9892); Wenqiang Zhao (https://orcid.org/0009-0002-0341-3081); Huandou Pei; Hongru Zhai (https://orcid.org/0009-0007-1133-7265); Yongxia Gao (https://orcid.org/0009-0002-7606-0141) |
affiliation | School of Electrical and Control Engineering, North University of China, Taiyuan, Shanxi, China (all five authors) |
title | Research on Gait Recognition Based on GaitSet and Multimodal Fusion |
topic | Deep learning; gait recognition; interpolation; multimodal attention mechanism |
url | https://ieeexplore.ieee.org/document/10852208/ |