BAHGRF3: Human gait recognition in the indoor environment using deep learning features fusion assisted framework and posterior probability moth flame optimisation

Bibliographic Details
Main Authors: Muhammad Abrar Ahmad Khan, Muhammad Attique Khan, Ateeq Ur Rehman, Ahmed Ibrahim Alzahrani, Nasser Alalwan, Deepak Gupta, Saima Ahmed Rahin, Yudong Zhang
Format: Article
Language: English
Published: Wiley 2025-04-01
Series:CAAI Transactions on Intelligence Technology
Subjects:
Online Access:https://doi.org/10.1049/cit2.12368
Description
Summary: Biometric characteristics have played a vital role in security in recent years. Human gait classification in video sequences is an important biometric attribute used for security purposes. A new framework for human gait classification in video sequences is proposed, based on deep learning (DL) feature fusion and posterior probability‐based moth flame optimisation (MFO). In the first step, the video frames are resized and fine‐tuned by two pre‐trained lightweight DL models, EfficientNetB0 and MobileNetV2, selected for their top‐5 accuracy and small number of parameters. Both models are then trained through deep transfer learning, and the extracted deep features are fused using a voting scheme. In the last step, the authors develop a posterior probability‐based MFO feature selection algorithm to select the best features, which are classified using several supervised learning methods. The publicly available CASIA‐B dataset was employed for the experimental process. On this dataset, the authors selected six viewing angles (0°, 18°, 90°, 108°, 162°, and 180°) and obtained average accuracies of 96.9%, 95.7%, 86.8%, 90.0%, 95.1%, and 99.7%, respectively. Results demonstrate improved accuracy and significantly reduced computational time compared with recent state‐of‐the‐art techniques.
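The pipeline summarised above (fuse deep features from two backbones, then select a feature subset with moth flame optimisation) can be sketched in a minimal form. This is an illustrative sketch only: the abstract does not specify the voting-based fusion or the posterior-probability fitness in detail, so concatenation is assumed for fusion, and a nearest-centroid separability score is assumed as a stand-in fitness; all function names and parameters here are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_features(f1, f2):
    # Assumed fusion: serial concatenation of the two models' feature
    # vectors (the paper's voting scheme is not detailed in the abstract).
    return np.concatenate([f1, f2], axis=1)

def fitness(mask, X, y):
    # Stand-in fitness: accuracy of a nearest-centroid rule on the
    # selected feature subset (a proxy for classifier performance).
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask]
    classes = np.unique(y)
    centroids = np.stack([Xs[y == c].mean(axis=0) for c in classes])
    d = ((Xs[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    pred = classes[d.argmin(axis=1)]
    return float((pred == y).mean())

def mfo_select(X, y, n_moths=10, n_iter=20):
    # Minimal moth flame optimisation: moths hold continuous positions
    # in [0, 1]^n_feat; a position > 0.5 means "keep that feature".
    n_feat = X.shape[1]
    moths = rng.random((n_moths, n_feat))
    scores = np.array([fitness(m > 0.5, X, y) for m in moths])
    best_mask, best_score = moths[scores.argmax()] > 0.5, scores.max()
    for t in range(n_iter):
        flames = moths[np.argsort(-scores)]          # best moths become flames
        n_flames = max(1, round(n_moths - t * (n_moths - 1) / n_iter))
        for i in range(n_moths):
            f = flames[min(i, n_flames - 1)]
            dist = np.abs(f - moths[i])
            r = rng.uniform(-1, 1, n_feat)           # spiral parameter in [-1, 1]
            # Logarithmic spiral update around the assigned flame.
            moths[i] = np.clip(dist * np.exp(r) * np.cos(2 * np.pi * r) + f, 0, 1)
        scores = np.array([fitness(m > 0.5, X, y) for m in moths])
        if scores.max() > best_score:                # keep the best mask seen
            best_mask, best_score = moths[scores.argmax()] > 0.5, scores.max()
    return best_mask, best_score

# Toy usage: features from "model 1" are informative, "model 2" is noise.
y = np.repeat([0, 1], 30)
f1 = rng.normal(y[:, None] * 2.0, 1.0, (60, 5))
f2 = rng.normal(0.0, 1.0, (60, 5))
X = fuse_features(f1, f2)
mask, score = mfo_select(X, y)
```

In this toy run the selector should favour the informative first half of the fused vector, which is the role the posterior-probability MFO step plays in the paper: shrinking the fused feature set before the final supervised classifiers, which is also where the reported reduction in computational time comes from.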
ISSN:2468-2322