Brain-inspired multimodal motion and fine-grained action recognition
Introduction: Traditional action recognition methods predominantly rely on a single modality, such as vision or motion, which presents significant limitations for fine-grained action recognition. These methods struggle particularly with video data containing complex combinations of actio...
| Main Authors: | Yuening Li, Xiuhua Yang, Changkui Chen |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2025-01-01 |
| Series: | Frontiers in Neurorobotics |
| Subjects: | |
| Online Access: | https://www.frontiersin.org/articles/10.3389/fnbot.2024.1502071/full |
Similar Items
- Precise Recognition and Feature Depth Analysis of Tennis Training Actions Based on Multimodal Data Integration and Key Action Classification
  by: Weichao Yang
  Published: (2025-01-01)
- ClinClip: a Multimodal Language Pre-training model integrating EEG data for enhanced English medical listening assessment
  by: Guangyu Sun
  Published: (2025-01-01)
- Manet: motion-aware network for video action recognition
  by: Xiaoyang Li, et al.
  Published: (2025-02-01)
- Research on Gait Recognition Based on GaitSet and Multimodal Fusion
  by: Xiling Shi, et al.
  Published: (2025-01-01)
- Editorial: Brain-inspired computing: from neuroscience to neuromorphic electronics for new forms of artificial intelligence
  by: Daniela Gandolfi, et al.
  Published: (2025-02-01)