Multimodal Deep Feature Fusion (MMDFF) for RGB-D Tracking

Bibliographic Details
Main Authors: Ming-xin Jiang, Chao Deng, Ming-min Zhang, Jing-song Shan, Haiyan Zhang
Format: Article
Language: English
Published: Wiley, 2018-01-01
Series: Complexity
Online Access: http://dx.doi.org/10.1155/2018/5676095
Description
Summary: Visual tracking remains a challenging task due to occlusion, appearance changes, complex motion, etc. In this paper, we propose a novel RGB-D tracker based on multimodal deep feature fusion (MMDFF). The MMDFF model consists of four deep Convolutional Neural Networks (CNNs): a Motion-specific CNN, an RGB-specific CNN, a Depth-specific CNN, and an RGB-Depth correlated CNN. The depth image is encoded into three channels, which are fed into the depth-specific CNN to extract deep depth features. The optical flow image is computed for every frame and then fed into the motion-specific CNN to learn deep motion features. Deep RGB, depth, and motion information can be effectively fused at multiple layers via the MMDFF model. Finally, the fused multimodal deep features are passed to the C-COT tracker to obtain the tracking result. For evaluation, experiments are conducted on two recent large-scale RGB-D datasets, and the results demonstrate that the proposed RGB-D tracking method achieves better performance than other state-of-the-art RGB-D trackers.
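
The abstract only outlines the four-branch architecture at a high level. The following is a minimal, illustrative PyTorch sketch of such a multi-branch fusion network, assuming small convolutional backbones, three-channel-encoded depth and rendered optical-flow images, and concatenation-based fusion at two layers; the module names (ModalityCNN, fuse1, fuse2), layer depths, and fusion operator are assumptions, as the paper's exact design and the hand-off to the C-COT tracker are not specified in this record.

```python
# Hypothetical sketch of a four-branch MMDFF-style fusion network.
# Backbones, channel counts, and the concatenation-based fusion are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 3x3 conv + BN + ReLU, halving spatial resolution
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class ModalityCNN(nn.Module):
    """Two-stage branch; both stages expose features for multi-layer fusion."""

    def __init__(self, in_ch):
        super().__init__()
        self.stage1 = conv_block(in_ch, 32)
        self.stage2 = conv_block(32, 64)

    def forward(self, x):
        f1 = self.stage1(x)   # early (low-level) features
        f2 = self.stage2(f1)  # later (higher-level) features
        return f1, f2


class MMDFF(nn.Module):
    """RGB-, depth-, and motion-specific branches plus an RGB-Depth correlated
    branch; features are fused by concatenation at two layers (assumed)."""

    def __init__(self):
        super().__init__()
        self.rgb_cnn = ModalityCNN(3)     # RGB frame
        self.depth_cnn = ModalityCNN(3)   # depth encoded as three channels
        self.motion_cnn = ModalityCNN(3)  # optical-flow image
        self.rgbd_cnn = ModalityCNN(6)    # stacked RGB + encoded depth
        self.fuse1 = nn.Conv2d(4 * 32, 64, kernel_size=1)
        self.fuse2 = nn.Conv2d(4 * 64 + 64, 128, kernel_size=1)

    def forward(self, rgb, depth3, flow):
        r1, r2 = self.rgb_cnn(rgb)
        d1, d2 = self.depth_cnn(depth3)
        m1, m2 = self.motion_cnn(flow)
        c1, c2 = self.rgbd_cnn(torch.cat([rgb, depth3], dim=1))
        # Fuse early features, downsample to match the later stage, then fuse again.
        early = self.fuse1(torch.cat([r1, d1, m1, c1], dim=1))
        early = nn.functional.avg_pool2d(early, kernel_size=2)
        fused = self.fuse2(torch.cat([r2, d2, m2, c2, early], dim=1))
        return fused  # feature map that would be handed to a tracker such as C-COT


if __name__ == "__main__":
    model = MMDFF()
    rgb = torch.randn(1, 3, 128, 128)
    depth3 = torch.randn(1, 3, 128, 128)   # three-channel-encoded depth
    flow = torch.randn(1, 3, 128, 128)     # rendered optical-flow image
    print(model(rgb, depth3, flow).shape)  # torch.Size([1, 128, 32, 32])
```

In this sketch the fused feature map plays the role the abstract assigns to the multimodal deep features consumed by the C-COT tracker; how the paper actually combines layers (concatenation, weighting, or a learned correlation module) is not stated here.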
ISSN: 1076-2787, 1099-0526