MCGS-ReID: A Visible-Infrared Vehicle Reidentification Method Using Modal-Cross Graph Sampler


Bibliographic Details
Main Authors: Jianfei Liu, Chunhui Zhao, Chen Zhao, Nan Su, Wanxuan Lu, Yiming Yan, Shou Feng, Yunfei Qu
Format: Article
Language:English
Published: IEEE 2025-01-01
Series:IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10801215/
Description
Summary:With the rapid development of Earth observation and UAV technology, using UAVs equipped with various sensors for cross-modal vehicle reidentification has become a research hotspot. Cross-modal vehicle reidentification aims to match the same target across multiple nonoverlapping camera views in different modalities, enabling monitoring of sensitive targets from day to night. The biggest challenge in cross-modal vehicle reidentification between the visible and infrared modalities lies in their different imaging principles and spectral bands, which lead to significant modality discrepancies. To alleviate these discrepancies, this article proposes a cross-modal vehicle reidentification network based on a modal-cross graph sampling (MCGS) method. First, an MCGS method is proposed, which helps the subsequent network learn more cross-modal information and reduces the modality differences. Second, a multimodal shared feature alignment network is designed, which enhances the representation of shared features across modalities and aligns features from different modalities. Third, in response to the scarcity of cross-modal vehicle reidentification datasets combining the visible and infrared modalities, a new dataset named VT-Vehicle is introduced, collected using a UAV equipped with visible and infrared sensors. In addition, a series of experiments was conducted on the VT-Vehicle and RGBNT100 datasets. Our method achieved the best Rank-1 accuracy of 82.33% and mAP of 76.16% for visible-to-infrared matching, as well as the best Rank-1 accuracy of 77.37% and mAP of 74.9% for infrared-to-visible matching, indicating the effectiveness and superiority of our method.
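To make the sampling idea in the abstract concrete: a common way to expose a re-ID network to matched cross-modal pairs is an identity-balanced batch sampler that draws the same identities from both the visible and infrared modalities in every batch. The sketch below is only an illustration of that general idea, assuming a flat list of per-image identity labels and modality tags; it is not the authors' MCGS sampler, whose graph-based selection is not detailed here.

```python
import random
from collections import defaultdict

def cross_modal_batches(labels, modalities, ids_per_batch=4,
                        imgs_per_id_per_modality=2, seed=0):
    """Illustrative identity-balanced sampler for a two-modality
    (visible/infrared) re-ID dataset: each batch draws the same identities
    from BOTH modalities, so the network always sees matched cross-modal
    pairs. Sketch only; not the paper's MCGS method."""
    rng = random.Random(seed)
    # index[pid][modality] -> list of sample indices for that identity/modality
    index = defaultdict(lambda: defaultdict(list))
    for i, (pid, mod) in enumerate(zip(labels, modalities)):
        index[pid][mod].append(i)
    # keep only identities observed in both modalities
    pids = [p for p, d in index.items() if len(d) == 2]
    rng.shuffle(pids)
    batches = []
    for s in range(0, len(pids) - ids_per_batch + 1, ids_per_batch):
        batch = []
        for pid in pids[s:s + ids_per_batch]:
            for mod in sorted(index[pid]):
                pool = index[pid][mod]
                k = min(imgs_per_id_per_modality, len(pool))
                batch.extend(rng.sample(pool, k))  # k images per id per modality
        batches.append(batch)
    return batches
```

With such batches, every identity contributes images from both modalities, so cross-modal metric losses (e.g., a triplet loss with cross-modality positives) always have valid pairs to work with.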
ISSN:1939-1404, 2151-1535