Learning a Robust Hybrid Descriptor for Robot Visual Localization

Bibliographic Details
Main Authors: Qingwu Shi, Junjun Wu, Zeqin Lin, Ningwei Qin
Format: Article
Language: English
Published: Wiley 2022-01-01
Series: Journal of Robotics
Online Access: http://dx.doi.org/10.1155/2022/9354909
author Qingwu Shi
Junjun Wu
Zeqin Lin
Ningwei Qin
author_sort Qingwu Shi
collection DOAJ
description Long-term robust visual localization is one of the main challenges of long-term visual navigation for mobile robots. Because of factors such as illumination, weather, and season, mobile robots that navigate continuously with visual information in complex scenes are likely to suffer localization failures within a few hours. Semantic segmentation images, however, are more stable than the original images under drastic environmental change. To exploit the advantages of both the semantic segmentation image and its original image, this paper builds on recent work in semantic segmentation and proposes a novel hybrid descriptor for long-term visual localization: a semantic image descriptor extracted from segmentation images and an image descriptor extracted from RGB images are combined with a certain weight and then trained by a convolutional neural network. Our experiments show that the localization performance of this method, which combines the advantages of the semantic image descriptor and the image descriptor, is superior to long-term visual localization methods that use only an image descriptor or only a semantic image descriptor. Finally, under various challenging environmental conditions in the Extended CMU Seasons and RobotCar Seasons datasets, our results mostly exceed state-of-the-art 2D image-based localization methods on specific precision metrics.
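The abstract describes the hybrid descriptor as a weighted combination of a semantic image descriptor (from segmentation images) and an image descriptor (from RGB images). A minimal sketch of such a weighted fusion is given below; the function name, descriptor dimensionality, and the weight `alpha` are illustrative assumptions, not the paper's actual architecture or parameters:

```python
import numpy as np

def fuse_descriptors(rgb_desc, sem_desc, alpha=0.5):
    """Combine an RGB image descriptor and a semantic image
    descriptor into a single hybrid descriptor.

    alpha weights the RGB branch; (1 - alpha) weights the
    semantic branch. Each input is L2-normalized first so that
    neither branch dominates purely by scale, and the fused
    vector is renormalized to unit length.
    """
    rgb = rgb_desc / np.linalg.norm(rgb_desc)
    sem = sem_desc / np.linalg.norm(sem_desc)
    hybrid = alpha * rgb + (1.0 - alpha) * sem
    return hybrid / np.linalg.norm(hybrid)

# With unit-length hybrid descriptors, place recognition reduces
# to nearest-neighbor search by cosine similarity (dot product)
# between the query descriptor and the database descriptors.
query = fuse_descriptors(np.ones(256), np.arange(1.0, 257.0))
```

In a learned variant, `alpha` (or a per-dimension weighting) would be a trainable parameter optimized jointly with the two descriptor branches, which matches the abstract's statement that the combined descriptor is trained by a convolutional neural network.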
format Article
id doaj-art-920ac0dde6de4e5b8ebf678c732bbe2f
institution OA Journals
issn 1687-9619
language English
publishDate 2022-01-01
publisher Wiley
record_format Article
series Journal of Robotics
spelling Qingwu Shi, Junjun Wu, Zeqin Lin, Ningwei Qin (School of Mechatronic Engineering and Automation); Journal of Robotics, ISSN 1687-9619, Wiley, 2022-01-01; doi:10.1155/2022/9354909
title Learning a Robust Hybrid Descriptor for Robot Visual Localization
url http://dx.doi.org/10.1155/2022/9354909