Visual Semantic Navigation Based on Deep Learning for Indoor Mobile Robots

Bibliographic Details
Main Authors: Li Wang, Lijun Zhao, Guanglei Huo, Ruifeng Li, Zhenghua Hou, Pan Luo, Zhenye Sun, Ke Wang, Chenguang Yang
Format: Article
Language: English
Published: Wiley 2018-01-01
Series: Complexity
Online Access: http://dx.doi.org/10.1155/2018/1627185
Description
Summary: To improve the environmental perception ability of mobile robots during semantic navigation, a three-layer perception framework based on transfer learning is proposed, comprising a place recognition model, a rotation region recognition model, and a “side” recognition model. The first model recognizes different regions in rooms and corridors, the second determines where the robot should rotate, and the third decides which side of a corridor or room aisle the robot should walk on. The “side” recognition model also corrects the robot's motion in real time, which guarantees accurate arrival at the specified target. Moreover, semantic navigation is accomplished using only one sensor (a camera). Several experiments conducted in a real indoor environment demonstrate the effectiveness and robustness of the proposed perception framework.
ISSN: 1076-2787, 1099-0526
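
The three-layer framework summarized above maps naturally onto three image classifiers fine-tuned by transfer learning, each consuming the same camera frame. The following minimal sketch illustrates that idea in Python with PyTorch/torchvision; the backbone choice (ResNet-18), the class counts, and the perceive() helper are illustrative assumptions, not the authors' published code.

    # Minimal sketch of a three-layer perception pipeline via transfer learning.
    # Backbone, class counts, and labels are assumptions for illustration only.
    import torch
    import torch.nn as nn
    from torchvision import models

    def make_classifier(num_classes: int) -> nn.Module:
        """Fine-tune a pretrained CNN: freeze the backbone, retrain the head."""
        net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in net.parameters():
            p.requires_grad = False  # keep ImageNet features fixed
        net.fc = nn.Linear(net.fc.in_features, num_classes)  # new task head
        return net

    # One model per perception layer (hypothetical class counts).
    place_net  = make_classifier(num_classes=8)  # region in room/corridor
    rotate_net = make_classifier(num_classes=2)  # rotation region: yes / no
    side_net   = make_classifier(num_classes=3)  # walking side: left/center/right

    def perceive(image: torch.Tensor) -> dict:
        """Run one camera frame through all three layers, return label indices."""
        with torch.no_grad():
            for net in (place_net, rotate_net, side_net):
                net.eval()
            return {
                "place":  place_net(image).argmax(1).item(),
                "rotate": rotate_net(image).argmax(1).item(),
                "side":   side_net(image).argmax(1).item(),
            }

    # Usage: a single 224x224 RGB frame from the robot's only sensor, a camera.
    frame = torch.rand(1, 3, 224, 224)
    print(perceive(frame))

Freezing the pretrained backbone and retraining only the final layer is the standard transfer-learning recipe the summary alludes to; running the three classifiers independently on each frame mirrors the paper's separation of place, rotation, and side decisions.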