Visual Semantic Navigation Based on Deep Learning for Indoor Mobile Robots
To improve the environmental perception ability of mobile robots during semantic navigation, a three-layer perception framework based on transfer learning is proposed, comprising a place recognition model, a rotation region recognition model, and a “side” recognition model. The first model recognizes different regions in rooms and corridors, the second determines where the robot should rotate, and the third decides which side of a corridor or room aisle the robot should walk along. Furthermore, the “side” recognition model also corrects the robot's motion in real time, which guarantees accurate arrival at the specific target. Moreover, semantic navigation is accomplished using only one sensor (a camera). Several experiments conducted in a real indoor environment demonstrate the effectiveness and robustness of the proposed perception framework.
Main Authors: | Li Wang, Lijun Zhao, Guanglei Huo, Ruifeng Li, Zhenghua Hou, Pan Luo, Zhenye Sun, Ke Wang, Chenguang Yang
---|---
Format: | Article
Language: | English
Published: | Wiley, 2018-01-01
Series: | Complexity
Online Access: | http://dx.doi.org/10.1155/2018/1627185
author | Li Wang, Lijun Zhao, Guanglei Huo, Ruifeng Li, Zhenghua Hou, Pan Luo, Zhenye Sun, Ke Wang, Chenguang Yang
collection | DOAJ |
description | To improve the environmental perception ability of mobile robots during semantic navigation, a three-layer perception framework based on transfer learning is proposed, comprising a place recognition model, a rotation region recognition model, and a “side” recognition model. The first model recognizes different regions in rooms and corridors, the second determines where the robot should rotate, and the third decides which side of a corridor or room aisle the robot should walk along. Furthermore, the “side” recognition model also corrects the robot's motion in real time, which guarantees accurate arrival at the specific target. Moreover, semantic navigation is accomplished using only one sensor (a camera). Several experiments conducted in a real indoor environment demonstrate the effectiveness and robustness of the proposed perception framework.
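As a purely illustrative aside, not part of the record or the paper's published code: the abstract describes a hierarchy of three image classifiers feeding one control decision, and the minimal Python sketch below shows one way such a pipeline could be composed. Every name in it (`ThreeLayerPerception`, `PerceptionResult`, the `predict(image)` interface) is a hypothetical assumption; the paper specifies only that the three models are obtained via transfer learning.

```python
# Minimal sketch of the three-layer perception step described in the
# abstract. Hypothetical names throughout; each model is assumed to be
# a fine-tuned classifier exposing predict(image).

from dataclasses import dataclass


@dataclass
class PerceptionResult:
    place: str            # region label, e.g. "corridor" or "room_aisle"
    should_rotate: bool   # True if the robot is in a rotation region
    side_offset: float    # lateral correction; sign encodes the side to keep


class ThreeLayerPerception:
    """Hierarchical perception: place -> rotation region -> walking side."""

    def __init__(self, place_model, rotation_model, side_model):
        self.place_model = place_model        # layer 1: where is the robot?
        self.rotation_model = rotation_model  # layer 2: should it rotate here?
        self.side_model = side_model          # layer 3: which side to walk on?

    def perceive(self, image) -> PerceptionResult:
        place = self.place_model.predict(image)
        should_rotate = self.rotation_model.predict(image)
        # The side model's output doubles as a real-time motion correction,
        # matching the abstract's claim that it keeps the robot on course.
        side_offset = self.side_model.predict(image)
        return PerceptionResult(place, should_rotate, side_offset)
```

The point this sketch makes explicit is the paper's single-sensor claim: one camera image drives all three decisions, with the third layer's output reused continuously for steering correction rather than only at waypoints.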
format | Article |
id | doaj-art-7b86be741ae14bd680d3f49a39cfa280 |
institution | Kabale University |
issn | 1076-2787, 1099-0526
language | English |
publishDate | 2018-01-01 |
publisher | Wiley |
record_format | Article |
series | Complexity |
spelling | Visual Semantic Navigation Based on Deep Learning for Indoor Mobile Robots. Li Wang (1), Lijun Zhao (1), Guanglei Huo (2), Ruifeng Li (1), Zhenghua Hou (1), Pan Luo (1), Zhenye Sun (1), Ke Wang (1), Chenguang Yang (3). Affiliations: (1) State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China; (2) HNA Technology Group, Shanghai 200122, China; (3) Key Laboratory of Autonomous Systems and Networked Control, College of Automation Science and Engineering, South China University of Technology, Guangzhou 510640, China. Wiley, Complexity, 2018-01-01, doi:10.1155/2018/1627185.
title | Visual Semantic Navigation Based on Deep Learning for Indoor Mobile Robots |
url | http://dx.doi.org/10.1155/2018/1627185 |