Monocular vision based on the YOLOv7 and coordinate transformation for vehicles precise positioning

Bibliographic Details
Main Authors: Jingzhao Li, Huashun Li, Xiaobo Zhang, Qing Shi
Format: Article
Language: English
Published: Taylor & Francis Group 2023-12-01
Series: Connection Science
Online Access: http://dx.doi.org/10.1080/09540091.2023.2166903
Description
Summary: Logistics tracking and positioning is a critical part of the discrete digital workshop and is widely applied in many fields (e.g. industry and transport). However, such workshops are characterized by dispersed manufacturing machinery, frequent material flows, and complex noise environments, which severely degrade the accuracy of conventional radio-frequency positioning approaches. Recent panoramic vision positioning techniques rely on binocular cameras and therefore cannot be applied to the monocular cameras common in industrial scenarios. This paper proposes a monocular vision positioning method based on YOLOv7 and coordinate transformation to improve positioning accuracy in the digital workshop. Positioning beacons are mounted on top of the moving vehicles at a uniform height. The pixel coordinates of each beacon in the image are obtained with a YOLOv7 model trained via transfer learning, and a coordinate transformation then maps them to the real-world coordinates of the vehicle. Experimental results show that the proposed monocular vision system improves the positioning accuracy of the digital workshop. The code and pre-trained models are available at https://github.com/ZS520L/YOLO_Positioning.
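
The coordinate-transformation step described above amounts to back-projecting the detected beacon's pixel coordinates onto the horizontal plane at the known, uniform beacon height. The sketch below illustrates that step in Python/NumPy under a standard calibrated pinhole-camera model; the intrinsic matrix K, the extrinsics R and t, the beacon height, and the example pixel coordinates are illustrative assumptions for the sketch, not calibration values or code from the paper.

import numpy as np

# Minimal sketch of mapping an image point to world coordinates, assuming a
# calibrated pinhole camera and beacons mounted at a known uniform height.
# K, R, t and BEACON_HEIGHT below are placeholder values, not from the paper.
K = np.array([[1000.0,    0.0, 640.0],   # fx,  0, cx  (example intrinsics)
              [   0.0, 1000.0, 360.0],   #  0, fy, cy
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                            # world-to-camera rotation (placeholder)
t = np.array([0.0, 0.0, 3.0])            # world-to-camera translation (placeholder)
BEACON_HEIGHT = 1.8                      # uniform beacon height in metres (assumed)

def pixel_to_world(u, v, K=K, R=R, t=t, z_world=BEACON_HEIGHT):
    """Back-project pixel (u, v) onto the horizontal plane Z = z_world."""
    # Viewing ray through the pixel, expressed in camera coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Camera centre and ray direction expressed in world coordinates.
    cam_center = -R.T @ t
    ray_world = R.T @ ray_cam
    # Scale the ray so that it intersects the plane Z = z_world.
    s = (z_world - cam_center[2]) / ray_world[2]
    return cam_center + s * ray_world

# (u, v) would be the centre of the beacon bounding box returned by YOLOv7.
x, y, z = pixel_to_world(700.0, 400.0)
print(f"Vehicle position: X={x:.2f} m, Y={y:.2f} m (Z fixed at {z:.2f} m)")

Because every beacon sits at the same height, the intersection of the viewing ray with that single plane is unique, which is what makes monocular (single-camera) positioning well posed in this setting.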
ISSN: 0954-0091
1360-0494