Method for Acquiring Passenger Standing Positions on Subway Platforms Based on Binocular Vision
| Main Authors: | , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10891384/ |
| Summary: | Obtaining the standing positions of passengers on a subway platform can provide data support for more accurate passenger flow guidance measures. This paper uses binocular vision technology to acquire a depth matrix of the platform scene and employs the YOLOX object detection algorithm to identify the positions of passengers’ heads within the depth matrix. The depth value at the center point of each passenger’s head is extracted and, through coordinate mapping, converted into a position in the platform coordinate system, which determines where each passenger is standing and yields the distribution of passengers at the station. This distribution data can provide more accurate information for passenger flow guidance. Because existing public datasets lack binocular image data for underground platform levels, this study constructs an underground island platform model in Blender and generates a dataset containing 200 test scenarios. Validation of the proposed model on this dataset shows that 91.9% of the predicted standing positions have an error of less than 41.8 centimeters, with an average prediction error of 18.1 centimeters and an average computation time of 860 milliseconds. The model’s performance has also been verified in real subway scenarios, demonstrating its effectiveness in real-world environments. |
| ISSN: | 2169-3536 |
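The coordinate-mapping step described in the summary, converting a detected head-center pixel and its depth value into a position in the platform coordinate system, can be sketched with a standard pinhole back-projection followed by a rigid transform. All intrinsic and extrinsic values below are hypothetical placeholders for illustration, not parameters reported in the paper:

```python
import numpy as np

# Hypothetical camera intrinsics (focal lengths and principal point, in
# pixels). Illustrative values only; the paper does not publish these.
FX, FY = 1000.0, 1000.0
CX, CY = 640.0, 360.0

# Hypothetical extrinsics: rotation and translation taking camera-frame
# coordinates into the platform coordinate system (meters).
R = np.eye(3)
T = np.array([0.0, 0.0, 2.5])

def pixel_to_platform(u, v, depth):
    """Back-project a head-center pixel (u, v) with its depth value (meters)
    into the platform coordinate system."""
    # Pinhole back-projection into the camera frame.
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    cam = np.array([x, y, depth])
    # Rigid transform from camera frame to platform frame.
    return R @ cam + T

# Example: a head detected at pixel (800, 400) with 5 m depth.
pos = pixel_to_platform(800.0, 400.0, 5.0)
```

In the described pipeline, `u` and `v` would come from the center of a YOLOX head bounding box and `depth` from the binocular depth matrix at that pixel; the extrinsics would be obtained by calibrating the stereo rig against platform landmarks.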