YOLO-LSD: A Lightweight Object Detection Model for Small Targets at Long Distances to Secure Pedestrian Safety

Bibliographic Details
Main Authors: Ming-An Chung, Sung-Yun Chai, Ming-Chun Hsieh, Chia-Wei Lin, Kai-Xiang Chen, Shang-Jui Huang, Jun-Hao Zhang
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10990045/
Description
Summary: Ensuring the safety of children, the elderly, and individuals with disabilities remains a significant challenge in modern transportation environments, particularly in urban areas with mixed traffic comprising both vehicles and pedestrians. Real-time detection of long-distance, small-scale objects remains difficult due to resolution and computational constraints. This study proposes an improved lightweight object detection model, You Only Look Once – Long-distance Small-target Detection (YOLO-LSD), designed for the long-range recognition of small objects in intelligent transportation applications. The proposed model integrates the C3C2 module and the new Efficient Layer Aggregation Network – Convolutional Block Attention Module (ELAN-CBAM) to improve the efficiency of feature extraction while reducing computational overhead. The C3C2 module optimizes the network structure by reducing redundant operations, making it more suitable for real-time deployment on embedded devices. The CBAM module improves feature selection by incorporating channel and spatial attention mechanisms, thereby enhancing the robustness of small-object detection under complex urban conditions. The proposed YOLO-LSD model was tested on a customized wearable backpack system equipped with front and rear cameras for environmental perception. YOLO-LSD attains higher detection accuracy for small and distant objects while maintaining lower computational complexity. This study employs the PASCAL VOC 2007+2012 (VOC0712) dataset to evaluate the proposed model. Compared with YOLOv7, the model achieves the highest mean Average Precision (mAP) of 80.1%, the smallest parameter count of 30.539M, and the lowest computational cost of 91.6 GFLOPs. This lightweight architecture makes it particularly suitable for intelligent transportation, pedestrian safety, and autonomous mobility applications.
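The abstract describes CBAM as applying channel attention followed by spatial attention to refine feature maps. As a rough illustration of that mechanism (not the authors' implementation: the array shapes, weight names, and the simplified stand-in for CBAM's learned 7x7 spatial convolution are all assumptions), a minimal NumPy sketch might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    # x: (C, H, W). A shared two-layer MLP scores both the
    # global-average-pooled and global-max-pooled channel descriptors.
    avg = x.mean(axis=(1, 2))                        # (C,)
    mx = x.max(axis=(1, 2))                          # (C,)
    att = sigmoid(w2 @ np.maximum(0, w1 @ avg) +
                  w2 @ np.maximum(0, w1 @ mx))       # (C,) weights in (0, 1)
    return x * att[:, None, None]

def spatial_attention(x):
    # Pool across channels; CBAM then applies a learned 7x7 conv to the
    # stacked maps. A fixed average stands in for that conv here.
    avg = x.mean(axis=0)                             # (H, W)
    mx = x.max(axis=0)                               # (H, W)
    att = sigmoid(0.5 * (avg + mx))                  # (H, W) per-pixel weights
    return x * att[None, :, :]

def cbam(x, w1, w2):
    # CBAM order: channel attention first, then spatial attention.
    return spatial_attention(channel_attention(x, w1, w2))

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                              # r = channel reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))                # MLP reduction layer
w2 = rng.standard_normal((C, C // r))                # MLP expansion layer
y = cbam(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because both attention maps are element-wise multipliers, the refined feature map keeps the input's shape, which is what lets the module drop into an existing backbone such as ELAN without structural changes.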
ISSN: 2169-3536