Damage Detection Method for Road Ancillary Facilities Integrating Attention Mechanism

Bibliographic Details
Main Authors: Shuang Yang, Huiqin Wang, Ke Wang, Nan Guo
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10908380/
Description
Summary: Anti-glare reflective stickers, as essential road ancillary facilities on highways, play a crucial role in ensuring driver visibility during nighttime and adverse weather conditions. However, existing detection methods often suffer from low accuracy, high computational complexity, and large model sizes when dealing with complex backgrounds, small targets, and long-distance conditions. To address these challenges, this paper proposes a damage detection model for road ancillary facilities that integrates an attention mechanism. The model first introduces the D-GhostNet V3Conv module, which replaces the standard convolutional layers and significantly enhances feature extraction while reducing computational cost. Additionally, the improved AR-BiFormer attention mechanism is integrated into the backbone network; it adaptively reweights feature maps by processing contextual information in parallel, effectively improving the detection of small targets in complex scenes. Finally, the WIoUv3 bounding-box loss function is employed to optimize bounding-box regression, ensuring higher localization accuracy across different scales and overlap conditions. Experimental results show that the improved model achieves a mAP of 89.78% and 86.31 FPS in complex road scenes, improvements of 1.57% and 1.82% over the original YOLOv8 model, respectively. These gains substantially improve detection accuracy and real-time performance, making the model especially suitable for highway monitoring tasks in edge computing environments.
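The record does not describe the internals of the D-GhostNet V3Conv module, so the snippet below is only a minimal sketch of the underlying GhostNet idea it builds on: a small primary convolution produces part of the output channels and cheap depthwise operations generate the remaining "ghost" features, which is how such a block can replace a standard convolution at lower computational cost. The class name `GhostConv` and all parameter defaults are illustrative assumptions, not the authors' implementation.

```python
# Illustrative GhostNet-style convolution block (assumption: the paper's
# D-GhostNet V3Conv is a variant of this idea; its exact design is not
# given in this record).
import math
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=1, ratio=2, dw_size=3, stride=1):
        super().__init__()
        init_ch = math.ceil(out_ch / ratio)      # channels from the primary conv
        cheap_ch = init_ch * (ratio - 1)         # channels from cheap depthwise ops
        self.out_ch = out_ch
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, kernel_size, stride, kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.SiLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, cheap_ch, dw_size, 1, dw_size // 2, groups=init_ch, bias=False),
            nn.BatchNorm2d(cheap_ch),
            nn.SiLU(inplace=True),
        )

    def forward(self, x):
        y1 = self.primary(x)                     # "intrinsic" feature maps
        y2 = self.cheap(y1)                      # "ghost" feature maps
        return torch.cat([y1, y2], dim=1)[:, :self.out_ch]
```

Swapping a block like this in for a standard convolution of the same output width roughly halves the multiply-accumulate cost, which is the kind of saving the abstract attributes to the replacement of the standard convolutional layers.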
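For the WIoUv3 loss mentioned in the abstract, the sketch below follows the published Wise-IoU v3 formulation (a distance-attention-weighted IoU loss scaled by a non-monotonic focusing coefficient); the paper may use a different variant or different hyperparameters. The function name, the hyperparameter defaults `alpha=1.9` and `delta=3.0`, and the externally supplied running mean `iou_loss_mean` are assumptions for illustration.

```python
# Rough sketch of a Wise-IoU v3 style bounding-box loss (assumption: follows the
# published WIoU formulation, not necessarily the exact variant used in the paper).
import torch

def wiou_v3_loss(pred, target, iou_loss_mean, alpha=1.9, delta=3.0):
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2); iou_loss_mean: running mean of 1 - IoU."""
    # IoU loss
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter + 1e-7
    iou_loss = 1.0 - inter / union

    # Distance attention: squared center distance over the squared diagonal of the
    # smallest enclosing box (detached so it rescales the loss without driving gradients).
    cx_p, cy_p = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    cx_t, cy_t = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    enc_wh = torch.max(pred[:, 2:], target[:, 2:]) - torch.min(pred[:, :2], target[:, :2])
    dist = (cx_p - cx_t) ** 2 + (cy_p - cy_t) ** 2
    diag = (enc_wh[:, 0] ** 2 + enc_wh[:, 1] ** 2).detach() + 1e-7
    r_wiou = torch.exp(dist / diag)

    # Non-monotonic focusing coefficient from the "outlier degree" beta,
    # which down-weights both very easy and very hard (low-quality) boxes.
    beta = iou_loss.detach() / (iou_loss_mean + 1e-7)
    focus = beta / (delta * alpha ** (beta - delta))

    return (focus * r_wiou * iou_loss).mean()
```

In practice `iou_loss_mean` would be maintained as a momentum-updated statistic over training batches, which is what lets the focusing coefficient adapt to the current difficulty of the regression task.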
ISSN: 2169-3536