Generating synthetic images for construction machinery data augmentation utilizing context-aware object placement

Bibliographic Details
Main Authors: Yujie Lu, Bo Liu, Wei Wei, Bo Xiao, Zhangding Liu, Wensheng Li
Format: Article
Language: English
Published: Elsevier 2025-03-01
Series: Developments in the Built Environment
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2666165925000109
Description
Summary: Datasets are an essential factor influencing the accuracy of computer vision (CV) tasks in construction. Although image synthesis methods can automatically generate substantial amounts of annotated construction data, in contrast to manual annotation, existing challenges such as geometric inconsistency limit CV task accuracy. To generate high-quality data efficiently, a construction data synthesis method utilizing Unreal Engine (UE) and PlaceNet was proposed. First, an inpainting algorithm was applied to generate pure backgrounds, followed by multi-angle foreground capture within UE. Then, the Swin Transformer and improved loss functions were integrated into PlaceNet to enhance feature extraction from construction backgrounds and improve object placement accuracy. The generated synthetic dataset achieved a high mean average precision (mAP = 85.2%) in object detection tasks, 2.1% higher than the real dataset. This study offers theoretical and practical insights into synthetic dataset generation for construction and provides a future perspective on enhancing CV task performance through image synthesis.
ISSN:2666-1659
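
Illustrative note: the abstract describes a cut-and-paste style synthesis pipeline in which a rendered foreground (captured in UE) is placed onto an inpainted background at a context-aware location, with the placement directly yielding the annotation. The following Python sketch shows only that generic compositing-and-annotation step; it is not the authors' implementation. The file names, coordinates, and the composite_with_annotation helper are hypothetical, and in the proposed method the placement location and scale would be predicted by the improved PlaceNet model rather than supplied by hand.

# Minimal sketch (assumptions noted above): paste an RGBA foreground cutout
# onto a background image and record the resulting bounding box as a
# detection label, which is what makes the synthetic sample "annotated".
from PIL import Image

def composite_with_annotation(background_path, foreground_path, x, y, scale=1.0):
    """Composite a foreground onto a background; return (image, bbox)."""
    bg = Image.open(background_path).convert("RGBA")
    fg = Image.open(foreground_path).convert("RGBA")

    # Rescale the foreground (e.g. a machinery cutout rendered from UE).
    w, h = int(fg.width * scale), int(fg.height * scale)
    fg = fg.resize((w, h))

    # Paste using the alpha channel as the mask so only the object is copied.
    bg.paste(fg, (x, y), fg)

    # The paste location and size give the bounding box for free.
    bbox = (x, y, x + w, y + h)  # (x_min, y_min, x_max, y_max)
    return bg.convert("RGB"), bbox

if __name__ == "__main__":
    image, bbox = composite_with_annotation(
        "background_site.png",   # hypothetical inpainted, machinery-free background
        "excavator_cutout.png",  # hypothetical foreground rendered in UE
        x=320, y=180, scale=0.8,  # placeholder placement; PlaceNet would predict this
    )
    image.save("synthetic_sample.jpg")
    print("bounding box:", bbox)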