YOLOSeg with applications to wafer die particle defect segmentation

Bibliographic Details
Main Authors: Yen-Ting Li, Yu-Cheng Chan, Chen-Che Huang, Yu-Chang Hsu, Ssu-Han Chen
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: Scientific Reports
Subjects:
Online Access:https://doi.org/10.1038/s41598-025-86323-1
Description
Summary: This study develops you only look once segmentation (YOLOSeg), an end-to-end instance segmentation model, with applications to segmenting small particle defects embedded on a wafer die. YOLOSeg uses YOLOv5s as its basis and extends a UNet-like structure to form the segmentation head. YOLOSeg predicts not only the bounding boxes of particle defects but also the corresponding bounding polygons. Furthermore, YOLOSeg obtains a better set of weights by combining several training tricks, such as freezing layers, switching the mask loss, using auto-anchor, and introducing denoising diffusion probabilistic model (DDPM) image augmentation. Experimental results on the testing image set show that YOLOSeg's average precision (AP) and intersection over union (IoU) reach 0.821 and 0.732, respectively. Even when particle defects are extremely small, YOLOSeg far outperforms current instance segmentation models such as Mask R-CNN, YOLACT, YUSEG, and Ultralytics' YOLOv5s-segmentation. Additionally, preparing the training image set for YOLOSeg is time-saving because it requires neither collecting a large number of defective samples, nor annotating pseudo defects, nor designing hand-crafted features.
ISSN:2045-2322