Intelligent Air Gap Segmentation in Electric Motor Color Images Using a Self-Organizing Map
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10976649/ |
| Summary: | Image segmentation plays an important role in automating various inspection procedures that are still performed manually. In the production of electric motors, a relevant inspection task involves examining the air gap between the rotor and the stator to identify common defects. This paper introduces an intelligent approach based on a self-organizing map (SOM) that is trained individually for each image to automatically segment the air gap region in electric motor images. This per-image training eliminates the need for a large dataset of manually segmented images, which most intelligent segmentation methods in the literature require. The procedure begins with a polar transformation of the motor image, which helps to identify the region of interest for segmentation. The selected region is then used to build a training dataset for the SOM, in which each training example is a frame extracted around one pixel of the region. Once map training is complete, an automated procedure extracts the air gap segment from the map output by selecting the pixel cluster that corresponds to the air gap. Three configurations of this approach were evaluated, exploring the impact of different color spaces (RGB, LAB, and HSV). For evaluation, six motor samples were tested under different illumination conditions, and the results were compared with two unsupervised segmentation methods: Otsu’s method and K-Means. The proposed approach outperformed both baseline methods on the evaluated metrics, achieving a 65% higher intersection-over-union score than the better baseline (K-Means). This study demonstrates the efficacy of the proposed image segmentation method, particularly in scenarios where only a limited number of images is available, making it impractical to train a supervised model. |
|---|---|
| ISSN: | 2169-3536 |
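The summary's first step, a polar transformation of the motor image, can be sketched as below. The function name `unwrap_polar`, the nearest-neighbor sampling, and all parameters are illustrative assumptions, not the authors' implementation (which the record does not include); the idea is only that unwrapping an annulus around the motor center turns the circular air gap into a roughly horizontal band that is easier to isolate.

```python
import numpy as np

def unwrap_polar(img, center, r_min, r_max, n_theta=360):
    """Map the annulus r_min <= r < r_max around `center` to a
    (radius, angle) rectangle via nearest-neighbor sampling."""
    radii = np.arange(r_min, r_max)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    # Round the sampled coordinates and clamp them to the image bounds.
    ys = np.clip(np.round(center[0] + rr * np.sin(tt)).astype(int),
                 0, img.shape[0] - 1)
    xs = np.clip(np.round(center[1] + rr * np.cos(tt)).astype(int),
                 0, img.shape[1] - 1)
    return img[ys, xs]  # shape: (r_max - r_min, n_theta)
```

Libraries such as OpenCV provide an equivalent operation out of the box; this sketch only makes the geometry of the step explicit.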
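The core idea described in the summary, building one training example per pixel from a small frame around it and clustering those examples with a SOM trained on that single image, might look roughly like the following. This is a minimal sketch under stated assumptions: it uses a tiny 1-D SOM rather than whatever map topology, frame size, and cluster-selection rule the paper actually uses, and every function name here is hypothetical.

```python
import numpy as np

def extract_frames(img, size=3):
    """One training vector per pixel of a (H, W, C) image: the flattened
    size x size neighborhood around it (borders are reflected)."""
    pad = size // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    h, w, c = img.shape
    frames = np.empty((h * w, size * size * c))
    k = 0
    for i in range(h):
        for j in range(w):
            frames[k] = padded[i:i + size, j:j + size].ravel()
            k += 1
    return frames

def train_som(data, n_units=4, epochs=10, lr=0.5, seed=0):
    """Tiny 1-D self-organizing map: the best-matching unit and its
    neighbors on the map move toward each training sample."""
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), n_units, replace=False)].astype(float)
    for t in range(epochs):
        alpha = lr * (1 - t / epochs)                      # decaying rate
        sigma = max(n_units / 2 * (1 - t / epochs), 0.5)   # shrinking radius
        for x in data[rng.permutation(len(data))]:
            b = np.argmin(np.linalg.norm(w - x, axis=1))   # best-matching unit
            h = np.exp(-((np.arange(n_units) - b) ** 2) / (2 * sigma ** 2))
            w += alpha * h[:, None] * (x - w)
    return w

def segment(img, n_units=4, frame=3):
    """Label every pixel with the index of its best-matching SOM unit."""
    data = extract_frames(img, frame)
    w = train_som(data, n_units)
    labels = np.argmin(
        np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2), axis=1)
    return labels.reshape(img.shape[:2])
```

From the resulting label map, the air gap segment could then be picked by some automated rule, for instance the cluster whose prototype is darkest; the record does not say which criterion the authors use.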
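The reported 65% improvement refers to the standard intersection-over-union (Jaccard) score between a predicted mask and a ground-truth mask, which is computed as follows (the empty-mask convention here is an assumption):

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    # Two empty masks agree perfectly by convention.
    return np.logical_and(pred, truth).sum() / union if union else 1.0
```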