Rapid and accurate detection of peanut pod appearance quality based on lightweight and improved YOLOv5_SSE model


Bibliographic Details
Main Authors: Zhixia Liu, Xilin Zhong, Chunyu Wang, Guozhen Wu, Fengyu He, Jing Wang, Dexu Yang
Format: Article
Language: English
Published: Frontiers Media S.A. 2025-02-01
Series: Frontiers in Plant Science
Subjects:
Online Access:https://www.frontiersin.org/articles/10.3389/fpls.2025.1494688/full
Description
Summary:
Introduction: With the escalating demands for agricultural product quality in modern agriculture, peanuts, as a crucial economic crop, have a pod appearance quality that directly influences market value and consumer acceptance. Traditionally, visual inspection of peanut pod appearance quality has relied heavily on manual labor, which is labor-intensive and inefficient and is also susceptible to inspectors' subjective judgments, compromising the consistency and accuracy of inspection outcomes. Consequently, developing a rapid, accurate, and automated inspection system is of significant importance for enhancing production efficiency and quality control in the peanut industry.

Methods: This study optimizes and iterates the YOLOv5s model to swiftly and precisely identify high-quality peanuts, peanuts with mechanical damage, moldy peanuts, and germinated peanuts. The CSPDarkNet53 backbone of YOLOv5s was replaced with the ShuffleNetv2 backbone to reduce the model's weight. Various attention mechanisms were explored for integration into the backbone network to enhance model performance. Furthermore, several loss functions were compared, and the Focal-EIoU loss was adopted as the regression loss term for bounding-box prediction, improving inference accuracy (see the loss sketch after this summary).

Results: Compared with the YOLOv5s model, SSE-YOLOv5s uses only 6.7% of the original parameters and 7.8% of the computation, and its FPS is 115.1% higher. Its weight file is only 7.6% of the original size, while detection accuracy and mean average precision (mAP) reach 98.3% and 99.3%, improvements of 1.6 and 0.7 percentage points over the original YOLOv5s model.

Discussion: The results underscore the superiority of the SSE-YOLOv5s model, which achieves a maximum mAP of 99.3% with a minimal model size of 1.1 MB and a peak FPS of 192.3. The optimized network rapidly, efficiently, and accurately detects the appearance quality of mixed multi-target peanut pods and is suitable for deployment on embedded devices. This study provides an essential reference for multi-target appearance quality inspection of peanut pods.
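The Methods section names the Focal-EIoU loss as the bounding-box regression term. As context only, the sketch below is a minimal PyTorch-style implementation of the published Focal-EIoU formulation (EIoU penalties on centre distance, width, and height, weighted by IoU raised to a focusing exponent gamma); the function name, tensor layout, and default gamma=0.5 are illustrative assumptions, not the authors' exact code.

```python
import torch


def focal_eiou_loss(pred, target, gamma=0.5, eps=1e-7):
    """Focal-EIoU loss for axis-aligned boxes in (x1, y1, x2, y2) format.

    pred, target: float tensors of shape (N, 4); returns the mean loss over N pairs.
    Illustrative sketch, not the code used in the paper.
    """
    # Box widths and heights
    pw, ph = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    tw, th = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]

    # Intersection area and IoU
    ix1, iy1 = torch.max(pred[:, 0], target[:, 0]), torch.max(pred[:, 1], target[:, 1])
    ix2, iy2 = torch.min(pred[:, 2], target[:, 2]), torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    union = pw * ph + tw * th - inter + eps
    iou = inter / union

    # Smallest enclosing box (width ew, height eh, squared diagonal c2)
    ew = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0]) + eps
    eh = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1]) + eps
    c2 = ew ** 2 + eh ** 2

    # Squared distance between box centres
    pcx, pcy = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    tcx, tcy = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    rho2 = (pcx - tcx) ** 2 + (pcy - tcy) ** 2

    # EIoU: IoU loss + centre-distance, width, and height penalties
    eiou = (1 - iou) + rho2 / c2 + (pw - tw) ** 2 / ew ** 2 + (ph - th) ** 2 / eh ** 2

    # Focal weighting: IoU ** gamma down-weights low-overlap, low-quality boxes
    return (iou.clamp(min=eps) ** gamma * eiou).mean()
```

For example, focal_eiou_loss(torch.tensor([[10., 10., 50., 60.]]), torch.tensor([[12., 8., 48., 62.]])) yields a small scalar loss for a well-overlapping pair, and the loss grows as the predicted box drifts from the ground truth in position or shape.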
ISSN: 1664-462X