A vision model for automated frozen tuna processing

Abstract Accurate and rapid segmentation of key parts of frozen tuna, along with precise pose estimation, is crucial for automated processing. However, challenges such as size differences and indistinct features of tuna parts, as well as the complexity of determining fish poses in multi-fish scenarios, hinder this process. To address these issues, this paper introduces TunaVision, a vision model based on YOLOv8 designed for automated tuna processing. TunaVision incorporates enhancements in instance segmentation through YOLOv8m-FusionSeg, improving the segmentation of small and complex targets by increasing channel depth and optimizing feature fusion. Additionally, the YOLOv8s RSF model improves feature extraction speed and accuracy, ensuring each fish is correctly identified and localized before segmentation and pose estimation. Furthermore, TunaVision employs a vector-based approach for pose estimation, utilizing detection and segmentation results to determine fish posture and orientation. Experiments show that YOLOv8m-FusionSeg achieves an mAP@0.5 of 93.3%, while YOLOv8s RSF achieves an mAP@0.5 of 96.1%, with a mean absolute error (MAE) of 1.81 degrees in angle estimation, significantly outperforming other methods. These findings highlight TunaVision’s effectiveness in segmenting, detecting, and estimating poses of frozen tuna, offering valuable insights for the development of automated processing systems.
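The abstract mentions a "vector-based approach for pose estimation" that derives fish orientation from segmentation results, but the record does not describe the method itself. As a generic illustration only (not the paper's implementation), the sketch below estimates an object's orientation angle from a binary segmentation mask via the principal axis of its pixel coordinates, a common vector-based technique; the function name and all details are assumptions.

```python
# Illustrative sketch only: a generic vector-based orientation estimate from a
# binary segmentation mask, NOT the method described in the paper.
import numpy as np

def orientation_angle_deg(mask: np.ndarray) -> float:
    """Return the orientation of the mask's principal axis in degrees [0, 180)."""
    ys, xs = np.nonzero(mask)                  # foreground pixel coordinates
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                    # center the point cloud
    cov = pts.T @ pts / len(pts)               # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending eigenvalues
    major = eigvecs[:, -1]                     # eigenvector of the largest eigenvalue
    angle = np.degrees(np.arctan2(major[1], major[0]))
    return angle % 180.0                       # an axis has no sign, so fold to [0, 180)

# Example: a thin horizontal bar should yield an orientation near 0 degrees.
mask = np.zeros((50, 50), dtype=np.uint8)
mask[24:26, 5:45] = 1
print(round(orientation_angle_deg(mask), 1))
```

A reported MAE of 1.81 degrees would be computed by averaging the absolute difference between such predicted angles and ground-truth annotations over a test set.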

Bibliographic Details
Main Authors: Richeng Wang, Xiongsheng Zheng (School of Marine Engineering Equipment, Zhejiang Ocean University); Yan Chen (School of Food and Pharmacy, Zhejiang Ocean University)
Format: Article
Language: English
Published: Nature Portfolio, 2025-01-01
Series: Scientific Reports
ISSN: 2045-2322
Subjects: Frozen tuna processing; Instance segmentation; Object detection; Pose estimation; YOLOv8
Online Access: https://doi.org/10.1038/s41598-025-87339-3
Collection: DOAJ