A dynamic dropout self-distillation method for object segmentation

Bibliographic Details
Main Authors: Lei Chen, Tieyong Cao, Yunfei Zheng, Yang Wang, Bo Zhang, Jibin Yang
Format: Article
Language: English
Published: Springer 2024-12-01
Series: Complex & Intelligent Systems
Subjects:
Online Access: https://doi.org/10.1007/s40747-024-01705-8
Description
Summary: In knowledge distillation, a stronger teacher does not necessarily produce a better student because of the capacity mismatch between the two models. This is especially true in pixel-level object segmentation, where some challenging pixels are difficult for the student model to learn; even if the student imitates the teacher at every pixel, its performance shows little improvement. Mimicking the human learning process from easy to difficult, a dynamic dropout self-distillation method for object segmentation is proposed, which addresses this problem by discarding the knowledge that the student struggles to learn. First, the pixels where the predicted probabilities of the teacher and student models differ significantly are identified and defined as difficult-to-learn pixels for the student. Second, a dynamic dropout strategy is proposed to match the changing capability of the student model; it is used to discard the pixels whose knowledge is too hard for the student. Finally, to validate the effectiveness of the proposed method, a simple student model for object segmentation and a virtual teacher model with perfect segmentation accuracy are constructed. Experimental results on four public datasets demonstrate that, when there is a large performance gap between the teacher and student models, the proposed self-distillation method improves the student model more effectively than other methods.
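The abstract describes the method only at a high level; the exact difficulty criterion, dropout schedule, and loss formulation are not given in this record. The PyTorch sketch below is therefore an illustrative reconstruction under stated assumptions: difficult-to-learn pixels are taken to be those with the largest teacher-student probability gap, and a fraction of them, controlled by a dropout rate supplied by the caller, is masked out of a KL-based distillation loss. All names here (e.g. dynamic_dropout_distillation_loss, drop_rate) are hypothetical and not from the paper.

```python
import torch
import torch.nn.functional as F


def dynamic_dropout_distillation_loss(student_logits, teacher_logits,
                                       drop_rate, temperature=1.0):
    """Illustrative sketch of a dynamic-dropout self-distillation loss.

    Assumptions (not from the paper): difficult-to-learn pixels are those
    with the largest absolute teacher-student probability gap, and
    `drop_rate` in [0, 1) is the fraction of such pixels discarded from
    the loss. Logit shapes: (batch, num_classes, H, W).
    """
    # Per-pixel class probabilities of both models.
    p_student = F.softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)

    # Per-pixel difficulty score: total absolute probability gap.
    gap = (p_teacher - p_student).abs().sum(dim=1)            # (B, H, W)

    # Keep the easiest (1 - drop_rate) fraction of pixels, discard the rest.
    flat_gap = gap.flatten()
    num_keep = max(1, int((1.0 - drop_rate) * flat_gap.numel()))
    threshold = torch.kthvalue(flat_gap, num_keep).values
    keep_mask = (gap <= threshold).float()                     # 1 = keep

    # Pixel-wise KL divergence between teacher and student predictions.
    kl = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                  p_teacher, reduction='none').sum(dim=1)      # (B, H, W)

    # Average the distillation loss over the retained (easier) pixels only.
    return (kl * keep_mask).sum() / keep_mask.sum().clamp(min=1.0)
```

In such a sketch, the "dynamic" aspect would come from how the caller schedules drop_rate, for example starting high and decaying it as training progresses so that harder pixels are gradually re-admitted as the student's capability grows; the specific schedule used in the paper is not stated in this record.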
ISSN: 2199-4536
2198-6053