Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Models
This study explores the integration of concept bottleneck models (CBMs) with knowledge distillation (KD) while preserving the locality characteristics of the CBM. Although KD proves effective in model compression, compressed models often lack interpretability in their decision-making process. We enhance comprehensive explainability by maintaining CBMs’ inherent interpretability through our novel approach to knowledge distillation. We introduce visual concept knowledge distillation (VICO-KD), which transfers both explicit and implicit visual concepts from the teacher to the student model while preserving the local interpretability of the CBM, enabling accurate classification and clear visualization of evidence. VICO-KD demonstrates superior performance on benchmark datasets compared to Vanilla-KD, ensuring the student model learns visual concepts while maintaining the local interpretation capabilities of the teacher CBM. Our methodology shows competitive performance against existing concept models, and the student model, trained via VICO-KD, exhibits enhanced performance compared to the teacher during interventions. This study highlights the effectiveness of combining a CBM with KD to improve both interpretability and explainability in compressed models while maintaining locality properties.
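For readers unfamiliar with the setup the abstract describes, here is a minimal, hypothetical PyTorch sketch of the general pattern: a concept bottleneck student trained on labels while also distilling from a teacher CBM, with a concept-level matching term alongside vanilla logit distillation. The `ConceptBottleneckModel` class, the `distill_loss` function, the loss weights `alpha` and `beta`, the temperature `T`, and the MSE concept-matching term are all illustrative assumptions, not the paper's actual VICO-KD formulation, which transfers explicit and implicit visual concepts in ways the abstract does not fully specify.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptBottleneckModel(nn.Module):
    """A minimal CBM: backbone features -> concept logits -> class logits."""

    def __init__(self, backbone: nn.Module, feat_dim: int,
                 n_concepts: int, n_classes: int):
        super().__init__()
        self.backbone = backbone                              # any feature extractor
        self.concept_head = nn.Linear(feat_dim, n_concepts)   # the bottleneck layer
        self.classifier = nn.Linear(n_concepts, n_classes)    # sees concepts only

    def forward(self, x):
        feats = self.backbone(x)
        concept_logits = self.concept_head(feats)
        # The classifier receives only (sigmoid-activated) concept scores,
        # which is what makes the decision inspectable at the concept level.
        class_logits = self.classifier(torch.sigmoid(concept_logits))
        return concept_logits, class_logits


def distill_loss(student_concepts, student_logits,
                 teacher_concepts, teacher_logits,
                 concept_labels, class_labels,
                 T=4.0, alpha=0.5, beta=0.5):
    """Hypothetical combined objective: supervised terms + logit KD + concept matching."""
    # Supervised terms: multi-label concept prediction and final classification.
    task = (F.binary_cross_entropy_with_logits(student_concepts, concept_labels)
            + F.cross_entropy(student_logits, class_labels))
    # Vanilla KD on temperature-softened class logits (Hinton-style).
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    # Concept-level transfer: pull student concept activations toward the
    # teacher's, so the student inherits the teacher's concept evidence.
    concept_match = F.mse_loss(torch.sigmoid(student_concepts),
                               torch.sigmoid(teacher_concepts))
    return task + alpha * kd + beta * concept_match
```

The design point the sketch tries to convey is that distillation is applied at the concept layer as well as at the class logits, so the student can keep the teacher CBM's interpretable concept evidence rather than only matching its final predictions.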
Main Authors: | Ju-Hwan Lee, Dang Thanh Vu, Nam-Kyung Lee, Il-Hong Shin, Jin-Young Kim |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Applied Sciences |
Subjects: | concept bottleneck models; knowledge distillation; explainable AI; interpretability |
Online Access: | https://www.mdpi.com/2076-3417/15/2/493 |
_version_ | 1832589247272452096 |
---|---|
author | Ju-Hwan Lee; Dang Thanh Vu; Nam-Kyung Lee; Il-Hong Shin; Jin-Young Kim |
author_facet | Ju-Hwan Lee; Dang Thanh Vu; Nam-Kyung Lee; Il-Hong Shin; Jin-Young Kim |
author_sort | Ju-Hwan Lee |
collection | DOAJ |
description | This study explores the integration of concept bottleneck models (CBMs) with knowledge distillation (KD) while preserving the locality characteristics of the CBM. Although KD proves effective in model compression, compressed models often lack interpretability in their decision-making process. We enhance comprehensive explainability by maintaining CBMs’ inherent interpretability through our novel approach to knowledge distillation. We introduce visual concept knowledge distillation (VICO-KD), which transfers both explicit and implicit visual concepts from the teacher to the student model while preserving the local interpretability of the CBM, enabling accurate classification and clear visualization of evidence. VICO-KD demonstrates superior performance on benchmark datasets compared to Vanilla-KD, ensuring the student model learns visual concepts while maintaining the local interpretation capabilities of the teacher CBM. Our methodology shows competitive performance against existing concept models, and the student model, trained via VICO-KD, exhibits enhanced performance compared to the teacher during interventions. This study highlights the effectiveness of combining a CBM with KD to improve both interpretability and explainability in compressed models while maintaining locality properties. |
format | Article |
id | doaj-art-32a5eea7bee14ae4b7d8dd6cd55efe90 |
institution | Kabale University |
issn | 2076-3417 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Applied Sciences |
spelling | doaj-art-32a5eea7bee14ae4b7d8dd6cd55efe90 | 2025-01-24T13:19:34Z | eng | MDPI AG | Applied Sciences | ISSN 2076-3417 | 2025-01-01 | Vol. 15, No. 2, Art. 493 | DOI 10.3390/app15020493 | Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Models | Ju-Hwan Lee (Department of Intelligent Electronics and Computer Engineering, Chonnam National University, 77 Yongbong-ro, Buk-gu, Gwangju 61186, Republic of Korea); Dang Thanh Vu (Research Center, AISeed Inc., 77 Yongbong-ro, Buk-gu, Gwangju 61186, Republic of Korea); Nam-Kyung Lee (Electronics and Telecommunications Research Institute, Media Intelligence Research Section, 218 Gajeong-ro, Yuseong-gu, Daejeon 34129, Republic of Korea); Il-Hong Shin (Electronics and Telecommunications Research Institute, Media Intelligence Research Section, 218 Gajeong-ro, Yuseong-gu, Daejeon 34129, Republic of Korea); Jin-Young Kim (Department of Intelligent Electronics and Computer Engineering, Chonnam National University, 77 Yongbong-ro, Buk-gu, Gwangju 61186, Republic of Korea) | https://www.mdpi.com/2076-3417/15/2/493 | concept bottleneck models; knowledge distillation; explainable AI; interpretability |
spellingShingle | Ju-Hwan Lee; Dang Thanh Vu; Nam-Kyung Lee; Il-Hong Shin; Jin-Young Kim | Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Models | Applied Sciences | concept bottleneck models; knowledge distillation; explainable AI; interpretability |
title | Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Models |
title_full | Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Models |
title_fullStr | Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Models |
title_full_unstemmed | Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Models |
title_short | Advancing Model Explainability: Visual Concept Knowledge Distillation for Concept Bottleneck Models |
title_sort | advancing model explainability visual concept knowledge distillation for concept bottleneck models |
topic | concept bottleneck models; knowledge distillation; explainable AI; interpretability |
url | https://www.mdpi.com/2076-3417/15/2/493 |
work_keys_str_mv | AT juhwanlee advancingmodelexplainabilityvisualconceptknowledgedistillationforconceptbottleneckmodels AT dangthanhvu advancingmodelexplainabilityvisualconceptknowledgedistillationforconceptbottleneckmodels AT namkyunglee advancingmodelexplainabilityvisualconceptknowledgedistillationforconceptbottleneckmodels AT ilhongshin advancingmodelexplainabilityvisualconceptknowledgedistillationforconceptbottleneckmodels AT jinyoungkim advancingmodelexplainabilityvisualconceptknowledgedistillationforconceptbottleneckmodels |