The shallowest transparent and interpretable deep neural network for image recognition

Bibliographic Details
Main Authors: Gurmail Singh, Stefano Frizzo Stefenon, Kin-Choong Yow
Format: Article
Language: English
Published: Nature Portfolio, 2025-04-01
Series: Scientific Reports
Subjects:
Online Access: https://doi.org/10.1038/s41598-025-92945-2
Description
Summary: Abstract Trusting the decisions of deep learning models requires transparency of their reasoning process, especially for high-risk decisions. In this paper, a fully transparent deep learning model (Shallow-ProtoPNet) is introduced. The model consists of a transparent prototype layer followed by an indispensable fully connected layer that connects prototypes and logits; by contrast, interpretable models are usually not fully transparent because they use some black-box component as their baseline. This is the difference between Shallow-ProtoPNet and the prototypical part network (ProtoPNet): the proposed Shallow-ProtoPNet does not use any black-box part as a baseline, whereas ProtoPNet uses the convolutional layers of black-box models as its baseline. On a dataset of X-ray images, the performance of the model is comparable to that of other interpretable models that are not completely transparent. Since Shallow-ProtoPNet has only one (transparent) convolutional layer and a fully connected layer, it is the shallowest transparent deep neural network, with only two layers between the input and output layers. Consequently, our model is much smaller than its counterparts, making it suitable for use in embedded systems.
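The two-layer architecture described in the abstract (a prototype layer feeding a fully connected layer that produces logits) can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the image size, prototype count, class count, and the ProtoPNet-style log-distance similarity are assumptions for the sketch, and the prototypes and weights are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper)
H = W = 28          # input image (single channel)
P = 7               # prototype patch size
M = 10              # number of prototypes
C = 2               # number of output classes

prototypes = rng.normal(size=(M, P, P))   # transparent prototype layer parameters
fc_weights = rng.normal(size=(C, M))      # fully connected layer: prototypes -> logits

def prototype_layer(image):
    """Slide each prototype over the image, keep the smallest squared L2
    distance, and convert it to a similarity score (ProtoPNet-style)."""
    sims = np.empty(M)
    for m in range(M):
        best = np.inf
        for i in range(H - P + 1):
            for j in range(W - P + 1):
                patch = image[i:i + P, j:j + P]
                best = min(best, np.sum((patch - prototypes[m]) ** 2))
        # small distance -> large similarity
        sims[m] = np.log((best + 1.0) / (best + 1e-4))
    return sims

def forward(image):
    """Shallow-ProtoPNet-style forward pass: prototype layer, then FC layer."""
    return fc_weights @ prototype_layer(image)

logits = forward(rng.normal(size=(H, W)))
print(logits.shape)  # one logit per class
```

Because every logit is a weighted sum of prototype similarities, each prediction can be traced back to which image patches most resemble which prototypes, which is the transparency property the abstract emphasizes.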
ISSN: 2045-2322