Enhancing Explainability in Predictive Maintenance: Investigating the Impact of Data Preprocessing Techniques on XAI Effectiveness

Bibliographic Details
Main Authors: Mouhamadou Lamine NDAO, Genane YOUNESS, Ndèye NIANG, Gilbert SAPORTA
Format: Article
Language: English
Published: LibraryPress@UF 2024-05-01
Series: Proceedings of the International Florida Artificial Intelligence Research Society Conference
Online Access: https://journals.flvc.org/FLAIRS/article/view/135526
Description
Summary: In predictive maintenance, the complexity of the data often requires the use of Deep Learning models. These models, called “black boxes”, have proven their worth in predicting the Remaining Useful Life (RUL) of industrial machines. However, the inherent opacity of these models requires the incorporation of post-hoc explanation methods to enhance transparency. The quality of the explanations provided is then assessed using so-called evaluation metrics. Modeling is an end-to-end process that includes an important data preprocessing phase, involving the selection of parameters such as the time window, the smoothing parameter, or the rectified RUL when dealing with multivariate time series datasets. We propose to analyze the impact of these preprocessing methods on the quality of the explanations provided by the local post-hoc models LIME, KernelSHAP, and L2X, using six evaluation metrics: stability, consistency, congruence, selectivity, completeness, and acumen. This analysis is based on NASA's Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset with an LSTM model. Our findings reveal that the choice of specific preprocessing parameters can significantly improve predictive performance. Furthermore, the quality of the explanations depends on the choice of explainability method. In addition, a factorial analysis of the evaluation metrics reveals that they do not all point in the same direction; understanding the nuanced relationships between these metrics is therefore essential for a comprehensive and accurate assessment of explainability methods.
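
To make the preprocessing parameters named in the summary concrete, here is a minimal sketch of two of them, rectified (capped) RUL targets and sliding time windows, on a C-MAPSS-style multivariate series. The function names, the RUL cap of 125 cycles, and the window length of 30 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rectified_rul(cycles_remaining, cap=125):
    # Cap early-life RUL at a constant, giving the piecewise-linear
    # target commonly used for C-MAPSS (cap value is an assumption here).
    return np.minimum(cycles_remaining, cap)

def sliding_windows(series, window=30):
    # series: (n_cycles, n_sensors) array for one engine unit.
    # Returns (n_cycles - window + 1, window, n_sensors) windows,
    # the 3-D shape an LSTM typically consumes.
    n = series.shape[0] - window + 1
    return np.stack([series[i:i + window] for i in range(n)])

# Toy example: one engine run with 200 cycles and 14 sensor channels.
rng = np.random.default_rng(0)
sensors = rng.normal(size=(200, 14))
X = sliding_windows(sensors, window=30)          # shape (171, 30, 14)
y = rectified_rul(np.arange(199, -1, -1))[29:]   # RUL aligned to window ends
print(X.shape, y.shape)
```

Varying `cap` and `window` changes both the LSTM's inputs and its targets, which is why, as the paper argues, such choices can shift not only predictive performance but also the explanations later produced by LIME, KernelSHAP, or L2X.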
ISSN: 2334-0754, 2334-0762