Use of Explainable Artificial Intelligence for Analyzing and Explaining Intrusion Detection Systems
The increase in malicious cyber activities has generated the need to produce effective tools for the field of digital forensics and incident response. Artificial intelligence (AI) and its fields, specifically machine learning (ML) and deep learning (DL), have shown great potential to aid the task of...
| Main Authors: | Pamela Hermosilla, Mauricio Díaz, Sebastián Berríos, Héctor Allende-Cid |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-04-01 |
| Series: | Computers |
| Subjects: | forensic analysis; XAI; UNSW-NB15; SHAP; LIME; XGBoost |
| Online Access: | https://www.mdpi.com/2073-431X/14/5/160 |
| _version_ | 1850127422615191552 |
|---|---|
| author | Pamela Hermosilla; Mauricio Díaz; Sebastián Berríos; Héctor Allende-Cid |
| collection | DOAJ |
| description | The increase in malicious cyber activity has created a need for effective tools in digital forensics and incident response. Artificial intelligence (AI), and in particular machine learning (ML) and deep learning (DL), has shown great potential for processing and analyzing large amounts of information. However, DL models are often considered "black boxes", a term reflecting how difficult it is for users to understand the decision-making process behind their results. This research addresses the challenges of transparency, explainability, and reliability posed by black-box models in digital forensics. To that end, explainable artificial intelligence (XAI) is explored as a solution, with the goal of making DL models more interpretable and understandable to humans. The SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) methods are implemented and evaluated as model-agnostic techniques for explaining the predictions of the models generated for forensic analysis. Applying these methods to XGBoost and TabNet models trained on the UNSW-NB15 dataset, the results indicated distinct global feature-importance rankings between the two model types and revealed greater consistency of local explanations for the tree-based XGBoost model than for the deep-learning-based TabNet. This study aims to make the decision-making process of these models transparent and to assess the confidence and consistency of XAI-generated explanations in a forensic context. (See the illustrative SHAP/LIME sketch after this record.) |
| format | Article |
| id | doaj-art-e6af4a1b3f1641d596bca2a82d08a2db |
| institution | OA Journals |
| issn | 2073-431X |
| language | English |
| publishDate | 2025-04-01 |
| publisher | MDPI AG |
| record_format | Article |
| series | Computers |
| spelling | Pamela Hermosilla, Mauricio Díaz, Sebastián Berríos, Héctor Allende-Cid (Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Valparaíso 2362807, Chile). "Use of Explainable Artificial Intelligence for Analyzing and Explaining Intrusion Detection Systems." Computers, MDPI AG, ISSN 2073-431X, 2025-04-01, vol. 14, no. 5, art. 160, doi:10.3390/computers14050160. https://www.mdpi.com/2073-431X/14/5/160. Keywords: forensic analysis; XAI; UNSW-NB15; SHAP; LIME; XGBoost. (Record doaj-art-e6af4a1b3f1641d596bca2a82d08a2db, indexed 2025-08-20T02:33:42Z, English.) |
| title | Use of Explainable Artificial Intelligence for Analyzing and Explaining Intrusion Detection Systems |
| topic | forensic analysis; XAI; UNSW-NB15; SHAP; LIME; XGBoost |
| url | https://www.mdpi.com/2073-431X/14/5/160 |
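The record's abstract describes applying SHAP and LIME to XGBoost and TabNet models trained on the UNSW-NB15 dataset. Below is a minimal, self-contained sketch of that kind of workflow, not the authors' actual pipeline: the CSV path, column names, train/test split, and hyperparameters are assumptions for illustration, and TabNet is omitted (for a non-tree model, the model-agnostic shap.KernelExplainer would be used instead of TreeExplainer).

```python
# Illustrative sketch: SHAP (global) and LIME (local) explanations for an
# XGBoost intrusion-detection model trained on a UNSW-NB15 CSV.
# File path, column names, and hyperparameters are assumptions for illustration.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer
from sklearn.model_selection import train_test_split

# Hypothetical local copy of the UNSW-NB15 training split.
df = pd.read_csv("UNSW_NB15_training-set.csv")

# Binary target ('label': 0 = normal, 1 = attack); keep numeric features only.
y = df["label"]
X = (
    df.drop(columns=["id", "label", "attack_cat"], errors="ignore")
    .select_dtypes(include="number")
)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Gradient-boosted tree classifier standing in for the paper's XGBoost model.
model = xgb.XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss")
model.fit(X_train, y_train)

# Global view with SHAP: mean |SHAP value| per feature gives an importance ranking.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
global_rank = sorted(
    zip(X_test.columns, np.abs(shap_values).mean(axis=0)), key=lambda t: -t[1]
)
print("Top-10 global SHAP features:", global_rank[:10])

# Local view with LIME: explain one individual prediction (one network flow).
lime_explainer = LimeTabularExplainer(
    training_data=X_train.values,
    feature_names=list(X_train.columns),
    class_names=["normal", "attack"],
    mode="classification",
)
local_exp = lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba, num_features=10
)
print("LIME explanation for one flow:", local_exp.as_list())
```

Repeating explain_instance on the same flow, or comparing the LIME feature list against that instance's SHAP values, is one simple way to probe the consistency of local explanations that the abstract reports differing between XGBoost and TabNet.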