Explainable Artificial Intelligence (XAI) to Enhance Trust Management in Intrusion Detection Systems Using Decision Tree Model
Despite the growing popularity of machine learning models in cybersecurity applications (e.g., intrusion detection systems (IDS)), most of these models are perceived as black boxes. eXplainable Artificial Intelligence (XAI) has become increasingly important for interpreting machine learni...
Main Authors: Basim Mahbooba, Mohan Timilsina, Radhya Sahal, Martin Serrano
Format: Article
Language: English
Published: Wiley, 2021-01-01
Series: Complexity
Online Access: http://dx.doi.org/10.1155/2021/6634811
Similar Items
- Trust in Intrusion Detection Systems: An Investigation of Performance Analysis for Machine Learning and Deep Learning Models
  by: Basim Mahbooba, et al.
  Published: (2021-01-01)
- Designing a Model for Brand Engagement Value Creation through the Integration of Gamification Technology and Explainable Artificial Intelligence (XAI)
  by: Zahra Atf, et al.
  Published: (2024-12-01)
- Explainable AI chatbots towards XAI ChatGPT: A review
  by: Attila Kovari
  Published: (2025-01-01)
- Urban Vegetation Mapping from Aerial Imagery Using Explainable AI (XAI)
  by: Arnick Abdollahi, et al.
  Published: (2021-07-01)
- Is human-like decision making explainable? Towards an explainable artificial intelligence for autonomous vehicles
  by: Jiming Xie, et al.
  Published: (2025-01-01)