Explainable AI supported hybrid deep learning method for layer 2 intrusion detection
With rapidly developing technology, digital environments are also expanding. Although this has many positive effects on daily life, the security vulnerabilities brought about by digitalization remain a major concern. There is a large network structure behind many application...
| Main Author: | Ilhan Firat Kilincer |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2025-06-01 |
| Series: | Egyptian Informatics Journal |
| Subjects: | |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S1110866525000623 |
Similar Items
- Federated XAI IDS: An Explainable and Safeguarding Privacy Approach to Detect Intrusion Combining Federated Learning and SHAP
  by: Kazi Fatema, et al. Published: (2025-05-01)
- Explainable AI for Forensic Analysis: A Comparative Study of SHAP and LIME in Intrusion Detection Models
  by: Pamela Hermosilla, et al. Published: (2025-06-01)
- Interpretable deep learning for gastric cancer detection: a fusion of AI architectures and explainability analysis
  by: Junjie Ma, et al. Published: (2025-05-01)
- Using Explainable AI to Measure Feature Contribution to Uncertainty
  by: Katherine Elizabeth Brown, et al. Published: (2022-05-01)
- Editorial: Explainable, trustworthy, and responsible AI in image processing
  by: Akshay Agarwal. Published: (2025-05-01)