Vision-Language Model-Based Local Interpretable Model-Agnostic Explanations Analysis for Explainable In-Vehicle Controller Area Network Intrusion Detection

Bibliographic Details
Main Authors: Jaeseung Lee, Jehyeok Rew
Format: Article
Language: English
Published: MDPI AG, 2025-05-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/25/10/3020
Description
Summary: The Controller Area Network (CAN) facilitates efficient communication among vehicle components. While it ensures fast and reliable data transmission, its lightweight design makes it susceptible to data manipulation in the absence of security layers. To address these vulnerabilities, machine learning (ML)-based intrusion detection systems (IDSs) have been developed and shown to be effective in identifying anomalous CAN traffic. However, these models often function as black boxes, offering limited transparency into their decision-making processes, which hinders trust in safety-critical environments. To overcome these limitations, this paper proposes a novel method that combines Local Interpretable Model-agnostic Explanations (LIME) with a vision-language model (VLM) to generate detailed textual interpretations of an ML-based CAN IDS. This integration mitigates the challenges of visual-only explanations in traditional explainable AI (XAI) and makes IDS outputs more intuitive. By leveraging the multimodal reasoning capabilities of VLMs, the proposed method bridges the gap between visual and textual interpretability. The method supports both global and local explanations by analyzing feature importance with LIME and translating the results into human-readable narratives via the VLM. Experiments on a publicly available CAN intrusion detection dataset demonstrate that the proposed method provides coherent, text-based explanations, thereby improving interpretability and end-user trust.
ISSN: 1424-8220
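
To make the two-stage pipeline in the summary concrete, the sketch below shows how a LIME explanation for one flagged CAN frame could be turned into a textual prompt for a vision-language model. It is a minimal illustration under stated assumptions, not the authors' implementation: the feature names, the random-forest detector, the synthetic training data, and the query_vlm() helper are all hypothetical placeholders; only the lime and scikit-learn calls are real APIs.

    # Sketch: LIME feature weights for one CAN frame -> natural-language
    # prompt for a VLM. Feature names, data, the classifier, and query_vlm()
    # are assumptions for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    # Assumed tabular CAN features: arbitration ID, DLC, and 8 payload bytes.
    feature_names = ["can_id", "dlc"] + [f"byte_{i}" for i in range(8)]
    class_names = ["normal", "attack"]

    # Synthetic stand-in data; a real IDS would train on a labeled CAN dataset.
    rng = np.random.default_rng(0)
    X_train = rng.integers(0, 256, size=(1000, len(feature_names))).astype(float)
    y_train = rng.integers(0, 2, size=1000)
    detector = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

    # Local explanation: LIME perturbs the frame and fits an interpretable
    # surrogate model, yielding signed per-feature weights.
    explainer = LimeTabularExplainer(
        X_train,
        feature_names=feature_names,
        class_names=class_names,
        mode="classification",
    )
    frame = X_train[0]
    exp = explainer.explain_instance(frame, detector.predict_proba, num_features=5)

    # Package the top-weighted features as a prompt. The paper pairs LIME
    # output with a VLM; query_vlm() is a hypothetical stand-in for whatever
    # multimodal API is used.
    weight_lines = [f"{rule}: weight {w:+.3f}" for rule, w in exp.as_list()]
    prompt = (
        "An IDS flagged this CAN frame as suspicious. Explain for a non-expert "
        "why, given these LIME feature weights:\n" + "\n".join(weight_lines)
    )
    # narrative = query_vlm(prompt, image=exp.as_pyplot_figure())  # hypothetical
    print(prompt)

Passing both the textual weights and the rendered LIME figure to the VLM mirrors the multimodal pairing the abstract describes, but the exact prompt format and model used would depend on the paper's setup.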