A Study Comparing Explainability Methods: A Medical User Perspective

Bibliographic Details
Main Authors: Matejová Miroslava, Gojdičová Lucia, Paralič Ján
Format: Article
Language: English
Published: Sciendo 2025-06-01
Series: Acta Electrotechnica et Informatica
Online Access: https://doi.org/10.2478/aei-2025-0005
Summary: In recent years, we have witnessed the rapid development of artificial intelligence systems and their spread into various fields. These systems are efficient and powerful, but often opaque and insufficiently transparent. Explainable artificial intelligence (XAI) methods aim to address this problem. XAI is still a developing area of research, but it already shows considerable potential for improving the transparency and trustworthiness of AI models. Thanks to XAI, we can build more responsible and ethical AI systems that better serve people’s needs. This study focuses on the role of the user. Part of the work is a comparison of several explainability methods, such as LIME, SHAP, ANCHORS and PDP, on a selected data set from the field of medicine. The individual explainability methods were compared from various aspects by means of a user study.
ISSN: 1338-3957