Legal Perspectives for Explainable Artificial Intelligence in Medicine - Quo Vadis?
| Main Authors: | Cătălin-Mihai PESECAN, Lăcrămioara STOICU-TIVADAR |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca, 2025-05-01 |
| Series: | Applied Medical Informatics |
| Subjects: | Artificial Intelligence (AI), Explainable Artificial Intelligence (XAI) |
| Online Access: | https://ami.info.umfcluj.ro/index.php/AMI/article/view/1174 |
| _version_ | 1850218494151360512 |
|---|---|
| author | Cătălin-Mihai PESECAN Lăcrămioara STOICU-TIVADAR |
| collection | DOAJ |
| description |
Explainable Artificial Intelligence (XAI) can offer insight into the inner workings of AI models. The new EU Artificial Intelligence Act, which came into force in August 2024 and will be fully applicable in August 2026, classifies AI used in the medical domain as “high-risk”. For high-risk applications, the requirements are “to ensure … operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately. An appropriate type and degree of transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and deployer”. In this work we present how XAI methods can help explain medical AI models. We present a mapping for three types of models (tabular data classification, image data classification, and diagnostic prognosis data). To explain image classifications, for example, we can deploy techniques such as Grad-CAM. For tabular data we can use either LIME or Grad-CAM. The first method generates a new dataset of perturbed samples and fits a local approximation around the instance being explained. Grad-CAM generates heatmaps based on the gradients at the last convolutional layer of a neural network (the layer that retains the most information). Explainable Artificial Intelligence methods come in many variants and can offer different perspectives; combining multiple XAI methods can give a broader view of models used in the medical area. It is also very important that medical experts trust and understand the explanations, so evaluating each method with medical experts before integration can help them accept the models.
|
| format | Article |
| id | doaj-art-2535a2b0f842404d97a35fb84d2cbf1a |
| institution | OA Journals |
| issn | 2067-7855 |
| language | English |
| publishDate | 2025-05-01 |
| publisher | Iuliu Hatieganu University of Medicine and Pharmacy, Cluj-Napoca |
| record_format | Article |
| series | Applied Medical Informatics |
| volume/issue | 47, Suppl. 1 |
| author affiliations | Cătălin-Mihai PESECAN (University Politehnica Timișoara); Lăcrămioara STOICU-TIVADAR (Department of Automation and Applied Informatics, University Politehnica Timișoara) |
| title | Legal Perspectives for Explainable Artificial Intelligence in Medicine - Quo Vadis? |
| topic | Artificial Intelligence (AI) Explainable Artificial Intelligence (XAI) |
| url | https://ami.info.umfcluj.ro/index.php/AMI/article/view/1174 |
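The LIME mechanism summarized in the description (perturb the instance, query the black box, weight samples by proximity, fit a local linear surrogate) can be sketched in plain NumPy. This is an illustrative sketch only: the function name and the toy black-box model are not from the article, and the real `lime` library adds feature discretization and other refinements omitted here.

```python
import numpy as np

def lime_tabular_sketch(predict_fn, x, n_samples=1000, scale=0.5, seed=0):
    """Minimal LIME-style local explanation for tabular data (illustrative).

    Perturbs instance x with Gaussian noise, queries the black-box
    predict_fn on the perturbed dataset, weights samples by proximity
    to x, and fits a weighted linear model whose coefficients act as
    local feature importances.
    """
    rng = np.random.default_rng(seed)
    Z = x + scale * rng.standard_normal((n_samples, x.size))  # perturbed samples
    y = predict_fn(Z)                                         # black-box outputs
    d2 = ((Z - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * scale ** 2))                        # proximity kernel
    A = np.hstack([np.ones((n_samples, 1)), Z])               # design with intercept
    Aw = A * w[:, None]
    beta = np.linalg.solve(Aw.T @ A, Aw.T @ y)                # weighted least squares
    return beta[1:]                                           # drop intercept

# Toy black box: output depends strongly on feature 0, weakly on feature 1
black_box = lambda Z: 3.0 * Z[:, 0] + 0.1 * Z[:, 1]
coefs = lime_tabular_sketch(black_box, np.array([1.0, 2.0]))
```

Because the toy black box is itself linear, the local surrogate recovers its coefficients almost exactly; for a nonlinear model the coefficients would instead describe its behavior only in the neighborhood of `x`.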