On the black-box explainability of object detection models for safe and trustworthy industrial applications
In the realm of human-machine interaction, artificial intelligence has become a powerful tool for accelerating data modeling tasks. Object detection methods have achieved outstanding results and are widely used in critical domains like autonomous driving and video surveillance. However, their adoption…
| Main Authors: | Alain Andres, Aitor Martinez-Seras, Ibai Laña, Javier Del Ser |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2024-12-01 |
| Series: | Results in Engineering |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S259012302401750X |
Similar Items
- AI_TAF: A Human-Centric Trustworthiness Risk Assessment Framework for AI Systems
  by: Eleni Seralidou, et al.
  Published: (2025-06-01)
- From theory to practice: Harmonizing taxonomies of trustworthy AI
  by: Christos A. Makridis, et al.
  Published: (2024-12-01)
- XAI Unveiled: Revealing the Potential of Explainable AI in Medicine: A Systematic Review
  by: Noemi Scarpato, et al.
  Published: (2024-01-01)
- Explainable Machine Learning in Critical Decision Systems: Ensuring Safe Application and Correctness
  by: Julius Wiggerthale, et al.
  Published: (2024-12-01)
- Achieving On-Site Trustworthy AI Implementation in the Construction Industry: A Framework Across the AI Lifecycle
  by: Lichao Yang, et al.
  Published: (2024-12-01)