Ethical Analysis of the Responsibility Gap in Artificial Intelligence

Introduction: The concept of the “responsibility gap” in artificial intelligence (AI) was first raised in philosophical discussions to capture the concern that learning, partially autonomous technologies may make it difficult or even impossible to attribute moral blame to individuals for adverse events. This is because, in addition to designers, the environment and users also shape the development process. This ambiguity and complexity can make it seem that the output of these technologies lies beyond the control of any human individual and that no one can be held responsible for it; this is what is known as the “responsibility gap”. This article explains the problem of the responsibility gap in AI technologies and presents strategies for the responsible development of AI that prevent such a gap from arising as far as possible. Material and Methods: The present article examined the responsibility gap in AI; to that end, related articles and books were reviewed. Conclusion: Responses to the responsibility gap vary. Some believe that society can hold the technology itself responsible for its outcomes; others disagree. On the latter view, only the human actors involved in developing these technologies can be held responsible, and they should be expected to use their freedom and awareness to steer technological development in a way that prevents undesirable and unethical events. In summary, the three principles of routing, tracking, and engaging public opinion and attending to public emotions in policymaking can serve as three effective strategies for the responsible development of AI technologies.


Bibliographic Details
Main Authors: Eva Schur, Anna Brouns, Peter Lee
Format: Article
Language: English
Published: Iranian Association for Ethics in Science and Technology 2025-01-01
Series:International Journal of Ethics and Society
Subjects: ethics; responsibility gap; artificial intelligence
Online Access: http://ijethics.com/article-1-356-en.pdf
description Introduction: The concept of the “responsibility gap” in artificial intelligence (AI) was first raised in philosophical discussions to capture the concern that learning, partially autonomous technologies may make it difficult or even impossible to attribute moral blame to individuals for adverse events. This is because, in addition to designers, the environment and users also shape the development process. This ambiguity and complexity can make it seem that the output of these technologies lies beyond the control of any human individual and that no one can be held responsible for it; this is what is known as the “responsibility gap”. This article explains the problem of the responsibility gap in AI technologies and presents strategies for the responsible development of AI that prevent such a gap from arising as far as possible. Material and Methods: The present article examined the responsibility gap in AI; to that end, related articles and books were reviewed. Conclusion: Responses to the responsibility gap vary. Some believe that society can hold the technology itself responsible for its outcomes; others disagree. On the latter view, only the human actors involved in developing these technologies can be held responsible, and they should be expected to use their freedom and awareness to steer technological development in a way that prevents undesirable and unethical events. In summary, the three principles of routing, tracking, and engaging public opinion and attending to public emotions in policymaking can serve as three effective strategies for the responsible development of AI technologies.
format Article
id doaj-art-5a05f15a7d114fe0876e4e59f1c44f8e
institution Kabale University
issn 2981-1848
2676-3338
language English
publishDate 2025-01-01
publisher Iranian Association for Ethics in Science and Technology
record_format Article
series International Journal of Ethics and Society
affiliation Department of Artificial Intelligence and Cybersecurity, Faculty of Technical Sciences, University of Klagenfurt, Austria (Eva Schur; Anna Brouns; Peter Lee)
url http://ijethics.com/article-1-356-en.pdf
keywords ethics; responsibility gap; artificial intelligence