Ethical Analysis of the Responsibility Gap in Artificial Intelligence

Bibliographic Details
Main Authors: Eva Schur, Anna Brouns, Peter Lee
Format: Article
Language: English
Published: Iranian Association for Ethics in Science and Technology, 2025-01-01
Series: International Journal of Ethics and Society
Online Access: http://ijethics.com/article-1-356-en.pdf
Description
Summary: Introduction: The concept of the “responsibility gap” in artificial intelligence (AI) was first raised in philosophical discussions to capture the concern that learning and partially autonomous technologies may make it difficult or even impossible to attribute moral blame to individuals for adverse events. This is because, in addition to designers, the environment and users also participate in the development process. This ambiguity and complexity sometimes make it seem that the output of these technologies is beyond the control of any human individual and that no one can be held responsible for it; this situation is known as the “responsibility gap”. This article explains the problem of the responsibility gap in AI technologies and presents strategies for the responsible development of AI that prevent such a gap from arising as far as possible.

Material and Methods: The present article examined the responsibility gap in AI. To this end, related articles and books were reviewed.

Conclusion: There have been various responses to the problem of the responsibility gap. Some believe that society can hold the technology itself responsible for its outcomes; others disagree. On the latter view, only the human actors involved in the development of these technologies can be held responsible, and they should be expected to use their freedom and awareness to shape the path of technological development in a way that prevents undesirable and unethical events. In summary, three principles can serve as effective strategies for the responsible development of AI technologies: routing, tracking, and engaging public opinion and attending to public emotions in policymaking.
ISSN: 2981-1848; 2676-3338