Authors
Joerg H. Hardy, Free University of Berlin, Germany
Abstract
Autonomous robots will need to form relationships with humans that are built on reliability and (social) trust. The source of reliability and trust in human relationships is (human) ethical competence, which includes the capability for moral decision-making. Since autonomous robots cannot act with the ethical competence of human agents, a kind of human-like ethical competence has to be implemented in autonomous robots (AI systems of various kinds) by way of ethical algorithms. In this paper I suggest a model of the general logical form of (human) meta-ethical arguments that can be used as a pattern for programming ethical algorithms for autonomous robots.
Keywords
AI Algorithms, Ethical Algorithms, Ethics of Artificial Intelligence, Human-Robot Interaction.