Can we Bridge AI’s responsibility gap at Will?

Link:
Author:
Publisher/Corporate body:
Hamburg University of Technology
Year of publication:
2022
Media type:
Text
Keywords:
  • Answerability
  • Artificial Intelligence
  • Liability
  • Normative Powers
  • Responsibility gap
Description:
  • Artificial intelligence (AI) increasingly executes tasks that previously only humans could do, such as driving a car, fighting in war, or performing a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars have argued that humans can no longer be morally responsible for some AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of the most sophisticated AI systems do indeed create responsibility gaps, and I ask whether we can bridge these gaps at will, viz. whether certain people could take responsibility for AI-caused harm simply by performing a certain speech act, just as people can give permission for something simply by performing the act of consent. So understood, taking responsibility would be a genuine normative power. I first discuss and reject the view of Champagne and Tonkens, who advocate taking liability. According to this view, a military commander can and must, ahead of time, accept liability to blame and punishment for any harm caused by autonomous weapon systems under her command. I then defend my own proposal of taking answerability, viz. the view that people can make themselves morally answerable for the harm caused by AI systems, not only ahead of time but also when harm has already been caused.
Relations:
DOI 10.1007/s10677-022-10313-9
Source system:
TUHH Open Research

Internal metadata
Source record
oai:tore.tuhh.de:11420/15272