Balancing public interest, fundamental rights, and innovation : the EU’s governance model for non-high-risk AI systems

Link:
Author:
Publisher/Corporate body:
Alexander von Humboldt Institut für Internet und Gesellschaft
Year of publication:
2024
Media type:
Text
Keywords:
  • Artificial intelligence
  • Co-regulation
  • Self-regulation
  • Codes of conduct
  • AI Act
  • 320: Political science
  • ddc:320
Description:
  • The question of how to design a fair and efficient governance framework that ensures responsible technology development and deployment concerns not only high-risk artificial intelligence systems; it also extends to everyday applications with limited potential to cause harm. This article examines the European Union's approach to regulating these non-high-risk systems, focusing on the governance model established for them by the Artificial Intelligence Act. Based on a doctrinal legal reconstruction of the rules for codes of conduct, and in light of the European Union's stated goal of achieving a market-oriented balance between innovation, fundamental rights, and public interest, we explore the topic from three perspectives: an analysis of specific regulatory components of the governance mechanism is followed by a reflection on the ethics and trustworthiness implications of the EU's approach, and concluded by a case study of an NLP-based, language-simplifying artificial intelligence application for assistive purposes.
  • Peer reviewed
License:
  • https://creativecommons.org/licenses/by/3.0/de/
Source system:
ReposIt

Internal metadata
Source record
oai:reposit.haw-hamburg.de:20.500.12738/16739