Some AI-based technologies entail risks that may be unacceptable for society, yet determining what counts as unacceptable is no easy task. This paper maps the debate surrounding bans (red lines) for AI – from early demands made by diverse stakeholders, to non-binding self-regulatory instruments (AI ethics guidelines), and finally to an evaluation of the EU AI Act, which prohibits certain AI systems. In doing so, it highlights the challenges of determining and governing AI systems that pose unacceptable risks, primarily from a societal and political perspective. First, the paper offers an overview of the various calls to ban certain AI systems (or their uses), sketching the arguments brought forward against AI applications and capturing the concerns and perspectives of different stakeholders. Second, from these demands and arguments, it deduces four major issues put forward as justifications for a ban: 1) the violation of human and fundamental rights; 2) epistemic issues; 3) ethical and societal concerns; 4) sectoral risks (albeit in limited form). Third, the paper analyses the extent to which the many calls for bans are reflected in the EU AI Act, finally adopted in May 2024. The analysis demonstrates that the EU institutions sought compromises and enacted numerous exemptions in determining and regulating prohibited AI practices; as a result, the Act falls short of sufficiently accounting for the demanded bans. Consequently, continued public debate and a subsequent refinement of the Act are needed.