Debate continues in the European Union about AI legislation. The European Commission had promised comprehensive AI legislation in “the first 100 days,” but that initiative stalled, as did efforts to establish red lines for certain AI techniques, such as facial recognition. The Commission’s White Paper on AI now describes several policy options. Access Now and EDRi said that the Commission’s “risk-based approach” fails to safeguard fundamental rights. As they explained, “the burden of proof to demonstrate that an AI system does not violate human rights should be on the entity that develops or deploys the system” and “such proof should be established through a mandatory human rights impact assessment.”
This week, the European Parliament adopted three resolutions on AI policy intended to shape forthcoming legislation from the Commission. The resolution from Iban García del Blanco (S&D, ES) urged the Commission to establish legal obligations for artificial intelligence and robotics, including software, algorithms, and data. The legislative initiative of Axel Voss (EPP, DE) would make the operators of high-risk AI systems strictly liable for any resulting damage. And a third resolution, on intellectual property rights, by Stéphane Séjourné (Renew Europe, FR), makes clear that AI should not have legal personality and that only people may claim IP rights.
The European Parliament adopted all of these proposals by sweeping majorities, across parties and regions. But even these proposals are unlikely to meet the concerns of civil society. As Access Now and EDRi said of the resolution on AI ethics, “They are cautious and restrained on fundamental rights, taking only tentative steps to outline the biggest threats that artificial intelligence poses to people and society, while also failing to propose a legislative framework that would address these threats or provide any substantive protections for people’s rights.”
Still ahead is the Commission proposal on AI regulation. Here again, EU NGOs have made their position clear: (1) prohibit systems that infringe fundamental rights; (2) establish mandatory human rights impact assessments for all AI systems; and (3) strengthen enforcement of existing law, including data protection.
Marc Rotenberg, Director
Center for AI and Digital Policy at the Michael Dukakis Institute
The Center for AI and Digital Policy, founded in 2020, advises governments on technology policy.