The European Commission recently announced several options for the regulation of Artificial Intelligence. As the Commission has stated, “Artificial intelligence (AI) is a fast-evolving and strategic technology with tremendous opportunities. However, some of its uses pose specific significant risks to the application of various EU rules designed to protect fundamental rights, ensure safety and attribute liability.” Following a public consultation earlier this year, the Commission has now proposed four options in the Inception Impact Assessment:
- Option 1 – a non-legislative approach that relies on industry self-regulation
- Option 2 – voluntary labeling schemes to promote trustworthy applications
- Option 3 – binding legal requirements, with scope ranging from biometric identification systems to high-risk AI applications to all AI applications
- Option 4 – some combination of the above
Speaking at the G-20 summit last year in Japan, Chancellor Angela Merkel had called for the European Commission to propose comprehensive regulation for artificial intelligence. “It will be the job of the next Commission to deliver something so that we have regulation similar to the General Data Protection Regulation that makes it clear that artificial intelligence serves humanity,” she stated. Incoming Commission President Ursula von der Leyen echoed these views in December 2019, and recommended new rules on Artificial Intelligence that respect human safety and rights.
The European Commission’s work to promote public participation in decisions concerning AI policy should be commended. AI policy will have far-reaching social and economic consequences, and democratic societies give the public the opportunity to help shape these outcomes. At the same time, AI and human rights experts share the views of Chancellor Merkel, President von der Leyen, and others. These experts have endorsed legal standards for AI (Option 3 in the Commission’s framing), warning that ethical guidance and voluntary frameworks alone will not ensure necessary safeguards. The Universal Guidelines for AI, for example, set out 12 standards for AI governance, including several red lines for certain AI applications, such as profiling and scoring by national governments. Many members of the European Union are also members of the OECD and have previously endorsed the OECD AI Principles, which seek to promote fundamental rights, democratic institutions, and the rule of law.
The Commission will take public comments on the Ethical and Legal Requirements for AI until 10 September 2020. A final report and action are planned for the first quarter of 2021.
Marc Rotenberg, Director
Center for AI and Digital Policy at Michael Dukakis Institute
“The Center for AI and Digital Policy, founded in 2020, advises governments on technology policy.”