February 20 – Setting a foundation for future legislation, the European Commission launches a public consultation on ethical and legal requirements for artificial intelligence. The initiative produces wide-ranging responses from member states, trade groups, academics, and civil society. In December 2019, incoming President von der Leyen called for “new rules on Artificial Intelligence that respect human safety and rights.” [CAIDP 1.3]
June – Hong Kong’s Chief Executive, appointed by China’s Communist Party, deploys AI-enabled face surveillance, communications tracking, and transit record analysis to suppress opponents of the national security law. China has pursued “secure cities” as part of the Belt and Road Initiative.
July 22 – OECD Secretary-General Angel Gurría tells the G20 Digital Economy Ministers, “AI’s full potential is still to come. To achieve this potential we must advance a human-centred and trustworthy AI, that respects the rule of law, human rights, democratic values and diversity, and that includes appropriate safeguards to ensure a fair and just society.” In June 2019, the G20 adopted AI Principles drawn from the 2019 OECD AI Principles. [CAIDP 1.2]
August – The UK government withdraws a biased exam-grading algorithm after widespread public protest. Researchers determined that the AI-generated scores favored students enrolled at elite schools over individual student performance, disadvantaging high-performing students at underperforming schools.
September 7 – The European consumer association BEUC finds widespread public concern about the reliability of AI. In comments to the European Commission, BEUC wrote that “a strong regulatory framework is necessary” to guarantee consumers are protected against the risks posed by AI.
September 9 – The Boston Global Forum and the Michael Dukakis Institute announce the Social Contract for the AI Age. The Social Contract sets out a vision for a new society, based on innovation, creativity, and mutual respect.
September 16-18 – The Boston Global Forum and the Club de Madrid, the world’s largest forum of former heads of democratic governments, organize a global policy dialogue on “A New Social Contract for the Age of AI.”
October 13 – Access Now resigns from the Partnership on AI, citing a lack of influence with the companies that established the “multi-stakeholder” process. Access Now said, “we support human rights impact assessments and red lines around use of these technologies, rather than an ethics, risk-based, or sandboxing approach.”
October – The Global Privacy Assembly, the worldwide conference of privacy commissioners, establishes accountability measures for AI, including human rights impact assessments. The Privacy Assembly also states that changes to data protection law are necessary for accountability in the development and use of AI. [CAIDP 1.15]
October 21 – The Council of the European Union adopts a resolution on protecting fundamental rights in the era of AI. Alone among EU member states, Poland objected to the resolution, citing the inclusion of the phrase “gender equality” among the fundamental rights.
November 3 – California voters approve Proposition 24, updating the state’s privacy law and creating new rights for algorithmic transparency. California residents also reject a ballot proposal to establish AI-based predictive policing.
November 5 – The Pope warns that AI could exacerbate economic inequalities around the world if the common good is not pursued. “Artificial intelligence is at the heart of the epochal change we are experiencing.” Earlier in the year, the Pope endorsed the Rome Call for AI Ethics to “promote a sense of responsibility among organizations, governments and institutions.” [CAIDP 1.19]
December 10 – In a joint report, Transatlantic Approaches on Digital Governance, the Boston Global Forum and the Club de Madrid propose global policies for better management of digital technologies and artificial intelligence. The groups also urge world leaders to adopt and implement the Social Contract for the AI Age.
December 12 – The Center for AI and Digital Policy (CAIDP) releases “Artificial Intelligence and Democratic Values,” the first comprehensive review of national AI policies and practices. The CAIDP Report followed an extensive review of 30 countries and government organizations, undertaken by a team of international experts. [CAIDP 1.23]
December 12 – Speaking to the Boston Global Forum, European Commission President Ursula von der Leyen proposes a Transatlantic Agreement on Artificial Intelligence. “We want to set a blueprint for regional and global standards aligned with our values: Human rights, and pluralism, inclusion and the protection of privacy,” said von der Leyen. Professor Nazli Choucri, on behalf of the Michael Dukakis Institute, proposed an International Accord on AI in 2021. [CAIDP 1.24]
Marc Rotenberg, Director
Center for AI and Digital Policy at the Michael Dukakis Institute
The Center for AI and Digital Policy, founded in 2020, advises governments on technology policy.