Advanced software and cyber-physical systems, so-called ‘Artificial Intelligence’ (AI) systems, and data-driven business models increasingly govern portions of our lives: they influence how we work, love, buy, sell, communicate, meet, and navigate. They impact individual rights, social interactions, the economy, and politics. They pose new risks to national security, democratic institutions, individual dignity, and human wellbeing.
Yet citizens and their elected, accountable representatives still lack the institutional means to govern these technologies and to hold their developers and providers accountable. The ubiquitous, pervasive, and invasive providers of these technologies have used their concentrated economic power to shield themselves from meaningful independent oversight. They operate under unique power dynamics, including ‘winner-takes-all’ effects and a race for a limited pool of talent. Digitization has led to the emergence of what we hereafter call private corporate hegemony. The power these providers wield challenges both individual rights and democratic institutions: unaccountable governance of communication (controlling who and what gets heard in the public square), the spread of mis- and disinformation, mass surveillance, and cyber-vulnerabilities and threats. Meeting these challenges requires more than incremental legal adjustments on both sides of the Atlantic.
Governments worldwide seek to reap the economic benefits of the technologies provided by the hegemon while at the same time aiming to constrain its power. A fundamental mismatch exists, however, between the pace at which innovative yet destabilizing digital applications can be deployed and the pace, and rigor, with which norms, standards, and regulations are put in place. To complicate matters, traditional narratives around competition, security, and unfettered innovation undermine the adoption of proper constraints. This creates unhealthy degrees of freedom for the hegemon and drives the trajectory of the digital and technological revolution toward unprecedented forms of surveillance capitalism and strategic instability. Governments, stakeholders, and citizens on both sides of the Atlantic have therefore rightfully expressed concern that this situation will continue to fragment the societies for which they hold responsibility, weaken democracy and the rule of law, and compromise fundamental human and constitutional rights.
In light of these challenges, we have come together in an interdisciplinary Transatlantic Reflection Group on Democracy and the Rule of Law in the Age of “Artificial Intelligence” to address the systemic challenges to democracy emanating from monopolies and centralised governance of AI. We believe democracy and the rule of law themselves are at stake and we share reflections on the principles that should govern how these challenges are to be addressed.
Technological solutionism should not replace democracy
Without historic, radical reform, citizens and their elected representatives will be disempowered and lose the means for effective self-governance. The promise of future technologies delivering economic growth cannot justify today’s erosion of democratic norms, fundamental rights, and the rule of law. Neither should the vulnerability and monopolization of digitized systems jeopardize the peaceful cooperation of states. Unregulated technological deployment will exacerbate inequalities and undermine trust. The short-term benefits could be far outweighed by longer-term risks and undesirable societal as well as political consequences.
The challenges to institutions, laws, and democratic processes – combined with the litany of claims that unregulated business interests can better address global challenges than democracy – have weakened trust in democracy and played into the hands of authoritarianism.
In this situation, there is a need to reaffirm that laws established democratically, by democratically elected and accountable representatives, are the noble and legitimate expression of the will of the people. To preserve these fundamental principles, radical reform is needed.
Affirming the primacy of democracy
AI cuts to the heart of how we live. How such systems, their data, and their oversight are governed must not be decided by economic players who continue to overwhelm policy makers with demands to be either left alone or given special treatment.
Rather than stand by as witnesses to the erosion of democracy, we call for policy processes that empower citizens and guarantee vibrant, reflective, and free societies, where citizens and regions can have true influence. Our societies must be based on horizontal and vertical divisions of power, and improved checks and balances to safeguard against monopolistic concentration and abuse, such as the abuse of democracy through regulatory capture.
The rules must go beyond technology-specific regulation or the regulation of business practices. Like the technological and economic developments they are intended to address, they will, by their nature, influence how humans interact with each other and with their civic institutions, how democracy and markets function, and how people live. Done well, however, they will ensure that the rule of law curtails overreaches of power by private or governmental actors.
These laws must serve the public interest and in doing so they may well be asymmetrical by creating stronger obligations that bind the big and powerful. To ensure that citizens are democratically empowered and can trust the legislative process at the local, national, and supranational level, it is important that the law-making be accessible, clear and transparent and that the laws are not only enacted but also enforced.
Empowering citizens through institutions of countervailing power
Democracies must foster and strengthen countervailing powers and the type of checks and balances necessary to control power in the age of AI. Countervailing powers can arise from scalable new technologies and business models on the one hand, and from effective and enforceable legislation on the other. Democracy must preserve space for both.
As a starting point, democracies must protect individuals and political systems from both governmental and non-governmental abuses of power facilitated by the targeted use of predictive technologies and personal data collection. Democracies must also prevent the weakening of local entrepreneurial activity through killer acquisitions and other anti-competitive behaviours.
Citizens and their elected representatives will be capable of responding to the challenges posed by new technologies and new economic dynamics only if they are equipped with sound information about the real effectiveness and impact of those technologies. To that end, policymakers should address the need for an evidence-based public policy dialogue and empower citizens and civil society to participate meaningfully in it. They should also encourage US-European cooperation in the development of AI benchmarking protocols in order to promote values-driven, evidence-based policy cooperation.
Where massive computing systems can effectively regulate human behaviour or dictate government behaviour, it is important that our societies preserve and strengthen the democratic accountability of policy actors and demand that those actors defend the public interest and work together to develop policies to avoid capture by private economic interests. Governments and legislators must equip themselves with their own, state of the art science and technology impact assessment capabilities and share the results of any such assessments with the public. That should help to empower citizens and make them and their representatives less dependent on the sometimes incomplete or false information provided by corporations on technological capabilities, risks, and solutions.
Countervailing power can also come from governmental authorities (e.g., consumer protection, data protection, or competition authorities), non-governmental civic institutions (e.g., unions, non-governmental organisations, civil society, academia, and the free press), and citizens themselves. That power can be effectively exercised, however, only if steps are taken to ensure that providers of technological products and services are held democratically accountable, that mechanisms exist for citizens to assert their basic human and civil rights against economically more powerful actors, and that the public is well informed about both the benefits and the risks presented by the emerging technologies and business models.
Universities, media, and civil society should be empowered to renew and strengthen their commitment to supporting the exercise of reason, inquiry for truth and informed opinion. The freedom of academics and civil society to criticize state and corporate conduct must be protected.
Building bridges between technologists and policy communities
Independent technology experts are needed as participants in reliably inclusive democratic processes protected from private economic interests. While the number and importance of scientists and engineers in our societies have increased, their participation in public policy and formal democratic fora, such as parliaments, has declined. On the one hand, engineers and scientists should engage more in formal democratic decision-making institutions; on the other hand, political and democratic actors and institutions must build bridges for meaningful and visible engagement. Independent experts who inform, train, and collaborate with policy and decision makers must fill gaps in how governments understand technology and familiarise technologists and scientists with the specificities and complexity of decision-making in participatory democracies.
A broad array of perspectives is needed to formulate effective measures for understanding and mitigating the risks posed by advanced technologies. The role of technology in the proper functioning of our democracies should be informed by diverse and multidisciplinary stakeholders, from philosophers, youth representatives, and labour unions to the impacted communities themselves.
Joining forces: Jointly defending democracy across the Atlantic
The survival of democracy on both sides of the Atlantic requires that American and European governments demonstrate their ability to act decisively and deliver efficiently in the face of great challenges. Without claiming perfection as to their democratic practices, public authorities and legislators in Europe and the Americas should join forces to support a new upward dynamic in order to develop effective legislative frameworks to address the challenges outlined here. It may be only through such cooperation and partnership that they can acquire the strength and reach needed to protect and empower the individual and defend democratic values against the hegemonic power of tech corporations.
With coherence across European and transatlantic jurisdictions, laws will have greater scale and will be more effective at addressing these challenges. Societies need not suffer from a competitive race to the bottom on standards for public life and protection of fundamental rights caused by free riding, forum shopping, and the exploitation of international tensions.
The fast pace of technological innovation and the economic success of the platform economy must not slow or erode the democratic process nor disempower individual human beings. While public institutions will need to reform to remain at the forefront of emerging technologies, this will be of little use if basic principles of self-governance are not maintained and protected through appropriate regulatory mechanisms and rigorously enforced.
Democratic deliberation to develop consensus, as well as the human ability to re-interpret legal norms in consideration of new technologies and new economic conditions are strengths, not weaknesses. A stable legal environment is crucial for accountability and certainty: while there is no doubt that legislation must evolve over time, democratic lawmakers should not be expected to publish and amend laws as frequently as software developers code, nor would this make for good law.
Technology regulation must not focus narrowly on zeitgeist trends. By contrast, technology-neutral laws, drafted in open language and without reliance on buzzwords, enable re-interpretation and will remain relevant as technologies and business models evolve. They must start with core values and principles and focus on what is needed to protect and advance those values and principles. The process of filling in the details should be left to delegated legislation, transparent standard-setting processes, and bodies responsible for enforcement.
We ask that US and European leadership remain committed to coherent laws, the primacy of the public interest, and the shaping of the digital economy through democracy on both sides of the Atlantic. We call on decision makers to remember the crucial importance of transatlantic coherence for reasons of both transatlantic democratic accountability and the rapidly advancing global competition. What is needed now are well-designed legal frameworks and strong institutions that empower and enfranchise citizens, serve the common interest of people in Europe, the US, and beyond, and ensure:
FAIR COMPETITION AND TAXATION
- Competition law: Review competition rules to enable antitrust authorities to better pre-empt and tackle anti-competitive behaviour, including acquisitions of emerging competitors and the appropriation of innovative ideas by dominant incumbents, while reducing barriers to entry. Explore transatlantic regulatory cooperation to overcome the territorial market segmentation that ultimately favours transnationally operating digital corporations.
- Upskilling of supervisory authorities: Give supervisory authorities the mandate, skills and resources needed to understand, oversee, and address how AI affects their respective domains.
- Address tax ‘free riding’: Ensure that those who benefit from the digital economy the most contribute financially to sustain core functions of democracy and public infrastructure, and pay for the undesirable societal impact of their technologies, through taxes where profit is generated.
DATA GOVERNANCE
- Data quality: Ensure that data used to train AI systems with potentially major impacts is governed by legal frameworks that incorporate proper quality requirements, including reliability of testing and verification.
- Behavioural and biometric data: Ensure that behavioural and biometric data does not serve the training of AI systems capable of manipulation, discrimination, and disinformation, in particular with regard to biometric profiling and emotion recognition technologies.
- Data access: Ensure access to and use of data that is in the public interest, provided this does not interfere with human rights. Such access should also be ensured between services, especially when data and computation capabilities are held by companies; this calls for private and public open data policies and protocols that support interoperability as well as data portability.
- Transatlantic data flows: Create conditions of trust enabling transatlantic data flows based on the fundamental rights to data protection and privacy; create mechanisms of mutual protection and respect firmly grounded in ‘effective and practical protection’ within the relevant jurisdictions.
FUNDAMENTAL RIGHTS
- Human rights and dignity: Establish practical and effective protection of human rights and liberties such as human dignity, non-discrimination, the presumption of innocence, due process, and the protection of children’s rights.
- AI Safety: Put in place regulatory frameworks promoting AI governance, transparency, robustness, and cybersecurity; update legislation to tackle unacceptable cases of algorithmic discrimination; and limit corporations’ ability to escape liability for their AI systems.
- Transparency and access: Supervisory authorities need access to government and corporate infrastructure, processes, and ecosystems, including algorithms, databases, and policies, to ensure adequate oversight and accountability. Such access must not be prohibited in the name of either government or corporate secrecy.
INDEPENDENCE AND HUMAN AGENCY
- Free press and academia: Provide a framework for a vibrant, independent press and funding to foster independent academic and civil society organisations and empower them to scrutinise and investigate the impacts, abuses, and misuses of emerging technologies.
- Values-based technology design: Ensure that all processes involving automated decision-making and AI operate according to principles of responsibility and accountability, transparency, explainability, respect for human dignity, and meaningful control. These values, along with the rule of law, democracy, and fundamental rights, must be protected by design throughout the life cycle of advanced software and cyber-physical systems.
- Evidence-based decision-making and assessment: Address the need for the public policy dialogue to be evidence-based and for citizens and civil society to be empowered to participate meaningfully in this dialogue. Establish regular open AI benchmarks intended to soundly assess and report on the extent to which AI-enabled systems comply with the values set out above. Encourage the US, Europe, and other interested parties to cooperate in the development of any such benchmarking protocols.
NATIONAL AND INTERNATIONAL SECURITY
- Risk monitoring and mitigation: Ensure that emerging technologies, including those developed in the private sector, do not undermine national security or international peace in unforeseen ways. Such risks must be continuously monitored by supervisory authorities and addressed in an anticipatory manner.
- National security: Ensure public oversight over AI systems to keep them safe and secure. Identify mechanisms and instruments to better integrate safety, security and economic considerations in regulatory policy.
- AI in the military: Work towards a comprehensive, treaty-based mechanism governing the use of autonomous decision-making systems and AI, especially for military purposes.
- Fighting digital authoritarianism: Scale multilateral engagement on technical norms and standards to defend against digital authoritarianism and provide positive alternatives to authoritarian digital and physical entanglements by supporting bottom-up, self-determined digital development strategies.
ACKNOWLEDGEMENTS AND SIGNATORIES
Contributors participated in their personal capacity. The views expressed do not necessarily reflect the views of their employers or organisations they might be associated with. The signatories support the general gist of the statement, without necessarily agreeing to the details of every formulation.
CONTRIBUTORS to the Reflection Group on Democracy and the Rule of Law in the Age of “Artificial Intelligence” are, in alphabetical order:
Cathryn Culver Ashbrook, Nicolas Economou, Dr. Bruce Hedin, Dr. Mireille Hildebrandt, Dr. Konstantinos Karachalios, Anja Kaspersen, Paul Nemitz, Tuan Anh Nguyen, Marietje Schaake, Dr. Sarah Spiekermann, Alex Stamos, Dr. Thomas Streinz, Wendell Wallach
SIGNATORIES
Dr. Greg Adamson
Honorary, Computing and Information Systems
The University of Melbourne
Nicolas Economou
Chief Executive Officer, H5
Chair, Law Committee, IEEE Global Initiative on the Ethics of A/I Systems
Chair, Law, Science, and Society Initiative, The Future Society
Principal Coordinator, The Athens Roundtable on AI and the Rule of Law
Dr. Bruce Hedin
Principal Scientist, H5
Dr. Mireille Hildebrandt
Co-Director, PI ERC ADG COHUBICOL; Co-Editor in Chief CRCL
Radboud University, Nijmegen
Senior researcher at Law, Science, Technology and Society (LSTS)
Vrije Universiteit Brussels
Dr. Konstantinos Karachalios
Managing Director, IEEE-SA
Member, IEEE Management Council
Baroness Beeban Kidron OBE
Member of the U.K. House of Lords
Democracy and Digital Technologies Committee
Commissioner, UNESCO’s Broadband Commission for Sustainable Development
Member, UNESCO Working Group on Child Online Safety
Nicolas Miailhe
Founder & President, The Future Society
Paul Nemitz
Principal Adviser on Justice Policy in the European Commission and visiting Professor of Law at the College of Europe
Tuan Anh Nguyen
CEO of Boston Global Forum
Executive Director of Michael Dukakis Institute
Co-founder of AI World Society
Marietje Schaake
International Policy Director at the Cyber Policy Center
International Policy Fellow at the Institute for Human-Centered Artificial Intelligence
Stanford University
Former Member of European Parliament
Dr. Sarah Spiekermann
Chair, Institute for Information Systems & Society
Vienna University of Economics and Business
Alex Stamos
Director, Stanford Internet Observatory
Stanford University
Former chief security officer (CSO) at Facebook
Dr. Thomas Streinz
Adjunct Professor of Law
New York University School of Law
Wendell Wallach
Chair, Technology and Ethics Research Group
Yale Interdisciplinary Center for Bioethics