The importance of ethics in artificial intelligence stems from the power of what can be accomplished with AI, especially when it is embodied in physical machines: self-driving cars, robots working alongside humans in factories, remote surgery robots that let doctors operate on patients across the globe, intelligent software systems that help pilots navigate, and so on.
However, like other transformative technologies that came before, AI arouses a fair amount of skepticism and even fear, giving birth to regulations and policies constraining the scope of its application. What’s being done to promote the ethical application of one of today’s most empowering technologies?
As with other disruptive technologies that preceded AI, the formulation of laws and regulations to govern this area is playing catch-up. There are significant technical efforts to detect and remove bias from AI systems, but they are at an early stage. Moreover, technological fixes have their limits: they require a precise mathematical definition of fairness, and such a definition is hard to come by.
Though very little actual policy has been produced, there have been some notable beginnings. The European Commission's 2019 Ethics Guidelines for Trustworthy AI, drafted by its High-Level Expert Group on Artificial Intelligence, posited that "Trustworthy AI" should be lawful, ethical, and technically robust, and spelled out the requirements for meeting those objectives: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. These principles have since informed proposed EU legislation.
Also, in the fall of 2021, top science advisers to President Joe Biden began calling for a new "bill of rights" to guard against powerful new artificial intelligence technologies, according to the Associated Press.
AI can deliver substantial benefits to companies that successfully leverage its power, but if implemented without ethical safeguards, it can also damage a company's reputation and future performance. Developing standards or drafting legislation is not easily accomplished, because AI covers a broad, amorphous territory: everything from battlefield robots to self-driving cars to legal assistants that review contracts. Indeed, just about anything related to machine learning and data science is now considered a form of AI.
The original article was posted at Forbes.
The Boston Global Forum (BGF), in collaboration with the United Nations Centennial Initiative, released a major work entitled Remaking the World – Toward an Age of Global Enlightenment. More than twenty distinguished leaders, scholars, analysts, and thinkers put forth unprecedented approaches to the challenges before us. They include President of the European Commission Ursula von der Leyen, Governor Michael Dukakis, Father of the Internet Vint Cerf, former Secretary of Defense Ash Carter, Harvard University Professors Joseph Nye and Thomas Patterson, MIT Professors Nazli Choucri and Alex "Sandy" Pentland, and Vice President of the European Parliament Eva Kaili. The BGF introduced core concepts shaping pathbreaking international initiatives, notably the Social Contract for the AI Age, an AI International Accord, the Global Alliance for Digital Governance, the AI World Society (AIWS) Ecosystem, and AIWS City.