To make AI fair, here’s what we must learn to do

Developers of artificial intelligence must learn to collaborate with social scientists and the people affected by its applications.

Beginning in 2013, the Dutch government used an algorithm to wreak havoc in the lives of 25,000 parents. The software was meant to predict which people were most likely to commit childcare-benefit fraud, but the government did not wait for proof before penalizing families and demanding that they pay back years of allowances. Families were flagged on the basis of ‘risk factors’ such as having a low income or dual nationality. As a result, tens of thousands were needlessly impoverished, and more than 1,000 children were placed in foster care.

From New York City to California and the European Union, many artificial intelligence (AI) regulations are in the works. The intent is to promote equity, accountability and transparency, and to avoid tragedies similar to the Dutch childcare-benefits scandal.

But regulations alone won’t be enough to make AI equitable. We also need practical know-how for building AI so that it does not exacerbate social inequality. In my view, that means setting out clear ways for social scientists, affected communities and developers to work together.

Right now, developers who design AI work in different realms from the social scientists who can anticipate what might go wrong. As a sociologist focusing on inequality and technology, I rarely get to have a productive conversation with a technologist, or with my fellow social scientists, that moves beyond flagging problems. When I look through conference proceedings, I see the same: very few projects integrate social needs with engineering innovation.

This article was originally published in Nature.