Five ways to make AI a greater force for good in 2021

A year ago, none the wiser about what 2020 would bring, I reflected on the pivotal moment that the AI community was in. The previous year, 2018, had seen a series of high-profile automated failures, like self-driving-car crashes and discriminatory recruiting tools. In 2019, the field responded with more talk of AI ethics than ever before. But talk, I said, was not enough. We needed to take tangible actions. Two months later, the coronavirus shut down the world.

In our new socially distanced, remote-everything reality, these conversations about algorithmic harms suddenly came to a head. Systems that had been at the fringe, like HireVue’s face-scanning algorithms and workplace surveillance tools, were going mainstream. Others, like tools to monitor and evaluate students, were spinning up in real time. In August, after the UK government’s spectacular failure in trying to replace in-person exams with an algorithm for university admissions, hundreds of students gathered in London to chant, “Fuck the algorithm.” “This is becoming the battle cry of 2020,” tweeted AI accountability researcher Deb Raji, when a Stanford protestor yelled it again in response to a different debacle a few months later.

At the same time, there was indeed more action. In one major victory, Amazon, Microsoft, and IBM banned or suspended their sale of face recognition to law enforcement, after the killing of George Floyd spurred global protests against police brutality. It was the culmination of two years of fighting by researchers and civil rights activists to demonstrate the ineffective and discriminatory effects of the companies’ technologies. Another change was small yet notable: for the first time ever, NeurIPS, one of the most prominent AI research conferences, required researchers to submit an ethics statement with their papers.

So here we are at the start of 2021, with more public and regulatory attention on AI’s influence than ever before. My New Year’s resolution: Let’s make it count. Here are five hopes that I have for AI in the coming year.

1 – Reduce corporate influence in research

2 – Refocus on common-sense understanding

3 – Empower marginalized researchers

4 – Center the perspectives of impacted communities

5 – Codify guard rails into regulation

The original article was published here at MIT Technology Review.

To support AI Ethics, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Artificial Intelligence World Society (AIWS.net) have developed the AIWS Ethics and Practice Index to measure ethical values and to help people achieve well-being and happiness, as well as to address important issues such as the SDGs. On AI Ethics, AIWS.net initiated and promoted the design of the AIWS Ethics framework, built on four components: transparency, regulation, promotion, and implementation for the constructive use of AI. In this effort, the Michael Dukakis Institute for Leadership and Innovation (MDI) invites participation and collaboration from think tanks, universities, non-profits, firms, and other entities that share its commitment to the constructive development of full-scale AI for world society.