
“Doing machine learning the right way”

Professor Aleksander Madry of MIT strives to build machine-learning models that are more reliable, understandable, and robust.

Madry’s research centers largely on making machine learning — a type of artificial intelligence — more accurate, efficient, and robust against errors. In his classroom and beyond, he also worries about questions of ethical computing, as we approach an age where artificial intelligence will have great impact on many sectors of society.

“I want society to truly embrace machine learning,” says Madry. “To do that, we need to figure out how to train models that people can use safely, reliably, and in a way that they understand.”

In the end, he aims to make each model’s decisions more interpretable by humans, so researchers can peer inside to see where things went awry. At the same time, he wants to enable nonexperts to deploy the improved models in the real world for tasks such as diagnosing disease or controlling driverless cars.

“It’s not just about trying to crack open the machine-learning black box. I want to open it up, see how it works, and pack it back up, so people can use it without needing to understand what’s going on inside,” he says.

That’s why Madry seeks to make machine-learning models more interpretable to humans. New models he has developed show how much individual pixels in the images a system processes can influence its predictions.
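The article does not describe the method itself, but a common way to estimate how much each pixel influences a model’s prediction is an input-gradient saliency map. The sketch below is purely illustrative of that generic technique, not of Madry’s specific models; the model and input tensor are hypothetical placeholders.

```python
# Illustrative sketch: estimating per-pixel influence on a prediction via
# input gradients (a simple saliency map). This is a generic technique,
# not Madry's specific method; `model` and `image` are placeholders.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # placeholder, randomly initialized
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input

logits = model(image)
predicted_class = logits.argmax(dim=1)

# Backpropagate the predicted class's score to the input pixels.
score = logits[0, predicted_class]
score.backward()

# The gradient magnitude at each pixel indicates how strongly a small
# change in that pixel would change the prediction.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
print(saliency.shape)
```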

Madry has also been engaging in conversations about laws and policies to help regulate machine learning. A point of these discussions, he says, is to better understand the costs and benefits of unleashing machine-learning technologies on society.

“Sometimes we overestimate the power of machine learning, thinking it will be our salvation. Sometimes we underestimate the cost it may have on society,” Madry says. “To do machine learning right, there’s still a lot left to figure out.”

The original article can be found here.

AIWS Innovation Network encourages researchers and experts to contribute solutions and models for transparency in AI. Auditing AI is fundamental to creating a better world with it.

At a time when the whole world is working to defeat COVID-19, all governments must commit to transparency, accountability, and collaboration.