Diversity and inclusion took centre stage at one of the world’s major artificial-intelligence (AI) conferences in 2018. But at last month’s Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, attention shifted to another big issue in the field: ethics.
The focus comes as AI research increasingly deals with ethical controversies surrounding the application of its technologies — such as in predictive policing or facial recognition. Issues include tackling biases in algorithms that reflect existing patterns of discrimination in data, and avoiding harm to already-vulnerable populations. “There is no such thing as a neutral tech platform,” warned Celeste Kidd, a developmental psychologist at the University of California, Berkeley, during her NeurIPS keynote talk about how algorithms can influence human beliefs. At the meeting, which hosted a record 13,000 attendees, researchers grappled with how to meaningfully address the ethical and societal implications of their work.
Ethicists have long debated the impacts of AI and sought ways to use the technology for good, such as in health care. But researchers are now realizing that they need to embed ethics into the formulation of their research and understand the potential harms of algorithmic injustice, says Meredith Whittaker, an AI researcher at New York University and founder of the AI Now Institute, which seeks to understand the social implications of the technology. At the latest NeurIPS, researchers couldn’t “write, talk or think” about these systems without considering possible social harms, she says. “The question is, will the change in the conversation result in the structural change we need to actually ensure these systems don’t cause harm?”
The article goes on to describe potential solutions to this “ethics gap”.