TECH COMPANIES SHAPING THE RULES GOVERNING AI

In early April, the European Commission published guidelines intended to keep any artificial intelligence technology used on the EU’s 500 million citizens trustworthy. The bloc’s commissioner for digital economy and society, Bulgaria’s Mariya Gabriel, called them “a solid foundation based on EU values.”

One of the 52 experts who worked on the guidelines argues that the foundation is flawed—thanks to the tech industry. Thomas Metzinger, a philosopher from the University of Mainz, in Germany, says too many of the experts who created the guidelines came from or were aligned with industry interests. Metzinger says he and another member of the group were asked to draft a list of AI uses that should be prohibited. That list included autonomous weapons and government social scoring systems similar to those under development in China. But Metzinger alleges tech’s allies later convinced the broader group that it shouldn’t draw any “red lines” around uses of AI.

Metzinger says that spoiled a chance for the EU to set an influential example that—like the bloc’s GDPR privacy rules—showed technology must operate within clear limits. “Now everything is up for negotiation,” he says.

When a formal draft was released in December, uses that had been suggested as requiring “red lines” were presented as examples of “critical concerns.” That shift appeared to please Microsoft. The company didn’t have its own seat on the EU expert group, but like Facebook, Apple, and others, was represented via trade group DigitalEurope. In a public comment on the draft, Cornelia Kutterer, Microsoft’s senior director for EU government affairs, said the group had “taken the right approach in choosing to cast these as ‘concerns,’ rather than as ‘red lines.’” Microsoft did not provide further comment. Cecilia Bonefeld-Dahl, director general for DigitalEurope and a member of the expert group, said its work had been balanced, and not tilted toward industry. “We need to get it right, not to stop European innovation and welfare, but also to avoid the risks of misuse of AI.”

The brouhaha over Europe’s guidelines for AI was an early skirmish in a debate that’s likely to recur around the globe, as policymakers consider installing guardrails on artificial intelligence to prevent harm to society. Tech companies are taking a close interest—and in some cases appear to be trying to steer construction of any new guardrails to their own benefit.

Harvard law professor Yochai Benkler warned in the journal Nature this month that “industry has mobilized to shape the science, morality and laws of artificial intelligence.”

Benkler cited Metzinger’s experience in that op-ed. He also joined other academics in criticizing a National Science Foundation program for research into “Fairness in Artificial Intelligence” that is co-funded by Amazon. The company will not participate in the peer review process that allocates the grants. But NSF documents say it can ask recipients to share updates on their work, and it will retain the right to a royalty-free license to any intellectual property developed.

Amazon declined to comment on the program; an NSF spokesperson said that tools, data, and research papers produced under the grants would all be made available to the public. Benkler says the program is an example of how the tech industry is becoming too influential over how society governs and scrutinizes the effects of AI. “Government actors need to rediscover their own sense of purpose as an indispensable counterweight to industry power,” he says.

Microsoft used some of its power when Washington state considered proposals to restrict facial recognition technology. The company’s cloud unit offers such technology, but it has also said that technology should be subject to new federal regulation.

In February, Microsoft loudly supported a privacy bill being considered in Washington’s state Senate that reflected its preferred rules, which included a requirement that vendors allow outsiders to test their technology for accuracy or biases. The company spoke against a stricter bill that would have placed a moratorium on local and state government use of the technology.

By April, Microsoft found itself fighting against a House version of the bill it had supported, after the addition of firmer language on facial recognition. The House bill would have required that companies obtain independent confirmation that their technology worked equally well for all skin tones and genders before deploying it. Irene Plenefisch, Microsoft’s director of government affairs, testified against that version of the bill, saying it “would effectively ban facial recognition technology [which] has many beneficial uses.” The House bill stalled. With lawmakers unable to reconcile differing visions for the legislation, Washington’s attempt to pass a new privacy law collapsed.

In a statement, a Microsoft spokesperson said that the company’s actions in Washington sprang from its belief in “strong regulation of facial recognition technology to ensure it is used responsibly.”

Shankar Narayan, director of the technology and liberty project of the ACLU’s Washington chapter, says the episode shows how tech companies are trying to steer legislators toward their favored, looser rules for AI. But, Narayan says, they won’t always succeed. “My hope is that more policymakers will see these companies as entities that need to be regulated and stand up for consumers and communities,” he says. On Tuesday, San Francisco supervisors voted to ban the use of facial recognition by city agencies.

Washington lawmakers—and Microsoft—hope to try again for new privacy and facial recognition legislation next year. By then, AI may also be a subject of debate in Washington, DC.

Last month, Senators Cory Booker (D-New Jersey) and Ron Wyden (D-Oregon) and Representative Yvette Clarke (D-New York) introduced bills dubbed the Algorithmic Accountability Act. The legislation would require companies to assess whether their AI systems and training data have built-in biases, or could harm consumers through discrimination.

Mutale Nkonde, a fellow at the Data and Society research institute, participated in discussions during the bill’s drafting. She is hopeful it will trigger discussion in DC about AI’s societal impacts, which she says is long overdue.

The tech industry will make itself a part of any such conversations. Nkonde says that when she talks with lawmakers about topics such as racial disparities in face analysis algorithms, some seem surprised and say they have been briefed by tech companies on how AI technology benefits society.

Google is one company that has briefed federal lawmakers about AI. Its parent Alphabet spent $22 million, more than any other company, on lobbying last year. In January, Google issued a white paper arguing that although the technology comes with hazards, existing rules and self-regulation will be sufficient “in the vast majority of instances.”

Metzinger, the German philosophy professor, believes the EU can still break free from industry influence over its AI policy. The expert group that produced the guidelines is now devising recommendations for how the European Commission should invest the billions of euros it plans to spend in coming years to strengthen Europe’s competitiveness.

Metzinger wants some of that money to fund a new center to study the effects and ethics of AI, and to support similar work throughout Europe. That would create a new class of experts who could keep evolving the EU’s AI ethics guidelines in a less industry-centric direction, he says.