Initiatives in Europe could change the way companies use the internet and AI to do business
Waves of new technology policy initiatives from the European Commission and the U.K. government signal the end of an era of self-regulation in the tech sector and the emergence of a new oversight model in which lawmakers and the public have more say over how data and algorithms are deployed.
The European Commission plans to test ethical guidelines for the use of artificial intelligence in a pilot project planned for this summer. The U.K. proposes to create a regulatory body to force internet companies to remove harmful content from their sites. Policy advocates and technology experts said these efforts, along with the European Union's General Data Protection Regulation, the privacy law that went into effect last year, will change the way companies use the internet and AI to do business.
Dean Harvey, a partner at the law firm Perkins Coie LLP, said the legal and regulatory obligations will come in several waves. The first to be affected will be the technology developers creating the algorithms. The second wave will be the companies that apply those algorithms to customer data containing personally identifiable information. “You’ll have to put in safeguards to ensure you’re handling data and developing AI models in an ethical manner,” he said.
The second wave will wash over companies in retail, marketing and finance, for example, that could be penalized if their business practices are influenced by biased or discriminatory computer programs. Further down the line are businesses that use AI without human data at all—for example, manufacturing companies that use machine learning for predictive maintenance of equipment or trucking fleets.
Just as corporations have developed intricate practices to cope with regulation in industries such as finance and energy, they will need to create governance mechanisms to cope with the onslaught of new tech rules. Here are some steps:
• Get ready for the chief ethics officer. While existing corporate structures in legal, compliance and auditing will play a role, many corporations will need a new leader specifically focused on ethics. Thomas Creely, an associate professor in the College of Leadership and Ethics at the U.S. Naval War College, said hiring ethicists is going to be vital because “the dilemmas are going to be constant and complex.”
In situations where profits and ethics may be at odds, businesses should be prepared to act in the long-term interests of the corporation and the brand. “Some highly accurate models may need to be jettisoned for less performant, but more ethical or transparent, ones,” said Brandon Purcell, principal analyst with Forrester Research Inc.
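The tradeoff Mr. Purcell describes can be made concrete. The sketch below, a hypothetical illustration rather than anything drawn from the article, compares two candidate models on a simple fairness measure (the gap in positive-prediction rates between two groups, sometimes called demographic parity difference). The model names, groups and predictions are invented for illustration; a governance team might run a check like this before deciding which model to deploy.

```python
# Hypothetical sketch: a simple fairness check a governance team might
# apply when weighing a more accurate model against a more balanced one.
# All data and model names below are illustrative assumptions.

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between groups."""
    rate = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(preds) / len(preds)
    values = list(rate.values())
    return abs(values[0] - values[1])

# Illustrative outcomes for two candidate models on the same applicants.
groups = ["a", "a", "a", "b", "b", "b"]
complex_model_preds = [1, 1, 1, 0, 0, 1]   # more accurate, but skewed
simple_model_preds  = [1, 1, 0, 1, 0, 1]   # less accurate, more balanced

print(demographic_parity_gap(complex_model_preds, groups))
print(demographic_parity_gap(simple_model_preds, groups))
```

In this invented example the gap for the complex model is large while the simple model's is zero, which is the kind of evidence that might justify jettisoning the higher-performing model, as Mr. Purcell suggests.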
• Vet vendors and corporate partners. Natasha Duarte, a policy analyst with the nonprofit Center for Democracy and Technology, said nontech companies will need to more thoroughly evaluate third-party AI vendors to find out, for instance, what data their algorithms were trained on and what safeguards are in place for data privacy.
• Keep meticulous records. Businesses may need to adopt policies for reporting their AI activities, whether to the board of directors, to shareholders, to an industry group or to a regulatory agency. “Adopt leading practices that will help mitigate the inherent risk with AI with respect to explainability and bias,” said Martin Sokalski, global leader for emerging technology risk at KPMG LLP.
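One way to picture the record-keeping described above is a structured audit entry per model. The sketch below is a minimal, hypothetical example of such a record; the field names, model name and values are assumptions, not a prescribed format from any regulator.

```python
# Hypothetical sketch of an AI audit record of the kind a company might
# keep for reporting to a board, industry group or regulator.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModelAuditRecord:
    model_name: str
    purpose: str
    training_data_source: str
    contains_personal_data: bool
    bias_review_completed: bool
    reviewer: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ModelAuditRecord(
    model_name="credit-risk-v2",
    purpose="loan application scoring",
    training_data_source="internal loan applications, 2015-2018",
    contains_personal_data=True,
    bias_review_completed=True,
    reviewer="AI governance committee",
)

# Serialize the record for an audit trail or periodic filing.
print(json.dumps(asdict(record), indent=2))
```

Keeping entries like this over time would give a company the paper trail needed for the kinds of board, shareholder or regulatory reports the article describes.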
• Educate management. Companies could be held accountable for actions around AI, so business leaders need to become better educated on the powers and limitations of AI, said Illah R. Nourbakhsh, professor of Ethics and Computational Technologies in the Robotics Institute at Carnegie Mellon University. “There needs to be more AI fluency among business leaders.”