
The problem with the EU’s AI strategy

Last week, the European Union issued its long-anticipated white paper on artificial intelligence. The document is a prequel to new legislation and regulations governing the technology that are likely to have global consequences.

That’s because, as with Europe’s privacy law, the GDPR, any new AI rules are likely to apply to anyone who sells to an EU customer, processes the data of an EU citizen, or has a European employee. And, as with the GDPR, any rules Europe enacts may serve as a model for other nations, or even individual U.S. states, looking to regulate AI.

The paper says that the 27-nation bloc should have strict legal requirements for “high-risk” uses of the technology.

What’s high-risk? Any use that carries “a risk of injury, death or significant material or immaterial damage” and that “produce[s] effects that cannot reasonably be avoided by individuals or legal entities,” especially in sectors such as healthcare, transportation, energy, and government.

The original article can be found here.

The AIWS Innovation Network includes distinguished thinkers from top universities such as Harvard, MIT, Stanford, Berkeley, Princeton, Yale, Columbia, Brown, UCLA, Oxford, Cambridge, and Carnegie Mellon, among others, along with preeminent leaders. It will serve as a platform for the Transatlantic Alliance for Digital Governance to collaborate with the EU and other countries to make AI for good.