This article was originally published on the IBM blog.
The European Union’s AI Act entered into force on August 1. It is among the first regulations of artificial intelligence to be adopted, and it applies in one of the world’s largest markets. What will it change for businesses?
What is the EU AI Act?
The European Union (EU) is the first major market to define new rules around AI.
“The aim is to turn the EU into a global hub for trustworthy AI,” according to EU officials.
The AI Act takes a risk-based approach, meaning that it categorizes applications according to their potential risk to fundamental rights and safety. Its most important provisions include a prohibition on certain AI practices deemed to pose unacceptable risk, standards for developing and deploying certain high-risk AI systems, and rules for general-purpose AI (GPAI) models. AI systems that do not fall within one of these risk categories (often dubbed the ‘minimal risk’ category) are not subject to requirements under the act, although some may need to meet transparency obligations, and all must still comply with other existing laws.
“It is a form of regulatory pragmatism that aims to adapt constraints with the level of risk,” says Bruno Massot, Vice President, Assistant General Counsel and Head of Legal IBM Europe.
Under these new rules, certain AI applications that threaten citizens’ rights will be banned. These include biometric categorization systems based on sensitive characteristics, untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, emotion recognition in the workplace and schools, and predictive policing.
The AI Act was approved by the EU Parliament on April 22 and by the EU Member States on May 21. It was published in the Official Journal of the European Union on July 12, and entered into force on August 1, with different provisions of the law going into effect in stages.
The EU AI Act will follow a very fast-paced schedule.
“It is coming very rapidly, but this is not a surprise. The drafts have been long known. It is also important to go fast as the technology is advancing very quickly,” notes Massot.
Six months after entry into force, bans on prohibited AI practices apply. At nine months, codes of practice become applicable. At 12 months, general-purpose AI rules, including governance obligations, take effect. At 24 months, the rules for high-risk AI systems apply. Finally, 36 months after entry into force, the rules for AI systems that are products or safety components of products regulated under specific EU laws will apply.
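The staged deadlines can be derived mechanically from the entry-into-force date. A minimal sketch of that month arithmetic (the milestone labels are paraphrased from the schedule above, not official legal text):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force on August 1, 2024

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (all milestones here fall on the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Milestones from the staged schedule described above (months after entry into force)
MILESTONES = {
    6: "bans on prohibited AI practices",
    9: "codes of practice",
    12: "general-purpose AI rules, including governance",
    24: "rules for high-risk AI systems",
    36: "rules for AI in products regulated under specific EU laws",
}

for months, provision in sorted(MILESTONES.items()):
    print(f"{add_months(ENTRY_INTO_FORCE, months)}: {provision}")
```

Running this places the prohibited-practices ban in early 2025 and the final product-safety rules three years out, in 2027.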
Non-compliance with the law can result in hefty fines: up to 35 million euros or 7% of a company’s global annual turnover, whichever is higher.
What steps should businesses be taking?
“There is a sense of urgency,” observes IBM’s Dasha Simons, Managing Consultant, Trustworthy AI. “Two or three years might seem long for some of the things to get ready. But if you’re an international or multinational organization with a lot of AI systems across the globe, it’s not that long.”
So, where to start? The first step, Simons says, is to determine which rules will apply to your business.
“The AI Act defines different rules and definitions for deployers, providers, importers. Depending on which role you have as a company, you will need to comply with different requirements,” Simons explains. “Are you buying models and just kind of rebranding them and not changing the model itself? Or are you actually fine-tuning the model quite a bit? This is really the first step.”
The second step would be to determine which AI systems are used within your organization and the risk level associated with each of them.
“The important thing is to know where you want to focus at first,” says Simons. “A lot of companies don’t even know what type of AI systems and models they have in production or in development.”
Doing this assessment will help businesses set their priorities: addressing prohibited systems first, then high-risk ones.
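The triage described here, inventory every system, then review by descending risk, can be sketched as a simple data structure. The system names, roles, and category assignments below are hypothetical illustrations, not legal classifications:

```python
from dataclasses import dataclass
from enum import IntEnum

class Risk(IntEnum):
    # Lower value = higher review priority, mirroring the order suggested above
    PROHIBITED = 0
    HIGH = 1
    LIMITED = 2   # transparency obligations only
    MINIMAL = 3

@dataclass
class AISystem:
    name: str
    role: str   # e.g. "provider", "deployer", "importer" (step one of the assessment)
    risk: Risk

# Hypothetical inventory, for illustration only
inventory = [
    AISystem("CV screening assistant", "deployer", Risk.HIGH),
    AISystem("Marketing copy generator", "deployer", Risk.MINIMAL),
    AISystem("Workplace emotion recognition", "provider", Risk.PROHIBITED),
]

# Review prohibited systems first, then high-risk, as recommended above
for system in sorted(inventory, key=lambda s: s.risk):
    print(f"{system.risk.name:>10}  {system.name} ({system.role})")
```

In practice such an inventory would live in a governance tool rather than a script, but the ordering principle is the same: the prohibited and high-risk entries surface first.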
“This will help determine which processes you need to put in place when new AI systems are launched, and make sure they are already compliant with the AI Act proactively, and not as an afterthought.”
Simons believes that the regulations outline the need for businesses to be strategic with AI and for the C-suite to be highly involved in this conversation.
“You need technical expertise, perhaps also the expertise from a Chief Privacy Officer that has implemented GDPR. You need that knowledge on the table, but the responsibility and the direction should be set at the C-level as well,” says Simons.
What will the EU AI Act change for businesses?
The European Union’s ambition is to establish a standard for AI development and act as a blueprint for the rest of the world. The EU already enacted a comprehensive data privacy and security law in 2018 with the GDPR.
The AI Act is “the first regulation in the world that is putting a clear path on a safe and human-centric development of AI,” said Dragoș Tudorache, a member of the European Parliament and lead negotiator for the AI Act, during a press conference.
Around the world, many jurisdictions are adopting regulations on AI use.
“Consistency is important because it would be complicated [for businesses] to develop different systems with diverging restraints,” believes Massot.
Hans-Petter Dalen, IBM’s EMEA Business Leader for watsonx and embeddable AI, notes that an interesting change will be the need for organizations and businesses operating in the EU to educate their users or employees about AI.
“Interestingly, in the EU AI Act, there is an article about AI literacy. So every company or organization in the EU single market that’s using AI will have a requirement to educate their users and employees to a certain level. We don’t know what the level is yet, but it is a very good thing to increase the base level of what AI is,” explains Dalen.
“An example in that area is HR systems that already today come with the ability to screen CVs and recommend candidates. That is a high-risk use case and comes with seven essential requirements that you have to comply with. Are you, as a purchaser of that software, educated enough in AI to understand the questions you need to ask about the algorithms?” asks Dalen.
There are still many unknowns around technical standards that will be required by the AI Act.
“There are seven essential requirements to comply with high-risk use cases that are formulated quite loosely in the law itself. And those seven requirements have resulted in ten requests for technical standards which two of the European standard decision organizations are now developing,” explains Dalen. “The technical standards will give us clarity on what the actual requirements from the regulation are and we clearly expect that implementing the technical standards will be the fastest and the cheapest way to achieve conformity.”