Five Points to Keep in Mind Before the EU’s AI Act Takes Effect
By Emmanouil Patavos, Senior Director, FTI Consulting
Even as the EU’s Artificial Intelligence Act continues to take shape, businesses should look to clues about the future of AI regulation.
The era of artificial intelligence has arrived in full force.
From adaptable AI chatbots to machine learning-enabled facial recognition software, AI-powered technologies are now actively engaging with almost every aspect of our daily lives. Certainly, the public’s quick embrace of ChatGPT, the generative AI chatbot that debuted last November, is a sign of the times. According to one report, an estimated 1 billion users visit the ChatGPT website every month.1
Meanwhile, AI regulations scramble to catch up. Although more than 60 countries have some form of rules governing the use of artificial intelligence, none are comprehensive. This has led to a patchwork of compliance expectations and a general lack of accountability.
The EU is looking to remedy the confusion with the Artificial Intelligence Act (the “AI Act”). Initially proposed in April 2021, the AI Act represents the first comprehensive attempt by any major body to amplify the benefits of AI while mitigating its potential harms. It is currently under review by the European Parliament, with approval expected by the end of next year and full enactment in 2026.
Because the AI Act is both wide-ranging in scope and an early attempt at comprehensive regulation, its impact, especially beyond the EU, is difficult to determine right now. Still, there are several points every business leader should be aware of as they prepare for its enactment.
The AI Act is only the first of many similar laws anticipated from regulatory bodies.
As the first major law of its scope anywhere in the world, the AI Act will likely influence subsequent laws enacted beyond the borders of the EU.
That means companies will need to make sure that they can navigate regulation both where they currently operate and where they plan to operate. But keep in mind that the EU’s approach, while influential, could be fundamentally different from emerging regulations in other jurisdictions. So, companies must be prepared to confront a variety of subsequent regional AI laws.
In the EU, the precautionary principle rules.
Unlike many other jurisdictions, the EU tends to regulate before a problem happens rather than afterward. Therefore, a careful reading of the AI Act may provide a preview of anticipated problems that could occur in places where regulators are less cautious.
As currently written, the AI Act restricts the use of AI systems that manipulate human behavior or employ subliminal techniques to influence human decision-making. While this may not yet be a major problem, the AI Act is trying to create a clear set of standards for what is considered acceptable and ethical use of AI that will stand the test of time.
This proactive approach may help prevent the development and deployment of harmful AI systems. However, the power of the precautionary principle to influence the development of AI largely resides in whether other major regulators adopt the same approach.
It is a matter of clarity versus innovation.
Do you need to set up strong guardrails before you start the massive rollout of AI to bring clarity, or should you have fewer guardrails at the start so that you can innovate? Be aware that subsequent laws will try to strike a balance between clarity and innovation.
For its part, the EU is counting on its brand of clear and predictable legal frameworks to help foster the efficient rollout of AI without throttling innovation. Specifically, Brussels is hoping that a risk-based approach to regulation will provide a balance.
By applying varying levels of regulation to different AI systems based on potential risk, EU officials believe their approach can ensure AI is developed and implemented in a way that is both safe and ethical. At the same time, they are hoping this approach is flexible enough to allow the EU to continue adapting to a rapidly evolving AI landscape.
It is unclear whether regulation will be primarily horizontal or vertical.
There is tension within European institutions about where regulation should focus. On the one hand, they want to better protect every citizen, which means many horizontal regulations across the economy. On the other hand, the current form of the AI Act also regulates specific sectors, which potentially sets the stage for additional vertical legislation.
The AI Act in its current form has elements of both horizontal and vertical regulation. It sets out general requirements, such as transparency, for all AI systems, but it also includes a particular focus on certain sectors considered high-risk. These sectors include critical infrastructure such as energy and transport; education; employment; law enforcement; and essential services ranging from medical diagnoses to credit scoring.
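To make the tiered idea concrete, the logic of a risk-based approach can be sketched in a few lines of code. This is purely an illustration: the tier names, sector labels, and mapping below are hypothetical simplifications for this article and do not reproduce the AI Act’s legal definitions or categories.

```python
# Illustrative sketch only: a simplified risk-tier lookup inspired by the
# AI Act's risk-based approach. Tier names and sector mappings here are
# hypothetical and do NOT reflect the Act's actual legal text.

HIGH_RISK_SECTORS = {
    "critical_infrastructure",  # e.g., energy, transport
    "education",
    "employment",
    "law_enforcement",
    "essential_services",       # e.g., medical diagnosis, credit scoring
}

def risk_tier(sector: str, uses_subliminal_techniques: bool = False) -> str:
    """Return a coarse compliance tier for an AI system (illustrative only)."""
    if uses_subliminal_techniques:
        # The Act restricts systems that manipulate human decision-making.
        return "prohibited"
    if sector in HIGH_RISK_SECTORS:
        # Stricter obligations would apply to high-risk sectors.
        return "high_risk"
    # Baseline duties (e.g., transparency) would still apply.
    return "minimal_risk"

print(risk_tier("employment"))  # high_risk
```

The point of the sketch is the design choice it mirrors: obligations scale with the potential harm of the use case rather than applying uniformly to every AI system.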
It is inevitable that creativity will run into the rules.
The AI Act shows that regulation will play an important role in the future development of artificial intelligence. One of the key aims of the AI Act is to ensure that AI systems are transparent, explainable and trustworthy. This means that developers will need to carefully consider the potential impacts of their AI systems on users, society and the environment and take steps to mitigate any risks.
Companies will still be allowed to experiment and innovate within certain parameters before integrating AI into their operations. But the coming wave of regulation represented by the AI Act means that once a tool or software hits the market, it must comply.
1: Bauer, Robert. “ChatGPT Statistics: The Numbers Behind the AI Language Model.” ToolTester, April 19, 2021. https://www.tooltester.com/en/blog/chatgpt-statistics/.
© Copyright 2023. The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, Inc., its management, its subsidiaries, its affiliates, or its other professionals.