The Four Risks of the EU’s Artificial Intelligence Act: Is Your Company Ready?

By Dr. Claudio Calvino, Senior Managing Director, FTI Consulting; Dr. Meloria Meschi, Senior Managing Director, FTI Consulting; and Dr. Dimitris Korres, Managing Director, FTI Consulting

The Act divides AI systems into four risk categories that range from minimal to unacceptable. Knowing the differences is critical for compliance.

Since its public debut in November 2022, ChatGPT has been in the headlines for reasons both good and bad. A prime example of generative AI, it is a highly sophisticated tool that leverages algorithms to generate long-form text. But another aspect of ChatGPT is not so advanced: Like other AI systems, it still carries inherent and unintended risks, one of which is producing biased results.1

Bias in AI is a long-standing and serious issue that can lead to skewed outcomes with real-world consequences for people’s lives. AI algorithms may exclude job applicants based on age, for example, or predict who might commit a crime based on racial profiling.2,3 Although policymakers and the business community are taking steps to address the issue, regulators are discovering just how difficult it is to hit a moving target in such a quickly evolving market.4

The EU’s Artificial Intelligence Act (“AI Act”) is an attempt to introduce a common regulatory and legal framework for AI. First proposed in April 2021, the AI Act, which is still in draft form and evolving as industries continue to offer commentary, would require any company that uses AI to comply with its regulations. Combined with the EU’s General Data Protection Regulation, Digital Services Act and Digital Markets Act,5 the AI Act is intended to boost public confidence and trust in technology in general.6

Know Your Risks

One way the AI Act seeks to mitigate risk is by reducing the chance of human bias creeping into AI systems. That is a tall order: To comply, companies must be keenly aware of the coded algorithmic models that make up their AI systems as well as the data that are fed into the systems. Even then, control can be tenuous. Introducing an AI system into a new environment — to the public, for instance — can lead to unforeseen issues down the road.

Getting a handle on AI bias is largely a technical exercise, but it is a job that typically extends beyond technical experts and legal and compliance departments. A cross-functional team that specialises in identifying bias in both human and machine realms is best equipped to tackle the challenge holistically.

The good news is that the AI Act recognises that the risk of bias is not the same across all AI applications or deployments. Some applications present substantially higher risk than others. To help companies target their remediation efforts, the AI Act categorises potential risks into four buckets.7 At the time of writing, they are:

Unacceptable: Applications that involve subliminal techniques, exploitative systems or social scoring systems used by public authorities are strictly prohibited. Also prohibited are any real-time remote biometric identification systems used by law enforcement in publicly accessible spaces.

High Risk: These include applications related to transport, education, employment and welfare, among others. Before putting a high-risk AI system on the market or into service in the EU, companies must conduct a prior “conformity assessment” and meet a long list of requirements to ensure the system is safe. As a pragmatic measure, the regulation also calls for the European Commission to create and maintain a publicly accessible database where providers will be obligated to register information about their high-risk AI systems, ensuring transparency for all stakeholders.

Limited Risk: These are AI systems subject to specific transparency obligations. For instance, an individual interacting with a chatbot must be informed that they are engaging with a machine so they can decide whether to proceed (or request to speak with a human instead); a brief sketch of this disclosure appears after this list.

Minimal Risk: These applications are already widely deployed and make up most of the AI systems we interact with today. Examples include spam filters, AI-enabled video games and inventory-management systems.
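To make the chatbot example concrete, the sketch below shows one way a provider might build the machine disclosure and a human hand-off into a conversation flow. This is a minimal illustration under our own assumptions: the wording, function names and escalation logic are placeholders, not language prescribed by the Act.

```python
# Hypothetical sketch of a "limited risk" chatbot that discloses it is a
# machine and offers a route to a human. All names here are illustrative.

def start_chat_session() -> str:
    """Open every session with an explicit machine-disclosure notice."""
    return (
        "You are chatting with an automated assistant, not a human. "
        "Reply HUMAN at any time to be connected to a person."
    )


def generate_bot_reply(message: str) -> str:
    """Placeholder for the underlying model call in a real system."""
    return f"(automated reply to: {message!r})"


def route_message(message: str) -> str:
    """Escalate to a human agent on request; otherwise answer automatically."""
    if message.strip().upper() == "HUMAN":
        return "Connecting you to a human agent..."
    return generate_bot_reply(message)


print(start_chat_session())
print(route_message("What are your opening hours?"))
print(route_message("HUMAN"))
```

The design point is simply that the disclosure is issued before any automated reply and that the escape hatch to a human is always available, which is what allows the user to make an informed choice about proceeding.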

Primary responsibility will be shouldered by the “providers” of AI systems; however, certain responsibilities will also be assigned to distributors, importers, users and other third parties, impacting the entire AI ecosystem.

Getting Ahead of Regulations

With strict rules potentially taking effect in the EU in as little as two years, organisations that develop and deploy AI systems will need to ensure they have robust governance structures and management systems in place to mitigate risk. Here are four ways companies can do so:

Introduce an AI Risk Assessment Framework: This should address bias risk at every stage of a project, from design to retirement. It means understanding and documenting the intrinsic characteristics of the data, carefully deciding on the goal of the algorithm, using appropriate information to train the AI and capturing all model parameters and performance metrics in a model registry (a minimal sketch of such a registry entry appears after this list).

Establish a Governance Infrastructure: Most companies should be familiar with creating AI systems that comply with existing regulations, so the proposed rules should not come as a shock. Setting up a risk management system and complying with best practices on technical robustness, testing, training data, data governance and cybersecurity is expected. Human oversight throughout the system’s life cycle will be mandatory, and transparency must be built in so that users can interpret the system’s output (and challenge it if necessary); the sketch after this list also shows one way to record that oversight trail.

Evaluate and Enhance Privacy Programs: The AI Act contains an exemption that allows providers to process select types of personal data when monitoring, detection and bias correction are needed. With this in mind, it is crucial for companies to evaluate and improve their privacy programs, not only to ensure proper safeguards are in place to uphold the rights of affected individuals but also to maintain public trust.

Become AI Fluent across the Enterprise: The need for greater AI skills is paramount, not only for teams working directly with the systems, but for those that operate adjacent to them. Legal departments, for instance, will need to be familiar with how their organisations’ AI systems operate to ensure they comply with the proposed regulations.
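To ground the first two recommendations, the sketch below pairs a simple model-registry entry with a basic bias check and a human sign-off field. It is a hypothetical example under our own assumptions: the field names, the demographic-parity metric and the reviewer workflow are illustrative choices, not structures mandated by the AI Act or by any particular registry product.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Dict, Optional


@dataclass
class ModelRegistryEntry:
    """One documented release of an AI model, from design through retirement."""
    model_name: str
    version: str
    intended_purpose: str                  # the goal the algorithm was designed for
    training_data_description: str         # intrinsic characteristics of the data
    hyperparameters: Dict[str, float] = field(default_factory=dict)
    performance_metrics: Dict[str, float] = field(default_factory=dict)
    fairness_metrics: Dict[str, float] = field(default_factory=dict)
    human_reviewer: str = ""               # who signed off (human oversight)
    review_date: Optional[date] = None
    status: str = "in_development"         # design -> deployed -> retired


def demographic_parity_gap(rate_group_a: float, rate_group_b: float) -> float:
    """Absolute difference in favourable-outcome rates between two groups.

    One common bias measure among many: values near zero suggest the system
    treats the two groups similarly on this metric.
    """
    return abs(rate_group_a - rate_group_b)


# Hypothetical usage: register a CV-screening model alongside a simple bias check.
entry = ModelRegistryEntry(
    model_name="cv-screening",
    version="1.3.0",
    intended_purpose="Rank job applications for recruiter review",
    training_data_description="Anonymised EU applications, 2018-2022",
    hyperparameters={"max_depth": 6, "n_estimators": 200},
    performance_metrics={"auc": 0.87},
    fairness_metrics={"parity_gap_by_age_band": demographic_parity_gap(0.41, 0.38)},
    human_reviewer="model-risk-committee",
    review_date=date(2023, 3, 1),
    status="deployed",
)
print(entry.model_name, entry.fairness_metrics)
```

Recording fairness metrics and a named reviewer next to conventional performance metrics keeps the bias evidence and the human-oversight trail in one auditable record, which is the kind of documentation a conformity assessment would draw on.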

It is likely the AI Act will undergo heavy editing over the next year. In December 2022, the Council of the European Union met to review the definition of the four risk categories.8 While amendments from the December meeting still need to be finalised by the European Parliament, the pace demonstrates just how quickly the Act is evolving. Companies that want to stay ahead of the regulations and be ready to comply should consider taking action now.

Footnotes:

1: Matthew Gault. “Conservatives Are Panicking About AI Bias, Think ChatGPT Has Gone ‘Woke’.” Vice. (January 17, 2023). https://www.vice.com/en/article/93a4qe/conservatives-panicking-about-ai-bias-years-too-late-think-chatgpt-has-gone-woke

2: Agbolade Omowole. “Research shows AI is often biased. Here’s how to make algorithms work for all of us.” World Economic Forum. (July 19, 2021). https://www.weforum.org/agenda/2021/07/ai-machine-learning-bias-discrimination/

3: Will Douglas Heaven. “Predictive policing algorithms are racist. They need to be dismantled.” MIT Technology Review. (July 17, 2020). https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/

4: Alex Trollip. “Advancing AI: Key AI Issue Areas Policymakers Should Consider.” Bipartisan Policy Center. (March 8, 2021). https://bipartisanpolicy.org/blog/advancing-ai-key-ai-issue-areas-policymakers-should-consider/

5: “The Digital Services Act Package.” European Commission. (Accessed March 12, 2022). https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package

6: Claudio Calvino, et al. “Mitigating Artificial Intelligence Bias in Preparation for EU Regulation.” FTI Consulting. (November 22, 2022). https://www.fticonsulting.com/insights/articles/mitigating-artificial-intelligence-bias-risk-preparation-eu-regulation

7: “Regulatory framework proposal on artificial intelligence.” European Commission. (Accessed March 12, 2022). https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

8: Laura De Boel. “Council of the EU Proposes Amendments to Draft AI Act.” Wilson Sonsini. (December 22, 2022). https://www.wsgr.com/en/insights/council-of-the-eu-proposes-amendments-to-draft-ai-act.html

© Copyright 2023. The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, Inc., its management, its subsidiaries, its affiliates, or its other professionals.
