Artificial Intelligence

Governments and regulators across the globe are grappling with the rapid changes being brought about by artificial intelligence. There are huge potential benefits, but also significant potential harms. The EU AI Act is the EU’s new regulatory framework, with the USA and other major economies discussing similar issues, and with much discussion at the UN and OECD. In the UK, some aspects of AI are covered by the new Online Safety Bill, and the UK also hosted the first global AI Safety Summit in November 2023. On Wednesday 28th February, The Foundation for Science and Technology held an event to explore the different challenges of regulating AI and how different jurisdictions approach that challenge. Speakers included: Stephen Almond, Executive Director for Regulatory Risk at the Information Commissioner’s Office; Professor Sana Khareghani, Professor of AI Practice at King’s College London; Dr Cosmina Dorobantu, Co-Director and Policy Fellow for the Public Policy Programme at The Alan Turing Institute; Professor Dame Wendy Hall DBE FRS FREng, Regius Professor of Computer Science at the University of Southampton; and John Gibson, Chief Commercial Officer at Faculty AI.

DOI: https://www.doi.org/10.53289/RFPS4564

EU and UK approaches to AI regulation: A world apart?

Dr Cosmina Dorobantu

Dr Cosmina Dorobantu is the Co-Director and Co-Founder of the Public Policy Programme at The Alan Turing Institute, the UK's national institute for data science and artificial intelligence. With a team of 55+ full-time academics, the Programme is one of the largest research programmes in the world focusing on AI for the public sector.

Summary:

  • The EU AI Act classifies AI systems according to risk.
  • The Act is ‘rules-based’ which means it relies on rules that establish obligations for providers and users depending on an AI system’s level of risk.
  • Despite clear differences between the EU and UK’s approaches to AI regulation, there are points of overlap and intersection.
  • AI governance and regulation is an area that is in desperate need of international cooperation, as AI development and advancement are global issues. 

Over the past few years, some of the brightest minds in the world have been thinking about how to regulate artificial intelligence (AI). This is a tremendously exciting time to be alive because we are at a point at which we are starting to see the results of their efforts. 

We are at the very beginning of the journey to regulate AI, where several national and multi-national initiatives are starting to make an impact in the real world. At home, we have had the publication of the UK Government’s response to the ‘A pro-innovation approach to AI regulation’ consultation. And in the EU, the European Parliament passed the EU AI Act, which is the world’s first comprehensive AI law. Today, I want to take you on a journey through the continent and tell you a little bit about the EU AI Act and the points of intersection with the UK approach.

How does the EU AI Act work?

The EU AI Act, which is a 450+ page document, classifies AI systems according to risk. We have ‘minimal-risk’ AI systems, which the EU claims are the majority of AI applications currently available in the Single Market. These include AI-enabled video games and spam filters, and the EU AI Act stipulates that they are free to function as they are. We also have ‘limited-risk’ AI systems, examples of which include chatbots (though not ChatGPT and others like it, which fall under the generative AI guidance). ‘Limited-risk’ AI systems are subject to light transparency obligations under the Act, such as developers and deployers ensuring that end users are aware that they are interacting with AI. Then we have ‘high-risk’ AI systems, and the vast majority of the text in the EU AI Act is about them. There is a lot of detail on how a system might be classified as ‘high-risk.’ When an AI system is classified as ‘high-risk,’ its providers face a fairly long list of obligations, including establishing a risk management system and a quality management system. Finally, there are ‘unacceptable-risk’ AI systems, such as social scoring systems, which are prohibited in the Union altogether.
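The four tiers described above can be sketched as a simple lookup. This is an illustrative simplification only, not the Act’s actual classification procedure: the tier names and example systems follow the text above, while the one-line obligation summaries are my own shorthand.

```python
# Illustrative sketch of the EU AI Act's four risk tiers, with the example
# systems mentioned in the text. A simplification for exposition only.
TIER_EXAMPLES = {
    "minimal": ["AI-enabled video game", "spam filter"],
    "limited": ["chatbot"],
    "high": ["system meeting the Act's high-risk criteria"],
    "unacceptable": ["social scoring system"],
}

def obligations(tier: str) -> str:
    """One-line summary of the obligations attached to each tier (simplified)."""
    summaries = {
        "minimal": "free to function as they are",
        "limited": "light transparency obligations (disclose AI interaction)",
        "high": "provider obligations, incl. risk and quality management systems",
        "unacceptable": "prohibited",
    }
    return summaries[tier]
```

The sliding scale is the key design point: the same system attracts heavier obligations purely as a function of the tier it falls into.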

You might hear some buzzwords that describe the European approach to AI regulation. One of them is ‘rules-based,’ which simply means that the EU AI Act relies on new rules which create obligations for providers and users depending on an AI system’s level of risk. Another term is ‘statutory,’ because the Act introduces new legislation and heavy penalties. Non-compliance with the EU AI Act will lead to fines ranging from €7.5 million or 1.5% of global turnover, to €35 million or 7% of global turnover, depending on the infringement and the size of the company. For a company like Google or Microsoft, 7% of global turnover can be in the billions. Finally, you might also hear the EU approach to AI regulation described as being ‘horizontal.’ Horizontal legislation means that the Act applies across all the AI systems placed or used within the EU, regardless of the sector in which they are used. 
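To make the scale of the penalties concrete, here is a hedged sketch of the bracket arithmetic described above. The Act expresses each fine as a fixed euro amount or a percentage of global turnover; which figure applies depends on the infringement and the size of the company. Taking the higher of the two for a large company is an assumption made here for illustration.

```python
# Sketch of the penalty arithmetic: each bracket pairs a fixed euro amount
# with a percentage of global turnover. Taking the higher figure (assumed
# here for a large company) shows why fines can reach the billions.
def fine_cap(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    """Maximum applicable fine for a bracket, taking the higher figure."""
    return max(fixed_eur, pct * global_turnover_eur)

# Top bracket (EUR 35 million or 7% of global turnover) applied to a company
# with EUR 300 billion in global turnover: 7% of EUR 300bn is EUR 21bn.
large_company_fine = fine_cap(35e6, 0.07, 300e9)
```

For a small firm the fixed amount dominates; for a company the size of Google or Microsoft, the percentage term does, which is exactly the point made above.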

How does the UK’s approach differ from the EU’s?

When the EU and the UK started to work on their approaches to AI regulation, they seemed to be taking two diametrically opposed views. Rather than opt for a ‘rules-based approach’, like the EU, the UK decided to take a ‘principles-based approach’ which has five core principles underpinning it. These are (1) safety, security and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. Unlike the EU, the UK did not introduce any new legislation and did not put the five principles on a statutory footing. Finally, the UK went for what we call a ‘vertical approach’ to AI regulation, relying on the expertise of existing regulators and their deep sectoral knowledge to tailor the implementation of the principles to the specific context where the AI system is used. 

Common ground

Now, I want to challenge the notion that the two approaches are diametrically opposed because, however different they seem, there are points of overlap and intersection, and as time passes, we see them slowly converging towards a healthy middle. 

What are the points of commonality? If you look at the ‘rules-based’ and the ‘principles-based’ approaches, there is a common denominator between them, which is that both approaches are risk-based. The level of risk provides the sliding scale that determines the extent to which the rules or the principles apply. 

Regarding new laws, it is true that at the time of writing, the EU is bringing in new legislation and the UK is not. However, the UK Government is signalling that legislation might follow. In the ‘A pro-innovation approach to AI regulation’ policy paper, the UK Government's stance is that it “will not put these principles on a statutory footing initially.” The word ‘initially’ is a clear signal that legislation may follow in the UK as well.

Something that we knew from the very beginning is that a horizontal approach to AI regulation misses out on the sectoral nuances of AI design, development, and deployment, while a vertical approach misses out on the coordination mechanisms that a centralised approach brings. Despite going for a horizontal approach, the EU did concede to the addition of some sector-specific guidance in the EU AI Act, while the UK, despite going for a vertical approach, is building a centralised function to ensure regulatory coherence and to create mechanisms for regulatory coordination and oversight.

The other large point of commonality between the EU and the UK’s approaches to AI regulation is standards. I think this matters an awful lot, because the successful practice of both approaches relies heavily on AI standards. 

Regardless of what your views are of the UK, the EU, and what their relationship should be, AI governance and regulation is an area that is in desperate need of international cooperation. AI development and advancement are global issues. If countries follow their own path when it comes to AI governance and regulation, not only will this lead to a fragmented and inefficient market, but it will also fail to prevent current and future harms linked to AI innovation. I believe that the world should – and can – come together to address these challenges.