Artificial Intelligence

DOI: https://www.doi.org/10.53289/SMTI1488

How the UK set out to regulate AI

Volume 23, Issue 8 - June 2024

Professor Sana Khareghani

Sana is Professor of Practice in AI at King's College London and AI Policy Lead for Responsible AI UK, a £31 million grant programme focused on seeding and connecting the international ecosystem for responsible AI research and innovation.

Summary:

  • Adoption of AI technologies can only happen with clear and appropriate regulation in place. 
  • For the AI regulation white paper, rather than coming in hard with rules and regulations, the UK government wanted to work hand in hand with existing regulators to find the right approach.
  • Regulation needs to be a coordinated effort.
  • The Digital Regulation Cooperation Forum and the core principles were designed to ensure that AI is used safely and is transparent and technically secure. The Alan Turing Institute has worked alongside standards agencies and the UK Government to create the AI Standards Hub.
  • There are still many areas that the Government needs help with regarding AI and there is room for more expertise on the matter.

When the UK government was setting out to regulate AI, I was Head of the Office for AI, and during that time we drew up the outline framework for the AI regulation white paper. We realised that we needed to spend a lot of time with the regulators themselves, as well as with other experts in the ecosystem. We have an incredibly rich regulatory landscape here in the UK, and the regulators are experts in their areas. The Information Commissioner's Office (ICO), for example, has been leading the way in thinking about data, the fuel for AI, and many other regulators, such as the MHRA, are looking at the applications of AI and how these manifest within their sectors. It became apparent that there are several challenges when looking to regulate AI, including a lack of clarity on remit, consistency and approach, to name a few.

How did the UK begin defining its regulation of AI?

I think it is important to put some context around the ‘pro-innovation framework’. We cannot have adoption of AI technologies without appropriate regulation in place. There will be no innovation in companies if we don't have clear guidelines, because nobody wants to take the risk of being the one who made a change without the regulator on their side. We saw examples of this in the finance sector, where the government had to create financial sandboxes to encourage innovation.

When we were preparing the bones of the white paper, we had to ask: how do we put the right guardrails in place to allow innovation and adoption of AI technologies to happen? Our conversations with regulators and the broader ecosystem of advisors led us to a context-specific approach, i.e. one based on where applications land within a specific sector. The approach also had to be risk-based (similar to the EU AI Act); coherent, meaning simple, clear and predictable; and proportionate and adaptable. So rather than coming in hard with rules and regulations, the UK government wanted to work hand in hand with the regulators to find the right way to approach these questions. Lastly, regulation needed to be a coordinated effort for two main reasons: first, so that it is clear where one regulator's remit ends and another's begins; and second, so that regulators can help each other along the journey.

With this in mind, the Digital Regulation Cooperation Forum and the core principles were created. The principles were designed to ensure that AI is used safely; is technically secure and does what it says on the tin; and is transparent and explainable. They also cover fairness and accountability, as well as routes to redress and contestability. None of this was done in a vacuum: it was done in consultation with the ICO and many other regulators and organisations that helped create the AI regulation white paper.

The broader ecosystem outside the AI regulation landscape includes organisations such as the Alan Turing Institute, which has worked alongside our standards agencies and the UK Government to create the AI Standards Hub. We also have the AI assurance roadmap, which is part of the arsenal that helps organisations adhere to the guardrails.

I was no longer in government when the AI regulation white paper was published. The government's response to the white paper is available; it shares the number of responses that were received, as well as the next steps the government is taking forward, including funding for the regulatory ecosystem.

Just before the publication of the white paper response, the government also hosted the first ever AI Safety Summit at Bletchley Park, with a focus on the safety and security of AI systems.

AI regulation is a big, monolithic challenge, and we need to start somewhere. The UK has started on the application side, the EU is taking a different approach, and the US is looking at the whole thing. There is plenty of work to be done across all of this; it doesn't matter where we start, and I suspect the answer will land somewhere in the middle.