Artificial Intelligence

Governments and regulators across the globe are considering the rapid changes being brought about by artificial intelligence. There are huge potential benefits, but also significant potential harms. The EU AI Act is the new regulatory framework in Europe, the USA and other major economies are debating similar issues, and there is much discussion at the UN and OECD. In the UK, some aspects of AI are covered by the new Online Safety Bill, and the UK hosted the first global AI Safety Summit in November 2023. On Wednesday 28th February, The Foundation for Science and Technology held an event to explore the challenges of regulating AI and how different jurisdictions approach them. Speakers included: Stephen Almond, Executive Director for Regulatory Risk at the Information Commissioner’s Office; Professor Sana Khareghani, Professor of AI Practice at King’s College London; Dr Cosmina Dorobantu, Co-Director and Policy Fellow for the Public Policy Programme at The Alan Turing Institute; Professor Dame Wendy Hall DBE FRS FREng, Regius Professor of Computer Science at the University of Southampton; and John Gibson, Chief Commercial Officer at Faculty AI.

The debate

The UK AI safety agenda has gone down a rabbit hole. This was the statement that opened the debate. Research into AI safety is incredibly important, but it feels as though the UK Government has focused exclusively on it while neglecting other important aspects. We must look at a variety of levers to manage the different risks that new AI technology could pose.

Should we reframe regulation ahead of elections, and who should determine the ‘personality’ of an AI model? We should follow the engineering philosophy we know so well and design responsibility in from the outset, rather than bolting safety features on later.

UK regulation might not stop individuals going elsewhere, to a country with a more fast-and-loose regulatory approach. These are technologies that do not respect borders. They require interoperable rules and regulations that states sign up to. This is where the UN comes in. This is a challenge bigger than any individual country, and one that needs us all to come together.

We need to talk to the public more about their views on AI, and this is important in terms of how artificial intelligence will affect democracy. Who decides on fairness? It is incredibly important that the answer is democratically elected governments, with regulators enforcing their decisions. Otherwise, there is a risk that fairness will be decided by the tech companies that build the models. AI software can embody a world model, which comes through data or safety training. Standards need to be clearly defined in the way that we (as a democracy) choose, and those who build and design models must be held accountable.

Small steps are important. There are practical things we can do now, before answering philosophical questions about more advanced technology, such as legislating on deepfakes.

Systems that govern how AI models make decisions can be very precise in controlling and monitoring bias, which may paint a more positive future. We can use technology to lay some bias bare, but reservations remain about our ability to correct bias in models. We are not there yet.

In the absence of the right regulatory landscape, some people are using data poisoning (the deliberate and malicious contamination of data to compromise the performance of AI and ML systems) to protect themselves. For example, entire companies have been created to ‘poison’ creative artists’ online assets so that their intellectual property cannot be reproduced.

Do we wait to see what unintended consequences come out of the use of AI in education and by young people? One panelist called this area the ‘wild west’ but warned that schools should take control now and put in place the right principles and guidelines for how these technologies should be used by their students and teachers (they are probably already being used). These tools should be embraced, but handled with care. Children are early adopters and proficient users of these technologies, but they do not understand the risks of AI and are often left out of decision making. Concern about AI use in research and its impact on peer review was shared by several panelists. Some optimism about the benefits of AI for the education system followed.

Further Information

Implementing the UK’s AI Regulatory Principles

Consultation response to the Government white paper on AI regulation

Artificial Intelligence in schools

Data poisoning and the creative sector