The UK AI safety agenda has gone down a rabbit hole. This was the statement that opened the debate. Research into AI safety is incredibly important, but it feels as though the UK Government has focused on it exclusively while neglecting other important aspects. We must look at a variety of levers to manage the different risks that new AI technology could pose.
Should we reframe regulation before elections, and who should determine the personality of an AI model? We should follow the engineering philosophy that we know so well and design responsibility in from the outset, rather than bolting safety features on later.
UK regulation might not stop individuals going elsewhere, to a country with a looser, more fast-and-loose regulatory approach. These are technologies that do not respect borders. They require interoperable rules and regulations that states sign up to. This is where the UN comes in. This is a challenge that is bigger than any individual country and one that needs us all to come together.
We need to talk to the public more about their views on AI, and this matters for how Artificial Intelligence will affect democracy. Who decides on fairness? It is incredibly important that the answer is democratically elected governments, with regulators enforcing their choices. Otherwise, there is a risk that fairness will be decided by the tech companies that build the models. AI software can embody a world model, acquired through its training data or safety training. Standards need to be clearly defined according to the way that we (as a democracy) choose, and those who build and design models must be held accountable.
Small steps are important. There are practical things we can do now, before answering philosophical questions about more advanced technology - such as legislating against deepfakes.
Systems for governing how AI models make decisions can be very precise in controlling and monitoring bias, which may paint a more positive future. We can use technology to lay some bias bare, but reservations remain about our ability to correct bias in models. We are not there yet.
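To illustrate what "laying bias bare" can mean in practice, here is a minimal, purely hypothetical sketch of one common fairness metric, the demographic parity difference: the gap in positive-outcome rates between two groups. The function name, the toy loan-approval data, and the group labels are all illustrative assumptions, not anything discussed by the panel.

```python
# Hypothetical sketch: measuring demographic parity difference,
# one simple way to make bias in a model's decisions visible.
# All names and data below are illustrative assumptions.

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-outcome rates between two groups.

    predictions: list of 0/1 model decisions (1 = favourable outcome)
    groups: list of group labels, one per prediction (exactly two groups)
    """
    rates = {}
    for g in sorted(set(groups)):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Toy loan-approval decisions for two groups, A and B
preds = [1, 1, 0, 1, 0, 0, 1, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, grps))  # 0.5 (A: 3/4, B: 1/4)
```

Measuring a disparity like this is far easier than correcting it, which is exactly the gap the panel identified.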
In the absence of the right regulatory landscape, some people are using data poisoning (the deliberate and malicious contamination of data to compromise the performance of AI and ML systems) to protect themselves. For example, entire companies have been created to 'poison' the IP of creative artists' online assets so that their work cannot be reproduced.
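A minimal sketch of the underlying idea, under strong simplifying assumptions: perturb an image's pixel values slightly so that a human viewer sees no change, while a model that scrapes the image ingests systematically altered data. Real protective tools are far more sophisticated; the function and parameters below are purely illustrative.

```python
# Hypothetical sketch of the data-poisoning idea described above:
# add small, bounded pseudo-random shifts to pixel values so the image
# looks unchanged to humans but is subtly different to a scraper.
# Illustrative only; not how any real protection tool works.
import random

def poison_pixels(image, strength=2, seed=0):
    """Return a copy of a grayscale image (list of rows of 0-255 ints)
    with each pixel shifted by at most `strength`, clamped to 0-255."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [
        [max(0, min(255, px + rng.randint(-strength, strength))) for px in row]
        for row in image
    ]

original = [[120, 121], [119, 122]]
protected = poison_pixels(original)
# Each pixel moves by at most `strength`, imperceptible to a viewer
# but a systematic perturbation of what a model would train on.
```

The design choice here, bounding the perturbation, is what makes the trade-off work: the change stays invisible to people while still contaminating the training signal.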
Do we wait to see what unintended consequences come out of the use of AI in education and by young people? One panelist called this area the 'wild west' but warned that schools should take control now, putting in place the right principles and guidelines for how these technologies should be used by students and teachers (they are probably already being used). These tools should be embraced, but handled with care. Children are early adopters and proficient users of these technologies, but they do not understand the risks of AI and are often left out of decision making. Concern about AI use in research and its impact on peer review was shared across several panelists. Some optimism about the benefits of AI for the education system followed.
Further Information
Implementing the UK’s AI Regulatory Principles
Consultation response to the Government white paper on AI regulation
Artificial Intelligence in schools
Data poisoning and the creative sector