DOI: https://www.doi.org/10.53289/KXUX6804
Professor Geraint Rees FMedSci is Pro-Provost (Academic Planning), Pro-Vice-Provost (Artificial Intelligence) and Dean of Life Sciences at UCL. He is UCL’s strategic lead for AI and leads the development of UCL’s ‘AI for People and Planet’ strategy. As a Director of UCL Business and a former Senior Scientific Advisor at DeepMind, he is also deeply involved in commercialising AI technologies. He shapes training of the next generation of AI healthcare expertise as co-Director of the AI-Enabled Healthcare Systems Centre for Doctoral Training at UCL.
Delivering strategic advantage through the National AI strategy will depend on us thinking more about people than about the technology. At the start of the pandemic, we were told that AI would change the world and I am confident it still will.
Despite the outstanding efforts of staff in adapting to deliver teaching and learning online, students have been clear that they would like to return to face-to-face education and shared campus experiences. Advanced technology and innovative blended approaches will remain part of their education, but students also demand human interaction in our great seats of learning.
It is clear that artificial intelligence and associated technologies are a vital part of our shared future. Applied mathematics and computer science have had a crucial role in addressing some of the consequences of our enforced isolation. Yet that technology comes into existence on a planet already populated by human societies. I would echo the OECD statement: “AI should be for people and planet.” UCL indeed makes this insight the centrepiece of its AI strategy, seeking to position AI as a force for good in the world through considering the human dimension, inclusion and diversity.
A blended approach to the future does, though, have implications for the kind of workforce that will be needed. Today, the conventional means of delivering machine learning and its benefits directly to individuals involves a browser and a computer screen. In such an environment, the answer to any question might seem to be: we need more software engineers. But the specific environments in which humans live and work are much messier and more uncontrolled, and mastering successful interactions often requires domain-specific knowledge. Think of going to the doctor or to hospital, for example: if the answer to the question is more software engineers, then we have not properly understood the question.
Many disciplines
Successful delivery of the benefits of AI in every sector will therefore depend on many disciplines. It is necessary to recognise the central role of the Arts and Humanities in understanding and interpreting human experience, and the role of the Social Sciences in helping to shape how AI might fruitfully interact with people in society.
For example, medical applications will clearly benefit from structured interaction between doctors, healthcare professionals, computer scientists and software engineers. A project by Google DeepMind on acute kidney injury, in which the condition is identified through longitudinal time-series analysis of healthcare data, addresses what medical professionals know to be one of the most common causes of unexpected deterioration in a patient's condition in a general hospital anywhere in the world. It provides an algorithm with general applicability. And it came about not by chance, but because medical professionals worked together with DeepMind in a systematic and structured way.
In bringing together the required skills across different sectors, there are two obvious approaches: either train a single individual in both areas, or bring together individuals with complementary skills. However, training clinician computer scientists is still in its infancy, and individuals with these skills remain rare. The alternative approach is not just about putting people in the same room and hoping they get on with it, though. Universities, as well as other bodies, are improving their ability to create interdisciplinary dialogue.
Yet there are risks. In the USA, electronic healthcare record systems have been deployed rapidly over the past decade. Unfortunately, this has not created a technological nirvana of elegant, unobtrusive and effective data capture from healthcare consultations. Instead, a much more challenging situation has emerged: rather than making healthcare easier and more effective, many doctors feel trapped behind their screens, spending about two hours on computer data entry for every hour spent face-to-face with the patient. That should not be the future of AI.
Bias
Fairness and bias are topics of enduring interest in human societies. It is surprising that the discipline of artificial intelligence has only recently recognised that it has a significant problem here. Machine learning systems based on historical or incomplete data have duly learned to produce unfair results. Examples include corporate HR tools that are prejudiced against women, as well as gender and dialect bias in automated captioning. These are shocking examples, but they hide even deeper challenges as AI progresses to consider complex issues like fairness in healthcare.
Both model performance and healthcare outcomes depend in part on an unknown combination of biological, environmental and economic factors. Indeed, some difference in modelled performance across a specific group may even be desirable: if a particular ethnic group is more susceptible to a specific disease, for example. Unlike the earlier examples, a major problem affecting this sort of machine learning is the absence, in many (perhaps most) cases, of any reliable ground truth. Unlike other areas of machine learning, where data can be labelled by humans with a high degree of accuracy, medical diagnoses are fraught with uncertainty. Indeed, some conditions essentially exist as social constructs based on constellations of symptoms, whose underlying causes are not fully known, or agreed, and change over time.
Such challenges may not be overcome by a particular ethical code. Instead, they are conceptual and fundamental. So the development of AI is not just about ethical frameworks and conceptions of how we might address bias. We might wish to invest in R&D to develop agreed frameworks that explore, quantify and correct model performance across particular populations. How that correction is applied is fundamentally a question about what is fair, rather than a computable function.
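The kind of framework envisaged here, one that explores and quantifies model performance across particular populations, can be sketched in a few lines: disaggregate a performance measure (sensitivity, say) by group and surface any disparity for human judgement. The data, group labels and numbers below are entirely synthetic and illustrative, not drawn from any real system.

```python
# Illustrative sketch: quantifying model performance per population group.
# All records here are synthetic; groups "A" and "B" are hypothetical labels.

def group_sensitivity(records):
    """Compute per-group true-positive rate (sensitivity) from
    (group, actual, predicted) triples, where actual/predicted are 0 or 1."""
    stats = {}
    for group, actual, predicted in records:
        positives, hits = stats.get(group, (0, 0))
        if actual == 1:          # count only true positive cases
            positives += 1
            hits += predicted    # 1 if the model caught it, 0 if missed
        stats[group] = (positives, hits)
    # sensitivity = detected positives / actual positives, per group
    return {g: hits / pos for g, (pos, hits) in stats.items() if pos}

# Synthetic example in which the model misses more positives in group "B":
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 1, 0), ("B", 0, 0),
]
rates = group_sensitivity(records)
```

Here the framework would report a sensitivity gap between the two groups. Whether, and how, that gap should be corrected is, as argued above, a question about what is fair rather than a computable function.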
The AI roadmap sets out how AI can benefit every sector in every region of the UK. No organisation can undertake this alone. Rather, a complex combination of infrastructure, expertise and entrepreneurship is needed across the AI ecosystem. That does not happen by accident: it happens in particular locations and in response to particular sets of incentives.
Partners
This happens again and again in cities. My own university was founded in London in 1826, at the same time as other European institutions, amid a huge flowering of talent and activity in the natural and life sciences. Every region of the UK contains at least one world-leading comprehensive university: an anchor partner in its city for jobs and independent innovation, deeply embedded in its local environment and culture. These universities have a crucial role in bringing together the different elements that will enable us to compete on a global stage.