Defining and teaching AI literacy in education

  • 17 June 2024
  • Education, Technology
  • Jonathan Lloyd

Earlier this year, at a Foundation for Science and Technology event titled "Can Artificial Intelligence be Regulated and if so, How?", AI's role in education emerged as a key point of discussion. Panellists, including Professor Sana Khareghani, described the current state of AI in education as the 'Wild West', a term that underscores the urgent need for structured guidelines, definitions, and understanding. The relevance of AI literacy within school contexts is growing rapidly, a sentiment echoed by the results of a recent LinkedIn poll I conducted: 87% of respondents agreed that schools should teach AI literacy, 2% disagreed, and 11% asked for more detail. The comments were equally insightful: many advocated compulsory AI literacy education, while others questioned what the arguments against its inclusion in curricula might be.

Defining AI literacy is crucial: without a clear definition, we cannot teach it effectively or gain the most value and understanding from it. Failing to define and teach it properly could also have social consequences: “In the near future, AI literacy or a lack of it and what facets of AI knowledge have been taught may turn out to be a critical social issue” (Rizvi, Wait, and Sentence, 2023). Defining AI literacy, however, remains the first challenge. As Philip Colligan, CEO of the Raspberry Pi Foundation, noted in a blog post in October 2023: "The first problem is defining what AI literacy actually means."

So, what is AI literacy?

In this blog, I analyse definitions of AI literacy drawn from wider reading, trace the history of the term, consider the influence of soft power, and explore how we can arrive at a more comprehensive definition.

Charles Logan, a doctoral candidate in Learning Sciences at Northwestern University, recently published a paper, "Learning About and Against Generative AI Through Mapping Generative AI's Ecologies and Developing a Luddite Praxis" (2024), in which he provides examples of definitions of AI literacy. Tracing the emergence of the term, Logan notes that it was first used by Konishi (2015) in her work on the Japanese economy: "AI literacy is concerned with identifying new situations for applying AI, focusing on monetising often very personal artifacts." An example might be recording a recipe as digitised text for a cooking robot. Another definition comes from Kandlhofer et al. (2016): "AI literacy allows people to understand the techniques and concepts behind AI products and services instead of just learning how to use certain technologies or current applications."

As Logan (2024) points out, these definitions are driven by "speculative economic trends," and he finds the limitations of this focus "troubling," citing Amazon’s $10 billion loss on Alexa as an example. Moreover, focusing solely on economic aspects, without considering social, environmental, and political factors, seems insufficient.

Beyond this, even the term 'literacy' is, according to Logan (2024), a "contentious term laden with power," yet literacy of any kind should be accessible to everyone. He goes on to cite Long and Magerko (2020), who define AI literacy as: "a set of competencies that enables individuals to critically evaluate AI technologies; communicate and collaborate effectively with AI; and use AI as a tool online, at home, and in the workplace."

In my view, there is scope to discuss and define the term in a school and teaching context, and having this conversation aids the process. There are many further questions to address, such as what the curriculum should contain, how it is delivered, and how to ensure gender parity, diversity, equipment, and access (Rizvi, Wait, and Sentence, 2023). If we don’t engage in this work and define and design AI literacy ourselves, technology companies will. In fact, they already are. Take this example from Logan (2024): "Common Sense Education publishes an AI literacy curriculum for grades 6-12; we must consider how its parent company Common Sense Media's partnership with OpenAI (Common Sense Media, 2024) may be shaping how GenAI is framed in the lesson plans. An AI literacy that fails to examine how BigTech influences students, educators, and administrators allows corporations to exert a form of soft power through embedding preferred versions of digital literacy that lack analyses of power" (Pangrazio & Sefton-Green, 2023).

The 'soft power' exerted by technology companies is something digital leaders and educators should not be naive about. We need to educate ourselves to recognise this influence and to be clear about what we mean by AI literacy. This awareness will enable us to teach AI literacy responsibly and ethically, ensuring a comprehensive understanding that encompasses its historical, geographical, mathematical, scientific, and technological foundations. To achieve this, we must also prioritise our own learning as educators. As Colligan puts it: “If we’re serious about AI literacy for young people, we have to get serious about AI literacy for teachers.”

There’s a lot to play for, and I am both worried and excited.


Jonathan Lloyd is Head of Digital Learning and AI at The Pilgrims’ School, UK. He is also an independent researcher on virtual schools and publishes a newsletter about digital learning at Class Futures.