How an AI-powered lion became a teaching tool
The mascot for King’s College London helped Andrés Gvirtz to teach a class, with a little help from generative artificial intelligence.
Reggie the Lion, the beloved mascot of King’s College London (KCL), turns 100 this month, and earlier this year he became the muse for my first foray into using artificial intelligence (AI) in the classroom.
I decided to use Reggie and this year’s most talked-about technology to create illustrations for my lecture materials. I’m based in KCL’s business school, where I’m an assistant professor in marketing technology and innovation. Previously I would have relied on generic stock images, but on this occasion, I used generative AI — specifically text-to-image models that take natural language as input and produce images as output — to breathe life into bespoke illustrations, creating tailor-made content for my students.
In my opinion, there’s still lots of uncertainty and apprehension about AI in higher education, particularly among teachers. Many are unsure about how to harness its potential. In a survey of researchers earlier this year, Nature learnt that more than half thought that generative AI would make it harder to assess student learning.
After playing around with AI tools, I soon made Reggie the Lion a key part of my teaching slides. When I discussed a psychological study as part of my curriculum on consumer behaviour, there was Reggie, wearing an electroencephalogram headset, which measures the electrical activity of the brain. When we covered retail and merchandise placement, Reggie was strolling through a grocery store in search of the best-placed products.
Using generative AI to create imagery can raise complications in copyright law. In my case, KCL holds the rights to its own mascot, and because I’m using these images for internal teaching purposes, I’ve yet to encounter any copyright issues. (Nature doesn’t allow the use of generative AI to create imagery for its pages.)
My use of generative AI might not make headlines as dramatically as students using ChatGPT to write coursework, but we rely heavily on slides in higher education and use imagery to amplify their content. I argue that optimizing visuals to make them more personal and relatable is a compelling use case for generative AI.
What I use
I’ve experimented with OpenAI’s DALL-E, Stability AI’s Stable Diffusion and Midjourney’s text-to-image models. When I started playing around in the area in 2022, some of these tools still required coding and knowledge of computer infrastructure to get them to work; now they simply require a log-in.
All of the models can create photorealistic images of humans, but they struggle more with imagery that wasn’t in their training data: a lion wearing a red T-shirt, for example, can be a big ask. After failing to produce consistent images, I started to test various illustration styles. I’ve had the most success with an approach that mimics the computer-animation style of Pixar. What works best depends on the desired content and context, so experimentation is required.
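For readers who want to try something similar, the sketch below shows the kind of style-specific prompt I mean, sent to a text-to-image model through OpenAI’s Python interface. The model name, prompt wording and output handling here are illustrative assumptions rather than my exact set-up; most providers also offer a web interface that needs no code at all.

```python
# Minimal sketch: generating a style-specific illustration with a text-to-image
# model via OpenAI's Python SDK. The prompt and model choice are illustrative,
# not the exact ones used for my slides.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "A friendly lion mascot wearing a red T-shirt, strolling through a grocery "
    "store and studying shelf placement, rendered in a Pixar-style 3D "
    "computer-animation look, bright and clean, suitable for a lecture slide"
)

response = client.images.generate(
    model="dall-e-3",   # text-to-image model; swap for another provider as needed
    prompt=prompt,
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image, valid for a limited time
```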
For the students and me, the ‘AI Reggie campaign’ wasn’t just a creative exercise; visualizing abstract concepts through personalized AI-generated art made students’ enthusiasm and attention spike. As an instructor, I found that the images sparked joy and curiosity, and bridged the gap between individual students’ journeys and the rich tapestry of our institution’s history. The campaign also piqued students’ interest in the technology itself, with many exploring the tools on their own.
I plan for Reggie to become a more active part of student life. The original images personalized the experience, made it memorable and brought a sense of spirit and connection to the university mascot, according to a short survey of my students. For me, the next step is to co-create with them. I hope that this will help to strengthen my lecture materials.