
To demystify AI for your students, use performance

Updating Mary Shelley’s Frankenstein for the AI era helped students understand the opportunities and limitations of generative AI in an engaging way. Here’s how to use performance as pedagogy

Royal Holloway, University of London
8 Oct 2024

Performance as a pedagogic model can be an engaging way to demystify AI and demonstrate ways in which it can support educational enquiry. Recently, we performed Perfectly Frank, a theatrical lecture-demonstration on how different generative artificial intelligence (AI) platforms function and how to engage with them creatively and ethically. Designed around Mary Shelley’s 1818 novel Frankenstein, Perfectly Frank saw us animate a new creature together with the audience. 

Instead of using dubiously sourced body parts, as in the novel, Perfectly Frank enlisted the aid of AI. By working with different AI platforms and with suggestions from the audience, we were able to show the possibilities and limitations of engaging with AI, and to raise questions about the ethics and responsibilities of working in this way. 

Here, we’ll outline how we structured Perfectly Frank and what we learned from the process. The performance takes 50 minutes and can easily be repeated by other tutors, or serve as a jumping-off point for performative ways of demonstrating AI opportunities to students.

Setting the scene 

As the spectators entered, we situated them as Dr Frankenstein’s lab assistants. Instead of using exhumed body parts, we asked our audience to donate their bodies to science, and upon entrance, photographed each of them against a greenscreen and recorded them reading four sentences from the novel.

We recapped the events of Frankenstein: the creature has human qualities, such as compassion and a desire for knowledge, but is rejected by society. We then informed the lab assistants/spectators that we’d be building a new creature together: brain, voice and body. 

Building the brain

Using ChatGPT, we created a custom GPT – OpenAI’s term for a tailored version of its chatbot – by clicking “Create” in the top right corner. Our prompt was: “Role-play that you are the creature from Mary Shelley’s Frankenstein. Your knowledge is limited to the text of the novel and books the creature reads: The Sorrows of Young Werther by Goethe, The Parallel Lives by Plutarch and Paradise Lost by Milton.” We then asked the audience what other books or films the creature might have encountered since then, and added those to the prompt. 
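
We built ours in ChatGPT’s web interface, but tutors who prefer to script the “brain” can approximate the same constraint with a system prompt via the OpenAI Python SDK. A minimal sketch follows; the model name, prompt wording and audience suggestions here are illustrative assumptions, not a record of our performance.

```python
# Minimal sketch of the "brain" using the OpenAI Python SDK.
# The performance used ChatGPT's web interface ("Create" a custom GPT);
# this reproduces the same role-play constraint with a system prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "Role-play that you are the creature from Mary Shelley's Frankenstein. "
    "Your knowledge is limited to the text of the novel and books the "
    "creature reads: The Sorrows of Young Werther by Goethe, The Parallel "
    "Lives by Plutarch and Paradise Lost by Milton."
)

# Audience suggestions can simply be appended to the persona.
audience_additions = ["Blade Runner", "The Rocky Horror Picture Show"]  # hypothetical
persona = PERSONA + " You have since encountered: " + ", ".join(audience_additions) + "."

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model would work here
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "Who are you, and what do you know?"},
    ],
)
print(response.choices[0].message.content)
```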

After clicking on “Configure”, we asked the generative AI tool questions, some prepared in advance and others from the audience (a scripted version of this exchange is sketched after the list below). They included:

What would you like to share with an audience who are looking to build a new living creature from scratch?

What do you want us to do to make you feel accepted?

In what ways is the creature from Frankenstein like an AI?
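
The live exchange happened inside ChatGPT itself. Purely as a hypothetical sketch, the same questions could be scripted against the persona, carrying the conversation history forward so the creature stays in character:

```python
# Hypothetical sketch: the performance ran this exchange live in ChatGPT.
# Here the prepared questions are put to the persona in turn, with the
# history carried forward so each answer stays in character.
from openai import OpenAI

client = OpenAI()
persona = (
    "Role-play that you are the creature from Mary Shelley's Frankenstein. "
    "Your knowledge is limited to the text of the novel and books the "
    "creature reads: The Sorrows of Young Werther by Goethe, The Parallel "
    "Lives by Plutarch and Paradise Lost by Milton."
)
questions = [
    "What would you like to share with an audience who are looking to "
    "build a new living creature from scratch?",
    "What do you want us to do to make you feel accepted?",
    "In what ways is the creature from Frankenstein like an AI?",
]

history = [{"role": "system", "content": persona}]
for q in questions:
    history.append({"role": "user", "content": q})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"Q: {q}\nA: {answer}\n")
```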

The audience were able to see the speed at which the “brain” of our creature could reply, but also the tool’s limitations, especially its finite knowledge base – it knew only what we had taught it. Our spectators reflected that while the brain looks sentient at a glance, it is quite limited.

Building the voice

The next step was to build the creature’s voice: a mashup of all the participants’ voices, layered on top of voices the AI had already learned. We edited the participants’ recordings in the audio-editing app Audacity, then fed them into the voice generator ElevenLabs. Using ElevenLabs’ “Voice Cloning” and “Voice Lab” features, we blended the submitted WAV files into one output: a final, synthesised voice. We then copied and pasted the text created by the brain into ElevenLabs, and our finished, amalgamated voice read it out. 
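
We did this step in ElevenLabs’ web interface. For tutors who want to script it, here is a hedged sketch using ElevenLabs’ public REST API: submitting all the participants’ WAV files in a single instant-clone request approximates the blend into one voice. The file names, voice name and sample text are placeholders.

```python
# Hedged sketch of the voice step against ElevenLabs' public REST API.
# The performance used the web "Voice Cloning"/"Voice Lab" tools instead.
import os
import requests

API = "https://api.elevenlabs.io/v1"
HEADERS = {"xi-api-key": os.environ["ELEVENLABS_API_KEY"]}

# 1) Clone: upload the recordings edited and exported as WAV from Audacity.
#    Sending several speakers' files in one request blends them into one voice.
wavs = ["assistant_01.wav", "assistant_02.wav", "assistant_03.wav"]  # placeholders
files = [("files", (w, open(w, "rb"), "audio/wav")) for w in wavs]
resp = requests.post(
    f"{API}/voices/add",
    headers=HEADERS,
    data={"name": "PerfectlyFrankCreature"},
    files=files,
)
voice_id = resp.json()["voice_id"]

# 2) Speak: synthesise the text generated by the "brain" with the new voice.
tts = requests.post(
    f"{API}/text-to-speech/{voice_id}",
    headers={**HEADERS, "Content-Type": "application/json"},
    json={"text": "I ought to be thy Adam", "model_id": "eleven_multilingual_v2"},
)
with open("creature_voice.mp3", "wb") as f:
    f.write(tts.content)
```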

A character’s voice is a defining trait of who they are, and the question of how to give a character their voice becomes especially pertinent once AI is involved. This raised questions for the audience: what do the participants sound like together? How realistic does the creature’s voice sound? Did we give voice to the creature, or did it come from within? Now, with a brain and a voice, we turned to building the body.

Building the body

Using the greenscreen photographs from the start of the performance, we took five still images and uploaded them into the image generator Midjourney using its blend function (command: /blend). This created a kind of palimpsest of the participants’ faces, blended together. This was the body of our creature. It looked androgynous, with feminine features, big hair and a beard. 
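
Midjourney’s /blend runs inside Discord and has no public API, so it cannot be scripted directly. Purely as an illustration of the palimpsest idea – not of how Midjourney’s diffusion model actually works – here is a naive per-pixel average of five stills using Pillow and NumPy; the file names are placeholders.

```python
# Illustrative only: Midjourney's /blend uses a diffusion model, not pixel
# averaging. This naive mean merely shows the "palimpsest" idea of
# overlaying several faces into one composite.
import numpy as np
from PIL import Image

paths = [f"greenscreen_{i}.png" for i in range(1, 6)]  # the five stills
size = (512, 512)
stack = np.stack(
    [np.asarray(Image.open(p).convert("RGB").resize(size), dtype=np.float32)
     for p in paths]
)
average = stack.mean(axis=0).astype(np.uint8)  # per-pixel mean of all faces
Image.fromarray(average).save("palimpsest.png")
```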

We then moved to Runway, a generative video tool that allowed us to add the voice we had created in ElevenLabs. Using the “Lip Sync” function, we animated the mouth of the still image so that it appeared to speak the brain’s words in the amalgamated voice. With the lips moving perfectly in time, our creature felt lifelike. It was becoming alive.

Next, we created a living background. Again in Runway, we played with Caspar David Friedrich’s 1818 painting Wanderer above the Sea of Fog, because we believed it captured the Romantic interest in the Sublime and gave context for our modern creature. Because the audience photographs had been taken in front of a greenscreen, we were able to overlay the creature onto the background using Runway’s greenscreen functionality. Pressing “play”, the creature was finished: mind, body, voice and environment. 
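
Runway performs the keying in-app. For the curious, the underlying chroma-key idea can be sketched in a few lines of NumPy and Pillow; the file names and green-dominance thresholds below are assumptions that would need tuning to the footage.

```python
# Minimal chroma-key sketch of what Runway's greenscreen tool does in-app.
# File names and the green-dominance thresholds are assumptions to tune.
import numpy as np
from PIL import Image

size = (1024, 768)
fg = np.asarray(Image.open("creature.png").convert("RGB").resize(size), dtype=np.int32)
bg = np.asarray(Image.open("wanderer_above_the_sea_of_fog.jpg").convert("RGB").resize(size), dtype=np.int32)

r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
# A pixel counts as "green screen" when green clearly dominates red and blue.
is_green = (g > 100) & (g > r + 40) & (g > b + 40)

# Take the background wherever the screen shows through, the subject elsewhere.
composite = np.where(is_green[..., None], bg, fg).astype(np.uint8)
Image.fromarray(composite).save("creature_in_landscape.png")
```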

We asked the audience: does this mean the creature lives? The students felt that it created a sense of the uncanny – they saw elements of themselves in the palimpsest image. Yet it was not them. It was a useful metaphor for those eerie feelings we get when talking with AI. It feels alive, but it isn’t. Do we even have the words yet for the sense of something being half alive?

It was important for students to find their own ethical limits when it comes to the creative possibilities of AI. Through this performance, they could weigh the magic of these new possibilities against the losses that inevitably result from working in this way. We were keen for students to identify what impact these tools might have on their own creativity, asking them to consider whether AI is the largest plagiarism machine ever created or the ultimate open library of all human creativity. We asked the audience (and our creature) what they thought would happen if AI became sentient, and how we might navigate states of quasi-aliveness.

Students were asked to reflect critically on their own position. Some embraced the new technology; others felt threatened by it. By the end, Perfectly Frank had given each student a feel for the ethical limits of their play and a sense of where it could fit into their own practice. Because it was playful and taught spectators through performance, it was engaging, memorable and, most importantly, fun. 

Will Shüler is vice-dean of education and student experience for the School of Performing and Digital Arts; Chris Hogg is a lecturer in creative digital and social media; and Karim Shohdy is an MA theatre directing candidate, all at Royal Holloway, University of London.

