Seminar “Didactic Experiences with ChatGPT and More”
Center for Language Generation AI’s seminar on September 8, "Didactic Experiences with ChatGPT and More," brought together a diverse group of researchers who shared their insights into the intersection of Large Language Models (LLMs), AI, and pedagogy.

Last Friday, we were fortunate to have so many great researchers present their practice-based research connected to LLMs and AI in pedagogy. The programme spanned themes from how to introduce LLM-driven tools to humanities students of very different backgrounds and levels of expertise, to imagining the future of global warming through image generation. As Mads Rosendahl Thomsen said in his opening remarks, the seminar focused on pressing issues in pedagogy: experiences in teaching related to LLMs and AI, how to prompt well and get a sense of tools like ChatGPT, but also the ethical and educational issues connected to the use of these tools.
In his presentation "Teaching Co-Creativity with GPT", Joe Dumit shared experiences and reflections from teaching this summer’s AU course, “Artificial Intelligence and Co-creativity”, to a classroom of humanities students with various disciplinary backgrounds. An important aspect of teaching these students was examining how users imagine LLMs, both which metaphors people use to talk about them and how one might visualize workflows.
In his presentation "Didactic shift: AI in the classroom," Ulf Dalvad Berthelsen discussed what teaching looks like now that tools like ChatGPT are used extensively outside the classroom. Where teachers would previously rely on assignments and written production to reflect students’ competence levels, these tools introduce a disruption, a kind of hack, into the assessment process.
This “hack” is a problem whether students choose to “cheat” or not, because the uncertainty is now always present: “The possibility of a hack exists and that’s really the big problem. Now we don’t know if their products reflect their competence level,” said Berthelsen. However, it is possible to integrate what Berthelsen calls a “shift” into the assessment process, where the stage in which students might use such tools outside of class is brought into the classroom and used as an opportunity for learning, having students compare and assess the output in various steps.
In his presentation "3 short stories of generative AI across the educational pipeline", Jacob Sherson outlined experiences from courses with students and corporate professionals. One example is the crea.visions project, in which Sherson and his colleagues have studied how AI tools can help citizens communicate ideas and participate in conversations about larger societal issues, such as climate change. The exhibition of generated images is still open (until this Friday at noon) at Aarhus City Hall.
In their presentation "Experiments with prompting in the classroom", Rebekah Baglini and Arthur Hjorth presented their work on teaching students to use, and refine their use of, ChatGPT, with a special focus on AI detection tools.
What does it mean to be “good” at using generative AI, Hjorth asked, showing how we might think of LLMs as a constellation of cogs, where we can only “turn one handle”, namely getting output by designing our prompts well. Trying out different prompts helps students get a sense of the workings of the model.
The seminar audience could try out the students’ exercises themselves.
Rebekah Baglini continued by presenting the coursework she did with students on trying out and “cheating” AI detection software. Baglini showed how these tools are usually very simplistic, based on two main features: the perplexity of language (how “surprising” sentences are) and its “burstiness” (the variation between sentences). They are easily misled into either reporting that human-made texts are generated, or that generated texts are made by humans. In fact, using “AI cheat detection software is worse than doing nothing”. Instead, Baglini underlines, students should be “invited into realizing the learning objectives - with and without AI tools” by affording them agency rather than creating an adversarial relationship between instructors and students based on mistrust.
Moreover, discriminating between human-written and generated texts is a complex issue that extends beyond “perplexity” and “burstiness”, and is the focus of one of Baglini’s studies with colleagues.
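To give a sense of how simplistic such features can be, here is a toy sketch (a hypothetical illustration, not code from the presentation) of a “burstiness” score computed as the standard deviation of sentence lengths; real detectors would also estimate perplexity with a language model, which is omitted here:

```python
import statistics

def burstiness(text: str) -> float:
    """Crude 'burstiness' proxy: standard deviation of sentence lengths in words.

    Uniform, evenly paced text scores low; text that alternates short and
    long sentences scores high. This is a toy stand-in for the feature
    AI detectors use, not an actual detector.
    """
    # Naive sentence splitting on terminal punctuation.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat here. The dog sat here. The bird sat here."
varied = "Wait. The enormous cat, startled by thunder, bolted across the garden. Silence followed."
print(burstiness(uniform) < burstiness(varied))  # the uniform text scores lower
```

As the example suggests, a human writer who happens to write evenly paced sentences can easily be flagged as “AI”, while a generated text with artificially varied sentence lengths slips through, which is exactly the fragility Baglini demonstrated.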
In the final presentation, "Socratic ignorance and learning with ChatGPT", Maja Hojer Bruun and Cathrine Hasse from the Department of Education shared insights from their SHAPE-funded study of learning in working with generative AI in the context of their programme Future technology, culture and learning. Their study examined students’ very different possibilities for learning in their encounter with ChatGPT, showing how students are affected by the chatbot but also able to bargain with it - or “fight back” - on the basis of their own prior knowledge.