CLAI talks programme

Our online, bi-weekly CLAI-talks continue in 2024. The full programme will be continually updated here.

The Center for Language Generation and Artificial Intelligence (CLAI) is happy to announce our bi-weekly talks of 2024. The talks, given by esteemed associated researchers, are open to the public and take place on Thursdays via Zoom. They provide valuable insights into cutting-edge research and developments in the fields of language generation and artificial intelligence.

Programme schedule:

January 11

Cathrine Hasse and Maja Hojer Bruun will present a recent paper, written with student helper Kasper Ørsted Knaap, entitled "ChatGPT as collaborator and opponent in education":

In this paper, we introduce the concept of Relational Socratic Ignorance (RSI) within a framework of educational anthropology by connecting it with students’ preceding learning when using ChatGPT. In a project funded by Aarhus University, we have developed a new ethnographic method for exploring how students utilize their learning potentials when engaging in dialogue with ChatGPT. We conducted workshops with 11 participants, documenting their prompt histories and comparing them to interviews and written papers throughout the process. This allowed us to examine how students learn when interacting with ChatGPT, given their diverse backgrounds of preceding learning.

January 25

No talk

February 8

Cancelled

February 22

Jens Kaas, Alexandra Institute, will present a talk on "LLMs in Practice and Creating Danish Models":

ChatGPT has taken the world by storm. Some are scared and think that AI will take over the world and make humans redundant. In this talk, I'll discuss AI and language models in a more down-to-earth way and show how they can be used in practice. I will also explain why it makes sense to develop a Danish-focused language model.

March 7

Arthur Hjorth, Aarhus University, will present the talk "AUFF-project: Trust, Revisited":

In this talk, I will present the new AUFF NOVA project, Trust, Revisited, including the planned studies, hypotheses, and LLM-focused design work. Background: The release of ChatGPT and OpenAI’s Large Language Model (LLM) API, enabling anyone to generate massive amounts of text indistinguishable from human-generated text, came during an unprecedented drop in the general public’s trust in what we read in general and about science in particular. LLMs can engage in conversations with people through chatbots, adapting their responses to the topic and tone of the conversation and cultivating trust and emotional connections, which makes them more persuasive and morally corruptive. The ease with which they can be programmed to respond automatically in newspaper comment sections and on social media has the potential to create adaptive and trustworthy-seeming misinformation at a scale never seen before. The confluence of these technical and social developments creates an urgent need to better understand whether and how the adaptive nature of LLM-based chatbots can engage with humans in trust-cultivating ways – for good and for bad – and potentially change attitudes, knowledge, and behaviors. Trust, Revisited does so by "revisiting" existing, non-AI-based studies on people’s attitudes, knowledge, and behavior with online experiments using chatbots powered by LLMs, thereby working from established and comparable baselines to explore trust cultivation and LLMs.

March 21

Tina Paulsen Christensen and Helle Dam Jensen will present the talk "AI and translation: Levels of translation automation in the translation industry and in translation education":

Translation technology that attempts to automate the translation process, partially or completely, is being developed at high speed in these years. In the translation industry, control of the translation process is increasingly being transferred from translators to computers, and in society at large, free online machine translation (MT) systems and large language models are now an integrated part of many people’s digital practices. This situation creates new challenges for translation education and calls the competence profile of future translators into question.

In this presentation, we present examples of translation tools (e.g. Translation Memories and Machine Translation) used in the translation industry today, taking as our point of departure a taxonomy of six levels of translation automation. Building on this, we will explain how we have integrated recent developments in translation automation into our BA and MA programmes in business communication, drawing on a Machine Translation Literacy competence model.

April 4

Peder Hammerskov, DMJX, will present the talk "The AI Newsroom":

Delve into the role of generative AI in modern newsrooms. We'll explore its applications in content creation, the ethical considerations of AI-driven journalism, and the balance between efficiency and journalistic quality. The talk also highlights the future potential of AI in enhancing investigative journalism.

April 18

Lindsay Weinberg, Purdue University and Aarhus Institute of Advanced Studies (AIAS) Fellow, will present the talk "AI and the Politics of Refusal":

AI tools are being integrated into higher education at a rapid pace, often without time for university communities to meaningfully deliberate about whether these tools should be used for teaching, learning, or university-related decision-making. In this talk, we’ll consider under what circumstances (if any) educators should refuse to engage with AI tools. Is refusal even possible, and if it is, is it desirable? Drawing from feminist thinkers who imagine refusal as a way to open up and insist on a radically different future, this talk argues that it's crucial for educators to have the power to refuse the use of unethical AI tools on their campuses. A collective capacity to exercise refusal is needed to counter the corporate consolidation of control over the terms and priorities of AI research and development, and over higher education in the age of the neoliberal university.

May 16

Ulf Dalvad Berthelsen, Aarhus University, will present the talk "Do Large Language Models Have a Disciplinary Voice?":

A comparative corpus-based study of GPT-4’s ability to reproduce disciplinary voice in AI-generated linguistic prose:

The purpose of our study is to uncover the extent to which generative AI models - with GPT-4 as an example - can reproduce disciplinary voice in academic prose written in Danish. As these models are trained on vast amounts of natural language, we must, from a functional linguistic point of view, expect that the genre and register variation we find in natural language is reproduced to some extent in the auto-generated output. We are particularly interested in the phenomenon of disciplinary voice, partly because it is assumed to be a difficult feature to reproduce, and partly because it is a relatively well-described phenomenon that can be investigated quantitatively through analysis of the surface structure of texts. We focus specifically on three aspects of disciplinary voice: stance, engagement, and subject-specific vocabulary. We investigate the phenomenon quantitatively through a corpus-based comparative study in which we compare a corpus of linguistic articles written in Danish with a corpus of AI-generated academic prose with linguistic content.

We invite you to mark your calendars and join us for these sessions. 

For more information and updates on our bi-weekly talks, please check our website or follow us on LinkedIn or Twitter.

For media inquiries and questions, please contact: pascale.moreira@cc.au.dk