AU professor leads development of new EU code to ensure transparency in AI-generated content
New rules on the labelling of AI-generated content will enter into force on 2 August 2026. A working group established by the European Commission, chaired by Anja Bechmann, professor of Media Studies at Aarhus University, is developing a code of conduct to help content producers comply with the legislation.
Research clearly shows that it is difficult to determine whether videos, images, text, or other forms of content are generated by artificial intelligence. New legislation aims to address this issue.
From 2 August 2026, a number of transparency obligations will take effect, making it mandatory to label AI-generated content. The regulation has been adopted by Denmark and the other EU member states as part of the AI Act, which sets requirements for the responsible use of artificial intelligence in Europe.
The European Commission has appointed Anja Bechmann as chair of the working group tasked with developing a code of conduct to operationalise the legislation and assist content producers in meeting its requirements.
Anja Bechmann is director of DATALAB – Center for Digital Social Research at Aarhus University and conducts research on AI–human collective behaviour and democratic challenges, including misinformation. She has previously collaborated with the European Commission as a member of an expert group on disinformation.
“The intention is clear: it should be easier for ordinary users to recognise when they are encountering AI-generated content. The task of the working group is to propose how this can be achieved,” says Anja Bechmann.
The difficulty for ordinary users in distinguishing synthetic from human-generated content is highlighted in the first annual report on the impact of tech companies on democracy, well-being, and social cohesion, to which Anja Bechmann contributed together with colleagues from the University of Copenhagen and the University of Southern Denmark. The report, published by the Danish Ministry of Digital Government in February 2025, concludes, among other findings, that Danes perform no better than random guessing when asked to identify whether images are generated with or without AI.
Implications for 450 million citizens
The aim of the code of conduct is to ensure that all commercially driven content directed at Europe’s 450 million citizens is clearly declared. As a result, all actors producing content for public audiences for commercial purposes will be subject to the transparency obligations. This includes advertising agencies, media companies, influencers, the gaming and film industries, music services, photographers, search engines, prompt-based systems and chatbots, among others.
“In short, all companies that produce deepfakes in the form of images, video, and audio, or AI-generated text for public dissemination without human editorial oversight and a responsible individual behind it,” explains Anja Bechmann.
During autumn 2025, expressions of interest were collected from large technology companies, businesses, NGOs, and academics wishing to participate in the process. A total of 200 stakeholders have been selected to contribute to the development of the code, ensuring that a broad range of perspectives is represented and that the code’s usability is continuously assessed.
One of the working group’s current tasks is to engage in dialogue with stakeholders and to present draft versions of the code to the European Commission’s AI Board, all 27 member states, and the European Parliament for feedback. The code will propose solutions for how labelling should be made clear and visible and should appear the first time a citizen encounters the content.
“It has almost become the norm to use AI to create content, either fully or partially, and therefore it is not straightforward to develop a model for labelling that works across all contexts, modalities, and media platforms, both online and offline, while also meeting EU accessibility requirements. How should labelling be designed so that it does not unnecessarily create distrust, as existing studies have shown it can? Where should it be placed so that it is visible when users are uncertain and seek clarification? How much information should it contain, given that research shows that too much information leads users to ignore it?” asks Anja Bechmann.
The code of conduct is being developed in parallel with the EU’s guidelines for the legal text, which include important definitions of what must be labelled. For example, should influencers label images enhanced with AI beauty filters? Should product catalogues indicate when AI filters have been used to enhance products? These are among the nuances the working group must clarify. The legal text states that content must be labelled when it is deceptive and presents itself as a reality that it is not. However, questions remain as to whether there should be a threshold for trivial cases, and if so, where that threshold should lie.
A second draft of the code of conduct is currently available. Based on stakeholder feedback, the working group will prepare a third and final version in spring 2026, which is expected to be published in June 2026.
Facts
The working group
Chair: Anja Bechmann, professor of Media Studies and director of DATALAB – Center for Digital Social Research at Aarhus University.
Vice-chair: Giovanni De Gregorio, PLMJ Chair of Law and Technology at Católica Global School of Law and Católica Lisbon School of Law.
Vice-chair: Madalina Botan, associate professor at SNSPA (Bucharest) and coordinator of the Center for Media Studies.
Facts
Labelling of AI-generated content
The AI Act, adopted by EU member states, sets specific transparency requirements for content created by or with artificial intelligence.
Article 50 of the regulation outlines a number of transparency obligations that will enter into force on 2 August 2026. The purpose is to ensure that it is clear to ordinary users when content is a deepfake or consists of AI-generated or manipulated text that informs the public about matters of public interest without human editorial oversight and without a responsible individual behind it.
A deepfake refers to image, audio, or video content generated or manipulated using AI that closely resembles real persons, objects, places, entities, or events, and which may falsely appear authentic or truthful.
The European Commission has established a working group of independent experts that is currently developing a code of conduct to operationalise the legal framework. The code is intended to serve as a voluntary tool for content producers who are subject to the transparency obligations.
Contact
Anja Bechmann
Professor, Media and Journalism Studies
School of Communication and Culture
Aarhus University
Email: anjabechmann@cc.au.dk