Weekly AI Governance Brief #8 — February 2026

Updates on the EU AI Act Article 50: transparency obligations

On 24 February 2026, the European Commission published a news update describing progress in working group discussions on transparency obligations under Article 50 of the EU Artificial Intelligence Act. The communication was published through the Commission’s Shaping Europe’s digital future platform and reflects work coordinated by the EU AI Office and DG CONNECT.

The update summarises discussions held during January 2026 meetings and workshops involving two working groups focused on transparency obligations for generative AI content. The first working group examines marking and detection techniques for AI-generated content. The second focuses on disclosure and labelling obligations for such content.

According to the Commission update, the meetings gathered stakeholder input through workshops and written submissions. The page also provides access to downloadable meeting minutes and workshop materials documenting the discussions. These materials form part of the Commission’s ongoing process to elaborate implementation approaches for the transparency provisions of the AI Act.

The communication states that written stakeholder contributions collected during this phase will inform the preparation of a second draft document, expected to be published in early March 2026.

Why this matters

Article 50 of the AI Act introduces transparency requirements relating to AI-generated or manipulated content, including obligations concerning disclosure and labelling. The working group process described in the Commission update shows how these provisions are being operationalised through structured stakeholder engagement.

The publication of meeting minutes and workshop materials also gives organisations monitoring the implementation of Article 50 documentation of the technical and operational issues under discussion, including approaches to marking AI-generated content and disclosure mechanisms. The process illustrates how the Commission is using working groups to work through practical implementation questions before formal guidance is issued.
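To make the "marking and detection" distinction concrete, the sketch below shows one simple way a machine-readable disclosure flag could be attached to generated content and checked on the receiving side. The field names and JSON format here are illustrative assumptions for this brief, not a scheme adopted by the Commission or by the working groups.

```python
# Illustrative sketch of machine-readable marking of AI-generated content.
# The record format and field names ("ai_generated", "generator") are
# assumptions made for this example, not an agreed standard.
import json

def mark_ai_generated(text: str, system_name: str) -> str:
    """Producer side: wrap content with a machine-readable provenance record."""
    record = {
        "ai_generated": True,      # disclosure flag
        "generator": system_name,  # identifies the generating AI system
        "content": text,
    }
    return json.dumps(record)

def is_ai_generated(payload: str) -> bool:
    """Detection side: check the provenance flag, if one is present."""
    try:
        return bool(json.loads(payload).get("ai_generated", False))
    except (ValueError, TypeError):
        return False  # unmarked or non-JSON content

marked = mark_ai_generated("Example output.", "demo-model")
print(is_ai_generated(marked))              # True
print(is_ai_generated("plain human text"))  # False
```

Real proposals under discussion (for example watermarking or signed provenance manifests) are considerably more robust than a plain JSON wrapper, which can be trivially stripped; the sketch only illustrates the producer/detector split that the two working groups mirror.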

Commission outlines Code of Practice process for AI-generated content transparency

On 24 February 2026, the European Commission updated its policy page on the Code of Practice on marking and labelling of AI-generated content. The page sets out a structured drafting process intended to support compliance with the transparency obligations of Article 50 of the AI Act.

According to the Commission description, the Code of Practice aims to address obligations relating to AI-generated or manipulated content, including deepfakes. The drafting process is organised into two working groups, reflecting different roles within the AI ecosystem.

One group focuses on providers of AI systems, addressing technical approaches and implementation practices relevant to AI system developers. The second group focuses on deployers, examining how organisations using AI systems should approach disclosure and labelling practices when deploying generative AI outputs.

The page outlines how the drafting process includes stakeholder participation and observers, and it describes a series of drafting rounds scheduled to continue through 2026. It also provides links to participant lists and related materials connected to the drafting process.

Why this matters

The Commission frames the Code of Practice as an instrument intended to support the implementation of Article 50 transparency obligations. Although not itself a legislative instrument, the drafting process indicates how operational guidance may emerge to assist organisations in implementing disclosure and labelling practices for AI-generated content.

By publishing a consolidated page describing the structure of the process, the Commission provides a reference point for stakeholders following the development of implementation guidance related to generative AI transparency obligations under the AI Act.

CNIL launches call to test PANAME GDPR audit tool for AI models

On 26 February 2026, the French data protection authority CNIL published a call for expressions of interest inviting organisations to participate in testing a tool designed to audit AI models for GDPR compliance.

The initiative concerns PANAME, described by CNIL as a library intended to audit the confidentiality of AI models. The project is being conducted in partnership with ANSSI, PEReN, and Inria.

According to the notice, PANAME is intended to support GDPR conformity assessments by enabling trained AI models to be tested for risks such as data extraction or re-identification. Such testing is relevant to determining whether training data can be considered anonymous or whether personal data remains identifiable within the model.
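One common family of confidentiality tests in this space is membership inference: checking whether a model behaves differently on records it was trained on than on unseen records. The toy sketch below illustrates the idea with a deliberately overfitted "model"; it is not PANAME's actual API, whose interface is not described in the CNIL notice.

```python
# Illustrative membership inference test, one kind of confidentiality
# check a model-audit library like PANAME is intended to support.
# The "model" and threshold attack below are simplified toys, not
# PANAME's actual interface.
import random

random.seed(0)

# Toy "model": memorises its training set and scores a query by its
# distance to the nearest training example. An overfitted model gives
# unusually confident outputs on data it was trained on.
def train(points):
    return list(points)

def confidence(model, x):
    nearest = min(abs(x - p) for p in model)
    return 1.0 / (1.0 + nearest)

# Members: the training data. Non-members: fresh draws from the same range.
members = [random.uniform(0, 100) for _ in range(50)]
non_members = [random.uniform(0, 100) for _ in range(50)]
model = train(members)

# Simple threshold attack: claim "member" whenever confidence is high.
THRESHOLD = 0.99
def infer_membership(model, x):
    return confidence(model, x) >= THRESHOLD

tpr = sum(infer_membership(model, x) for x in members) / len(members)
fpr = sum(infer_membership(model, x) for x in non_members) / len(non_members)

print(f"true positive rate:  {tpr:.2f}")
print(f"false positive rate: {fpr:.2f}")
# A large gap between the two rates indicates the model leaks
# information about which records were in its training data --
# the kind of signal relevant to the anonymity question above.
```

In this toy, the attack flags every training record (the memorising model is maximally confident on them) while flagging few unseen records; real audits run statistically calibrated versions of such tests against actual trained models.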

The call states that the expression-of-interest process is open from 26 February to 28 March 2026, and the CNIL page provides instructions for submission together with downloadable application materials.

Why this matters

The PANAME initiative reflects an effort by a national supervisory authority to develop practical auditing tools for assessing privacy risks in AI models. Testing techniques that evaluate whether trained models allow the extraction or reconstruction of personal data are directly relevant to determining whether a system remains within the scope of GDPR.

By inviting organisations to participate in testing the tool, CNIL is establishing an experimental compliance environment where model auditing approaches can be evaluated in practice.

Looking ahead

Recent institutional updates highlight ongoing implementation activity around transparency obligations under the EU Artificial Intelligence Act, particularly through Commission-led working groups and the drafting process for the Code of Practice on AI-generated content. These processes illustrate how technical and operational questions related to generative AI transparency are being explored through structured stakeholder engagement.

At the same time, supervisory authorities are beginning to experiment with practical compliance tools for AI systems, as illustrated by CNIL’s PANAME initiative aimed at testing methods for auditing privacy risks in trained models. Together, these developments show continuing activity across EU institutions and national authorities focused on operationalising governance frameworks relevant to AI systems.

Sources

European Commission news update on working groups discussing Article 50 transparency obligations under the AI Act:
https://digital-strategy.ec.europa.eu/en/news/working-groups-advance-discussions-transparency-obligations-under-article-50-ai-act

European Commission policy page describing the Code of Practice on marking and labelling of AI-generated content:
https://digital-strategy.ec.europa.eu/en/policies/code-practice-ai-generated-content

CNIL notice launching the PANAME call for expressions of interest to test an AI model GDPR audit tool:
https://www.cnil.fr/fr/projet-paname-participez-aux-tests-dun-outil-daudit-rgpd-des-modeles-dia
