Weekly AI Governance Brief #9 — March 2026
European Commission releases updated guidelines addressing ethical AI use in education
On 5 March 2026, the European Commission announced the publication of four sets of guidelines aimed at supporting teachers in key digital education priorities. The package includes two new guidelines and two updated ones, with the revised materials addressing the ethical use of artificial intelligence and data in teaching and learning.
According to the Commission announcement, the updated guidelines address the use of AI technologies and data in educational environments. The announcement also states that the guidance on disinformation and digital literacy has been revised to reflect the role of generative AI in disinformation dynamics.
The Commission news item provides access to the guideline resources through official EU distribution platforms, including data.europa.eu and the European Education Area portal. The materials are presented as part of the Commission’s digital education policy work supporting teachers and educational institutions in navigating evolving digital technologies.
Why this matters
The publication is an EU-level guidance initiative directed at the use of AI technologies in education settings.
Although the guidelines are not part of the AI Act’s legislative framework, they reflect ongoing institutional activity related to responsible AI adoption in public-sector environments, particularly where data use and automated tools intersect with educational processes. By explicitly referencing the ethical use of AI and data, the guidelines indicate how governance considerations are being integrated into sector-specific policy resources.
The updates addressing generative-AI-related disinformation also illustrate how EU policy initiatives are adapting existing digital literacy frameworks to account for emerging technological capabilities and associated information dynamics.
EDPS outlines AI Act governance and enforcement structure
On 4 March 2026, the European Data Protection Supervisor (EDPS) published a speech by Wojciech Wiewiórowski delivered to the IMCO–LIBE AI Act Working Group of the European Parliament. The publication includes the speech text and a downloadable PDF version.
In the address, the EDPS outlines its roles under the AI Act with respect to EU institutions and bodies. The speech describes supervisory responsibilities for AI systems used by EU institutions, including market surveillance functions and notified-body responsibilities in certain contexts involving high-risk AI systems.
The text also discusses the relationship between AI Act oversight and data protection supervision, reflecting the EDPS’s mandate as the authority responsible for supervising personal data processing by EU institutions. The speech refers to cooperation with other actors within the AI Act governance framework, including market surveillance authorities and fundamental rights authorities.
The publication situates these elements within the broader governance structure established by the AI Act for the oversight of AI systems.
Why this matters
The speech provides an institutional articulation of supervisory roles under the AI Act as they apply to EU institutions.
By describing the interaction between AI Act oversight and data protection supervision, the publication highlights how different regulatory mandates intersect within the EU’s governance architecture for AI systems used in public administration. The references to cooperation with other supervisory actors also reflect the multi-authority structure embedded in the AI Act.
For governance and compliance functions within EU institutions, the speech outlines how AI system oversight may operate alongside existing data protection supervision mechanisms.
CNIL publishes guidance on AI development and evaluation in healthcare
On 5 March 2026, the French data protection authority (CNIL) published a guidance sheet titled “IA et santé : développer et évaluer des systèmes d’IA en conformité avec la réglementation” (AI and health: developing and evaluating AI systems in compliance with regulatory requirements).
The publication provides sector-focused information on projects involving the development or evaluation of AI systems that process health data. According to CNIL, the document sets out procedural and precautionary considerations for organisations undertaking such activities.
The guidance sheet distinguishes several stages in the lifecycle of an AI project and clarifies which stages the guidance addresses. CNIL states that the document complements the authority’s broader recommendations on artificial intelligence while reflecting the specific regulatory protections that apply to health data.
The guidance also references prior formalities and regulatory procedures administered by CNIL where relevant, situating AI development and evaluation activities within the framework of applicable data protection requirements.
Why this matters
The CNIL publication represents a sector-specific articulation of governance considerations where AI development intersects with sensitive personal data processing.
Health data is a special category of personal data under European data protection law and is subject to heightened protections. The guidance therefore illustrates how national supervisory authorities may interpret governance expectations for organisations developing or testing AI systems that rely on such data.
By structuring the guidance around stages of the AI project lifecycle, the document highlights organisational and procedural considerations associated with AI development and evaluation activities in a regulated sector.
Looking ahead
The developments published during 2–8 March 2026 reflect continued institutional activity across the European AI governance landscape.
At the EU level, policy initiatives addressing the ethical use of AI in education indicate how AI governance considerations are being incorporated into sectoral policy guidance. Institutional publications such as the EDPS speech also provide insight into the evolving oversight structure surrounding the AI Act.
At the national level, guidance issued by CNIL illustrates how supervisory authorities are beginning to articulate governance considerations for AI development and evaluation in specific sectors, particularly where sensitive personal data such as health data is involved.
Sources
European Commission news item announcing guidelines supporting teachers in key digital education priorities:
https://digital-strategy.ec.europa.eu/en/news/commission-publishes-guidelines-support-teachers-key-digital-education-priorities
European Data Protection Supervisor speech “Governance and Enforcement structure of the AI Act”:
https://www.edps.europa.eu/data-protection/our-work/publications/speeches-articles/2026-03-04-governance-and-enforcement-structure-ai-act_en
CNIL guidance on developing and evaluating AI systems using health data:
https://www.cnil.fr/fr/ia-et-sante-developper-et-evaluer-des-systemes-ia-conformes