Weekly AI Governance Brief #11 — March 2026
Seventh AI Board meeting signals progress on implementation coordination
On 20 March 2026, the European Commission published a summary of the seventh meeting of the AI Board, held under the chairmanship of the Cyprus Presidency. The meeting covered updates on the Commission’s broader AI strategy, including priorities for implementing both the AI Continent Action Plan and the AI Act.
Working Group leadership presented a second draft of the Code of Practice on labelling and marking of AI-generated content, followed by a question-and-answer session among participants. The meeting record also notes discussion of progress on national AI regulatory sandboxes. In addition, the Board endorsed several subgroup documents, including three reports that are expected to be published at a later stage.
The published summary reflects ongoing coordination between the Commission and the EU Member States within the governance structure established under the AI Act.
Why this matters
The AI Board functions as a central coordination mechanism for implementation of the AI Act, bringing together Member State representatives and the Commission. The discussion of a second draft Code of Practice on labelling AI-generated content indicates continued work on operationalising transparency-related provisions.
References to national regulatory sandboxes suggest ongoing efforts to align experimentation frameworks across Member States. The endorsement of subgroup documents also indicates that technical and policy workstreams are advancing in parallel, even where outputs are not yet publicly available.
ISO working draft initiated on generative AI risk guidance
On 18 March 2026, the ISO catalogue recorded that the work item ISO/IEC WD TS 25568 had reached the milestone “Working draft (WD) study initiated.” The document is being developed as a Technical Specification under ISO/IEC JTC 1/SC 42.
According to the published abstract, the draft provides guidance on addressing risks in generative AI systems. It sets out objectives used when identifying risks, describes risk sources and relevant stakeholders across the AI lifecycle, and outlines approaches for risk analysis, risk treatment, and the implementation of controls.
The work remains at an early stage of development, with the working draft marking the beginning of formal drafting activity.
Why this matters
Although not part of EU law, ISO/IEC standardisation work provides a structured reference point for risk management approaches that may be used alongside regulatory requirements. The focus on generative AI systems aligns with areas of active policy attention and regulatory development.
The initiation of a working draft indicates that formal guidance on generative AI risk is being developed at international level. Organisations tracking governance frameworks may monitor such developments to understand how emerging technical guidance aligns with regulatory expectations.
EU Commission study examines AI-enabled digital health technologies in Europe
On 17 March 2026, the European Commission published a study examining artificial intelligence in healthcare as part of its digital policy library. The publication summarises findings from the “Observatory for digital health technologies in Europe,” a study commissioned by the Directorate-General for Communications Networks, Content and Technology.
The study analyses the EU digital health market and assesses the economic impact of digital health technologies, including AI-enabled solutions. The Commission’s summary indicates that the underlying research draws on a combination of EU-wide surveys, interviews, market mapping, and analysis of investment records.
The publication also notes that AI in healthcare is described as a priority workstream for the AI Office. The study presents its findings alongside policy recommendations developed by the authors.
Why this matters
The publication provides an institutional evidence base on the deployment and impact of AI-enabled systems in a sector associated with safety and fundamental rights considerations. While not a regulatory instrument, it contributes to the information environment supporting policy development and implementation.
The explicit reference to AI in healthcare as a priority workstream for the AI Office indicates that sector-specific analysis may inform ongoing governance activities. Such studies can shape how institutional actors understand risks, market dynamics, and implementation challenges in practice.
Looking ahead
The developments observed during this period reflect continued progress in the institutional and technical infrastructure supporting the EU AI Act. Coordination mechanisms such as the AI Board are advancing implementation discussions, while the Union work programme for European standardisation points to forthcoming technical work linked to compliance.
At the same time, international standardisation activity and sector-specific studies continue to develop in parallel. Together, these strands suggest ongoing alignment between regulatory implementation, technical standard-setting, and evidence-based policy inputs.
Sources
European Commission summary of the seventh AI Board meeting: https://digital-strategy.ec.europa.eu/en/news/seventh-ai-board-meeting
European Commission Notice on the 2026 annual Union work programme for European standardisation (OJ C): https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=OJ:C_202601695
ISO catalogue entry for ISO/IEC WD TS 25568 (generative AI risk guidance): https://www.iso.org/standard/90754.html
European Commission study on artificial intelligence in healthcare: https://digital-strategy.ec.europa.eu/en/library/study-artificial-intelligence-healthcare-dissects-digital-health-technologies-europe