Weekly AI Governance Brief: 6–12 April 2026

European Commission and European AI Office launch consultation on measuring AI energy use and emissions

On 7 April 2026, the European Commission, through the European AI Office, opened a consultation on measuring the energy consumption, efficiency, and emissions of AI models and systems. The consultation forms part of the implementation of the AI Act’s provisions related to energy transparency.

The consultation runs from 7 April to 15 May 2026 and is directed at developers and deployers of general-purpose AI models, as well as suppliers of components and services. According to the Commission’s consultation materials, responses will inform the development of a measurement framework and contribute to a broader study on energy-efficient and low-emission AI in the EU. The consultation also explores how energy data should be collected across both training and inference phases and which performance indicators may be appropriate for a future framework, including the possible introduction of an energy and emissions label.

Why this matters

This consultation signals a concrete extension of EU AI governance into environmental performance metrics. It introduces the prospect of standardised measurement and reporting obligations for energy use and emissions associated with AI systems. For model providers and large deployers, this points toward future requirements around documentation, benchmarking, and potentially labelling. These elements may integrate into procurement processes, transparency obligations, or conformity assessments, expanding the operational scope of compliance beyond safety and transparency into measurable environmental accountability.

European Commission publishes implementation update one year into the AI Continent Action Plan

On 9 April 2026, the European Commission published an implementation update marking one year since the launch of the AI Continent Action Plan. Issued as a press release, the update consolidates developments across infrastructure, policy coordination, and uptake measures.

The Commission reported the deployment of 19 AI Factories across the EU supercomputing infrastructure, alongside 13 AI Factory Antennas and plans for future AI gigafactories. The update also connects AI policy with broader initiatives, including the Data Union Strategy and the AI Omnibus, which is intended to simplify regulatory requirements and provide greater legal certainty for businesses. Additional elements highlighted include ongoing work on the AI Skills Academy, the Apply AI Strategy, and approximately €1 billion in funding calls aimed at accelerating AI adoption.

Why this matters

The update reflects an institutional shift in how AI governance is being operationalised at the EU level. Rather than treating the AI Act as a standalone regulatory instrument, the Commission is embedding governance within a wider framework that includes compute infrastructure, data access policy, and regulatory simplification. For organisations, this suggests that compliance obligations and market access conditions will increasingly be shaped alongside industrial and data policy measures. It also indicates that regulatory burden may be recalibrated through simplification initiatives, even as oversight expands in parallel domains.

China adopts interim measures on anthropomorphic AI interaction services

On 10 April 2026, the Cyberspace Administration of China, together with four ministries, issued interim measures governing AI systems that simulate human personality traits and engage in sustained emotional interaction. The measures were adopted as Order No. 21 and will take effect on 15 July 2026.

The regulation establishes a dedicated governance framework for anthropomorphic or companion-style AI services. It requires providers to implement governance systems covering algorithm review, ethics review, content management, cybersecurity, and data security. The measures prohibit virtual intimate-relationship services for minors, impose restrictions on the use of interaction data for model training, and require clear labelling to indicate that users are interacting with AI rather than a natural person. Additional provisions include requirements for exit mechanisms and mandatory security assessments in specified scenarios, such as the launch of new services, significant functional changes, or reaching large user thresholds.

Why this matters

This development represents a move toward product-level regulation of AI systems based on their behavioural characteristics and user interaction patterns. It extends beyond general-purpose AI governance into specific use cases involving emotional engagement and human-like interaction. For organisations operating or distributing similar services, the measures introduce detailed requirements around user protection, data use, and system design. More broadly, it signals a regulatory trajectory in which AI governance frameworks differentiate between types of AI applications based on their societal and psychological impact.

South Africa publishes draft national AI policy for consultation

On 10 April 2026, South Africa’s Department of Communications and Digital Technologies published a draft National Artificial Intelligence Policy in the Government Gazette for public comment. The draft follows Cabinet approval on 25 March 2026 and invites submissions until 10 June 2026.

The draft policy outlines a national framework for AI governance, including a responsible governance pillar and proposed measures aligned with existing data protection law under the Protection of Personal Information Act (POPIA). It includes provisions for bias detection and mitigation protocols, human rights and regulatory impact assessments, and auditability requirements for public-sector AI systems. The draft also proposes institutional developments such as a National AI Safety Institute and discusses the potential establishment of a National AI Commission or Office. The governance approach combines horizontal principles with sector-specific regulatory elements.

Why this matters

The draft policy represents a structured attempt to integrate AI governance with broader development and institutional objectives. It combines elements of data governance, oversight mechanisms, and institutional capacity building within a single framework. For organisations operating in or engaging with South Africa, it indicates a governance model that emphasises auditability, rights-based safeguards, and institutional coordination. At a broader level, it contributes to emerging global patterns in which national AI strategies incorporate both regulatory and development-focused components.

Looking ahead

Across the developments in this period, a consistent pattern is the expansion of AI governance beyond core regulatory instruments into adjacent domains. Environmental performance metrics, infrastructure policy, national strategy development, and product-specific regulation are increasingly integrated into governance frameworks. Institutional activity also reflects a combination of simplification efforts and new oversight mechanisms. These developments indicate a continued broadening of the scope and operational depth of AI governance across jurisdictions.

Sources

European Commission consultation on measuring energy consumption and emissions of AI models and systems: https://ec.europa.eu/info/law/better-regulation/have-your-say_en
European Commission press release on AI Continent Action Plan implementation update: https://ec.europa.eu/commission/presscorner/home/en
Interim Measures for the Administration of Artificial Intelligence Anthropomorphic Interaction Services (Order No. 21): http://www.cac.gov.cn
South Africa Government Gazette notice on Draft National Artificial Intelligence Policy (No. 54477): https://www.gov.za/documents/notices