Weekly AI Governance Brief: 20-26 April 2026
Netherlands advances AI Act implementation and consults on automated decision-making
On 20 April 2026, the Government of the Netherlands opened a public consultation on the Uitvoeringswet AI-verordening, the draft law intended to operationalise enforcement of the EU AI Act at national level. A follow-up notice to parliament on 21 April 2026 confirmed the proposed supervisory structure.
The consultation materials indicate that the Autoriteit Persoonsgegevens would act as the market-surveillance authority for a substantial portion of AI Act provisions, while sectoral regulators would oversee AI systems within their respective domains. The draft also sets out supervisory powers and introduces cost-recovery mechanisms linked to enforcement. The consultation is open until 1 June 2026.
In parallel, on 21 April 2026, the Dutch Data Protection Authority launched a separate consultation on explanations in automated decision-making, with a closing date of 26 May 2026. The consultation seeks input on how organisations should explain automated decisions that affect individuals. Publicly available material confirms the consultation’s existence, topic, and timeline, but offers little detail on the content of the draft guidance itself.
Why this matters
Taken together, these developments show how AI governance is being operationalised across both new and existing legal frameworks in the Netherlands.
The draft implementation law addresses institutional design, clarifying supervisory responsibilities and enforcement mechanisms under the AI Act. At the same time, the Data Protection Authority’s consultation focuses on interpretive expectations under data protection law, particularly in relation to automated decision-making.
For organisations, this creates a dual compliance track. AI systems will be subject not only to new AI Act obligations enforced through a defined supervisory structure, but also to evolving expectations around transparency and explanation under existing data protection regimes. This overlap has practical implications for documentation, governance processes, and how organisations prepare for regulatory scrutiny across multiple authorities.
Brazil’s competition authority intensifies scrutiny of AI in search and messaging platforms
On 23 April 2026, Brazil’s Administrative Council for Economic Defense (CADE) took two related actions addressing the role of AI in digital markets.
First, CADE’s tribunal unanimously voted to open a fuller administrative investigation into Google’s use of journalistic content in generative AI search features. According to the official notice, the case builds on a 2019 inquiry but now reflects the integration of AI systems that synthesise information directly within search interfaces. The tribunal identified potential implications for content visibility, traffic distribution, and monetisation for publishers. The case is registered under no. 08700.003498/2019-03, and the interim president indicated that it may proceed under a theory of abuse of a dominant position.
Second, CADE upheld a notice of violation against Meta and WhatsApp concerning compliance with a preventive measure related to access conditions for AI chatbot providers using WhatsApp Business. The tribunal maintained a daily fine of BRL 250,000, stating that restoring competitive conditions requires allowing AI chatbot providers to operate without additional access charges, as such charges may function as barriers to entry or expansion.
Why this matters
These actions reflect a consistent enforcement approach in which AI is treated as part of broader competition dynamics in digital markets.
In the search context, generative AI features are being assessed for their impact on value distribution and platform power. In messaging infrastructure, access conditions for AI chatbot providers are being evaluated for their potential to restrict competition.
For organisations, this signals that AI-enabled functionalities are not insulated from existing competition law. Changes to product design, access rules, or pricing structures that affect how AI services interact with platforms may be subject to scrutiny where they influence market entry, expansion, or the allocation of economic value.
UK regulator Ofcom issues warnings on cyber risks from frontier AI
On 21 April 2026, the United Kingdom’s communications regulator Ofcom published letters addressed to communications providers and operators of essential services concerning the cyber-security implications of frontier AI models.
In its letter to communications providers, Ofcom points to rapid increases in AI capability and highlights the emergence of models with advanced cyber-related functionalities. The regulator expects firms to assess the risks associated with such capabilities and to take appropriate action where necessary.
The letters were published on Ofcom’s public correspondence page. A related UK government open letter to business leaders, published on 22 April 2026 and dated 15 April, provides broader policy context regarding AI-related cyber threats.
Why this matters
This development signals the integration of AI capability considerations into existing cyber-security and operational resilience supervision. Frontier AI is being treated as a factor that may alter threat landscapes, rather than as a separate policy domain.
For regulated entities, this introduces an expectation that AI developments are incorporated into ongoing risk assessment processes. It also implies that responsibility for AI-related risks may fall within established governance structures for cyber security and resilience, including board oversight and internal control systems.
Looking ahead
The developments this week point to a convergence between AI-specific regulation and existing governance frameworks. National implementation of the EU AI Act is progressing into detailed supervisory design, while data protection authorities continue to refine expectations around automated decision-making.
At the same time, competition and sector regulators are incorporating AI into established oversight domains. AI is being addressed in relation to market structure, infrastructure access, and cyber risk, rather than as a separate regulatory category.
This reflects a broader pattern in which AI governance is being embedded across multiple areas of regulation, shaping how obligations are interpreted and enforced in practice.
Sources
Dutch government notice on AI Act supervision and consultation: https://www.rijksoverheid.nl/actueel/nieuws/2026/04/21/kabinet-zet-stap-met-toezicht-op-europese-ai-regels
Dutch consultation file on AI Act implementation law: https://www.internetconsultatie.nl/uitvoeringswet_ai_verordening
Dutch Data Protection Authority consultation on automated decision-making explanations: https://www.autoriteitpersoonsgegevens.nl
CADE decision to investigate Google’s use of journalistic content in generative AI search: https://www.gov.br/cade/en/matters/news/cade-further-investigates-googles-use-of-journalistic-content
CADE decision upholding sanctions on Meta and WhatsApp: https://www.gov.br/cade/en/matters/news/cade-upholds-fine-against-meta-and-whatsapp
Ofcom letter on cyber security implications of frontier AI: https://www.ofcom.org.uk
UK government open letter on AI cyber threats: https://www.gov.uk