AI Governance Brief: 30 March – 05 April 2026
European Commission updates signatory list for the General-Purpose AI Code of Practice
On 01 April 2026, the European Commission updated the official webpage for the General-Purpose AI Code of Practice, including a revised public list of signatories. The page, hosted under the EU’s digital strategy portal and attributed to the European AI Office, presents the Code as a voluntary instrument intended to support providers of general-purpose AI models in demonstrating compliance with obligations under the AI Act.
The updated webpage consolidates access to the Code’s three chapters—Transparency, Copyright, and Safety and Security—and links to associated documentation resources. It also records the participation status of individual signatories across these chapters. The update specifies that xAI has signed the Safety and Security chapter only, with compliance relating to transparency and copyright expected to be demonstrated through alternative means.
The update does not introduce new obligations or amend the structure of the Code itself. Rather, it revises the publicly disclosed list of participants, providing a current snapshot of uptake among providers engaging with this voluntary framework.
Why this matters
The publication provides an official and current reference point for the adoption of a Commission-supported compliance tool designed to align with AI Act requirements for general-purpose AI models. For organisations interacting with such models—whether as deployers, integrators, or downstream users—the signatory list functions as an observable indicator of provider commitments to specific governance domains.
The distinction between participation across different chapters is operationally relevant. It signals that adherence to the Code may be partial and that providers may rely on alternative mechanisms to demonstrate compliance in certain areas. This has implications for how organisations assess upstream governance practices and document reliance on third-party AI systems within their own compliance frameworks.
CNIL sets 2026 supervisory priorities, highlighting recruitment and automated decision-making
On 03 April 2026, the French data protection authority (CNIL) published its priority control themes for 2026. Among these, recruitment was identified as a focal area for supervisory activity.
The CNIL states that its controls in this domain will assess compliance with the General Data Protection Regulation (GDPR), drawing on themes outlined in its existing recruitment guidance. The announcement specifies that supervisory attention will include automated decision-making systems, the provision of information to candidates, and data retention practices.
The publication further notes that the recruitment theme anticipates the CNIL’s future role as a market surveillance authority under the EU AI Act in the employment context. This reference situates the planned GDPR-based controls within a broader institutional transition linked to the implementation of the AI Act.
The statement does not introduce new enforcement measures or guidance. It sets out the authority’s prioritisation for supervisory activity in the coming year and identifies areas where existing legal frameworks will be applied.
Why this matters
The prioritisation provides an explicit indication of near-term supervisory focus on recruitment-related automated decision-making within a GDPR framework. Organisations operating recruitment systems, including those incorporating algorithmic or AI components, are therefore placed on notice regarding the specific compliance dimensions likely to be examined.
The reference to future market surveillance responsibilities under the AI Act connects current data protection oversight with forthcoming AI-specific supervisory roles. This signals an alignment between existing GDPR enforcement practices and the anticipated implementation structure of the AI Act, particularly in the employment domain.
For organisations, this creates continuity between current compliance obligations and future regulatory expectations. It also suggests that documentation, transparency measures, and safeguards implemented under GDPR will form part of the evidentiary basis for AI-related oversight once the new framework is operational.
ICO publishes recruitment findings and opens consultation on automated decision-making guidance
On 31 March 2026, the UK Information Commissioner’s Office (ICO) published a report titled “Recruitment rewired: an update on the ICO’s work on the fair and responsible use of automation in recruitment”. The report presents findings derived from voluntary engagement with employers and outlines regulatory expectations relating to the use of automated systems in hiring processes.
The report identifies areas of focus, including transparency towards candidates, the implementation of safeguards, and monitoring for bias. It reflects observations from supervisory engagement rather than the outcome of formal enforcement actions.
On the same date, the ICO launched a consultation on draft guidance concerning automated decision-making, including profiling. The consultation is described as an update to existing guidance, taking into account legislative changes introduced by the Data (Use and Access) Act 2025. The consultation process is open to stakeholders until 29 May 2026.
The consultation page indicates that the draft guidance addresses the interpretation and application of rules governing automated decision-making and profiling under the UK data protection framework. It does not constitute final guidance at this stage.
Why this matters
The combined publication of a supervisory findings report and the opening of a formal consultation represents a coordinated step in updating regulatory expectations around automated decision-making. The focus on recruitment aligns with supervisory attention observed in EU member states, particularly in relation to transparency and safeguards.
The consultation introduces an opportunity for stakeholders to engage with proposed updates to core guidance on automated decision-making. For organisations operating in the UK, this indicates that existing compliance approaches may require review in light of forthcoming interpretative changes.
The linkage to recent legislative amendments situates the consultation within an evolving legal context. This underscores the need for organisations to monitor both statutory developments and associated guidance to ensure alignment between operational practices and regulatory expectations.
Looking ahead
Across the period covered, supervisory and institutional activity converged on recruitment-related automated decision-making as an area of regulatory attention. Both CNIL and the ICO referenced transparency, safeguards, and oversight of automated systems within hiring processes, each within their respective legal frameworks.
At the EU level, the update to the General-Purpose AI Code of Practice signatory list provides a contemporaneous view of voluntary engagement with a compliance-support mechanism linked to the AI Act. Together, these developments reflect ongoing alignment between data protection oversight and emerging AI-specific governance structures.
European Commission webpage update on the General-Purpose AI Code of Practice signatory list: https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai
CNIL announcement of 2026 priority controls, including recruitment and automated decision-making: https://cnil.fr/fr/controles-prioritaires-2026
ICO report “Recruitment rewired” on automation in recruitment: https://ico.org.uk/about-the-ico/what-we-do/recruitment-rewired/
ICO consultation on draft guidance about automated decision-making, including profiling: https://ico.org.uk/about-the-ico/ico-and-stakeholder-consultations/2026/03/ico-consultation-on-the-draft-guidance-about-automated-decision-making-including-profiling/