Weekly AI Governance Brief #4 — January 2026

Introduction

This brief reviews selected institutional and regulatory developments in artificial intelligence governance published between 26 January and 1 February 2026. It focuses on actions and processes relevant to the European Union’s evolving AI governance framework, including enforcement under existing digital regulation, international institutional coordination, and standardisation activity linked to the EU AI Act.

The developments covered relate to formal regulatory action by EU institutions, international treaty activity with relevance to European policy, and progress on technical standards intended to support AI Act compliance. The brief reports observable actions and documented processes only, based strictly on the sources provided.

European Commission investigates Grok and X’s recommender systems under the Digital Services Act

On 26 January 2026, the European Commission announced the opening of a new formal investigation into the social media platform X under the EU Digital Services Act (DSA). The investigation was communicated through a Commission press release and concerns the deployment and operation of the platform’s AI chatbot, Grok, together with related recommender systems made available to users in the European Union.

According to the Commission, the inquiry will assess whether X complied with its DSA obligations to identify, assess, and mitigate systemic risks arising from the design and use of its services. The focus is on risks linked to AI-generated content, following findings that Grok had produced manipulated sexualised images. These outputs reportedly included content that may constitute illegal material, including potential child sexual abuse material, and exposed users to serious harm.

The Commission indicated that the investigation will examine X’s internal risk assessment processes, mitigation measures, and safeguards related to the generation and amplification of such content. The inquiry will also consider how recommender systems interact with AI-generated outputs and whether their operation contributed to the dissemination of harmful or illegal material.

This new investigation runs in parallel with an earlier DSA inquiry into X’s recommender systems that the Commission launched in 2023. The Commission has not merged the proceedings, but noted their substantive connection. Under the DSA framework, confirmed breaches can lead to fines of up to six percent of a company’s global annual turnover, depending on the findings and procedural outcomes.

The Commission’s announcement did not set a fixed timeline for the conclusion of the investigation, nor did it prejudge the outcome. It stated that the inquiry would proceed in line with DSA enforcement procedures.

Why this matters

This case illustrates how existing EU digital regulation is being applied to AI-driven functionalities already in operation. The DSA does not regulate AI systems as such, but it imposes binding obligations on very large online platforms to manage systemic risks, including risks arising from automated content generation and recommendation. The investigation demonstrates the Commission’s use of these mechanisms to address harms linked to AI outputs without relying on future AI-specific legislation.

From an institutional perspective, the action shows how AI governance responsibilities are currently exercised through established enforcement channels. It also clarifies the compliance expectations placed on platform providers deploying generative AI tools within user-facing services. The case is relevant for understanding how oversight of AI-related risks is structured in practice ahead of the full application of the AI Act.

Armenia signs the Council of Europe AI Framework Convention

On 27 January 2026, Armenia signed the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, developed under the auspices of the Council of Europe. The signature was confirmed in a Council of Europe news release published the following day.

The Convention is the first binding international treaty dedicated specifically to artificial intelligence. It establishes common safeguards to ensure that AI systems are developed, deployed, and used in a manner consistent with fundamental rights, democratic values, and the rule of law. The instrument covers public and private sector uses of AI and requires parties to adopt domestic measures addressing transparency, accountability, risk management, and oversight.

Armenia’s signature brings the number of signatories to 19 as of that date. The list includes both EU Member States and non-EU countries. The Convention opened for signature in 2024 and remains open to additional states and to the European Union itself. Signature indicates political commitment, while legal obligations become binding following ratification in accordance with national procedures.

The treaty also establishes international monitoring and cooperation mechanisms, intended to support consistent interpretation and implementation among parties. The Council of Europe noted that the Convention is designed to be technologically neutral and adaptable to future AI developments.

Why this matters

The continued expansion of signatories to the Convention reflects sustained institutional momentum toward shared international approaches to AI governance. For the EU context, the treaty is relevant as a parallel framework grounded in human rights law that aligns with many of the principles underlying the EU AI Act.

While the Convention does not replicate EU regulatory structures, its growing adoption has implications for policy coherence and cross-border governance. EU institutions monitoring international AI governance developments may need to consider how commitments under the Convention interact with EU law, particularly in areas such as oversight mechanisms and rights-based safeguards. Armenia’s signature during this period is an observable step in that broader institutional process.

First EU AI Act harmonised standard on quality management systems advances

In January 2026, the European standardisation organisations CEN and CENELEC advanced development of the first harmonised standard intended to support implementation of the EU AI Act. The draft standard, prEN 18286, titled “Artificial intelligence – Quality management system for EU AI Act regulatory purposes”, completed its public enquiry and voting phase, which closed on 22 January 2026.

The draft standard specifies requirements for AI providers to establish and maintain a quality management system aligned with Article 17 of the AI Act. That article sets out core organisational obligations for providers of high-risk AI systems, including documented procedures for risk management, design control, testing, and corrective actions.

CEN-CENELEC launched the public enquiry for prEN 18286 in October 2025 under an accelerated procedure. This approach allows the standard to proceed directly to publication if enquiry results are positive, bypassing a separate final vote. According to the available progress updates, the enquiry concluded without major objections being reported, and the draft entered the “close of voting” stage.

Subject to final editorial work and formal adoption, the standard is expected to be published as EN 18286. Once its reference is cited in the EU’s Official Journal, AI providers that apply it will benefit from a presumption of conformity with the AI Act’s quality management system requirements. The timing is intended to align with the AI Act’s phased application schedule, with publication targeted ahead of the Act’s main obligations becoming applicable.

Why this matters

Harmonised standards are a central implementation tool within the AI Act framework. They translate legal obligations into operational requirements that organisations can apply in practice. Progress on prEN 18286 is therefore a key step in enabling compliance, particularly for providers of high-risk AI systems that must demonstrate structured internal governance.

The development also illustrates coordination between EU institutions and European standardisation bodies. While the Commission sets the legal framework, technical standards provide the practical means for meeting those requirements. The accelerated process used for this standard reflects the institutional priority placed on having compliance mechanisms available in time for enforcement.

Looking ahead

Across the reporting period, developments point to parallel tracks in AI governance. Enforcement under existing EU digital law is being used to address concrete risks associated with deployed AI systems. At the same time, international institutions continue to build shared normative frameworks, and technical standards necessary for AI Act implementation are moving through formal processes.

These actions collectively illustrate how AI governance in Europe is being operationalised through a combination of enforcement, international coordination, and standardisation, rather than through measures that have yet to be adopted.

Sources

European Commission press release announcing a formal DSA investigation into X and Grok (26 January 2026): https://ec.europa.eu/commission/presscorner/detail/en/IP_26_203

Council of Europe news release on Armenia signing the Framework Convention on Artificial Intelligence (28 January 2026): https://www.coe.int/en/web/artificial-intelligence/-/armenia-signs-the-framework-convention-on-artificial-intelligence

CEN-CENELEC Joint Technical Committee 21 update on prEN 18286 entering close-of-voting stage (January 2026): https://www.cencenelec.eu/news-and-events/news/2025/brief-news/2025-10-23-fast-track-ai-qms/