IAPP Europe: April AI Regulatory Update

Key Takeaways

  • The European Parliament and Council failed to reach an agreement on the Digital Omnibus on AI Regulation during the 28 April trilogue, postponing needed changes to the EU AI Act.
  • The main sticking point is Parliament’s proposal to move sector‑specific rules from Annex I Section A to Section B, which the Council rejects on the ground that it would fold the AI Act’s obligations into existing machinery and toy regulations.
  • Without a deal before 2 August, the current high‑risk AI obligations will become enforceable, undermining the intended postponement of those rules.
  • Negotiations are expected to resume only after the new Council mandate is in place, likely a few weeks away.
  • Meanwhile, the EU continues to advance broader AI strategies (AI Continent Action Plan, Data Union, Apply AI, AI Gigafactories, and the forthcoming Cloud and AI Development Act).
  • Member‑state authorities are issuing national guidance on AI‑related cybersecurity, privacy, and responsible AI use, highlighting a multi‑layered approach to AI governance across Europe.

Overview of the April Trilogue Impasse on the Digital Omnibus AI Regulation
April concluded without the widely anticipated milestone for artificial‑intelligence regulation in Europe. The European Parliament and the Council of the European Union were unable to agree on the Digital Omnibus on AI Regulation proposal during the latest trilogue held on 28 April, despite expectations that this session would finalize the text. The interinstitutional negotiations, which had begun only a month earlier, produced rapid alignment on many technical issues but stalled over a substantive disagreement about the structure of the AI Act’s annexes. This impasse threatens to delay the targeted simplifications that were meant to ease the Act’s burden on high‑risk AI systems.

Negotiation Background and Speed of Talks
The talks on the Digital Omnibus commenced roughly a month before the failed trilogue, with both institutions first meeting to discuss technical matters surrounding the proposal. From the outset, negotiators reported a swift pace, aligning on numerous points such as definitions, risk‑classification criteria, and procedural mechanisms. The rapid progress fostered optimism that a compromise could be reached quickly, enabling the proposed adjustments to take effect before the upcoming enforcement deadline for high‑risk AI obligations. However, the speed of the discussions also meant that substantive disagreements surfaced later in the process, leaving little time for compromise once the core issue emerged.

Core Dispute: Annex I Merger and Sectoral Rules Integration
The central point of contention revolves around the EU AI Act’s Annex I. Parliament proposes moving sectoral rules currently located in Annex I Section A to Section B, aiming to streamline the legislation by integrating AI‑specific requirements into existing sectoral regimes such as machinery and toys. The Council, however, opposes this merger, arguing that it would dilute the specificity of the AI Act’s high‑risk obligations and create legal uncertainty for manufacturers already subject to sector‑specific safety laws. This disagreement over where the AI Act’s rules should reside proved intractable in the 28 April meeting, halting further progress despite consensus on ancillary matters.

Implications of the Stalemate for the AI Act’s High‑Risk Obligations Timeline
The Digital Omnibus was designed to introduce several targeted changes, most notably a postponement of the entry‑into‑application date for obligations governing high‑risk AI systems. If an agreement is not reached before 2 August—the date on which the current AI Act’s high‑risk provisions become enforceable—the postponement will not take effect, and operators will immediately face the full compliance burden. This outcome could disrupt planned AI deployments, increase compliance costs for developers, and potentially slow innovation in sectors where high‑risk AI is prevalent, such as healthcare, transportation, and industrial automation.

Prospects for Resuming Negotiations After the Council Mandate Shift
Observers anticipate that negotiations will not resume until the new Council mandate is formally in place, a process expected to take at least a few more weeks. The upcoming shift in Council leadership may bring fresh perspectives, but it also introduces uncertainty about whether the incoming presidency will prioritize a rapid resolution or seek to revisit the text from scratch. Stakeholders are urging both institutions to find a compromise quickly, emphasizing that any further delay risks undermining the EU’s goal of delivering a predictable, innovation‑friendly AI regulatory framework.

EU AI Continent Action Plan: One‑Year Review and Ongoing Strands
Parallel to the trilogue difficulties, the EU marked the one‑year anniversary of the AI Continent Action Plan’s publication on 9 April. This comprehensive strategy aims to position the Union as a global leader in artificial intelligence by coordinating investment, talent development, and infrastructure initiatives. Several of its strands have already been launched, notably the European Data Union Strategy, which seeks to create a seamless, cross‑border data environment, and the Apply AI Strategy, focused on accelerating AI adoption in public services and industry. Ongoing work continues to strengthen Europe’s supercomputing capacity through projects such as the AI Gigafactories, which aim to provide large‑scale AI training facilities accessible to researchers and businesses across the continent.

Supporting Initiatives: Data Union, Apply AI, AI Gigafactories, and the Cloud and AI Development Act
Beyond the Action Plan’s flagship programs, the Commission is advancing complementary measures. The Cloud and AI Development Act, intended to facilitate the provision of reliable, secure cloud services tailored to AI workloads, was originally slated for earlier publication but has been postponed until 27 May to allow further consultation. Meanwhile, the AI Gigafactories project continues to receive funding commitments, with the goal of establishing a network of high‑performance computing hubs that can support the training of foundation models while adhering to EU standards on energy efficiency and data sovereignty. These initiatives collectively reinforce the EU’s ambition to build a robust, home‑grown AI ecosystem that can compete with counterparts in the United States and China.

United Kingdom’s Response to AI‑Driven Cyber Threats
Across the Channel, UK authorities have been actively addressing the security implications of AI. The Department for Science, Innovation and Technology issued an open letter to business leaders warning about AI‑driven cyber threats, highlighting how generative models and vulnerability‑scanning tools can be exploited to accelerate attacks. The letter outlined practical steps for organisations, such as adopting AI‑specific threat‑intelligence feeds and reinforcing incident‑response plans. In parallel, the UK’s newly formed AI Security Institute is tasked with researching AI safety risks, while the Cyber Security and Resilience Bill—currently progressing through Parliament—seeks to introduce mandatory reporting requirements for significant AI‑related incidents. The National Cyber Security Centre also released guidance urging firms to harden their defences against AI‑enabled tactics, reflecting a coordinated governmental push to mitigate emerging risks.

Netherlands’ Collaborative Approach to Chatbot Guidelines
In the Netherlands, data‑protection and competition regulators are jointly drafting guidelines for the responsible use of chatbots in customer‑service contexts. The agencies recognise that conversational AI raises both privacy concerns—related to the collection and processing of personal data—and competition issues, insofar as dominant platforms could leverage superior chatbot capabilities to foreclose market entry. The draft guidelines, slated for public consultation this summer, will address topics such as transparency about AI involvement, data minimisation, and mechanisms for users to opt out or seek human assistance. A final version is expected by autumn, aiming to provide clear, enforceable standards that balance innovation with consumer protection.

Belgium, Sweden, and Spain: National Data‑Protection Authorities’ AI Guidance
Several national data‑protection authorities have also published AI‑focused guidance this month. Belgium’s DPA released the first instalment of a series examining AI’s impact on privacy, setting the stage for forthcoming recommendations on lawful basis, purpose limitation, and accountability. Sweden’s Integritetsskyddsmyndigheten announced a budget increase intended to expand its capacity to oversee compliance with the EU AI Act, signalling a proactive stance on enforcement. Meanwhile, Spain’s Agencia Española de Protección de Datos issued detailed guidance on voice‑transcription services that rely on AI, clarifying obligations concerning transparency, accuracy, the right of access, and consent. These national outputs illustrate a harmonising trend: while the EU framework sets baseline rules, member‑state authorities are tailoring advice to sector‑specific applications and local legal nuances.

Conclusion: What the Delays Mean for Europe’s AI Governance Landscape
The failure to secure an agreement on the Digital Omnibus on AI Regulation in April introduces notable uncertainty into the EU’s AI governance timetable. Although the broader AI Continent Action Plan and ancillary initiatives continue to advance, the postponement of high‑risk AI obligations hinges on a swift resolution of the Annex I dispute. Should the stalemate persist beyond the August deadline, businesses may face immediate compliance pressures, potentially dampening AI investment at a critical juncture. Conversely, the parallel efforts underway in the United Kingdom and various Member States demonstrate that AI governance is evolving on multiple fronts—supranational, national, and sectoral—each contributing pieces to a cohesive, risk‑aware regulatory mosaic. The coming weeks will be decisive: a rapid breakthrough in the Council‑Parliament talks could restore the intended timeline for the AI Act’s easing measures, while continued deadlock may force the EU to reconsider its approach to balancing innovation protection with fundamental rights safeguards.
