Computerworld NZ
EU member states and the European Parliament failed to agree on changes that would have softened the bloc’s AI Act and pushed back its toughest enforcement deadlines. The talks ran for about 12 hours on Tuesday and ended without an agreement, Reuters reported, citing a Cypriot official who said it had not been possible to reach a deal with Parliament. Cyprus holds the rotating presidency of the EU Council, which negotiates on behalf of member states.

According to the report, the talks broke down over the insistence by some member states and lawmakers that industries already covered by sectoral safety rules be left out of the AI legislation.

Tuesday’s session was the last political trilogue on the Digital Omnibus on AI scheduled before formal adoption, according to the European Parliament’s legislative tracker. Talks will resume in May, and if no deal is reached before August 2, the AI Act’s high-risk obligations will apply that day as originally drafted.

The European Parliament’s co-rapporteurs on the file, Arba Kokalari and Michael McNamara, were scheduled to brief journalists in Strasbourg on Wednesday on the negotiations to update EU rules, but the briefing was cancelled at the last moment. Neither rapporteur’s office immediately responded to a request for comment. The Cypriot presidency press service also did not respond by the deadline.

Why were the deadlines to be pushed back?

The Digital Omnibus on AI, which the trilogue was meant to finalise, was proposed by the European Commission on November 19 last year. The Commission framed it as part of a wider effort to simplify the EU’s digital rulebook for businesses, in response to the Draghi report on EU competitiveness. Both the Council and the Parliament had agreed before the trilogue that the deadlines should be pushed back.
The Council, in its March 13 negotiating mandate, proposed new dates of “2 December 2027 for stand-alone high-risk AI systems, and 2 August 2028 for high-risk AI systems embedded in products.” Parliament voted to adopt the same dates on March 26 by 569 votes to 45, with 23 abstentions.

The deadlines were pushed back because the technical standards companies need in order to demonstrate compliance are not ready. Communications from CEN-CENELEC’s Joint Technical Committee 21, which is drafting the standards, suggest the full set may not be available before December 2026, according to a client note from law firm Morrison Foerster.

What the Council and Parliament could not agree on was an exemption Parliament wanted for AI used in products that already fall under EU safety rules, such as machinery, toys, and medical devices, the report added. The exemption “faced limited enthusiasm in the Council, with different compromise proposals being discussed,” the Center for Democracy and Technology Europe said in its April bulletin.

Consumer, medical, and academic groups have opposed the exemption. Forty such organisations warned in an open letter earlier this month that the proposals “still risk reopening core elements of this framework, crucially weakening the AI Act.”

What happens next

If lawmakers fail to land a deal before August 2, the high-risk obligations apply as drafted, regardless of whether harmonised standards or national enforcement authorities are ready. Patchy readiness across member states does not reduce the risk for businesses, said Enza Iannopollo, vice president and principal analyst at Forrester. “It’s obvious that if the authorities responsible for enforcing the rules are not in place, there won’t be enforcement, despite the deadlines,” she said. “But Member States can accelerate that process and put those authorities in place rather quickly. Some countries have already named them.
The risk is that businesses lose track of developments across each Member State and find themselves exposed to regulatory scrutiny and fines.”

Other parts of the AI Act will keep moving on their original schedule. The prohibitions on unacceptable-risk AI have applied since February 2025. The general-purpose AI rules came into force in August 2025. The transparency obligations under Article 50, including disclosure for chatbot interactions and labelling of deepfakes, are set to apply from August 2.

For CIOs, Iannopollo said, the underlying compliance work continues regardless of trilogue politics. “Waiting is not an option. CIOs must start building the foundations of AI governance and compliance,” she said. “If they are not inventorying their AI use cases, assessing risks in light (also) of the EU AI Act’s risk categorisation, and defining risk management measures, they risk not only fines. They risk reputational damage and the inability to effectively scale their AI initiatives.”

The Cypriot presidency runs until June 30, after which Ireland takes over.