European Union lawmakers and governments pushed negotiations on landmark rules for artificial intelligence (AI) into a second day on Thursday, grappling with significant issues including how to regulate generative AI systems like ChatGPT.

A provisional deal on regulating generative AI systems was reportedly reached in the early hours of Thursday, marking progress in the discussions. However, after 20 hours of talks, the use of AI in biometric surveillance and access to source code had yet to be addressed.

The Council of the European Union delayed a press conference scheduled for 0700 GMT as discussions persisted. The talks between EU governments and lawmakers, which began at 1400 GMT on Wednesday, dragged on so long that negotiators ran out of food and coffee, according to sources familiar with the matter.

EU industry chief Thierry Breton acknowledged the marathon negotiations in a social media post, joking: “New day, same trilogue!” The discussions are part of the trilogue process, in which the European Parliament, the Council and the European Commission negotiate the final text of the legislation.

The proposed AI rules, introduced by the European Commission two years ago, aim to provide a comprehensive framework for governing AI technology. However, keeping pace with rapidly evolving technology has made it challenging for EU countries and lawmakers to reach a consensus.

The outcome of these regulations could set a precedent for governments around the world seeking to develop rules for their AI industries. The EU’s approach stands in contrast to the United States’ light-touch strategy and China’s interim rules.

The urgency to finalise the draft rules is heightened by the upcoming European Parliament elections in June. EU countries and lawmakers aim to secure a final deal in time for a vote in the spring; otherwise, the legislative process would face delays.

The proposed legislation was first outlined in early 2021, predating significant advancements in AI technology. The transformative impact of generative AI models, exemplified by OpenAI’s ChatGPT, has added complexity to the regulatory framework.

While terms on foundation models, the systems that underpin generative AI, have been tentatively agreed upon, details of the agreement remain unclear. Disagreements persist, including a proposal by France, Germany, and Italy that makers of generative AI models should be allowed to self-regulate.

Contentious points also include the use of AI in biometric surveillance, with EU lawmakers advocating for a ban while governments seek exceptions for national security and military purposes. The outcome of these discussions holds implications not only for the EU but potentially for global standards in regulating AI technology.