How the EU Rushed to Regulate ChatGPT and Generative AI

As recently as February, generative AI did not feature prominently in EU lawmakers’ plans for regulating artificial intelligence technologies such as ChatGPT.

The bloc’s 108-page proposal for the AI Act, published two years ago, contained only a single mention of the word “chatbot”. References to AI-generated content largely referred to deepfakes: images or audio designed to impersonate humans.

By mid-April, however, Members of the European Parliament (MEPs) were racing to update those rules to catch up with the explosion of interest in generative AI, which has stirred fear and concern ever since OpenAI unveiled ChatGPT six months ago.

That scramble culminated Thursday with a new draft law that identified copyright protection as a core part of the effort to keep AI under control.

Interviews with four lawmakers and two other sources close to the discussion reveal for the first time how in just 11 days this small group of politicians managed to push through what could be landmark legislation reshaping the regulatory landscape for OpenAI and its competitors.

The draft bill is not final and lawyers say it will take years before it is implemented.

However, the speed of its work is a rare example of unanimity in Brussels, which is often criticized for its slow pace of decision-making.

Last-minute changes

Since launching in November, ChatGPT has become the fastest-growing app in history and has sparked a flurry of activity from Big Tech competitors and investment in generative AI startups like Anthropic and Midjourney.

The extreme popularity of such applications prompted EU industry chief Thierry Breton and others to call for regulation of services such as ChatGPT.

An organization backed by Elon Musk, the billionaire CEO of Tesla and Twitter, took it a notch higher by issuing a letter warning of the existential risk from AI and calling for tighter regulations.

On 17 April, dozens of MEPs involved in drafting the law signed an open letter agreeing with parts of Musk’s letter and calling for a summit with world leaders to be organized to find ways to control the development of advanced AI.

That same day, however, two of them – Dragos Tudorache and Brando Benifei – proposed changes that would force companies with generative AI systems to disclose any copyrighted material used to train their models, according to four sources present at the meetings, who requested anonymity because of the sensitivity of the discussions.

The sources said the tough new proposal received support from all parties.

A proposal by conservative MEP Axel Voss – forcing companies to request permission from rights holders before using data – was seen as far too restrictive and something that could stifle the budding industry.

After hammering out the details over the following week, the EU outlined proposed laws that could force an uncomfortable level of transparency on a notoriously secretive industry.

“I must admit that I was positively surprised by how easily we converged on what should be in the text on these models,” Tudorache told Reuters on Friday.

“It shows that there is a strong consensus and a shared understanding on how to regulate at this time.”

The committee will vote on the deal on 11 May and, if successful, it will move on to the next stage of negotiations, the trilogue, where EU member states will debate the contents with the European Commission and Parliament.

A source familiar with the matter said, “We are waiting to see if the deal remains in place until then.”

Big Brother vs. the Terminator

Until recently, MEPs were still not convinced that generative AI deserved any special consideration.

In February, Tudorache told Reuters that generative AI was “not going to be covered” in depth. “It’s another discussion I don’t think we’re going to tackle with this text,” he said.

Citing data security risks over warnings of human-like intelligence, he said: “I fear Big Brother more than the Terminator.”

But Tudorache and his colleagues now agree on the need for laws specifically targeting the use of generative AI.

Under new proposals targeting “foundation models”, companies such as OpenAI, which is backed by Microsoft, would have to disclose any copyrighted material used to train their systems – books, photographs, videos, and more.

Copyright infringement claims have rankled AI firms in recent months, with Getty Images suing Stability AI for using copyrighted photos to train its Stable Diffusion system. OpenAI has also faced criticism for refusing to share details of the datasets used to train its software.

“There have been calls from outside and inside parliament to ban or classify ChatGPT as high risk,” said MEP Svenja Hahn. “The final settlement is innovation-friendly as it does not classify these models as ‘high risk’ but sets requirements for transparency and quality.”

© Thomson Reuters 2023

