As recently as February, EU lawmakers drafting rules for artificial intelligence (AI) paid little attention to generative AI technologies like ChatGPT.
The bloc's 108-page proposal for the AI Act, published two years ago, did not mention the word "chatbot" even once. Its only references to AI-generated content concerned deepfakes: images or audio designed to impersonate real people.
But by mid-April, members of the European Parliament (MEPs) were racing to update those rules to catch up with an explosion of interest in generative AI, which has provoked both fascination and alarm since OpenAI unveiled ChatGPT six months ago.
That scramble culminated Thursday in a new draft law that makes copyright protection a central plank of the effort to keep AI in check.
Interviews with four lawmakers and two other sources close to the discussions reveal for the first time how, in just 11 days, a small group of politicians hammered out what could become landmark legislation, reshaping how OpenAI and its rivals are regulated.
The draft bill is not final, and lawyers say it will likely take years to come into force.
But the speed of their work is a rare feat in Brussels, which is often criticized for the slow pace of its decision-making.
Last-minute changes
Since its release in November, ChatGPT has become the fastest-growing app in history, spurring a flurry of activity from Big Tech competitors and investment in generative AI startups like Anthropic and Midjourney.
The runaway popularity of such apps has prompted EU industry chief Thierry Breton and others to call for services like ChatGPT to be regulated.
A group backed by Elon Musk, the billionaire CEO of Tesla Inc. and owner of Twitter, raised the stakes with an open letter warning of existential risks from AI and calling for stricter rules.
On April 17, dozens of MEPs involved in drafting the law signed their own open letter agreeing with parts of Musk's letter and urging world leaders to convene a summit to find ways to control the development of advanced AI.
On the same day, however, two of them, Dragos Tudorache and Brando Benifei, proposed changes that would force companies with generative AI systems to disclose any copyrighted material used to train their models, according to four people familiar with the talks, who asked to remain anonymous because of the sensitivity of the discussions.
Sources said the tough new proposal attracted support from across the political spectrum.
Conservative MEP Axel Voss wanted to go further, requiring companies to seek permission from rights holders before using their data; others judged that too restrictive and liable to strangle the nascent industry.
Instead, the EU has proposed rules that would force a notoriously secretive industry to be more transparent than it might like, with the details to be hammered out next week.
“I have to say that I was pleasantly surprised by how quickly we agreed on what should be in the text of these models,” Tudorache told Reuters on Friday.
“It shows that there is a strong agreement and a shared understanding of how to regulate at this time.”
The committee will vote on the deal on May 11; if it passes, it will advance to the next stage of negotiations, known as the trilogue, in which EU member states and the European Commission will debate the details with Parliament.
"We're waiting to see if the deal holds until then," said a person familiar with the matter.
Big Brother vs. the Terminator
Until recently, MEPs remained unconvinced that generative AI deserved any special attention.
In February, Tudorache told Reuters that generative AI was "not going to be covered" in depth. "This is another discussion I don't think we are going to deal with in this text," he said.
"I fear Big Brother more than the Terminator," he added, explaining that he was more concerned about risks to data privacy than about threats from seemingly human-like intelligence.
But Tudorache and his colleagues now agree on the need for rules governing how generative AI is used.
Under new proposals targeting "foundation models," companies such as OpenAI, which is backed by Microsoft Corp, would have to disclose any copyrighted material, such as books, photos and videos, used to train their systems.
AI companies have faced accusations of copyright infringement in recent months: Getty Images has sued Stability AI, the maker of Stable Diffusion, for using copyrighted images to train its system, and OpenAI has been criticized for refusing to share details of the datasets used to train its software.
"There were calls both inside and outside parliament to ban ChatGPT or classify it as high-risk," said MEP Svenja Hahn. "The final compromise is innovation-friendly, as it does not classify these models as 'high risk,' but sets requirements for transparency and quality."