Authorities around the world are racing to frame rules for artificial intelligence, including the European Union, where the draft law faced a pivotal moment on Thursday.
A European Parliament committee voted to strengthen the key legislative proposal as part of a years-long effort by Brussels to build guardrails for artificial intelligence. Those efforts are made all the more urgent by the rapid advances of chatbots like ChatGPT, which show both the benefits the emerging technology can bring and the new threats it poses.
Here’s a look at the EU’s Artificial Intelligence Act:
How do the rules work?
The AI Act, first proposed in 2021, would regulate any product or service that uses artificial intelligence systems. The Act would classify AI systems according to four levels of risk, from minimal to unacceptable. Applications with higher risks would face tougher requirements, including being more transparent and using accurate data. Think of it as a “risk management system for AI,” said Johann Laux, an expert at the Oxford Internet Institute.
What are the risks?
One of the main goals of the European Union is to defend against any AI threats to health and safety, and to protect fundamental rights and values.
This means that some AI uses are an absolute no-no, such as “social scoring” systems that judge people based on their behavior. AI that exploits vulnerable people, including children, or that uses subliminal manipulation that could result in harm, such as an interactive talking toy that encourages dangerous behavior, is also prohibited.
Lawmakers upped the ante by voting to ban predictive policing tools, which crunch data to forecast where crimes will occur and who will commit them. They also approved a broad ban on remote facial recognition except for some law enforcement exceptions, such as preventing a specific terrorist threat. The technology scans passersby and uses AI to match their faces to a database.
“The aim is to avoid a controlled society based on AI,” Brando Benifei, an Italian lawmaker who helped lead the European Parliament’s AI efforts, told reporters on Wednesday. “We think these technologies could be used for bad as well as good, and we consider the risks to be very high.”
AI systems used in high-risk categories such as employment and education, which affect the course of a person’s life, face tough requirements such as being transparent with users and implementing risk assessment and mitigation measures.
The EU’s executive branch says most AI systems, such as video games or spam filters, fall into the low or no risk category.
What about ChatGPT?
The original 108-page proposal barely mentioned chatbots, only requiring them to be labeled so users knew they were interacting with a machine. Negotiators later added provisions to cover general-purpose AI such as ChatGPT, subjecting them to some of the same requirements as high-risk systems.
An important additional requirement is to fully document any copyrighted material used to teach AI systems how to produce text, images, videos or music similar to human work. This will let content creators know whether their blog posts, digital books, scientific articles or pop songs have been used to train the algorithms that power systems like ChatGPT. Then they could decide whether their work had been copied and seek redress.
Why are EU regulations so important?
The EU is not a big player in cutting-edge AI development. That role has been taken by America and China. But Brussels often plays a trendsetting role with regulations that become de facto global standards.
“Europeans, globally speaking, are quite wealthy and there are a lot of them,” so companies and organizations often decide that the sheer size of the bloc’s single market, with 450 million consumers, makes it easier to comply than to develop different products for different regions, Laux said.
But it is not just a matter of cracking down. Laux said that by creating common rules for AI, Brussels is also trying to develop the market by building trust among users.
“The thinking behind this is that if you can get people to trust AI and applications, they will use it more,” Laux said. “And when they use more of it, they will unlock the economic and social potential of AI.”
What if you break the rules?
Violations would attract fines of up to 30 million euros ($33 million) or 6% of a company’s annual global revenue, which in the case of tech companies such as Google and Microsoft could amount to billions.
What will happen next?
It could take years for the rules to take full effect. EU lawmakers are now due to vote on the draft legislation in a plenary session in mid-June. It then moves into three-way talks involving the bloc’s 27 member states, the Parliament and the executive European Commission, where it could face more changes as they wrangle over details. Final approval is expected by the end of the year, or early 2024 at the latest, followed by a grace period, often around two years, for companies and organizations to adapt.
(This story has not been edited by News18 staff and is published from a syndicated news agency feed)