Private firms were left to develop AI technology at a breakneck pace, giving rise to systems such as Microsoft-backed OpenAI's ChatGPT and Google's Bard.
Lionel Bonaventure | AFP | Getty Images
A key committee of lawmakers in the European Parliament has approved a first-of-its-kind artificial intelligence regulation, bringing it one step closer to becoming law.
The approval marks a landmark moment in the race among authorities to get a handle on AI, which is evolving at breakneck speed. The law, known as the European AI Act, is the first law for AI systems in the West. China has already drafted rules to govern how companies develop generative AI products like ChatGPT.
The law takes a risk-based approach to regulating AI, where the obligations for a system are proportionate to the level of risk it poses.
The rules also set out requirements for providers of so-called "foundation models" such as ChatGPT, which have become a key concern for regulators given how advanced they are and fears that even skilled workers will be displaced.
What do the rules say?
The AI Act categorizes applications of artificial intelligence into four levels of risk: unacceptable risk, high risk, limited risk, and minimal or no risk.
Applications with unacceptable risk are banned by default and cannot be deployed in the bloc. They include:
- Artificial intelligence systems that use subliminal techniques or manipulative or deceptive techniques to distort behavior
- Artificial intelligence systems that exploit the vulnerabilities of individuals or specific groups
- Biometric categorization systems based on sensitive attributes or characteristics
- Artificial intelligence systems used for social scoring or credibility assessment
- Artificial intelligence systems used for risk assessment predicting criminal or administrative offences
- Artificial intelligence systems that create or augment facial recognition databases through untargeted scraping
- Artificial intelligence systems inferring emotions in law enforcement, border management, workplace and education
Several lawmakers had called for the measures to be expanded to ensure they cover ChatGPT.
To this end, requirements were imposed on "foundation models," such as large language models and generative AI.
Developers of foundation models will be required to apply safety checks, data governance measures and risk mitigations before making their models public.
They will also be required to ensure that the training data used to inform their systems does not infringe copyright law.
"Providers of such AI models would be required to take measures to assess and mitigate risks to fundamental rights, health and safety, the environment, democracy and the rule of law," Pehlivan, who co-leads a telecommunications, media and technology, and intellectual property practice in Madrid, told CNBC.
"They would also be subject to data governance requirements, such as examining the suitability of the data sources and possible errors."
It should be emphasized that, although the law has been passed by lawmakers in the European Parliament, it is still a long way from becoming law.
Why now?
Private firms have been left to develop AI technology at a breakneck pace, giving rise to systems like Microsoft-backed OpenAI's ChatGPT and Google's Bard.
On Wednesday, Google announced a slew of new AI updates, including an advanced language model called PaLM 2, which the company says outperforms other leading systems on some tasks.
Advanced AI chatbots such as ChatGPT have captivated many technologists and academics with their ability to generate humanlike responses to user prompts, powered by large language models trained on massive amounts of data.
But AI technology has been around for years and is integrated into more applications and systems than you might think. It determines, for example, what viral videos or food pictures you see in your TikTok or Instagram feed.
The aim of the EU's proposal is to provide some ground rules for companies and organizations using AI.
Tech industry response
The regulations have raised concerns in the tech industry.
The Computer and Communications Industry Association has expressed concern that the scope of the AI Act has been widened too far and could catch forms of artificial intelligence that are harmless.
"It is worrying to see that broad categories of useful AI applications – which pose very limited or no risk – would now face strict requirements, and might even be banned in Europe," Boniface de Champris, policy manager at CCIA Europe, told CNBC via email.
"The European Commission's original proposal for the AI Act takes a risk-based approach, regulating specific AI systems that pose a clear risk," de Champris added.
"MEPs have now introduced all kinds of amendments that change the very nature of the AI Act, which now assumes that very broad categories of AI are inherently dangerous."
What the experts say
Dessi Savova, head of continental Europe for the tech group at law firm Clifford Chance, said the EU rules would set a "global standard" for AI regulation. But she added that other jurisdictions, including China, the US and the UK, are quickly developing their own responses.
"The long reach of the proposed AI rules inherently means that AI players in all corners of the world need to care," Savova told CNBC via email.
"The real question is whether the AI Act will set the only standard for AI. China, the US and the UK, to name a few, are defining their own AI policies and regulatory approaches. Undoubtedly, they will all be watching the negotiations on the AI Act closely as they adjust their own approaches."
Savova added that the latest AI bill to come out of Parliament would put into law many of the ethical AI principles that organizations have been pushing for.
Sarah Chander, senior policy adviser at European Digital Rights, a Brussels-based digital rights group, said the rules would require foundation models such as ChatGPT to "undergo testing, documentation and transparency requirements."
"While these transparency requirements will not eliminate the infrastructural and economic concerns that come with the development of these vast AI systems, they do require technology companies to disclose the amount of computing power required to develop them," Chander told CNBC.
"There are currently several initiatives around the world to regulate generative AI, such as in China and the US," Pehlivan said.
"However, the EU's AI Act is likely to play a pivotal role in the development of such legislative initiatives around the world and make the EU once again a standard-setter on the international stage, similarly to what happened with the General Data Protection Regulation."