The EU has adopted the Artificial Intelligence Act – but there are nuances

by alex

The EU has tentatively agreed on rules for artificial intelligence that are meant to limit uses of the new technology and force developers to be more transparent. How these changes will play out in practice, however, remains unclear – and they may be a long way off.

First proposed in 2021, the AI Act has yet to be fully approved, and heated last-minute negotiations softened some of its strictest regulatory provisions.

“In the very short term, the law will not have much direct impact on established AI developers working in the US, since, under its terms, it will likely not take effect until 2025,” says Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights.

Major artificial intelligence players such as OpenAI, Microsoft, Google and Meta will likely continue to battle for dominance, especially as they navigate regulatory uncertainty in the US, Barrett said.

Drafting of the AI Act began before the hype around general-purpose AI (GPAI) tools such as OpenAI's massive GPT-4 language model, and regulating them became a sticking point in the final discussions. The act categorizes its rules by the level of risk an AI system poses to society, or, as the EU statement puts it, “the higher the risk, the stricter the rules.” But some member states worried that such strictness could make the EU an unattractive market for AI.

France, Germany, and Italy lobbied during negotiations to relax the GPAI restrictions and won some compromises on the strictest rules. For example, instead of classifying all general-purpose AI as high-risk, the act proposes a two-tier system, and law enforcement agencies received exceptions to the otherwise outright ban on uses of AI such as biometric identification.

French President Emmanuel Macron criticized the rules, saying the AI Act creates a harsh regulatory environment that discourages innovation.

Barrett said some fledgling European AI companies may find it difficult to raise capital given rules that favor American developers. Companies outside the EU may even decide against opening offices in the region, or block access to their platforms, to avoid being fined for breaking the rules – a risk Europe already faces in the non-AI tech industry under regulations such as the Digital Markets Act and the Digital Services Act.

AI models trained on publicly available but sensitive and potentially copyrighted data, for example, have become a major point of contention. The provisional rules, however, do not create new laws around data collection. Although the EU introduced data protection through the General Data Protection Regulation (GDPR), its AI rules do not stop companies from collecting information, beyond requiring that they comply with GDPR guidelines.

“Under the rules, companies may have to provide a transparency summary, or so-called data labels,” says Susan Ariel Aaronson, director of the Digital Trade and Data Governance Hub and a research professor of international affairs at George Washington University. “But it won’t really change how companies behave around data.”

Aaronson notes that the AI Act still doesn't clarify how companies should treat copyrighted material that is part of a model's training data, beyond requiring that developers comply with existing copyright laws (which leave plenty of gray areas around AI). So the act gives model developers no incentive to avoid using copyrighted data.

The AI Act will also not apply its potentially harsh penalties to open-source developers, researchers, and smaller companies further down the chain. GitHub Chief Legal Officer Shelley McKinley says this is “a positive development for open innovation and for developers helping to solve some of society's most pressing problems.”

Some believe the law's most concrete impact may be the pressure it puts on other policymakers, including American ones, to act more quickly. It is not the first significant regulatory framework for AI: in July, China passed guidelines for companies that want to sell AI services to the public. But the EU's relatively transparent and hotly debated development process has given the AI industry a clear idea of what to expect. While the AI Act may still change, Aaronson says it shows that the EU has listened to the public and responded to concerns about the technology.

Lothar Determann, a data privacy and information technology partner at law firm Baker McKenzie, says the fact that the law builds on existing data rules could also prompt governments to evaluate which regulations already work. And Blake Brannon, chief strategy officer at data privacy platform OneTrust, said more mature AI companies are already setting privacy guidelines in line with laws like GDPR and in anticipation of stricter policies. Depending on the company, he said, the AI Act is an “add-on” to strategies already in place.

The US, by contrast, has largely failed to get AI regulation off the ground, despite being home to major players such as Meta, Amazon, Adobe, Google, Nvidia, and OpenAI. Its biggest move so far is the Biden administration's executive order, which requires government agencies to develop safety standards and builds on voluntary, non-binding agreements signed by major AI players. The few bills filed in the Senate have mostly dealt with deepfakes and watermarking.

This doesn't mean the US will take the same risk-based approach, but it may try to expand its data transparency rules or allow GPAI models a little more leniency.

Navrina Singh, founder of Credo AI and a member of the National Artificial Intelligence Advisory Committee, believes that while the AI Act is a big step for the AI field, things will not change quickly and there is still a lot of work ahead.

“Regulators on both sides of the Atlantic must focus on helping organizations of all sizes design, develop, and deploy artificial intelligence safely, transparently, and accountably,” Singh said, adding that standards and benchmarking processes are still lacking, especially around transparency.

Although the AI Act is not yet finalized, the vast majority of EU countries have accepted that this is the direction they want to go. The law does not retroactively regulate existing models or programs, but future versions of OpenAI's GPT, Meta's Llama, or Google's Gemini will have to take into account the transparency requirements set by the EU. It may not produce dramatic changes overnight, but it makes clear where the EU stands on AI.
