Andrada Fiscutean
Freelance writer

Ways IT leaders can meet the EU AI Act head-on

Feature
May 08, 2024
Artificial Intelligence | CIO | Development Tools

The biggest mistake companies of all sizes could make is to put conformity before innovation, according to EU AI Act co-rapporteur Dragoș Tudorache.


The European Union’s AI Act, a regulation aiming to ensure AI is human-centric and trustworthy, is one step away from becoming reality. The text is expected to be published in the Official Journal at the beginning of June and will enter into force 20 days after publication.

This regulation, poised to be the world’s first comprehensive law for AI, is designed to maintain a balance between encouraging technological advancement and protecting the rights of European consumers.

“We are regulating as little as possible and as much as needed, with proportionate measures for AI models,” says Commissioner for the Internal Market Thierry Breton, adding that the AI Act “will be a launchpad for EU startups to lead the global race for trustworthy AI,” and will also benefit small and midsize businesses. “We’re making Europe the best place in the world for trustworthy AI,” he says.

The AI Act primarily affects developers of AI tools, as well as deployers of high-risk AI systems, and applies to both public and private organizations within and outside the EU, as long as their AI systems are used or sold in the European market or affect European citizens. Barring a few exceptions, it will apply across all EU member states 24 months after it enters into force, and rules banning certain AI practices will have to be followed by the end of this year.

Rules on general-purpose AI models, governance, and penalties will apply 12 months after the law enters into force, and requirements for high-risk systems will apply 36 months after that date.

“Before going into panic mode, organizations need to understand what this legislation actually changes,” says co-rapporteur Dragoș Tudorache, who was lead negotiator of the AI Act in the European Parliament together with Brando Benifei. “The vast majority of AI out there wouldn’t be touched by the act, because it happens in areas that aren’t identified as risk areas by the regulation.”

The AI systems the regulation focuses on fall mainly into the unacceptable-risk and high-risk categories. The first covers banned AI applications, such as systems that assess individuals based on socioeconomic status, real-time remote biometric identification by law enforcement in public spaces, and emotion recognition in the workplace and at school. The second covers areas like critical infrastructure, exam scoring, robot-assisted surgery, credit scoring that could deny loans, and resume-sorting software.

Organizations that work with high-risk systems and know they’ll be affected by this law should start preparing. “If it’s a company that develops AI systems, then all of those obligations that have to do with technical documentation, with transparency for data sets, can be anticipated,” Tudorache says.

Companies looking to incorporate AI into their business models should also make sure they can trust the technology they integrate, thoroughly understanding the systems they deploy before rollout to prevent complications down the line.

The biggest mistake organizations can make is failing to take the AI Act seriously: the law is disruptive and will affect many business models. “I expect the AI Act to create bigger ripples than the GDPR,” says Tim Wybitul, head of privacy and partner at Latham & Watkins in Germany.

Adapting to a moving target

As the AI Act begins to reshape the landscape of European technology, industry leaders are trying to navigate its implications. Danielle Jacobs, CEO of Beltug, the largest Belgian association of CIOs and digital technology leaders, has been discussing the AI Act with her colleagues, and they’ve identified several key challenges and actions.

Many Belgian CIOs, for instance, want to educate their employees and set up awareness programs focused on exploring the most effective ways to use gen AI.

When it comes to AI, every company is an early adopter, says Jacobs, because the landscape is continuously evolving. “There are no established best practices or IT plans,” she says, which complicates preparations for the AI Act.

Some of her colleagues have voiced security and privacy concerns about AI-powered tools commonly used in business settings, such as meeting-transcription software. Others note that employees don’t always report the third-party tools they use.

Beltug recommends that organizations start with data classification, and companies should also carefully review the permissions associated with the AI applications they use, Jacobs adds.

These steps can help organizations gain a clear understanding of where and how AI is used. This clarity is crucial because most organizations underestimate the range of systems the AI Act applies to. “They often concentrate on what they perceive to be ‘classic AI’ and overlook plugins and other integrated AI features,” Wybitul says.

He also recommends that companies read the AI Act carefully and in full. “This isn’t easy, as the law is very complex and often vague,” he says. “The many references to other EU laws don’t make this task any easier.”

But the vagueness of the text is a feature, not a bug, says Tudorache, because it allows for flexibility. “We recognized we don’t have enough knowledge and experience, given the early stage of the technology, to know exactly how to measure the compliance of these models, therefore we introduced flexibility in implementation,” he explains. “We thought it would be good to allow the interaction between the regulator and the developers, and build a code of practice that afterward will be implemented by the EC. You won’t find this in many other pieces of legislation at European Union level.”

In addition to the AI Act, organizations have to stay updated with other laws and directives discussed in the EU. The EC issued a proposal for an AI liability directive in September 2022 that won’t be adopted in this Parliament, but will likely be a priority for the next one, say EU policy experts Rob van Kranenburg and Gaelle Le Gars.

What the AI Act doesn’t include

Although the AI Act is hundreds of pages long, it doesn’t cover everything AI-related. “The most surprising is how little in the text addresses the key question of autonomous and semi-autonomous systems involving robots, vehicles, and drones,” van Kranenburg says. “There are only two explicit mentions of autonomous systems in the entire text.”

Le Gars adds that she and van Kranenburg expected more protective measures in the legislation, given the ongoing military conflicts in Europe and elsewhere.

Tudorache argues that while AI will indeed become a new trend in warfare, the EU has limited capacity to regulate its military uses. “Defense is not a competence where the EU can regulate,” he says. “It remains a national competence,” which means that member states should independently establish and enforce their own rules. Tudorache adds that NATO, the military alliance most European countries are part of, “has already started having a very serious debate on the impact of AI on warfare.”

As for civilian sectors such as aviation, automotive, and medical devices, these are already heavily regulated within the EU, Tudorache adds. The AI Act is designed to enhance, not duplicate, existing regulation, so the rationale was to integrate it with established rules rather than impose additional layers.

The document also says very little about the labor market, a sector AI will impact profoundly, because the EU doesn’t have competence to regulate it, Tudorache says. Such decisions, too, are made at the member-state level.

A warning not to stifle innovation

Some argue that the AI Act could place Europe at a competitive disadvantage, given that the US and China have fewer guardrails for AI. Tech lawyer Jan Czarnocki, based in Switzerland — a non-EU country — suggests that this regulation might deter foreign companies from entering the European market and impede local innovation.

Contrary to these concerns, Tudorache argues that regulating AI is essential and the AI Act should promote, not obstruct, innovation. In fact, the biggest mistake companies of all sizes could make is to be discouraged by this legislation and put conformity before innovation, he says.

“First, we need to be able to innovate, and then we see how that innovation fits and respects the norms or not,” Tudorache adds. “If we subsume innovation to compliance rather than the other way around, that would be a mistake, because it would create almost a self-censoring attitude toward innovation and creativity, and that’s exactly what we don’t want to achieve with this regulation.”

Moreover, Tudorache says the regulation was crafted to support the growth of SMEs in a responsible manner. “I think the word SME appears at least 40 times in the AI Act,” he says. “And it appears because there are dedicated rules for SMEs meant to facilitate their access to [regulatory] sandboxes for free.” These sandboxes allow businesses to test innovative products, services, or business models under the oversight of a regulator.

Tudorache further highlights that the AI Act will enable constant interaction between companies and future regulators at both national and European levels.

“It’ll be important to bring the two worlds together: those responsible for implementing the legislation, and those meant to apply it,” he says. “From our point of view, it would make the implementation much smoother and the compliance much easier to bear.”
