
How The EU AI Act Stands To Impact Business

Forbes Technology Council

Karthik Sj, General Manager, AI at LogicMonitor. Built and scaled multiple 0-1 AI products across public, PE- and VC-backed companies.

AI is in desperate need of a framework—this much, we can agree on. The European Union member states recently announced just that: a comprehensive set of rules known as the EU AI Act, under which third parties assess the risk of AI innovations. The result is a lot of confusion about whether the EU has overstepped, which wouldn't be the first time.

It was eight years ago that the GDPR was introduced: the first broad set of rules on information privacy for the EU member states. I remember it vividly, though not fondly, fearing what would happen to organizations that failed to meet GDPR standards and watching the rules halt innovation for a time.

This AI regulation will be no different. And while I believe this new set of standards misses the mark, it does have a few benefits. These regulations will stand as a foundation that can increase transparency and accountability in AI development. They may offer greater protection for customers, consumers and their data now that AI is becoming more prevalent in our daily lives, and similar to GDPR, this could encourage more responsible technology practices across the industry.

However, I believe these rules go too far, and organizations in the EU (and those with customers operating in the EU) must be prepared for an epic slowdown.

What exactly are these regulations?

Before we pick up our pitchforks, it's important to understand the facts and dive deeper into what organizations in the EU are facing.

According to the EU Parliament, "The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed." The Act categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk and minimal risk, with different requirements for each.

Each AI tool will need to be assessed by a third-party provider before being put onto the market to weed out any risk in its system, and if it poses a threat to people, the tech will likely be banned. Other requirements apply to tech that classifies individuals based on behavior, economic status or personal characteristics. This includes AI systems that impact safety or fundamental rights, such as those used in toys, aviation and medical devices. Additionally, technologies like ChatGPT must comply with the EU's transparency requirements.
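To make the tiering concrete, here is a minimal sketch of how a team might triage its own AI systems against the Act's four categories. The tier names come from the Act itself, but the use-case labels, the mapping table and the default-to-high rule are hypothetical simplifications for illustration—not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"  # e.g., social scoring; likely banned
    HIGH = "high risk"                  # e.g., safety or medical components
    LIMITED = "limited risk"            # e.g., chatbots; transparency duties
    MINIMAL = "minimal risk"            # e.g., spam filters; few obligations

# Hypothetical triage table mapping internal use cases to tiers for a first-pass audit.
TRIAGE = {
    "behavioral_social_scoring": RiskTier.UNACCEPTABLE,
    "medical_device_component": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return a first-pass tier; unknown systems default to HIGH until assessed."""
    return TRIAGE.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for uc in ["customer_support_chatbot", "new_untagged_model"]:
        print(uc, "->", triage(uc).value)
```

Defaulting unknown systems to high risk is a deliberately conservative choice: it forces an explicit review before anything ships rather than letting unassessed tools slip through.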

This field is evolving rapidly, and these regulations, being overly specific, may not leave room for development as the industry changes and old technologies become outdated. Categorizing risk is subjective and blanket bans on technologies might prevent benefits that we haven’t yet imagined. Not all AI systems will pose the same level of risk, and treating them all with the same level of scrutiny could waste resources and slow innovation.

So you might be affected…

Building an AI product isn't easy. Traditionally, it involves identifying a problem, gathering data, developing and training models and then deploying and monitoring its performance. With these regulations, several layers of complexity are knocking on the door of an already challenging process.

EU businesses need to be proactive and strategic while they wait. First, they must conduct thorough assessments of their AI systems—not just ticking boxes and calling it quits. These organizations need to document their development processes meticulously, creating a clear audit trail that demonstrates their commitment to ethical AI development. For example, they should understand what data is being used to train their LLMs and where it is stored, as sketched below.
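As one sketch of what that audit trail might look like in practice, the record below captures where a training dataset came from and where it lives today. The field names and file path are assumptions chosen for illustration; the actual documentation a regulator expects will depend on how the Act is interpreted.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TrainingDataRecord:
    # Hypothetical fields; adapt to your own compliance checklist.
    dataset_name: str
    source: str             # where the data was obtained
    storage_location: str   # where it is stored today
    contains_personal_data: bool
    license: str

def log_record(record: TrainingDataRecord, path: str = "ai_audit_trail.jsonl") -> None:
    """Append one timestamped entry to an append-only JSONL audit trail."""
    entry = {"logged_at": datetime.now(timezone.utc).isoformat(), **asdict(record)}
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_record(TrainingDataRecord(
    dataset_name="support_tickets_2023",
    source="internal CRM export",
    storage_location="eu-west-1 encrypted bucket",
    contains_personal_data=True,
    license="internal use only",
))
```

An append-only log like this is easy to produce during development and gives auditors a chronological record rather than a reconstruction after the fact.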

Investing in robust testing and validation procedures is also crucial. This might involve developing new methodologies to assess AI fairness, bias and potential negative impacts. It's also imperative to start an ethics committee, or a cross-functional AI committee, to oversee projects and bring diverse perspectives that catch issues early in the development process.
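One simple, widely used check of this kind is demographic parity: comparing how often a model makes positive predictions for different groups. The sketch below computes that gap from raw predictions; the sample data, group labels and review threshold are illustrative assumptions, and a real validation suite would cover many more metrics.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate across groups,
    along with the per-group rates.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    positives, totals = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data: a gap above some agreed threshold (say 0.1) flags the model for review.
gap, rates = demographic_parity_gap(
    [1, 0, 1, 1, 0, 0, 1, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(f"per-group rates: {rates}, gap: {gap:.2f}")
```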

Above all, it's most important to remain informed about the evolving regulatory requirements. This requires ongoing vigilance and a willingness to adapt quickly as interpretations of the regulations evolve. The organizations that are prepared are the ones that will succeed as new rules develop over time.

Prepare for an innovation winter.

Since GDPR went into effect in Europe, a staggering 60% of companies' business processes have become more complicated. However, it's worth noting that GDPR has also led to improved data protection practices and increased consumer trust in many cases.

It will be no different with these new AI standards, because the moment you require companies to seek third-party risk assessments, things slow down. Initially, businesses should expect a steady decline in AI innovation and adoption as companies grapple with the regulatory requirements: an innovation winter.

As the impact of these guidelines becomes more apparent, there will likely be growing pressure from European technology companies to make the regulations more flexible: broadening the definition of risk and giving companies additional room to prosper.

Organizations should also anticipate debate on a global scale regarding how to balance innovation with regulation. Other countries will be watching closely to see how the EU’s approach advances, which could inform regulatory strategies in other regions.

Ultimately, there will be a period of adjustment, confusion and discomfort. As innovation slows and the EU falls behind, regulators may need to fine-tune their approach to find a better balance that allows businesses to succeed.

Where do we go from here?

Whether or not your organization operates within the EU, be prepared to feel the impact. Organizations in other regions that have customers in the EU will still need to comply, and non-EU companies may find themselves at a disadvantage compared to companies built from the ground up to comply.

But businesses that successfully navigate these standards could position themselves as trusted providers in a market where many have been scared away. The goal should be to find a balance between innovation and responsible development, ensuring that AI technologies benefit society.

