Navigating the EU's AI Regulation: Balancing Innovation and Ethics

2 May 2024

By Aki Antman, CEO & Founder of Sulava, President of AI & Copilot at The Digital Neighborhood, Peter Charquero Kestenholz, Founder, Head of Innovation & AI at Projectum and Erik David Johnson, Chief AI Officer at Delegate.

This blog is the third in a series of eight blogs about the challenges and opportunities in AI technology.

Is the EU's new AI regulatory path the right one, and what does it mean for European businesses?

Understanding the regulation framework

Recently, the European Parliament made history by passing the world's first comprehensive law on artificial intelligence. The AI Act not only sets limits on AI usage but also aims to preserve innovation and fair competition.

Now, will this stifle innovation? Yes, of course. However, generative AI and large language models are transformative enough to reshape society, regulation or not, just like the Internet and the Industrial Revolution did. The mission must be to ensure that we shape society in a way that is meaningful for citizens and businesses.

To set the framework, the EU AI Act requires every application of AI to be placed into one of four risk segments: no or low risk, limited risk, high risk, and unacceptable risk.

So, if you are developing an AI application, you must eventually place it in one of these four segments, and each segment carries different obligations.
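As an illustrative sketch only (not legal guidance), the tiering can be thought of as a lookup from risk class to obligations. The tier names follow the four segments above; the obligation lists here are simplified examples of the kinds of duties associated with each tier, not a restatement of the Act's text:

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk segments of the EU AI Act."""
    MINIMAL = "no or low risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"


# Simplified, illustrative obligations per tier -- examples only,
# not the Act's actual requirements in full.
OBLIGATIONS = {
    RiskTier.MINIMAL: [
        "no specific obligations (voluntary codes of conduct)",
    ],
    RiskTier.LIMITED: [
        "transparency, e.g. disclose that users are interacting with AI",
    ],
    RiskTier.HIGH: [
        "risk management system",
        "data governance",
        "human oversight",
        "conformity assessment before entering the market",
    ],
    RiskTier.UNACCEPTABLE: [
        "prohibited: may not be placed on the EU market",
    ],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the structure is that the classification decision comes first; everything a developer must do afterwards follows from which tier the application lands in.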

“You can compare our approach to the energy sector; the easiest way to get some energy up and running right now is to think of coal, gas, and oil. In Europe, we strive to produce energy from water, wind, and sun, which is more complicated, takes more time, requires investments, and can fail along the way.”

Peter Kestenholz
Founder, Head of Innovation & AI, Projectum

The definition of AI application

The AI Act is directed at the actual application of AI. So even if you are based in a country outside the EU, the law comes into play if your AI solution is used inside the EU. The definition of AI determines whether a solution counts as using this technology and must therefore adhere to the AI Act and everything that follows from it.

For now, the EU's definition of AI is very broad, and many things fall under the term. This is one area where, in the worst case, the definitions can slow the pace of innovation.

If AI becomes too regulated and too complex for non-AI experts, European companies will fall behind competitors in countries with a more laid-back approach to regulation.

Short-term gains versus long-term ethical solutions

We have repeatedly seen companies fined billions of dollars for illegal solutions, but for some of them, those billions amount to just a week's income. So, is it perhaps worth it for the affected businesses to operate on the edge of the law?

The correct answer is, of course, no. European businesses may lose ground to the competition in the short term, but the EU's perspective is also the future-proof, responsible long-term one.

Admittedly, choosing the future-proof, ethical solution limits your ability to capitalize on opportunities right now. Quick wins and what is ethically and morally right in the long term are not always aligned.

Stop thinking and start doing

The US is undoubtedly well ahead of European businesses when it comes to AI, as are Asian countries and parts of the Middle East.

Roughly speaking, there are three models for embracing AI in the world. The first is no regulation at all. The second is the American model, also applied in many other countries: you can do almost anything, but you can be stopped at any time if your solution turns out to harm the public or raise ethical concerns. The third is the European model, which has in many ways potentially slowed things down because of its ‘preventive’ approach.

The EU's AI Act has just been passed and has not yet entered into force. Still, European businesses are hesitant to use AI because they fear they may not adhere to the upcoming regulation.

But the waiting game is over, and rapid global AI development is not waiting for Europe. Everyone now knows the details of the EU AI Act, so it is all about doing instead of pondering whether something is allowed. Now is the time to come up with AI innovation: if the idea is great and the solution is ethical, start investing and make sure you are compliant.
