The Problems with the Recent AI Regulation
You can watch the video below or on my YouTube channel here.
In this article, I analyze the differences between the recent attempts at regulating Artificial Intelligence.
The European Commission recently released a proposal for a legal framework, but as detailed as it is, it still seems far from what everyone was hoping for. At the same time, the FTC published a blog post with a different idea on how to regulate AI. Finally, in Norway, they are experimenting with an innovation-lab approach towards a more comprehensive strategy.
Recently the EC released a proposal for a legal framework to regulate artificial intelligence. Unfortunately, it seems to still be quite far from what we need.
Hi friends, welcome back to my channel. If you’re new here, my name’s Bobby. I’m a tech entrepreneur and business advisor based in the Netherlands. On this channel, I explore the strategies and tools that help us understand technology and build better business ventures.
Today I look at quite a controversial topic – the recent developments in the regulatory space towards building a legal framework for artificial intelligence.
First of all, why do we need to regulate AI?
The need for regulation
Right now anyone can develop an algorithm without any concern for biases and discrimination and can easily deploy it to the public. There are plenty of examples of such cases, but let’s look at one from last year. Amid the BLM protests, three technology giants – IBM, Microsoft, and Amazon – banned police use of their own facial recognition software, which has proven biases against people with darker skin tones.
Back then they said that they would prohibit the use of such tools until there was an appropriate law governing it. One year later, we still don’t have one. In the end, no one was really happy with this decision: activists said the corporate reaction was too little, too late, since these companies were the ones responsible for popularizing the technology in the first place. On the other hand, the police and the government realized that they are quite dependent on the will of the tech giants. Since then, several state jurisdictions, like California, have decided to completely ban the use of facial recognition by police, hotels, restaurants, and retail businesses.
And another, more personal example.
Two years ago I attended an event organized by the UN on accountability of AI and Big Data. There were quite a few high-ranking governmental representatives there, all giving reasons as to why AI needs to be regulated urgently. One of them, a person from the EC, argued that AI for anything other than entertainment should be banned. I asked her: “What if there’s a company that claims to be able to cure cancer using an AI system?” Her response was still negative. Then I followed up: “Imagine your child is suffering from cancer, would you still ban that same company from building its AI system?” At this point, she stopped to think and answered, “I don’t know, I haven’t thought of that”.
If this sounds like a mess, it is, and it’s just the tip of the iceberg. This is why we need a framework dealing with such cases, and the existence of such a framework will also help AI with its image problem.
The EC risk-based approach
The most important recent move came a couple of months ago, at the end of April, when the European Commission released its proposal towards building a framework and an environment where:
- AI systems offered and used on the market are safe and respect existing laws on fundamental rights and EU values
- There’s legal certainty facilitating investment and innovation in AI
- Existing laws on fundamental rights and safety applicable to AI systems are enforceable
- The development of a single market for lawful, safe, and trustworthy AI applications is facilitated and market fragmentation is prevented
If we stop here, it sounds really good, and we can see that the core of the EC’s intentions is to preserve fundamental rights and European values. They are basically saying that we can’t keep winging it, turning a blind eye to biases and discrimination, and allowing applications to be developed just because they can be. Rather, they aim to turn it around and build the image of AI as a technology that we can trust.
Around the same time, Ursula von der Leyen, president of the European Commission, said: “Artificial intelligence must serve people, and therefore artificial intelligence must always comply with people’s rights. We want to encourage our citizens to feel confident to use it.”
HOWEVER, even though it’s a whopping 108 pages long (with another 17 pages in annexes), this is still very much a work in progress. It is highly complex, which makes it difficult to follow, and there are already a bunch of backdoors, exceptions, and ambiguities that leave too much room for interpretation. Let’s look at some examples.
The EU AI risk categories
The EC defines a list of applications separated into different risk categories.
There’s minimal risk, which allows, as they call it, “free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category.”
Okay, that’s fine.
Then we have limited risk, which is where chatbots fall. According to the EC, users should be aware that they are interacting with a machine so they can make an informed decision to continue or step back.
I think that’s only fair and should have already been the case for a few years now.
Let’s move to the spicy stuff: high-risk applications, which also received the most attention. These have to go through an “adequate risk assessment”, complete logging and traceability, detailed documentation of user data and techniques, and the highest level of security and accuracy. Notice how none of these “strict obligations” come with performance measures to assess against.
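To make the “complete logging and traceability” requirement a bit more concrete, here is a minimal sketch of what an audit trail for a high-risk system’s decisions could look like. The field names and the `audit_log.jsonl` file are my own illustrative assumptions; the proposal does not prescribe any particular schema.

```python
import hashlib
import json
import time

AUDIT_LOG_PATH = "audit_log.jsonl"  # hypothetical append-only log file


def log_decision(model_name, model_version, features, prediction):
    """Append one traceable record per automated decision."""
    record = {
        "timestamp": time.time(),
        "model": model_name,
        "model_version": model_version,
        # Hash the raw input so a decision can be traced back later
        # without storing personal data in the log itself.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(AUDIT_LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")


# Example: a hypothetical credit-scoring model logging one decision
log_decision("credit_scorer", "1.4.2", {"income": 42000, "age": 31}, "reject")
```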
The applications falling into this category include: Transportation of citizens (e.g. self-driving cars), educational tools like automated exam scoring, robot-assisted surgeries, worker management and recruitment, credit scoring, border control, and the list goes on.
This makes perfect sense… BUT… what about a chatbot used as the first filter for recruitment or loan applications? I assume it falls into the high-risk category but who will be looking into this? Where is the line?
Well, on the question of who… they are setting up a “European Artificial Intelligence Board”, an entity that is supposed to advise the EC on the implementation of the regulation but that so far seems to have bureaucracy ingrained in its core. This, however, still leaves all the backdoors open.
Finally, there’s the unacceptable risk. Let me quote this directly from the EC proposal: “Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods, and rights of people will be banned. This includes AI systems or applications that manipulate human behavior to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behavior of minors) and systems that allow ‘social scoring’ by governments.”
DOES A TOY REALLY NEED TO HAVE ML IN IT TO BE BANNED FOR MANIPULATING CHILDREN INTO DANGEROUS BEHAVIOR?
The flaws in the proposal
Do you see the problem? We move from “too vague to be applicable” to “too narrow”, raising the question of why this is even an issue in 2021…
According to Sarah Chander, a senior policy adviser at European Digital Rights, the list of exceptions is so wide that it defeats the purpose of even trying to ban anything. Other critics argued that the framework, in its current direction, would hamper innovation in the bloc and give an advantage to Chinese players, who wouldn’t face nearly as strict limitations.
So what happens if you don’t comply? Well, the fines can go up to €30 million or 6% of total worldwide annual turnover, whichever is higher.
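To put that in perspective, here’s a quick back-of-the-envelope sketch of how the ceiling scales with a company’s turnover; the turnover figures are made up for illustration.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Fine ceiling under the proposal: EUR 30 million or 6% of total
    worldwide annual turnover, whichever is higher."""
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)


# A hypothetical company with EUR 2 billion in turnover:
# 6% of 2bn = 120M, which is higher than 30M, so 120M is the ceiling.
print(max_fine_eur(2_000_000_000))  # 120000000.0

# A small startup with EUR 5 million in turnover still faces the flat 30M
# ceiling, since 6% of 5M (300k) is the lower of the two amounts.
print(max_fine_eur(5_000_000))  # 30000000
```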
As I said in the beginning, I fully understand the need for a regulatory framework, but this seems to be following in the steps of the GDPR: it makes sense on paper and in theory is great, but in reality only the large corporates who can afford to be fully compliant will do so. The rest, small and medium enterprises, copy-paste cookie and privacy policies, slap a popup window on their websites, and call it a day. The way this is approached tells me that we might not just end up in the same situation of copy-pasting regulation-compliant texts to launch a new image-filtering app; in some cases, it might actually put a hold on development and innovation.
Speaking of the GDPR, it already does a lot of what this AI regulation is trying to do. Under the GDPR, you can’t use a person’s data for anything they haven’t given their consent to. This includes using face images captured by street cameras to recognize someone or even to build a profile of that person.
In any case, the uncertainty of such regulations is the biggest detriment to innovation. Even if right now an application falls under the low-risk category, we cannot be sure that in the following months the EC won’t decide to move it to high risk. That kind of uncertainty is what increases the already high risks associated with tech startups. And it will be quite some time until this proposal becomes law: possibly at least a couple of years until it’s finalized and another couple of years until it becomes enforceable.
The FTC’s approach
On the other side of the pond, the US Federal Trade Commission, a.k.a. the FTC, came up with its own idea of how to regulate AI just two days before the EC officially released its take on AI regulation. Put in the simplest terms, their idea is that a piece of software, like AI, should always do exactly what it promises its users. They give the example of a recruitment algorithm: if the developer says it’s 100% unbiased but it’s been built on a biased dataset, then that’s deception and it should be punishable. They also stress the importance of transparency about what kind of data has been used in training the algorithm and how. Finally, they address accountability, which should always lie with the developer.
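To make the FTC’s recruitment example concrete, here is a minimal sketch of the kind of sanity check a developer could run before ever claiming a model is “100% unbiased”: comparing selection rates across groups, for instance against the well-known four-fifths (80%) rule of thumb from US hiring guidelines. The decisions and group names are made up for illustration; this is not something the FTC prescribes.

```python
def selection_rates(outcomes):
    """outcomes maps group name -> list of 0/1 hiring decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}


def four_fifths_check(outcomes):
    """Flag potential disparate impact: a group fails the check if its
    selection rate is below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}


# Made-up decisions from a hypothetical screening model
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% selected
}
print(four_fifths_check(decisions))  # {'group_a': True, 'group_b': False}
```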
The approaches of the EC and the FTC are dissimilar, but it’s also important to note that they can’t really be compared. The FTC published a blog post, while the EC basically treated us to a small book. Obviously, the FTC is way behind in its analysis, let alone in proposing a potential policy.
One of the biggest limitations of both approaches is the inability to control the governmental use of harmful AI. The EC has already defined exceptions to the rules when the user is the state, for example for surveillance. And the FTC can only go after private entities. It could theoretically stop them from selling harmful tools to law-enforcement agencies, but such deals are normally not fully public.
Of course, there’s the silver lining that at least governments will be slightly more open towards AI technologies. I’ve worked on AI projects for the government here in the Netherlands. And I can tell you, it’s not easy, because public servants are not aware of what they are allowed to do with AI. They are so cautious that even great ideas are put on hold because there is no clear framework to follow.
My opinion is that something in between the two approaches of the EC and the FTC could be the golden balance. Keep the low-risk regulations for chatbots and the high-risk regulations for biometrics, but force developers to be completely transparent and explicit in their communication with users. Something like that might be narrow enough to actually have an impact without putting hard limitations on startups and innovators.
The Norway solution
Let’s look at a third option.
In March this year, the world’s first and only AI regulatory sandbox was launched in Norway. It is the first one for AI, but such sandboxes are not new – they already exist for financial services. The initiative aims to **help** companies comply with the AI-relevant requirements coming from the EU GDPR. It allows companies to test new technologies and prototypes in a monitored environment before releasing them to the market.
In Norway, people from both the business and the regulatory side seem to think this is the way to strike the right balance of capturing potentially harmful applications without limiting innovation. And it works both ways, because the results help lawmakers gather enough information to better understand the subject that they are trying to regulate.
The idea is not to have every single developer go through this but rather to set precedents in the already vague and confusing areas, which can then be turned into standards that developers can follow. Eventually, these tests won’t even be performed by the developers but rather by an inspection authority that can randomly decide to investigate compliance with the regulations, similar to how restaurant inspections work.
I really like this. Finally, someone realizes that there’s a big gap between the technology and the regulation that needs to be bridged without forcing the former to slow down until the latter catches up. It’s still not perfect, because even if it works in Norway, which has a population of less than 6 million, that definitely doesn’t mean it will work in other or all countries, especially the US or China.
But this is a step in the right direction and there’s already an initiative from NYU to test such an approach in the US.
So friends, what do you think about these attempts to regulate AI?
If you liked this video, give it a thumbs up and check this article where I cover some of the most interesting applications of AI in the space industry.
Thank you for watching and see you next time!
Work With Me
Do you need help with your business idea or scaling an existing venture?
Send me a message or contact me on LinkedIn.
Bobby Bahov
Business and Technology Advisor