
The EU’s AI Act is ambitious and laudable, but encounters with the real world will be challenging

MAR 30, 2023 | Jim Morrish

The EU AI Act

Work is underway in the European Commission to develop regulations for Artificial Intelligence (AI). To help build a resilient ‘Europe for the Digital Decade’, the European Commission asserts that people and businesses should be able to enjoy the benefits of AI while feeling safe and protected.

The European AI Strategy intends to make the EU a world-class hub for AI and to ensure that AI is human-centric and trustworthy. The aim is for this objective to translate into a European approach to excellence and trust, delivered through concrete rules and actions.

In this blog we outline some key aspects of the emerging legislation (which is still subject to refinement), together with some of the potential implications of the approach being taken.

The Definition of AI

In April 2021, the European Commission presented its AI package, including a Communication on fostering a European approach to artificial intelligence, an update of the Coordinated Plan on Artificial Intelligence (with EU Member States), and its proposal for a regulation laying down harmonised rules on AI (the AI Act), together with the relevant impact assessment. It is envisaged that the draft Act published in April 2021 will become law in Q4 2023 and that the law will become applicable from Q4 2025.

An AI system is defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” Annex I, meanwhile, identifies machine learning approaches, logic- and knowledge-based approaches, and statistical approaches.

The EU envisages a risk-based approach to regulation with potentially high-risk solutions permitted subject to ex-ante (before the event) conformity assessment. Solutions with lower risk are subject to lighter-touch regulation whilst those deemed to represent an ‘unacceptable’ risk will be banned.

AI that contradicts EU values is prohibited, including practices such as subliminal manipulation or the exploitation of children resulting in harm, social scoring, and remote biometric identification.

High-risk solutions include those relating to the safety components of regulated products (e.g. medical devices or machinery) and certain stand-alone AI systems in fields such as biometric identification, the management and operation of critical infrastructure, education and employment, law enforcement, and the administration of justice.

The limits of an algorithm

One of the suggested aspects of the regulation is that all AI computer algorithms that influence decisions are to be within scope. But such algorithms are already commonly found in fields deemed high risk, and once a human has seen the conclusions (or ratings) assigned by an algorithm, it is nearly impossible not to be influenced by that information. So must all such algorithms be regulated, even if they have a ‘human in the loop’ and are not part of a fully automated process?

Some such algorithms are very basic. Take, for example, resume-screening for a job that requires ‘tertiary level education’. Would an algorithm that simply screens out job applicants without tertiary education be subject to regulation?
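
To illustrate just how basic such a decision-influencing algorithm can be, here is a minimal sketch in Python. It is purely hypothetical: the keyword list and the resume format are assumptions for illustration, not any real screening product.

```python
# Illustrative sketch only: a deliberately trivial screening 'algorithm' of the
# kind discussed above. The keyword list and resume format are hypothetical.

TERTIARY_KEYWORDS = {"bachelor", "bsc", "ba", "msc", "ma", "phd", "degree"}

def has_tertiary_education(resume_text: str) -> bool:
    """Return True if the resume text mentions any tertiary-education keyword."""
    text = resume_text.lower()
    return any(keyword in text for keyword in TERTIARY_KEYWORDS)

def screen_applicants(resumes: dict[str, str]) -> list[str]:
    """Keep only applicants whose resume appears to mention tertiary education."""
    return [name for name, text in resumes.items() if has_tertiary_education(text)]

if __name__ == "__main__":
    applicants = {
        "Applicant A": "BSc Computer Science, 5 years as a data engineer.",
        "Applicant B": "10 years of self-taught software development experience.",
    }
    print(screen_applicants(applicants))  # ['Applicant A']
```

A rule this simple still influences a hiring decision, which is precisely why the question of whether it falls within scope is not academic.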

Also, sticking with resume screening, there’s no significant difference between a human screening out all resumes that do not mention tertiary education and an AI-enabled, character-recognising, natural-language-processing software engine doing exactly the same thing.

So not only are computer algorithms already widely used today, but humans can also behave as if they were governed by the same algorithms. Conversely, any AI algorithm that is ‘explainable’ could theoretically be administered by a human. Regulating the ‘computerised’ operation of an algorithm whilst not regulating a human doing the exact same thing makes little sense. Perhaps the EU perspective is that the application of (any) algorithms is generally undesirable, and regulating algorithms when they are executed by a computer is a crude way to limit the extent to which they are adopted.

Fair for me, but not for you

Much of the EU AI Act seems intended to ensure that citizens are not unfairly disadvantaged or, at least, to ensure transparency so that potentially unfair decisions can be challenged. However, the concept of ‘fairness’ is subjective.

COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is a case management and decision support tool developed and owned by Northpointe (now Equivant) and used by U.S. courts to assess the likelihood of a defendant reoffending. It is used to make recommendations concerning parole, pre-trial detention and sentencing. In 2016, a team of reporters at ProPublica concluded that COMPAS was much more likely to rate white defendants as lower risk than black defendants. Also, “black defendants were twice as likely to be rated as higher risk but not reoffend. And white defendants were twice as likely to be charged with new crimes after being classed as low risk.” Northpointe countered that defendants given the same risk rating score had an equal chance of being rearrested, be they black or white.

This highlights different perspectives on the definition of ‘fair’. In this specific case, several teams of academics investigated the situation and concluded that there are multiple different definitions of ‘fair’, and, furthermore, that it is mathematically impossible to be fair in all of those ways at once.
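
A small numerical sketch makes the conflict visible. The figures below are invented for illustration (they are not ProPublica’s or Northpointe’s data): when two groups have different underlying re-offence rates, a tool can give high-risk ratings that are equally predictive for both groups (one definition of ‘fair’) while still flagging far more non-reoffenders in one group than in the other (a different definition of ‘fair’).

```python
# Illustrative sketch with invented numbers: two groups with different underlying
# re-offence rates can have identical precision (the chance that a high-risk
# rating is followed by re-offence) while having very different false-positive
# rates, so only one notion of 'fair' can hold at a time.

def fairness_metrics(tp: int, fp: int, fn: int, tn: int) -> dict[str, float]:
    """Compute fairness-related metrics from a confusion matrix."""
    return {
        "base_rate": (tp + fn) / (tp + fp + fn + tn),  # share who actually reoffend
        "precision": tp / (tp + fp),                   # rated high risk and reoffended
        "false_positive_rate": fp / (fp + tn),         # did not reoffend, yet rated high risk
    }

# Hypothetical confusion matrices for two groups of 100 defendants each.
group_a = fairness_metrics(tp=48, fp=32, fn=12, tn=8)   # base rate 0.60
group_b = fairness_metrics(tp=18, fp=12, fn=12, tn=58)  # base rate 0.30

print(group_a)  # precision 0.60, false_positive_rate 0.80
print(group_b)  # precision 0.60, false_positive_rate ~0.17
```

With these invented numbers, a high-risk rating is equally predictive for both groups, yet non-reoffenders in the first group are flagged far more often than those in the second. Whenever the underlying base rates differ, equalising one of these metrics forces the other apart.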

Similar situations abound in everyday life. An obvious example is selective education, where it is impossible both to select evenly from pools of relatively advantaged and disadvantaged students and to select only the most talented individuals. Any selection process can only ever satisfy one (or neither) of those ambitions.

What does that mean for the EU’s putative AI Act? Well, it’s going to highlight some very awkward dynamics that have until now simply existed under the radar. And that’s the nub of the issue: if the aspiration is that AI should essentially codify what we as human society judge to be ‘fair and equal’, then we’re going to have to define what ‘fair and equal’ means before we can sign off any AI algorithm. And often that’s impossible, at least impossible in a way that satisfies everybody.

Similar problems have been with us for a while

In the case of COMPAS, really nothing has changed with the implementation of a computer-algorithm-based approach. Historically, the same decisions to assign risks of reoffending to individual offenders would have been taken by judges and other legal professionals. If anybody had been motivated to look closely enough, they would have identified exactly the same conflict: it is impossible for any assignment of reoffending risk to be ‘fair’ in multiple different ways at the same time.

But the implementation of automated algorithms to perform the same task highlighted a conflict that will have existed for decades or more. This illustrates one of the key challenges of new technology deployment, and AI in particular, when human and societal impacts are considered. Often there will be established processes for undertaking a particular task that have been adopted and refined over decades, and nobody will have questioned fundamentally what these processes have been intended to deliver and whether that is ‘fair’, or at all desirable. So a fundamental challenge with the implementation of new technology with a desire to be ‘fair’ is that we as a society first have to define what ‘fair’ might be, and often that hasn’t been done. New technology just serves to highlight already-existing biases, prejudices, and conflicts.

The certification of legal documents provides a very basic example. It is common practice for contracts between companies to be certified by affixing the signatures of representative individuals. But such signatures often appear on a final ‘signature sheet’ that is not even attached to the physical document to which it relates. It would be trivially simple to adjust the wording of a legal contract and then append a signature sheet from a previous version of the document. In day-to-day business today, this seems to be a risk that people mostly accept. But it would be inconceivable for an equivalent process to be implemented in a digital system. More likely, the entire agreement would be hashed to a checksum, with a digital signature attached to that hash.
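
As a rough sketch of that digital alternative (standard-library Python only, with the asymmetric signing step deliberately omitted), hashing the full text of an agreement means that any change to the wording, however small, invalidates the checksum the parties would sign:

```python
# Minimal sketch of the idea described above: hash the full text of an agreement
# to a checksum so that any change to the wording invalidates it. A real system
# would additionally sign the checksum with a private key; that part is omitted.

import hashlib

def agreement_checksum(agreement_text: str) -> str:
    """Return a SHA-256 checksum of the entire agreement text."""
    return hashlib.sha256(agreement_text.encode("utf-8")).hexdigest()

original = "The supplier shall deliver 100 units at EUR 10 per unit."
tampered = "The supplier shall deliver 100 units at EUR 1 per unit."

signed_checksum = agreement_checksum(original)  # what the parties would sign

# Verification fails if even one character of the wording has been altered.
print(agreement_checksum(original) == signed_checksum)  # True
print(agreement_checksum(tampered) == signed_checksum)  # False
```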
