As AI technologies evolve rapidly around the world, regulators are racing to keep up, producing a flurry of AI regulatory initiatives. Four of the most significant are in the European Union, the USA, the UK and China.
In April 2023, Transforma Insights published a report, ‘Proposed AI regulations are very much a mixed bag’, which unpicked and analysed these initiatives. This blog summarises each of the four initiatives at a high level, along with some of our own observations, analysis and conclusions.
The four initiatives are summarised below in chronological order of publication.
The EU was the first to draft AI regulations, publishing its draft in April 2021. It is the most ambitious approach to date, comprising a comprehensive framework for risk-based regulation of AI. Based on the context in which AI is used, the draft sets out a range of regulatory treatments, from “Unacceptable Risk” (where AI is prohibited) to “Minimal or No Risk” (where there is no restraint on the use of AI). The EU has also explored in some detail the concept of “High Risk” (such as using AI to screen job applications), where prior approval is required before AI solutions can be deployed. The EU’s definition of AI is potentially extremely broad, including software developed with, for instance, “logic- and knowledge-based approaches”. The current draft is expected to become law in Q4 2023 and to apply from Q4 2025.
After the EU, in October 2022, the USA published a “Blueprint for an AI Bill of Rights” setting out five principles and associated practices to guide the design, use and deployment of “automated systems”. The five principles are: Safe and Effective Systems; Algorithmic Discrimination Protections; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration, and Fallback.
To determine which systems are in scope, the framework identifies automated systems that have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services. This is potentially a far broader definition than what most people would consider to be “AI”.
The UK government published a white paper on AI regulation in March 2023. It defines AI by reference to its ‘adaptive’ and ‘autonomous’ characteristics and establishes five guiding principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
Actual regulations are expected to be defined and implemented by existing industry regulators, with some central support functions envisaged to ensure that the “overall framework offers a proportionate but effective response to risk while promoting innovation across the regulatory landscape”. Needless to say, different industry regulators may take different approaches to regulating processes that involve AI, but it can be surmised that the approach will be as light-touch as possible.
In April 2023, China published draft regulations aimed at “generative AI”. These stress that AI service providers must ensure that their outputs reflect the core values of socialism and must not contain any subversion of state power. The regulations also state that generated content should be true and accurate and that false information must be prevented, which may prove challenging for generative AI platforms such as ChatGPT, whose results are based on a balance of probabilities rather than any notion of absolute truth.
In this section, we analyse the four initiatives in terms of the definition of AI, pre-certification mandates, training data restrictions, risk-based approaches, output regulation, and ongoing monitoring responsibilities.
The adopted definition of AI varies between geographies. While China’s draft regulations are more reactive in nature (aimed only at ‘generative AI’), the EU, the UK and the USA have attempted to define ‘AI’ more generically. The UK’s definition is fairly loose, while those of the EU and the USA are far-reaching. As a result, much discussion is likely over what exactly counts as AI.
While the EU and China have explicitly defined pre-certification requirements, the USA’s proposals are comparatively loose, and the UK has not mentioned pre-certification at all in its white paper, preferring to leave such decisions to industry regulators.
In terms of training data, China and the EU (in the context of ‘High Risk’ applications) have chosen to regulate ‘how’ AI is undertaken rather than focussing on the results and outputs of AI systems, explicitly describing the kinds of training data that can be used. This may make it difficult for would-be AI service providers to demonstrate that their training data complies with the requirements. The UK and USA requirements are looser, but both specify ‘fairness’ in the output of AI systems, which may feed through to restrictions on the training data used for live systems.
Both the EU and the USA envisage a risk-based approach (with the EU focussing mostly on ‘High Risk’ applications). The USA’s approach is implicit: the Blueprint for an AI Bill of Rights applies to automated systems that can meaningfully impact individuals or communities. The UK’s approach may ultimately also prove risk-based, with the various existing industry regulators regulating their own sectors as they see appropriate.
Both the EU and China are seeking to control AI system outputs to protect their respective values. For China, AI-generated output should reflect core socialist values, while the EU has set out some of the values it wants respected. Surprisingly, the USA envisages allowing individuals to “opt out from automated systems in favour of a human alternative”. The UK has left such decisions to the relevant existing industry regulators.
Proposed approaches to the ongoing monitoring of AI vary significantly. China intends to hold individuals and organisations liable for any outputs produced; in the event of a breach, the AI service provider has three months to rectify the problem. China has also stipulated that providers must watermark AI-generated pictures, videos and other content. The EU has proposed quite burdensome reporting requirements, along with relevant documentation and instructions for use, and disclosure where generated materials could be mistaken for real ones. The USA’s stance is quite similar to that of the EU (though without ongoing reporting requirements), whilst the UK’s proposals envisage standing back with a view to catalysing the market.
Based on this comprehensive study of AI regulations in these four major regions, Transforma Insights draws the following conclusions, which are discussed in more detail in the report ‘Proposed AI regulations are very much a mixed bag’: