
How are regulators shaping the future of Artificial Intelligence around the globe?

JUL 14, 2025 | Joydeep Bhattacharyya
 

In April 2023 Transforma Insights published a report, Proposed AI regulations are very much a mixed bag, which primarily focussed on the definition and scope of AI, requirements for pre-certification, restrictions on training data, risk-based aspects, regulations on outputs, and ongoing responsibilities contained in four major AI regulations: China’s Administrative Measures for Generative Artificial Intelligence Services, the European Union’s Laying Down Harmonised Rules On Artificial Intelligence (Artificial Intelligence Act), also known as the “AI Act”, the UK’s A pro-innovation approach to AI regulation, and the USA’s Blueprint for an AI Bill of Rights.

Spin forward to July 2025 and countries are still trying to regulate AI to the best of their capabilities. This blog is a short summary of how AI regulations have fared in the last two years. For more information and an extensive analysis of AI regulations around the world, please refer to our Regulatory Database.

Headwinds for European AI regulations

Many EU countries have not come up with their own AI regulations and are instead working out how to implement the EU AI Act. The roadmap to adoption of the EU AI Act across member states is unlikely to be an easy one, and top executives of large European corporates have urged Brussels to ‘pause’ the implementation of the Artificial Intelligence Act. An open letter titled ‘Stop the Clock’, signed by senior managers of organisations including Airbus, ASML, BNP Paribas, Dassault Systèmes, Lufthansa, Mercedes-Benz and others, refers to the EU AI Act and clearly states: “we urge the Commission to propose a two-year ‘clock-stop’ on the AI Act before key obligations enter into force, in order to allow both for reasonable implementation by companies, and for further simplification of the new rules.” Although the Commission has declined the request, how implementation now proceeds will be watched closely.

Generative AI is often a key focus area

The regulatory landscape is certainly evolving as policymakers introduce changes to address recently emerged generative AI technologies. Generative AI has been a key focus for enterprises for some time now, and AI regulators have been quick to integrate it into their respective regulations. For instance, the EU’s AI Act is one of the most ambitious AI regulations in the world, aiming to establish a harmonised legal framework for Artificial Intelligence within the European Union. The regulation states: “General-purpose AI models, in particular large generative AI models, capable of generating text, images, and other content, present unique innovation opportunities but also challenges to artists, authors, and other creators and the way their creative content is created, distributed, used and consumed”.

The US has been one of the forerunners when it comes to AI regulation. Interestingly, certain state laws in the US have also started to address generative AI. For instance, the California AI Transparency Act (which becomes operative on January 1, 2026) requires a covered provider (defined as the creator of a publicly accessible generative artificial intelligence system with more than 1,000,000 monthly visits) to make available a free artificial intelligence detection tool. The covered provider must not collect or retain personal information from users of that detection tool, retain any content submitted to the tool for longer than is necessary, or retain any personal provenance data from content submitted to the tool by a user.

China is also moving in a similar direction. Its Administrative Measures for Generative Artificial Intelligence Services (which came into effect on August 15, 2023) state that content generated by generative artificial intelligence should be true and accurate, and that measures should be taken to prevent the generation of false information. They further require that, in the process of algorithm design, training data selection, model generation and optimisation, and service provision, measures be taken to prevent discrimination based on race, ethnicity, belief, country, region, gender, age, and occupation.

Agentic AI has generally not yet been catered for

Agentic AI is a relatively new and evolving area of artificial intelligence, focused on systems that can operate autonomously and pursue goals with little supervision. While the concept is gaining increasing attention from researchers and policymakers alike, no specific regulations have yet been established to govern agentic AI systems, at least none that have been officially enacted or publicly disclosed, based on our current analysis of the AI regulatory landscape. For a deeper analysis of some of the regulatory issues associated specifically with agentic AI, you can explore our recent report Agentic AI: next generation AI that works autonomously.

A wide range of countries are designing AI policies and frameworks

Recently various governments from around the world have been ramping up their efforts to formulate AI policies and frameworks to ensure safe, fair, and ethical development and usage of AI-based solutions and services across industries. Not only are developed and larger economies driving the passage of AI regulations, but smaller countries have also realised the importance of enacting similar regulations to drive the successful adoption of AI-related services.

The bullets below provide a snapshot summary of a selected few of the emerging AI regulations, policies and frameworks around the world.

  • In the Greater China region, Taiwan has published the Draft of the Basic Law on Artificial Intelligence to promote the development of AI technology and applications in the country. It adds that the government should focus on human autonomy, privacy, transparency, fairness, and accountability in the context of AI research, development, and application.
  • In Latin America, Ecuador has presented the Regulation and Promotion of Artificial Intelligence (currently at the consultation stage) to establish a regulatory framework and public policies for AI control. It focuses in particular on the protection of personal data in all processing activities and on technical robustness, security, and risk management.
  • In the MENA region, Oman recently published two documents: the National Artificial Intelligence Policy, which aims to foster an AI-friendly environment, promote awareness, drive innovation, and ensure ethical use; and the National Charter for Artificial Intelligence Ethics, which establishes general rules and ethical practices for the development and use of AI systems, ensuring responsible, safe, and risk-mitigated deployment. Qatar has also published the Guidelines for Secure Adoption and Usage of Artificial Intelligence to guide organisations on the secure adoption of AI. In addition to transparency, accountability, and safety, the guidelines explain the importance of human oversight in the context of high-impact AI systems and point out certain steps that organisations can take to ensure data security.
  • South Africa has published the South Africa National Artificial Intelligence Policy Framework to promote the integration of AI technologies to drive economic growth and enhance societal well-being. The framework identifies strategic pillars for the country’s AI policy including regulatory compliance and governance, data protection laws, risk management, and cybersecurity measures.

Conclusions

The world of AI regulation is moving quickly, but the development of AI technology is moving faster and regulations are struggling to keep up. Several regulators (not least the EU) had plans in place for regulating AI at around the time generative AI burst onto the scene with the launch of ChatGPT in late 2022, and these had to be rapidly adjusted to cater for the new technology. At the same time, the more adventurous regulators (in Europe and the US) are proceeding to regulate AI without the full backing of either adopting enterprises or AI vendors. Meanwhile, newly emerging agentic AI concepts are the new frontier for the AI industry but remain generally unaddressed by regulations, which tend not to cater adequately for systems that can evolve independently based on their specific contexts.

The arms race between regulators and the AI vendor community is likely to continue for some time yet. The best approach, as we outlined back in 2023, is probably for AI regulation to be context-specific: relying on existing industry regulatory frameworks and regulating AI within those frameworks as ‘just another technology’ is likely to prove the most effective route.
