The complex relationship between regulation and innovation: How are countries regulating AI to ensure the safe development of new technologies?

September 18, 2023

Over the last few years, artificial intelligence (AI) tools have become increasingly prevalent and have been catching the attention of policymakers, governments, the private sector, and international organizations. Perhaps the most popular technology released recently is ChatGPT, a chatbot developed by OpenAI and built on its GPT-3.5 series of models, which has significantly contributed to advances in natural language processing and generation. ChatGPT, whose user-friendly interface has gone viral, reached over a million users just 5 days after its launch in November 2022. The generative AI domain, however, is much broader than some might think and is growing at an unprecedented pace. According to a report produced by Holistic AI, the global revenue of the AI market is set to grow by 19.6% each year and reach $500 billion in 2023.

Despite the groundbreaking changes AI can bring to society, concerns about the fairness, transparency, and explainability of its algorithms increasingly occupy the public debate. This is mainly because, since the advent of these technologies, machine bias related to race, gender, and minority status has been a major concern for experts and policymakers (see ProPublica’s report). More recently, in May 2023, roughly 350 AI scientists and industry figures, including OpenAI CEO Sam Altman, Bill Gates, C-level executives from Google and Microsoft, and professors from leading universities, signed a statement declaring that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

In this context, one potential avenue for mitigating the risks around AI development is a regulatory framework aimed at ensuring the safe development of new technologies. However, even though a regulatory debate has been taking place in several countries around the world, the exponential technological development of the last few years has outpaced governments’ capacity to implement timely and effective policy changes. Data suggests that, since 2012, the computing power used to train AI systems has been doubling roughly every 3 months, far exceeding Moore’s law, which states that the number of transistors in an integrated circuit doubles every two years.
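To put that gap in perspective, here is a minimal, purely illustrative calculation; the two-year horizon and the unit starting value are assumptions chosen for the example, not figures from the cited data:

```python
# Illustrative only: compares growth under a 3-month doubling period
# (the AI compute trend cited above) with Moore's law's 2-year doubling.

def growth_factor(months_elapsed: float, doubling_period_months: float) -> float:
    """Return the multiplicative growth after `months_elapsed` months."""
    return 2 ** (months_elapsed / doubling_period_months)

horizon_months = 24  # a two-year horizon, assumed for illustration
ai_compute = growth_factor(horizon_months, 3)    # doubles every 3 months
moores_law = growth_factor(horizon_months, 24)   # doubles every 2 years

print(f"AI compute after 2 years:  {ai_compute:.0f}x")   # 256x
print(f"Moore's law after 2 years: {moores_law:.0f}x")   # 2x
```

Under these assumptions, a technology doubling every three months grows 256-fold in the time Moore’s law predicts a mere doubling, which is the scale of the mismatch regulators are trying to keep up with.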

Considering the significant gap between technological development and policymaking, the debate concerning the legal framework that should govern AI is very complex. After all, what is the best way to approach AI regulation? Should it be principle-based regulation or more prescriptive state control? How can states ensure the responsible development of AI technologies without constraining innovation?

On the one hand, in-depth state regulation is very challenging to put in place. Considering the rapid changes in the AI development landscape, it is virtually impossible to establish state regulation that would encompass all aspects of the technology. For example, the applications of AI today may look significantly different a year from now, and a regulatory framework cannot be amended at that pace. Further, over-regulation is undesirable in such a dynamic field because of the risk of unduly constraining or hindering technological development, or of otherwise disproportionately increasing the cost of placing AI solutions on the market.

On the other hand, soft regulations, such as principle-based norms, have their own limitations. Many of the general principles put forward as cornerstones for the development of safer and more responsible AI are usually already set out in domestic legislation and international treaties (those concerning non-discrimination, human dignity, damage prevention, etc.). Moreover, since principles are normative models that are highly abstract and general, enforcing them in concrete cases is always challenging and does not always provide an appropriate answer to the “hard cases”.

In this context, the question at hand is not whether AI should be regulated, but rather how it should be regulated. Countries around the world have taken different approaches to this challenge.

In July 2023, China moved to the front of the AI regulatory landscape and published the “Interim Measures for the Management of Generative Artificial Intelligence Services”. The new regulation aims primarily to “manage its booming generative artificial intelligence (AI) industry […] and said regulators would seek to support development of the technology”, consolidating China’s goal of becoming a leader in AI development by 2030. The approved “interim” legislation is much softer than its earlier drafts, and some have argued that its 24-article text is rather concise and broad for such a complex matter.

The European Union has been developing the “AI Act”, which is being called “the world’s first comprehensive AI law”. The proposed act relies on a risk-based approach, meaning that it imposes different requirements according to the level of risk posed by each AI system. Moreover, if and when the legal framework is approved, businesses will have to analyze its content carefully, because the draft currently under discussion sets forth hefty penalties for non-compliance, extra-territorial scope, and a broad set of mandatory requirements for organizations that develop and deploy AI. Although the act is still in the approval process, some member states are already taking independent approaches. Spain, for instance, recently created an AI supervision agency, signalling how seriously the issue is being taken within the bloc. In addition, France’s data protection authority (CNIL) has fined Clearview AI €20 million for collecting and using data on individuals in France without a legal basis.

The United States has also increased its focus on AI regulation, with 2022 marking the 117th Congress as the most AI-focused Congress in history. As a result, new documents, rules, and studies started to emerge: the White House published the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People”, bills such as the Algorithmic Accountability Act of 2022 were introduced in Congress, and states such as California have increased efforts to address AI concerns through consumer privacy legislation. Specialists have hypothesized that 2023 may bring a significant shift in focus from voluntary and aspirational measures to heavier enforcement by regulators such as the Federal Trade Commission (FTC), along with more high-profile cases involving algorithms.

In addition to the cases mentioned above, dozens of countries are currently discussing AI regulation. Alongside international organizations such as UNESCO, countries such as Namibia are publishing documents on the issue. The question of how to ensure the safe development of AI will be at the center of debate at all levels of government around the world in the coming years.

While artificial intelligence can provide a wide array of economic and societal benefits, it must be properly regulated and monitored in order for its benefits to prevail over the potential harms it could bring to humanity.


Tainá Aguiar Junquilho is a professor of Law and Technology at the Brazilian Institute for Development, Research and Education (IDP). She holds a PhD in Law with an emphasis on Artificial Intelligence from the University of Brasília, and a Master’s in Law from the Federal University of Espírito Santo. She is also the Vice-Leader of the Public Policy Observatory Study Group (GEOPP) at UnB.

Matheus de Souza Depieri is a Fellow at Portulans Institute. He is currently an LL.M. Candidate at the University of Cambridge (King’s College), Associate Editor of the International Review of Constitutional Reform (IRCR), and Researcher at the Center for Comparative Constitutional Studies at the University of Brasília.
