Governing AI for Humanity: Thoughts on the UN AI Advisory Body Interim Report

April 8, 2024

The UN Secretary-General appointed a high-level multistakeholder advisory body on artificial intelligence (AI) in October 2023. The fast-working body was tasked with making recommendations in three areas:

  • The international governance of artificial intelligence.
  • A shared understanding of risks and challenges.
  • Key opportunities and enablers to leverage AI to accelerate the delivery of the Sustainable Development Goals (SDGs).

The advisory body’s recommendations will be discussed at the UN Summit of the Future in September 2024, and form part of the background for negotiations around the proposed Global Digital Compact. The advisory body launched its interim report in December 2023. At the launch, the advisory body encouraged individuals, groups, and organizations to provide feedback. This blog is based on feedback submitted to the UN in March 2024.

The report

The motivation for establishing the high-level advisory body was the rapid development and uptake of AI applications. The report notes that AI applications could potentially become a game changer, assisting humanity in reaching the SDGs, monitoring and helping design policies to mitigate and adapt to climate change, discovering new medicines, and much more. At the same time, the report notes, AI poses risks to cyber security, privacy, and cultural diversity. Furthermore, AI applications are just as effective at aiding destructive forces as benign ones. The distributional effects of AI applications are also a concern.

The objective of the advisory body’s work is thus to come up with recommendations on global governance of AI. The report discusses the opportunities and enablers of AI, and the risks of unfettered AI applications doing harm either accidentally or willfully in the hands of groups and individuals aiming at destruction.

Areas that could benefit from further clarification

A clearer distinction between AI technology, the enablers of its deployment (skills, data, digital infrastructure, the legal framework) and investment in AI applications would be helpful for future discussions. As noted by, for example, researchers at the University of Toronto, AI is a prediction machine that supports decisions. However, the good decisions, for example those required for the transition to a greener future, still need to be taken, the adjustment costs still need to be negotiated over, and resistance to change will still be an obstacle. By the same token, AI provides tools for monitoring biodiversity, but it does not prevent humans from destroying the habitat of endangered species.

It would be helpful to be clear about in which areas the use of AI applications moves the frontier of human capability or fundamentally changes the way production, education, or social interactions are organized. It is beyond the scope of the report to spell this out in detail, but the report should be even better anchored in a clear understanding of the technology itself, its limitations as well as the limitations of our understanding of it. This would help identify in which areas existing institutions, laws and regulations apply to AI, and in which areas adjustments, revisions or new approaches are needed.

For instance, do AI technologies fundamentally change the concept of intellectual property? Can the treaties governed by the World Intellectual Property Organization (WIPO) be applied – or adapted – to AI? Or do we need a fundamentally new approach to intellectual property rights? By the same token, are existing anti-trust laws and regulations adequate in markets transformed by AI applications? And not least, does AI fundamentally change the way international trade and investment should be governed?

A clearer distinction between the development of AI and its uses would also be helpful, particularly in the context of benefits from AI. As with other general-purpose technologies before it, the initial innovations may indeed require huge investments that only very large companies or rich governments can shoulder. Nevertheless, as the technology becomes ubiquitous and embedded in the software that we all use on a daily basis, the gains are also widespread. As the report points out, AI applications are particularly helpful for assisting people with limited vision, speech, hearing, or mobility, while AI applications facilitate bringing health and education services to underserved areas. Furthermore, early empirical research on the impact of AI applications at work finds that AI benefits workers with less experience and skills the most. It is therefore premature to state as a fact that “today’s AI benefits are accruing largely to a handful of states, companies, and individuals”.

As far as enablers are concerned, the development of infrastructure and skills is critical for developing countries in general, so this is not an AI-specific issue. Rather, AI is another reason why developing countries, supported by UN institutions, development banks and others should bolster infrastructure and skills investment. AI probably also raises the return to such investment.

Data governance

The development and use of AI have hitherto required huge amounts of data. Data governance is therefore part and parcel of AI governance. While access to data is key for AI developers, there are also trade-offs. For instance, mandating platforms to share data to facilitate AI start-ups may fall foul of privacy and cyber security regulation.

Biases in AI algorithms reflect biases in the training data. Thus, AI algorithms mirror and possibly amplify human prejudices. The root causes of bias need to be understood and combined with insights into the workings of the AI technology itself to effectively audit AI applications for bias.

Access to data for AI developers and users, particularly in small markets, requires cross-border data flows. This has been a contentious issue in the World Trade Organization (WTO), including the Joint Initiative (JI) on E-commerce. Cross-border data flows with trust, prohibition of data localization requirements, and protection of source code have recently been taken off the JI table. Digital Economy Agreements, pioneered by Asian countries, notably Singapore, now lead the way in this space. The WTO is probably best placed to tackle global rules for cross-border data flows but has unfortunately been unable to make progress. The difficulties this poses for international governance of AI need to be considered in the final report on Governing AI for Humanity.

Intellectual property rights

Two related questions about intellectual property rights arise as AI proliferates. First, is the use of copyrighted content for training AI algorithms fair use? Second, is AI-generated content copyrightable? Several court cases involving the use of copyrighted media content and music in the training of AI systems for content creation are currently being heard. Since copyright infringement is associated with substantial fines, legal certainty is urgently needed for media and the creative sector to flourish and reap the benefits from AI.

A few countries, among them the UK, New Zealand, South Africa, and India, do award intellectual property protection to work generated by machines. But most jurisdictions, including the US and the European Union, do not consider fully AI-generated content copyrightable. This raises the question of where to draw the line between AI-supported, copyrightable content and AI-generated, not copyrightable, content. WIPO is probably best placed to coordinate national efforts and distill consensus on AI and intellectual property rights.

Open source?

The interim report also briefly discusses open source versus proprietary AI algorithms and models. It rightly does not take a position but comes across as somewhat incoherent as it appears to be advocating for open source in some parts while cautioning against it in others. The final report could spell out more clearly the benefits and risks of open-source AI algorithms, areas where open source may not be advisable, and recognize that more work needs to be done to assess to what extent policy intervention is needed in this space.

Institutional aspects

The interim report leaves institutional arrangements open, but its recommendations do envisage “new global governance institutions”. Going forward, it would be useful to go systematically through which new global governance issues AI raises, to what extent they can be accommodated within existing governance frameworks and identify possible gaps that should be filled by new institutions.


Interoperability is a frequently used word in the discussion of AI and digital governance. It seems to be the solution whenever countries cannot agree on standards and regulatory frameworks. Interoperability is, however, easier said than done. For the final report as well as future work, a deep dive into what interoperability means in practice would be helpful.


AI was first introduced as an academic field in the 1950s and has undergone several springs and winters since then. The entry of AI into the popular debate was triggered by the release of OpenAI’s ChatGPT, which reached 1 million users five days after its launch in November 2022 and 100 million users in less than a year.

The awesome capabilities of AI applications have raised deep concerns about the future of work, democracy and even humanity itself – and a sense of urgency for establishing effective governance at all levels and for all use cases. They have also inspired visions of a bright future where prosperity, good health and a clean environment are the norm.

Nevertheless, Artificial General Intelligence (AGI), which would imply applications capable of self-teaching and performing tasks beyond what they were initially trained for, is most likely decades away – and many doubt whether AGI will ever be developed. For now, therefore, it makes sense for the UN specialized agencies under which different use cases fall to work out global governance systems within their fields. Thus, UNESCO, WIPO, ITU and others could draw on top-notch external and internal expertise to cut through the hype, and assess the opportunities and risks involved for each use case, guided by overarching principles synthesized from the OECD AI Principles, the G20 and others.


This blog was first posted by the Council on Economic Policies.



Hildegunn Kyvik Nordås is a Senior Associate with the Council on Economic Policies, and a member of the Portulans Institute’s Advisory Board. She also holds a position as visiting professor at Örebro University in Sweden and research professor at the Norwegian Institute of International Affairs (NUPI) in Norway. Prior to that she led the OECD’s work on services trade policy analysis, developing the Services Trade Restrictiveness Indices and database and related analytical activities (2005-2019). She also spent two years at the research department of the WTO (2002-2004).
