NRI 2023: The Complex AI Trust Landscape

Why talk about trust? Quite simply because trust is essential to the solidity of social exchanges and to our prosperity. Without trust, or without mechanisms to support or reinforce trust, markets and societies as a whole cannot function. In the past, trust was essentially a human-centered mechanism, enriched by institutions. As demonstrated in Why Nations Fail, economic prosperity has often been the result of a positive feedback loop between strong interpersonal and institutional trust.

With digital technologies, things have somewhat changed. Trust has expanded to include trust in the machine in the form of human-technology interactions. Indeed, researchers have explained that trust in technology is different from trust in other humans, in the sense that humans are moral agents, whereas technology is often a mere input in our exchanges. Consequently, the key elements of trust are not benevolence or integrity, but rather the functionality and reliability of the technology.

Technology-driven and innovative companies continue to build the right products and invest in ubiquitous, functional digital infrastructure to lead the digital revolution, while other segments lag behind, creating a digital divide. However, while digital technology may contribute to prosperity, the link is not automatic. Wealth has increasingly concentrated in a few superstar digital companies, and social media platforms have become channels for relaying fake news. The Covid-19 pandemic is a good example: some of the most digitally advanced countries struggled to speed up vaccination against the virus, as anti-vaccine movements aggressively used digital technology to amplify outrage and distrust of institutions.

The next and most recent evolution in digital technology, artificial intelligence (AI), is making the trust landscape even more complex. With AI, digital technology moves from a simple input to an autonomous system in which machines can make their own decisions and generate their own creations. This has happened very quickly. After a long AI winter, AI entered a new spring roughly a decade ago, when researchers made breakthroughs in machine learning, particularly deep learning. In 2017, Google Brain researchers introduced the transformer architecture, enabling the scaling of generative AI models such as OpenAI's ChatGPT, Google's Bard, Anthropic's Claude, and many others.

While this raises the stakes of trust even higher than before, it also strengthens trust's human elements. Indeed, to guarantee trust, humans must define ethical principles ex ante to ensure trustworthy AI and limit unforeseen flaws. Many countries and international institutions have developed their own ethical frameworks. For example, the Canadian government's approach includes principles such as transparency, accountability, and fairness in AI systems. The European Commission has published AI guidelines, developed by its expert group in 2019, comprising seven elements including fairness, accountability, safety, transparency, and human oversight. It is also worth welcoming the fact that the major private players in the AI field, from Google to OpenAI, have defined their own ethical guidelines, whether driven by the need for users to quickly accept the technology in order to secure a first-mover advantage, or because proactive self-regulation is likely less restrictive than later external regulation.

In addition to ethical principles, trust in AI will also come with AI exposure and familiarity. South Korea, for example, has recognized the importance of AI literacy in building trust among its citizens. The government has launched AI education programs and awareness campaigns to increase public understanding of AI technologies. The European Union has also proposed the creation of AI trust centers to guarantee the reliability of AI systems. These centers would certify AI algorithms, ensuring that they meet ethical and regulatory standards.

Five things to consider when building trust in AI

While many studies show that AI can be a powerful driver of well-being and prosperity, countries, regions, companies and people are at different stages and have different interests in trustworthy AI. If the ambition is for AI to benefit the world as a whole, the following must be taken into account:

a) Confidence in AI technology requires education

b) Trust in AI technology goes hand in hand with trust in institutions

c) Trust in AI technology goes hand in hand with ethical AI

d) Ethical AI rules must prevail as the basis for proper AI use

e) Areas such as education, healthcare and work are paramount to trustworthy AI


The 2023 edition of the Network Readiness Index, dedicated to the theme of trust in technology and the network society, will launch on November 20th with a hybrid event at Saïd Business School, University of Oxford.


Dr. Jacques Bughin is the CEO of Machaon Advisory and a professor of Management. He retired from McKinsey as senior partner and director of the McKinsey Global Institute. He advises Antler and Fortino Capital, two major VC/PE firms, and serves on the board of multiple companies. He has served as a member of the Portulans Institute Advisory Board since 2019.
