This blog post is part of a series where Portulans staff review recent developments in tech policy. Check our Twitter and LinkedIn to follow the conversation.
Last month saw the launch of the Oxford Commission on AI and Good Governance (OxCAIGG), hosted by the Oxford Internet Institute (where our Board Member, Bill Dutton, is a Senior Fellow). Over the next eighteen months, OxCAIGG will contribute research and evidence-based policy recommendations to help governments and public-sector bodies worldwide understand and mobilize the power and opportunities offered by machine learning and data science. To this end, the Commission unites world-leading experts on governance, technology, security and human rights.
Portulans applauds this highly interdisciplinary, international effort, particularly at such a historic juncture for the governance of AI. On the one hand, the COVID-19 crisis has magnified the intensity of conversations about machine learning in healthcare (recently, the National Institutes of Health launched a new center for using AI in medical imaging), not to mention conversations about privacy and contact tracing (see also: AI Can Battle Coronavirus, But Privacy Shouldn’t Be A Casualty). On the other hand, recent global demonstrations in support of the Black Lives Matter movement have pushed discussions about racist algorithmic bias (for instance, dangerous bias in facial recognition technology) and the harms of non-inclusive algorithmic design to center stage. In turn, these conversations about AI governance are situated within wider debates about the role of government in society, and the power of tech to disrupt society as we know it.
I had the opportunity to virtually interview Sir Julian King, a commissioner for OxCAIGG. King is a former British diplomat and civil servant who previously served as Ambassador to France and to Ireland, and as Director General of the Northern Ireland Office. From 2016, King headed the Security Union portfolio at the European Commission, tasked with leading European policy on countering cyber threats, tackling disinformation and securing digital infrastructure. In conversation, King explained the Commission’s purpose, vision and research agenda; he also highlighted several recent trends in global conversations about AI governance.
AI is an essential part of the digital plumbing of our interconnected lives. It needs to work, and to be transparent and accountable. Easy to say, hard to do. But it’s vital that governments, public and private sectors, and indeed citizens get this right, if we want to live in societies that enable us to achieve our potential while respecting our fundamental rights.
– Sir Julian King, OxCAIGG Commissioner
Building Trust in AI
King shared his insights on the central role of citizen expectations and preferences in AI design and implementation. By centering individual and community experiences of tech, King explained, algorithmic systems can improve both their actual and their perceived fairness. King noted the recent A-Level algorithm uproar as a particularly jarring case in point. As the Ada Lovelace Institute’s analysis demonstrated, although Ofqual’s algorithmic system was developed with a degree of transparency and accountability, its efficacy and accuracy were sub-par; in any case, the system was implemented against a backdrop of intense public skepticism about algorithmic systems. The report continues: “[Ofqual’s system] needed not only to meet, but to exceed existing standards for transparency and accountability, to avoid doing indelible harm to public confidence in data-driven decision making”. Indeed, the scale of the damage will create obstacles for future reliance on wide-scale predictive algorithms by British public agencies, and will inform forthcoming global discussions about citizen trust in AI.
See also: The UK Exam Debacle Reminds Us That Algorithms Can’t Fix Broken Systems.
AI, Inclusivity and Injustice
On a related note, one of the Commission’s founding principles is Inclusive Design, which addresses discrimination and bias in AI technologies arising from “opaque” datasets, the exclusion of minorities and under-represented groups, and the lack of diversity in design. According to King, there are two central challenges in improving the inclusivity of AI: the design of AI systems, and the quality of the data fed into them. Inequitable design and poor datasets not only reduce the effectiveness of AI, but also reinforce historic injustices and inequalities, particularly when deployed in public governance settings like law enforcement. While AI technology can make policing more efficient, efficiency is not synonymous with fairness or lack of bias, comments Dr. Rashawn Ray, especially when systems are rooted in prejudicial, non-inclusive data that is statistically more likely to misidentify people of color than white people. To this end, the Commission will investigate which policy interventions could improve the ways in which AI tech helps overcome societal inequality, and how to improve the diversity of voices involved.
See also: Mutale Nkonde (CEO, AI For the People) and Rashad Robinson (President, Color of Change) in conversation about racial justice in tech at RightsCon 2020.
Civic Engagement and AI
King underlined the overwhelming importance of civil society participation in the design, implementation and independent evaluation of AI systems; such representation ensures, to some degree, that the expectations, preferences and concerns of citizens have a seat at the table. In fact, King noted, “if we can’t build this into our approach, we cannot deliver public goods”. Conversations at last month’s RightsCon, a world-leading summit on human rights in the digital age, were undoubtedly a step in the right direction, as are the UN Secretary General’s newly launched Roadmap for Digital Cooperation and the Network of Centers’ project exploring the “Ethics of Digitalisation”, with a strong focus on algorithmic civic accountability and on how AI can both amplify and curb civic freedoms. With this in mind, the Commission will also focus on how to “widen the discussion” about the impact of AI on citizens and improve civic engagement.
See also: Sorting Algorithms: The Role of Civil Society in Ensuring AI Is Fair, Accountable and Transparent.
Public-Private Partnerships for AI Governance
As King commented, the private sector has historically had the power and opportunity to set the conditions for the governance of AI (or its absence, in some cases). As a result, catch-up efforts by government regulators to scrutinize and remedy algorithmic deficiencies and malign impacts are often frustrated by the business-strategic ethos of tech companies, which act as “gatekeepers” of both independent scrutiny into their activities and invaluable data. Compromise is therefore inevitable: as King noted, we must not “make the best the enemy of the good”, and should push for a multitude of flexible, context-specific models for “persistent accountability” and coordination that reflect and build on existing synergies between the public and private sectors in tech governance. As the 2020 Global Talent Competitiveness Index demonstrates, turning AI into a force for public good will require a proactive, cooperative approach that bridges the gap between mostly high-income AI “talent champions” and the rest of the world.
See also: What To Expect from Biden-Harris on Tech Policy, Platform Regulation and China.
The COVID-19 crisis has intensified pre-existing debates about the dual role of government and tech in citizens’ lives and livelihoods. At the close of a thought-provoking interview, King’s concluding remarks were particularly powerful: change is unavoidable, and indeed already happening. Engaging with AI is not a choice, but an inevitability. To this end, we have a responsibility as technologists, policymakers, researchers and activists to design, implement and advocate for the best and most inclusive governance possible.