The UK is hosting a global AI Safety Summit. Is it really about building trust to set a tone for responsible technologies? Or simply a lot of posturing that will only serve to further entrench monopolies and concentrate power?
The AI Safety Summit is being held at Bletchley Park in the UK at the start of November 2023. By organising the summit, the UK government is trying to lead the world in tech regulation, to set the example for how technologies like AI are governed. The European Union’s landmark data protection legislation – the GDPR – achieved something similar, in the process entrenching European values and priorities in privacy regulation exported worldwide. Legislation from California to Kenya is now written in relation to, or directly modelled on, the GDPR. Since leaving the EU, the UK government has produced a flurry of tech policies across many different departments in an attempt to repeat that feat. But these have so far yielded little concrete regulation or new law, and have shown a tendency towards insular discussion and narrowing perspectives.
This summit, and the host of events surrounding it, are an attempt to change that tune, for the UK to take a more concrete lead in defining how technologies are developed and regulated. But new national or international laws are unlikely to be an immediate outcome, with the Prime Minister stating that the UK will not ‘rush to regulate’. It would be lovely to take Rishi Sunak’s comments at face value and imagine that this means taking the time to deliberate widely and engage with affected groups around the world. But when these claims of not rushing to regulate are paired with appeals to the legitimacy of nation states and an obsessive focus on innovation, they look more like a deflection in support of the continued corporate dominance of AI development and deployment.
There are a number of issues to raise concerning the summit itself, starting with the very wording that frames the event. The focus is on areas of AI deemed high risk. This includes not only big ‘foundational’ issues of ‘general’ AI (which is what gets labelled existential risk, of the kind we see in science fiction) but also the risks of narrow AI (such as bioethics and bioengineering issues). The narrower type is a much more concrete and present concern, posing urgent risks to those minoritised by race, location, class, gender, sexuality and disability. But the number of politicians, CEOs and ‘godfathers’ of AI (and yes, they are mostly men of a certain age) pushing the focus onto existential risks is telling of a purposeful move to steer discourse and critique away from these more urgent concerns. Addressing them would impact their current products, research programmes and business models.
The location, too, is problematic. Bletchley Park is a major site of computing history in the UK, where developments in technology aided wartime codebreaking efforts such as breaking the German Enigma cipher. But it also represents the exclusion of marginalised people, like Alan Turing, whose work was exploited during WWII before he was persecuted for his sexuality. The location also limits attendance to 100 people, and we should pay special attention to who is on this exclusive list. World leaders (or, mostly, their representatives) will be there, but only from richer nations like the US (and perhaps China, if they have decided to take up the invitation, a move that attempts to draw them into international consensus). Even the EU will be represented only by six powerful Western European nations, and the Global South will have little to no voice at the table. From the UK, the leader of the AI taskforce, Ian Hogarth, is an entrepreneur and investor rather than a technologist, researcher or policy maker. The other attendees appear to be mostly from industry as well, with the major technology companies well represented. The list includes controversial and incendiary figures like Elon Musk, whose concern for tech safety is questionable at best. Even the AI safety organisations on the list are companies like Conjecture, engaged in speculation about existential risk rather than immediate harms. Meanwhile, public, academic, civil society and community organisations concerned with the spiralling harms AI already inflicts on marginalised groups are excluded entirely. The guest list reflects not only the UK government’s pro-innovation rhetoric but also its focus on bigger companies, as it emerges that the billions in new AI investment are going to tech giants like Microsoft rather than more locally representative startups.
Beneath all these issues lies what is really at work in the summit. It is, first and foremost, a game of power. It is a game of power between governments, over who can frame the issues, export regulation and control discourse. It is about the UK competing with the AI giants of the US and China, attempting to lead, if not in resources, then in mediation and control of the narrative. It is also a game between governments and corporations, a balancing act over who has what say in regulation without upsetting the innovation story. The companies will themselves be competing for influence, directly as well as by making a show of caring about the risks AI poses. And all of this is driven by distraction, an attempt to avoid disturbing the golden goose.
The summit will be held behind closed doors among a range of governments and companies that have very little trust from the public. So a key purpose of the summit is not to build trust but to perform the conditions of trust. Safety acts as a veil for national security and business interests. Innovation acts as a veil for exploitation and monopolisation. And the public, kept away, becomes a resource from which to extract legitimacy. The government is more concerned with the PR machinery for controlling perception than with engaging the groups already most affected by AI, or the policy and research communities who have been working tirelessly to address these harms. These groups are instead forced to accept the framing of the event while being excluded from it. This is reflected in Rishi Sunak’s announcement of a ‘world-leading’ AI Safety Institute in the UK, presenting as new a watered-down version of work that has been done for many years across a range of fields. The creation of this singular national institute will frame what safety means in the context of AI, leveraging its legitimacy to bolster trust in the UK as a locus (and exporter) of technology and regulation.
Garfield Benjamin is Senior Lecturer in Sociology at Solent University.
Mistrust Issues, by Garfield Benjamin, is available here for £40.00 on the Bristol University Press website.