The recent US Supreme Court decision that overturned Roe v. Wade and opened the door to making abortion illegal also highlighted the transformative impact of algorithms, as panic spread about the possibility of period tracker data being used to criminalise women.
It was already clear 10 years ago that data analytics could peer into people’s most intimate moments, with the revelation that the aptly named Target chain store in the US could accurately predict pregnancy from purchases of lotion, supplements and cotton balls. What’s changed in the meantime on the technical side is the exponential rise of AI, based largely on neural networks and so-called deep learning. While its logic of optimised pattern-finding underpins uncanny achievements like facial recognition and self-driving cars, it’s also clear that, applied to social issues, its net effect is to scale structural injustice.
AI provides a generalised framework of prediction and inference that can ingest any sufficiently large source of data to produce performative classifications; that is, social categories that have the effect of producing the phenomena they claim to predict. This might be, for example, a classification of ‘pre-pregnant’, which in the current moment goes beyond generating sales vouchers for nappies to being a label for a potential crime scene.
AI draws pre-emptive but mostly invisible decision boundaries based on securitising predictions of risk. Prior to the Supreme Court decision in Dobbs v. Jackson, there were already cases of women being prosecuted for self-induced abortions on the basis of digital evidence that supposedly showed intent. The potential of AI is not simply to target larger numbers of women for direct prosecution but to generate a wider field of inferred intent through its capacity for creating patterns of correlation, and to apply these algorithmic morality judgements to decisions about allocation and exclusion.
Needless to say, collateral damage from AI’s brittle calculations already falls most heavily on the most vulnerable and marginalised. The net effect of institutional AI applied to social questions is to create continuous partial states of exception: zones of statistical discrimination that prevent subjects from accessing resources or rights in ways that evade residual legal protections. In this way, AI itself becomes a form of violence, combining the epistemic violence of overriding voice and testimony with computed abstractions, the administrative violence of automated decision-making and the structural violence of deepening social divisions through deep learning. Its production of boundaries locks in too easily to dynamics of increased carcerality, and even to necropolitics of the kind made visible in the pandemic, where large-scale judgements are made about who is worth preserving. These affordances are a close match for a new abortion regime that places patriarchal state control of bodies above the risk to women’s lives from ectopic pregnancies and is happy to instantiate the administrative violence of forced births.
The Dobbs decision also chimes with applied AI in a wider sense: both are symptomatic of deeper transformations. It’s clear that the Supreme Court won’t stop at abortion but is taking aim at a wide range of civil rights, even the right to a public education. AI, meanwhile, is seen as a generalisable solution to a broad range of social issues as structures of power try to realign themselves under the tectonic stresses of austerity, COVID-19 and climate change. This algorithmic restructuring is largely opaque and involves not the correction of structural injustice but the infrastructural scaling of precarity and scarcity.
But what Dobbs and AI also have in common is the central role that feminism must play in pushing back against them. In the case of the anti-abortion ruling, the reasons are self-evident. With regard to AI, both feminist standpoint theory and relational ethics directly challenge AI’s claim to authority and objectivity. Feminist perspectives are a political challenge to AI’s disembodied thoughtlessness, placing matters of interdependence and care against its reductive optimisations. At the same time, we already know that algorithms and AI are racial projects that both recreate racial identities and re-enact racism in their material effects. It turns out that AI’s generalisability and its intensification of social crises create the conditions for intersectional resistance. But like all social movements, these ethicopolitical commitments need organisational forms.
The wider resistance to AI requires ways of coming together that invert algorithmic exclusion through mutual aid and solidarity. We saw some stirrings of this in tech worker self-organisation during the Trump years, when workers themselves opposed the fascistic application of machine learning. The opening is there for similarly self-constituting struggle in and across communities, where those impacted by AI because of gender, race or any other reason are able to push for the constraint of harmful technology while at the same time exploring the options for structural renewal: approaches to social problems that are adaptive, inclusive and sustainable.
The dark moment of the Dobbs decision has reinvigorated the feminist movement for bodily and social autonomy and the search for safe and self-managed solutions. This reaction is also emblematic, in a positive way, of the wider potential for resisting AI.
Dan McQuillan is Lecturer in Creative and Social Computing at Goldsmiths, University of London.
Resisting AI: An Anti-Fascist Approach to Artificial Intelligence by Dan McQuillan is available on the Bristol University Press website. Order here for £19.99.
Image credit: Gerd Altmann from Pixabay