One way of understanding democracy is as a means of settling differences of opinion over the direction of self-government in a binding but finite manner. Voting takes place and all sides accept the result while preparing their case for the next election. Rulers govern through consent, knowing that in time they too will return to the ranks of the governed.
But democracy is always under strain. The principal reason is that the propertied (typically a minority) are wary that the propertyless (typically a majority) will vote for expropriation.
The barycentre of politics in liberal-democratic societies is the use of property and the management of its consequences, such as social inequality, homelessness and other social costs. At best, the propertied make strategic concessions around welfare and mild redistribution that are meant to stave off mass revolt. In practice, this means there is a dissonance between rhetorical support for rights and plain material inequalities.
Conversely, the propertyless seek to build larger coalitions from their ranks to sway democratic institutions in their favour. But they face organised opposition. One cynical tactic their opponents use is to simply deny that democracy is under strain. Another is to use wedge issues to fracture budding coalitions among the propertyless. Nationalism, identity and prejudice can also be mobilised to form lines of division.
Despite the long odds against the full exercise of democracy, there must be space for the proverbial ‘optimism of the will’. ‘As instructive as the past may be,’ Adam Przeworski writes, ‘the future is more interesting. What is the current state of democracy, and what are its prospects?’
With all these points in mind, what are the prospects for democracy with the new range of AI products coming to market? How might these products ease, tighten or alter the strain upon democracy?
Reification of AI systems
Sadly, the global discussion around AI is rife with one-dimensional narratives of the kind Herbert Marcuse described in One-Dimensional Man. In the present discussion, there are many thought-terminating clichés about how AI will automatically bring economic development by addressing disease, famine and poverty. Other examples include vague recommendations of ‘adopting a multidisciplinary and collaborative approach to regulation’ that litter the policy space and conference circuit. Besides being banal, these sentiments pose a danger to society precisely because they smooth over the findings of critical anthropological and sociological scholarship.
AI hype is the quintessential example of a one-dimensional narrative: the misleading overstatement of the abilities of AI systems and the speed of their progress. One purpose is to encourage AI to be applied in new domains, even ones that are prima facie inappropriate or unsuitable. Such hype is different from a naïve misunderstanding of functional abilities and inherent limitations, which is entirely forgivable. Instead, AI hype often focuses on the product itself, pushes long-discredited technologically determinist paradigms, neglects the integration of technologies into institutions, organisations and societies, and overlooks how these entities and their sub-components interact with the product to create or constrain particular possibilities.
The concept of reification is useful for understanding the consequences of one-dimensional narratives like AI hype. Put simply, reification captures how human relations are reduced to, and misconstrued as, physical objects, in turn giving them the appearance of an inevitable naturalness. For example, there is a narrative that IBM’s Deep Blue beat Garry Kasparov at chess in 1997, a statement intended to signal a celebratory milestone in the rapidly advancing field of AI research. But this narrative overlooks the programmers and engineers whose decades of work made that project possible, as well as the billions of dollars in private and public funding for the advanced industrial processes behind it.
When reifying AI systems, a person attributes human-like intelligence and abilities to machines, or adheres to strong beliefs about the potential for and impact of AI on various domains and tasks. Reification can be seen as a form of fetishism, in which the social relations around labour are forgotten or misattributed to machinery and circuitry. The concept helps explain the disconnect between social structure and social consciousness, much like the disconnect between the industrial system that produced Deep Blue and the prevailing narrative of a superior artificially intelligent system beating Kasparov.
Adopting a critical stance towards AI in society
Decisions do not happen in a vacuum. For example, the design of recommender systems, content moderation systems and the enforcement of terms and conditions are shaped by commercial imperatives, as well as by the worldviews and ideologies guiding those actions and implementations.
It is important to ruthlessly critique the reification of AI systems and to recognise their human and social dimensions. AI is a sociotechnical phenomenon: people do the work of statistical modelling in AI, giving ‘weight’ to different categories of data within their models. In corporate settings, they do so under the overriding directive of returning value to shareholders. Without first fully appreciating this, it is harder to see how the presumed natural benefits of commodification, assetisation and share dividends may place another strain on democracy.
Technology is not merely a means but also a social and political phenomenon that shapes, and is shaped by, human values, interests and power relations. Much of this research agenda can take inspiration from Andrew Feenberg’s critical theory of technology. Feenberg challenges the technocratic logic of modernity and advocates instead the democratic transformation of technology to serve more humane goals.
My own work follows in this tradition while also foregrounding the racial, colonial and state-formation dynamics that were striking features of Atlantic modernity. This approach can add depth to the current understanding of how and why AI extends processes of capitalist exploitation and domination into spheres of social life.
As key players in the academic attention economy, policy researchers especially must exercise great caution to avoid perpetuating one-dimensional narratives that favour technocratic, determinist solutions over collective democratic deliberation. Without this kind of sustained introspection, policy researchers may sympathise with the propertyless while entrenching the interests of the propertied.
Like prior digital technologies, the rollout and adoption of AI products such as automated financial investing and social media monitoring tools will likely have secondary effects on democratic processes. Knowing this, researchers are making evidence-based inferences to help elected representatives anticipate how AI products will interact with the essential features of a democratic system.
Bringing together democratic theory and political economy can help researchers discover whether there are stark differences between how democracies and autocracies adopt and adapt AI systems. The priorities of this project are to identify criteria for judging whether AI systems serve democratic ends and to determine how best to ensure that the grand design of AI is subordinated to the grand design of democracy.
Scott Timcke studies the politics of race, class and social inequality as they are mediated by digital infrastructures.
Algorithms and the End of Politics by Scott Timcke is available Open Access on Bristol University Press Digital. You can also order a copy for £80.00 here.
Image credit: Tara Winstead via Pexels