While AI offers many tangible benefits, the arrival of agentic AI, characterised by its capacity for relatively independent, autonomous action, can undermine core democratic commitments.
One tenet of democracy is that citizens should be able to understand how decisions are made and who is responsible for them. The legitimate use of power to make decisions requires that at least three conditions are met: equality before the law, the consent of the governed, and sufficient justification.
Legitimacy is undermined by the opacity of many AI systems, particularly agentic ones. These systems often operate as ‘black boxes’, where the internal logic and reasoning processes are difficult to discern, even for their developers. This difficulty can be made more acute by proprietary rights that shield systems from independent scrutiny. Altogether, this lack of transparency has several consequences and invites several conceptual errors.
To begin, AI agents do not ‘think’; their logic is rooted in mathematical models that are often difficult to translate into natural language, let alone plain, everyday terms. This can make AI decision making difficult to audit, and challenging to explain in ways that allow lay people to grasp the implications of these models. Where there is no comprehension, there can be no informed consent.
The challenge of accountability and responsibility
One of the most difficult challenges posed by agentic AI is how to assign responsibility when something goes wrong. The use of complex AI systems often involves multiple actors (developers, deployers, and users), making it difficult to pinpoint who is accountable when a system causes harm. This diffusion of responsibility makes it harder to hold individuals or organisations answerable for the actions of AI systems.
One dimension of this issue is the distinction between the model developer and system deployer. The model developer may not have intended for the model to be used in a particular way, while the system deployer might not fully understand the inner workings of the model. Without clear regulations, liability will be contested in the courts.
Next, the rise of AI decision making has created an accountability gap in public life. When AI systems make determinations about loans, vehicle operations or other consequential matters, opaque calculations can obscure who bears responsibility for errors or harmful outcomes. The complexity of these systems makes it nearly impossible for affected individuals to seek redress or for operators to correct systematic biases. The result is persistent injustice, which in time could harden into moral bitterness as hope fades that wrongs will be adequately addressed.
Finally, this lack of accountability directly undermines trust in public institutions. While people can and do extend trust without complete knowledge, or rely on others despite uncertainty, there must still be some baseline predictability in behaviour for trust to work. That predictability is complicated by decades of data extractivism: the resulting databases mean governments can predict their subjects, but the subjects cannot predict them. Power sees, while the seen remain blind. What kind of de-democratisation could be introduced by agentic AI that exploits these kinds of power and knowledge asymmetries?
Tightening agency
The relative autonomy of agentic AI systems raises questions about human control, interruptibility and the allocation of responsibility. Delegating decision-making power to autonomous systems can erode meaningful human control, potentially diminishing popular sovereignty over what can and cannot take place within, or be done by, a democracy.
Where agentic AI can take actions without human intervention, there will be unintended consequences. This is especially concerning in high-stakes domains like law enforcement, healthcare and military operations. When trying to avoid responsibility, institutions will likely characterise undesirable outcomes as unforeseen, although many are better understood as reasonably foreseeable.
Any mandatory requirement for human approval may be undermined as users become accustomed to a stream of rapid requests, reducing oversight to a rubber stamp. There are also legitimate concerns about how AI systems might spawn sub-agents that cannot be easily controlled or that circumvent restrictions.
Strategies for democratisation
There are good reasons to think that these challenges are not insurmountable. Traceability, attributability and interruptibility are properties that can make agentic AI more accountable. Whether through a unique identifier or other means, tracing an agent’s actions can provide evidence for attribution and liability, deterring bad-faith use. Some argue that any and all agentic AI operations must be interruptible. The requirement that AI systems be interruptible by users at any time is a useful safeguard, even if graceful interruption mid-task can be difficult.
Indeed, much as in the financial system, there may be value in affording the ability to reverse actions. Reversibility matters for agentic AI because it allows for continuous refinement and adaptation: when rules no longer reflect the relationships they were meant to capture, they can be critically examined and modified, letting organisations maintain flexible, responsive and accurate decision-making frameworks. This capacity for revision prevents the ossification of potentially flawed guidelines and preserves the possibility of dynamic intervention.
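To make these properties concrete, here is a minimal sketch in Python of how unique identifiers for traceability, user interruption and reversal of logged actions might fit together. All names (AccountableAgent, ActionRecord and so on) are hypothetical illustrations, not a reference to any real agent framework.

```python
# Illustrative sketch only: all names are hypothetical, not a real framework.
import uuid
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ActionRecord:
    """One agent action, logged with a unique identifier for traceability."""
    action_id: str
    description: str
    undo: Callable[[], None]  # how to reverse this action later


class AccountableAgent:
    """Toy agent whose actions are traceable, interruptible and reversible."""

    def __init__(self, agent_id: str):
        self.agent_id = agent_id            # unique identifier for attribution
        self.log: List[ActionRecord] = []   # audit trail
        self.interrupted = False            # flag checked before every action

    def interrupt(self) -> None:
        """A user (or regulator) can halt the agent at any time."""
        self.interrupted = True

    def act(self, description: str,
            do: Callable[[], None], undo: Callable[[], None]) -> str:
        """Perform an action, but only if no interruption has been requested."""
        if self.interrupted:
            raise RuntimeError(f"{self.agent_id}: interrupted before '{description}'")
        do()
        record = ActionRecord(str(uuid.uuid4()), description, undo)
        self.log.append(record)             # every action leaves a trace
        return record.action_id

    def reverse(self, action_id: str) -> None:
        """Reverse a previously logged action, financial-system style."""
        for record in self.log:
            if record.action_id == action_id:
                record.undo()
                return
        raise KeyError(f"no action {action_id} in audit log")


# Usage: a reversible debit, then an interruption that blocks further actions.
balance = {"amount": 100}
agent = AccountableAgent("agent-001")
aid = agent.act("debit 30",
                do=lambda: balance.update(amount=balance["amount"] - 30),
                undo=lambda: balance.update(amount=balance["amount"] + 30))
agent.reverse(aid)   # balance is restored to 100
agent.interrupt()    # any further act() call now fails fast
```

Even a toy example like this shows why graceful interruption is hard: the flag is only checked between actions, so a long-running step cannot be stopped mid-flight.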
A democratic future depends on our ability to make AI’s power legitimate. Decisiveness and a collective commitment to explainability are necessary to help agentic AI do good in the world.
Scott Timcke is Senior Research Associate with Research ICT Africa. His research focuses on the transformations of race, class and technology during modernity.
Algorithms and the End of Politics by Scott Timcke is available to read open access on Bristol University Press Digital here.
Image credit: Graeme Worsfold via Unsplash