A screen displays images of a Watrix employee during a gait recognition demonstration in October 2018 at the company’s office in Beijing. (AP Photo/Mark Schiefelbein)
In December 2023, the United Nations (UN) Secretary-General’s High-level Advisory Body on Artificial Intelligence published its interim report highlighting the need for stronger regulatory frameworks around AI systems. In September 2024, at the UN Summit of the Future, the Global Digital Compact was adopted, establishing crucial guardrails for AI technologies.
While the compact does include a goal to “promote transparency, accountability and robust human oversight of AI systems in compliance with international law” and calls on tech companies “to enhance the transparency and accountability of their systems,” it does not include specific provisions requiring transparency for government-deployed AI systems.
Yet, governments worldwide have incorporated algorithmic decision-making into functions ranging from policing to welfare distribution, border control, and resource allocation. These political machines—AI systems used to support or automate government decision-making—are becoming central to contemporary governance while operating largely outside public scrutiny.
This transformation represents more than a technological upgrade to government operations; it marks a fundamental shift in the relationship between citizens and the state. As political machines increasingly make or inform decisions affecting people’s lives, questions of citizen participation will become an urgent concern for multilateral organizations, civil society, and institutional stakeholders.
Invisible Infrastructure of Governance
Political machines now gather data across multiple spheres of human activity. At the physiological level, biometric technologies collect information about our bodies—vitals, facial geometry, and even gait patterns—for identification. Psychologically, sentiment analysis and personality profiling techniques classify citizens based on their perceived dispositions. Socially, network analysis maps our relationships and interactions, while environmental monitoring tracks broader contextual factors.
This comprehensive data collection has enabled unprecedented surveillance capabilities and behavioral prediction. In China, facial recognition systems have been deployed in public spaces across major cities, with the government claiming success in reducing crime rates. Meanwhile, the European Border and Coast Guard Agency (Frontex) uses automated profiling tools to determine who receives additional scrutiny at borders. In the United States, research by the AI Now Institute has documented how predictive policing algorithms disproportionately target historically marginalized communities.
The operation of these systems remains largely opaque to the public. Studies by AlgorithmWatch have found that very few of the government AI systems surveyed across the globe have meaningful transparency measures in place. This opacity creates what sociologist Aneesh Aneesh terms an “algocracy”—rule by algorithms—where the public has little understanding of how decisions affecting them are made.
Shifting Power Dynamics
The deployment of political machines has accelerated an asymmetry between those who calibrate these systems and those who are subject to their decisions. This creates a fundamental power imbalance that challenges traditional democratic processes.
What we’re seeing is a new form of governance in which technical experts and private contractors exert enormous influence over public policy implementation through the creation and maintenance of these systems. Meanwhile, citizens have minimal insight into the decision-making processes that affect their lives. Dr. Vidushi Marda, Senior Programme Officer at ARTICLE 19, describes this dynamic at work in, for example, the predictive policing algorithms used in India.
This disconnect is particularly evident in the development of automated welfare systems. In Australia, the controversial Robodebt scheme, which used algorithmic calculations to identify and recover alleged welfare overpayments, led to thousands of incorrect debt notices being issued to vulnerable citizens. The system operated for years before legal challenges revealed its flaws, ultimately resulting in an AU$1.8 billion settlement. Similar automated benefit systems in the United Kingdom (UK), the Netherlands, and India have faced criticism for errors and lack of accountability.
The integration of political machines is also transforming international relations and peacekeeping. The UN’s Global Pulse initiatives use big data analytics to predict humanitarian crises or to gauge political stability. In 2023, the UN Office for the Coordination of Humanitarian Affairs (OCHA) launched its Anticipatory Action Framework, which uses predictive analytics to allocate resources before crises fully emerge.
Defining Rights in the Age of Automation
As political machines become more sophisticated, there is growing recognition that traditional frameworks for rights and representation are insufficient. Legacy political structures were designed for a world where human decision-makers, not algorithms, determined outcomes.
We’re entering uncharted territory in terms of defining what citizenship means. When machines can assess your behavior, predict your actions, and determine your access to resources, what constitutes meaningful rights? This question brings several key areas requiring our attention into focus.
First, transparency and explainability must become non-negotiable requirements for political machines. Citizens should know when automated systems are being used to make decisions about them and understand how these systems operate. This principle has gained traction in regulatory frameworks like the EU’s AI Act, which imposes stricter requirements on high-risk AI systems, including those used by government agencies.
Second, accountability mechanisms need strengthening. When political machines make errors—as they inevitably will—clear procedures for redress must exist. The Ada Lovelace Institute’s “Accountability of Algorithmic Systems” project has developed frameworks for meaningful human oversight of AI decision-making systems that are being piloted by government agencies in the UK.
Third, we must reimagine traditional notions of democratic consent and participation. In a democracy, citizens should understand what they agree to, but the complexity of political machines makes this difficult. New forms of citizen participation in governance are emerging, such as Finland’s AI Registry, which publicly documents the government’s AI systems and invites citizens to provide feedback.
Finally, the right to freedom of thought demands renewed attention. As political machines become increasingly capable of influencing not just behavior but also cognition—through targeted information delivery and emotional manipulation—the inner domain of human thought requires protection. UN Special Rapporteur Ahmed Shaheed has called freedom of thought “foundational for many other rights” yet “largely unexplored” in legal frameworks.
Emerging Civil Society Responses
Civil society organizations globally are developing strategies to reclaim citizen agency in the face of algorithmic governance. In Kenya, the Lawyers Hub works on digital policy, AI governance, and digital rights education, conducting forums, workshops, and policy initiatives around digital rights and technology governance in Africa. In India, the Internet Freedom Foundation has successfully pushed for greater transparency in government technology deployments through public interest litigation.
In the United States, the Algorithmic Justice League combines artistic expression with technical research to make complex issues of automated governance accessible to broader audiences. Founded by Dr. Joy Buolamwini, the organization has reached thousands through exhibitions and advocacy highlighting the disparate impacts of facial recognition systems on different demographic groups.
European initiatives focus heavily on policy frameworks. AlgorithmWatch’s “Automating Society” project maps government AI use across the continent and advocates for stronger safeguards. The organization has documented cases of public sector algorithmic systems and successfully advocated for amendments to the EU AI Act that strengthened provisions around citizen rights.
These efforts, while promising, face significant challenges. Technical complexity makes oversight difficult, while commercial confidentiality can shield systems from scrutiny. Resource disparities between civil society organizations and government-corporate partnerships further complicate the picture.
Toward a More Democratic Future
As political machines become further embedded in governance, several pathways toward more democratic configurations are emerging. Some advocate for algorithmic auditing by independent third parties. The Algorithmic Impact Assessment framework developed by the Canadian government provides a structured methodology for evaluating systems before deployment. Similar approaches have been adopted by New York City’s Automated Decision Systems Task Force.
Others emphasize institutional reform. The Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI) has proposed specialized regulatory bodies to oversee high-risk AI systems. More ambitious ideas include creating new democratic forums specifically focused on technological governance, as proposed by digital ethics scholar Carissa Véliz.
Education represents another crucial front. UNESCO’s “AI and Education” initiative emphasizes that digital citizenship education must include understanding how algorithmic systems shape public life.
Legal innovation is also essential. New Zealand’s Algorithm Charter, which government agencies voluntarily sign, commits signatories to transparency and accountability in algorithmic use. Though lacking enforcement mechanisms, it establishes principles that civil society can reference when challenging problematic systems.
While these systems risk undermining democratic processes and reinforcing existing inequalities, they also create openings for reimagining citizenship, participation, and governance. The most promising approaches combine technical innovation with retooled democratic institutions. Such a combination acknowledges that political machines are not merely technical systems but socio-political ones.
We face a decisive moment in determining whether AI systems will strengthen or undermine democratic governance. The outcome depends not on the technology itself, but on the institutional, legal, and social frameworks we build around it.
Strengthening Democracy through Agentic AI
Despite the various initiatives described above, the struggle for democratic control will become increasingly urgent. As I argue in my new book, Political Automation: An Introduction to AI in Government and Its Impact on Citizens, the systems making decisions about our lives cannot remain black boxes, accessible only to those with specialized knowledge. A truly democratic future requires political machines that are subject to meaningful citizen participation.
How can we achieve that? Political machines are too fast and too ubiquitous for any human to keep up with, elected or not. The rise of Agentic AI may provide an intriguing solution. Agentic AI creates the possibility of “digital citizens” who could participate in political processes on our behalf. Using vast amounts of personal and personalized data, these AI agents could be animated to speak and behave like their real-world counterparts.
For example, the use of AI agents in humanitarian operations could change the way programs are implemented to better respond to population needs. Organizations often operate on the perceived needs of aid recipients but struggle to know what beneficiaries actually want or how they will respond to a project. Digital citizens representing real people could be queried, to the benefit of both the recipients of aid and the government, international, and nonprofit organizations delivering it. This approach becomes particularly valuable in contexts where AI-powered decision-making systems are already being used in the policy development process.
This development may lead to a new form of governance where digital citizens interact with political machines to ensure transparency, accountability, and alignment with real citizen preferences before implementation. Digital citizens may even engage in a form of virtual deliberation, potentially offloading certain aspects of the mental work of debate.
The relationship between real and digital citizens would require careful management. Real citizens might “drive” their digital counterparts through preference-setting interfaces or conversational interactions, defining the values they want prioritized in their digital citizens’ decision-making. This type of oversight would help ensure automated deliberation reflects our collective needs while granting political machines popular legitimacy.
This vision remains conceptual and would take decades to implement, with pockets of innovation here and there and variations based on government types and regulatory frameworks. However, the stakes of inaction are high. Without proper governance, we risk rule by algorithms characterized by opacity, ultimately undermining the political legitimacy of the state even as it purportedly increases its efficiency.
Eduardo Albrecht is an Associate Professor at Mercy University, Adjunct Associate Professor at Columbia University and City University of New York, Senior Fellow at United Nations University Centre for Policy Research, and author of Political Automation: An Introduction to AI in Government and Its Impact on Citizens (Oxford University Press, 2025).