Networked authoritarianism may contain the seeds of its own undoing


James Scott's 1998 classic Seeing Like a State describes how governments shoehorn the governed into legible, countable ways of living and working so that state bureaucracies can count and steer them. Political scientist Henry Farrell (previously) discusses how networked authoritarianism is touted by its advocates as a way of resolving the problems of state-like seeing: if a state spies on people enough and allows machine-learning systems to incorporate their behavior and respond to it, it can become "a more efficient competitor that can beat democracy at its home game," providing for everyone's needs better than a democracy could.

China is a good example of this: both its proponents and its detractors say that with machine learning and ubiquitous surveillance, China is creating a sustainable autocracy, capable of solving the "basic authoritarian dilemma": "gathering and collating information and being sufficiently responsive to its citizens' needs to remain stable."

But Farrell speculates that this isn't what's actually happening: China is incredibly unstable (wildcat strikes, seemingly unstoppable pro-democracy movements, concentration camps, debt bubbles, manufacturing collapse, routine kidnappings, massive corruption, etc.).

If machine learning and surveillance are helping to stabilize China, it's not (merely, or primarily) by allowing for efficient allocations that defuse anti-authoritarian sentiment; it's by spying on people so that they can be arrested, imprisoned, and have their organs harvested.


Pro-democracy theorists point out that political liberty is a stabilizing force because the "openness to contrary opinions and divergent perspectives" creates "countervailing tendencies to self-reinforcing biases" — a means for the state to correct itself.


The theory of sustainable networked authoritarianism (Farrell suggests calling it "Aithoritarianism") is that surveillance and machine learning can root out "self-reinforcing biases" without giving dissidents any political space or legitimacy.

But there's lots of evidence that this isn't what's happening in China. Rather than becoming a scaled-up version of Singapore's mix of authoritarianism and technocratic planning applied to 1.386b people, China could become "radically monstrous and more radically unstable."


Like all monotheoretic accounts, you should treat this post with some skepticism: political reality is always more complex and muddier than any abstraction. There are surely other effects (another, particularly interesting one for big countries such as China, is to relax the assumption that the state is a monolith, and to think about the intersection between machine learning and warring bureaucratic factions within the center, and between the center and periphery). Yet I think that it is plausible that it at least maps one significant set of causal relationships that may push (in combination with, or against, other structural forces) towards very different outcomes than the conventional wisdom imagines. Comments, elaborations, qualifications and disagreements welcome.


Seeing Like a Finite State Machine [Henry Farrell/Crooked Timber]


(Image: Derzsi Elekes Andor, CC BY-SA, modified; Cryteria, CC BY, modified)