The emerging split in modern trustbusting: Alexander Hamilton's Fully Automated Luxury Communism vs Thomas Jefferson's Redecentralization


From the late 1970s on, the Chicago School economists worked with the likes of Ronald Reagan, Margaret Thatcher, Augusto Pinochet and Brian Mulroney to dismantle antitrust enforcement, declaring that the only time government should intervene is when monopolists conspire to raise prices — everything else is fair game.

Some 40 years later, a new generation of trustbusters has emerged, with their crosshairs centered on Big Tech, the first industry to grow up under these new, monopoly-friendly rules — and thus the first (but far from the only) industry in which markets, finance and regulatory capture were allowed to work their magic, producing a highly concentrated sector of seemingly untameable giants that grow bigger by the day.

The new anti-monopolism is still in its infancy but a very old division has emerged within its ranks: the Jeffersonian ideal of decentralized power (represented by trustbusters who want to break up Big Tech into small, manageable firms that are too small to crush emerging rivals or capture regulators) versus the Hamiltonian ideal of efficiency at scale, tempered by expert regulation that forces firms to behave in the public interest (with the end-game sometimes being "Fully Automated Luxury Communism" in which the central planning efficiencies of Wal-Mart and Amazon are put in the public's hands, rather than their shareholders').

There are positions in between these two, of course: imagine, say, creating criteria for evaluating whether some element of scale or feature makes a company a danger to new competitors or its users, and forcing spinoffs of those parts of the business, while allowing the rest to grow to arbitrarily large scales (agreeing on those criteria might be hard! Also: one consequence of getting it wrong is that we'll end up with new giants whom we'll have to defeat in order to correct our mistake).

"Fully Automated Luxury Communism" contains two assumptions: first, that late-stage capitalism has finally proved that markets aren't the only way to allocate resources efficiently. Wal-Mart's internal economy is larger than the USSR at its peak, and it doesn't use markets to figure out how to run its internal allocations or supply chain (this is explored in detail in an upcoming book called "The People's Republic of Wal-Mart," by Leigh Phillips, author of the superb Austerity Ecology & the Collapse-Porn Addicts: A Defence Of Growth, Progress, Industry And Stuff; it's also at the heart of Paul Mason's analysis in the excellent Postcapitalism); second, that the best way to capture and improve this ability to allocate resources is by keeping these entities very large.


You see this a lot in the debate about AI: advocates for keeping firms big (but taming them through regulation) say that all the real public benefits from AI (accurate medical diagnosis, tailored education, self-driving cars, etc.) are only possible with massive training datasets of the sort that requires concentration, not decentralization.


There's also a version of it in the debate about information security: Apple (the argument goes) exercises a gatekeeper authority that keeps malicious apps out of the App Store, while simultaneously maintaining a hardware, parts and repair monopoly that keeps exploitable, substandard parts (or worse, parts that have been deliberately poisoned) out of its users' devices.

More recently, the Efail vulnerability has some security researchers revisiting the wisdom of federated systems like email, and pondering whether a central point of control over endpoint design is the only way to make things secure (security is a team sport: it doesn't matter how secure your end of a conversation is if the other end is flapping wide open in the breeze, because your adversaries get to choose where they attack, and they'll always choose the weakest point).

The redecentralizers counter by pointing out the risks of having a single point of failure: when a company has the power to deliver perfect control to authoritarian thugs, expect those thugs to do everything in their power to secure the company's cooperation. They point out that the benefits of AI are largely speculative, and that really large sets of training data do nothing to root out AI's potentially fatal blind spots nor to prevent algorithmic bias — those require transparency, peer review, disclosure and subservience to the public good.

They point out that regulatory capture is an age-old problem: Carterfone may have opened the space for innovation in telephone devices (leading, eventually, to widespread modem use and the consumer internet), but it didn't stop AT&T from remaining a wicked monopolist, and subsequent attempts to use pro-competitive rules (rather than enforced smallness) to tame AT&T have failed catastrophically.

It's very hard to craft regulations that can't be subverted by dominant giants and converted from a leash around their own necks into a whip they use to fight off new entrants to their markets.

I have been noodling with the idea of regulating monopolies by punishing companies whenever their effect on their users is harmful, rather than regulating how the companies themselves should behave. My prototype for this, regarding Facebook, goes like this (a rough code sketch follows the steps):

Step one: agree on some measure of "people feel they must use Facebook even though they hate it" (this is hard!)

Step two: give Facebook a year to make that number go down

Step three: if Facebook fails, assess a whopping, massive fine against it that hurts the shareholders and prompts them to fire the board/management team

Step four: lather, rinse, repeat
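
Here's what that loop might look like as code. It's a minimal sketch, not a proposal: the survey-based "coercion index," the five-percent fine rate, and the toy Response and Company classes are all invented placeholders standing in for whatever regulators, economists and the public would actually have to hammer out in step one.

    # Illustrative sketch of the outcome-based loop above. Every
    # specific here (the coercion index, the 5% fine rate, the toy
    # classes) is a hypothetical placeholder, not a real metric or
    # statute.

    from dataclasses import dataclass

    @dataclass
    class Response:
        dislikes_service: bool   # "I hate it..."
        feels_compelled: bool    # "...but I feel I have to use it"

    @dataclass
    class Company:
        annual_revenue: float
        fines_assessed: float = 0.0

    def coercion_index(responses):
        """Step one's hard part: some agreed-on measure of coercion.
        Here, the fraction of users who both dislike the service and
        feel unable to leave it."""
        coerced = sum(1 for r in responses
                      if r.dislikes_service and r.feels_compelled)
        return coerced / len(responses)

    def annual_review(company, last_index, responses):
        """Steps two through four: give the firm a year, fine it on
        failure, and use this year's number as next year's baseline."""
        index = coercion_index(responses)
        if index >= last_index:  # the number didn't go down
            # Step three: a fine big enough to sting shareholders.
            company.fines_assessed += 0.05 * company.annual_revenue
        return index  # step four: lather, rinse, repeat

    # Example: a baseline of 0.40, then a year where things got worse.
    fb = Company(annual_revenue=40e9)
    responses = ([Response(True, True)] * 45 +
                 [Response(True, False)] * 55)
    baseline = annual_review(fb, last_index=0.40, responses=responses)
    # coercion_index is 0.45 >= 0.40, so fb.fines_assessed == 2e9

The point of the sketch is the shape of the mechanism: the regulator never tells the company how to make the number go down, only that it must, leaving the firm free to pick whatever remedy actually works.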

The devil is up there in step one, agreeing on a way to measure Facebook's coercive power. But that is, after all, the thing we really care about. Jeffersonian decentralizers want Facebook made smaller because smaller companies are less able to coerce their users. Technocratic Hamiltonians want to regulate Facebook to prevent it from abusing its users. Both care, ultimately, about abuse — if Facebook were split into four business units that still made their users miserable, Jeffersonians would not declare victory; if it were made to operate under a set of rules that still inflicted pain on billions of Facebook users, Hamiltonians would share their pain.


Size and rules are a proxy for an outcome: harm.

The Jeffersonian and Hamiltonian visions lead to very different policy recommendations in the tech space. Jeffersonians want to end Google's acquisition spree, full stop. They believe the firm has simply gotten too powerful. But even some progressive regulators might wave through Google's purchase of Waze (the traffic monitoring app), however much it strengthens Google's power over the mapping space, in hopes that the driving data may accelerate its development of self-driving cars. The price of faster progress may be the further concentration of power in Silicon Valley. To Jeffersonians, though, it is that very concentration (of power, patents, and profits) in megafirms that deters small businesses from taking risks to develop breakthrough technologies.

Facebook's dominance in social networking raises similar concerns. Privacy regulators in the United States and Europe are investigating whether Facebook did enough to protect user data from third-party apps, like the ones that Cambridge Analytica and its allies used to harvest data on tens of millions of unsuspecting Facebook users. Note that in 2013, Facebook itself clamped down on third-party access to the data it gathered, in part thanks to its worries that other firms were able to construct lesser, but still powerful, versions of its famous "social graph"—the database of intentions and connections that makes the social network so valuable to advertisers.

For Jeffersonians, the Facebook crackdown on data flows to outside developers is suspicious. It looks like the social network is trying to monopolize a data hoard that could provide essential raw materials for future start-ups. From a Hamiltonian perspective, however, securing the data trove in one massive firm looks like the responsible thing to do (as long as the firm is well regulated). Once the data is permanently transferred from Facebook to other companies, it may be very hard to ensure that it is not misused. Competitors (or "frenemies," in Ariel Ezrachi and Maurice Stucke's terms) cannot access data that is secure in Facebook's servers—but neither can hackers, blackmailers, or shadowy data brokers who are specialists in military-grade psyops. To stop "runaway data" from creating a full-disclosure dystopia for all of us, "security feudalism" seems necessary.

Policy conflict between Jeffersonians and Hamiltonians, "small-is-beautiful" democratizers and centralist bureaucratizers, will heat up in coming years. To understand the role of each tendency in the digital sphere, it is helpful to consider their approaches in more detail.

Tech Platforms and the Knowledge Problem [Frank Pasquale/American Affairs]

(Thanks, Cindy!)