“Everything is computer!” Trump marveled recently from the driver’s seat of a Tesla on the White House lawn that he bought as a favor to his scheming vizier.
Understandably, the moment turned into a meme: Trump was right. Screens have stepped into every room and platforms have stepped into every interaction. Anything you do or see nowadays — whether it’s a commute, a shopping trip, a vacation, a job, a date, a product, a work of art, or a government — revolves around computer.
What I want to do in this post is look at how that happened and who (or what) is responsible. There are three parts. First, a survey of the work of Shoshana Zuboff, who theorizes a “governance play” on the part of tech companies, and Yanis Varoufakis, who argues we now live under “technofeudalism” rather than capitalism. Second, a sketch of how we might apply their conclusions, which largely draw from a study of politics and economics, to questions of cultural association and the arts. Third, a rough conjecture about how this relates to AI slop, offering another set of reasons for you to hate that stuff.
This post by John Ganz perfectly phrases the bigger question I want to ask.

The answer is a bit of both. I’d estimate it’s 30% the intention to control society and 70% structural conditions that created a vacuum somebody had to fill. Powerful people are, above all else, opportunists. So, what were those “structural conditions”?
Surveillance Capitalism and Technofeudalism
Shoshana Zuboff argues in The Age of Surveillance Capitalism (2019) that we have entered a new economic and societal order founded upon the collection of data through surveillance. She goes so far as to take the classical three factors of production (land, labor, capital) and propose “data” as a fourth. What data collection allows firms to do is twofold.
First, it lets them commodify what normie capitalism hadn’t yet turned into revenue streams (stuff like bodily rhythms, personal relationships, recreational time, etc.) while also disrupting and out-competing other firms and institutions that had less access to data. Zuboff calls this untapped raw material “behavioral surplus,” something the platforms realized they could capitalize on through a kind of opportunistic stumbling.
Second (and more perniciously), surveillance capitalism allows firms to turn private experience into a raw material. They can manufacture desires and consent through the strategic compartmentalization of people into niches, the targeted recommendation of content, and the artificial re-ordering of the world along the lines of what their algorithms understand, allow, and control. A restaurant must have good reviews and placement on the maps app; a person must have an email, a digital ID, and a social profile; a seller must have a shop on one of the big e-retailers; and any commentator, artist, influencer, politician, or writer must have a presence on social media.
By leveraging the data they collect to shape the contexts in which people come together (economically, politically, intimately, and so on), the platforms exercise a kind of sovereignty over social space. Their preferences become ever more hard-coded through the collection of ever more data: once you get recommended into a niche, you interact with posts in that niche, and the algorithm recommends you deeper into it.
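That feedback loop can be sketched in a few lines of code. This is a toy model with made-up numbers (the niche names, the 0.7 engagement rate, and the one-point reinforcement are all assumptions for illustration), not any platform’s actual recommender:

```python
import random

random.seed(0)

# Toy model: the platform recommends from niches in proportion to past
# engagement, and the user engages with most of what is shown.
niches = ["cooking", "politics", "gym"]
engagement = {n: 1.0 for n in niches}  # identical starting weights

def recommend():
    # sample a niche with probability proportional to accumulated engagement
    total = sum(engagement.values())
    r = random.uniform(0, total)
    for n, w in engagement.items():
        r -= w
        if r <= 0:
            return n
    return niches[-1]

for _ in range(1000):
    shown = recommend()
    if random.random() < 0.7:      # user engages with most of what's shown...
        engagement[shown] += 1.0   # ...which makes it more likely to be shown again

total = sum(engagement.values())
shares = {n: engagement[n] / total for n in niches}
print(shares)
```

Run it a few times with different seeds: the final shares drift away from the even three-way split, and whichever niche happens to attract early clicks tends to pull ahead, a rich-get-richer dynamic the platform never had to design deliberately.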
It is important to remember that, on the one hand, this is a diabolical plot by powerful people wanting to accumulate more money and power. But on the other hand, this is the result of “structural conditions” — once you gather enough of that data, and occupy so central a position, you start to drift in that direction. And the actors here are automated processes as much as they are human agents — whether it’s the computers of dead internet theory, the hive mind of the tech firms’ massive bureaucracies, or the mercurial spikes and sloughs of financial markets.
Platforms end up drawing the map of the world we all navigate by, and in so doing draw themselves into a corner. In a recent paper, Zuboff sees platforms (particularly in the wake of the coronavirus pandemic) executing what she calls a “governance play,” where authorities traditionally reserved for states (regulation of speech, regulation of markets, elections, public health, etc.) are increasingly carried out by these platforms with the help of their data.
Their authority is more immediate (expressed in automated systems that act across split seconds rather than days), more diffuse (a million precise little actions across a whole platform rather than big policy changes), and less transparent than the power of governments. In part this is because they are private firms, but it’s also because of their scale, the ephemerality of data, and the complexity and arguable semi-autonomy of the systems they exercise power through.
Yanis Varoufakis makes the argument in Technofeudalism (2023) that this new order amounts to the creation of “digital fiefs” which seek rent, not profit. As imposed structures on top of our social reality, which draw power through their ability to coordinate transactions and envelop entire spheres of activity, the platforms and their masters resemble medieval lords more than entrepreneurs.

Varoufakis sees neoliberal deregulation and the availability of “easy money” as key to the rise of tech (a point Nick Srnicek also makes in his history Platform Capitalism). He argues that in the wake of the 2008 crash, the central bank money that bailed out economies got stuck at the top. It wasn’t invested in productive, tangible assets (in part because of austerity measures, which shrank the economy) but was instead used to purchase real estate, fund speculative tech, and buy back stock, which in turn drove up the value of the shares capitalists already held (this is the process by which Musk and Bezos accumulated much of their wealth).
Meanwhile, more and more economic activity was moved out of traditional markets and into the controlled markets of platforms like Amazon or Uber, cloud feudalists juiced by the money rich investors got for cheap from the central banks. Actual markets, such as the real estate market, were also co-opted by algorithms that could fix prices on behalf of firms and artificially control consumer behavior, as the RealPage lawsuit about algorithmic rent-fixing winding its way through American courts right now demonstrates.
Culture and algorithms
The platformization of life, which Varoufakis and Zuboff describe mostly in economic terms, also has cultural symptoms. Both talk about the “alienation” resulting from technofeudalism or surveillance capitalism, in a very Marxian way. And we see this alienation expressed in digital culture — I’ve been thinking about this one viral TikTok audio, but there are many with the same vibe:
It’s no exaggeration to say that what Amazon did to bookshops and what Uber did to taxis, the social media platforms did to culture. Journalism in particular has been hit hard financially. But in another sense, maybe culture as a market process got pulled into the cloud rather than into real production. I’ve wondered how useful it would be to read what happened to culture as a symptom of technofeudalism. Here’s how that might go:
Just as billionaires stopped investing their money in productive projects, finding it safer and more profitable to rig society by accumulating Varoufakian (fun adjective?) “cloud capital,” movie studios, music labels, and publishing houses merged into ever bigger entities, made more Marvel projects, and pushed Taylor Swift (which is fine if you like them, but they shouldn’t be the only things we have). They chose a predictable, controlled market, funded by easy money and fueled by surveillance data, over a competitive one driven by innovation.
In “Casual Viewing,” a brilliant (and, I heard, the most popular of all time) n+1 essay about all the crappy movies Netflix makes, Will Tavlin writes that “Netflix has created a pyramid scheme of attention, with no end in sight.” Netflix produces bad films people don’t want to watch, roughly one a week, and throws them into its walled garden, where they are insulated from competition and designed to fit particular algorithmic recommendation niches. The point is not to make good movies but to entrap customers, control as much of the business as possible, and keep money flowing through the company. This is why people are always lamenting that “culture is stuck”: the point is no longer to sell works of art or to inform, but to keep customers captive. Netflix is a cloud fief: monopolistic scale and data collection are a surer route to money for the people involved than making good stuff. The point is rent, not profit. Put another way, Netflix is an enshittified platform.
This enshittification of social life is pernicious. People stop seeing each other, talking to one another, stepping outside of their rented-at-an-artificially-high-price rooms — just have the groceries delivered, just goon instead of going on a date you’re nervous about, just have the slop movie in your feed. Americans started “bowling alone” (as Robert Putnam described it in 1995) and the tech platforms said, “why even go to the bowling alley? Just stay at home and look at reels of other people bowling.”
As it gets harder to make money or be happy because of our economy’s platformization, the platformization of culture and creativity makes it harder to imagine another way forward. The past plummets down a memory hole and the future is reduced to what a prediction market or recommendation algorithm anticipates. The present, meanwhile, becomes a fragmented slop bowl of takes and sensations that lead nowhere. We live in slop capitalism now.
Notes towards a unified theory of slop
Since I am already far into the land of speculation, let’s follow the thesis that social media platforms are starting to do governance and see where it goes.
Their authority comes from two sources: first, the collection and manipulation of user-generated data; second, the erosion or capture of institutions that used to administer cultural, economic, and political spheres, and their replacement by platforms.
There are two hazards inherent in that model: first, the very institutions which tech platforms erode are also the hands which feed them, particularly the US government, which has offered stability and a lot of financial support. Second, a reliance on user-generated data means you rely on people — and you can never entirely control what they do or say.
So far, these two threats haven’t bothered the platforms, because their governance project has only just begun to coalesce. But it seems clear what part of their playbook for dealing with them will be: the ruthless acquisition, neutralization, or co-optation of anything that lives outside their ecosystems, leaving people no other options. This kind of monopoly behavior is how they amassed their power in the first place. It works, but it creates another vulnerability: bad products and less-efficient services. These make them more likely to lose in conflicts with other technofeudalist fiefs, particularly China’s, and that conflict will be the unavoidable context of all this.
I’m essentially writing science fiction at this point, but bear with me. I think the reason the platforms are pushing AI slop — either consciously as a top-down decision, or as the result of entropy baked into the algorithmic systems themselves — is to deal with the two hazards I outlined.
First, slop helps control people. If, through AI slop, they can crowd out actual human voices on platforms, not only will they be freed from the obligation of paying creators, but they will control more of the conversation. Then, if they can surround individual users in a cocoon of friendly bots, as Instagram has announced it seeks to do with AI avatars, they can exercise an even more encompassing control over what people see and think. The bots, whose engagement will count just like human engagement, are also another lever for steering the flows of their own algorithms, moving attention one way or another. This is a trick they learned from the bot-army misinformation and cyberwar campaigns carried out in the 2010s by Russia and other actors, but they can do it in-house now.
Second, the way we’re developing it right now, artificial intelligence (and slop alongside it) can be read as an attempt to replace institutions with machines. Rather than having universities or a bureaucracy, you could have a bunch of centrally-owned compute in one place and a large language model running these systems. I’d argue that’s part of the attitude behind DOGE and the reason why they’re interested in automating away intellectual and artistic work first. If the children can be taught by computer, the art made by computer, and the streets policed by computer, you never have to build anything with other people. The tech platforms intend to replace civil society, which they relied upon without repaying and destroyed without understanding, with an unpredictable, inhuman process whose only end is the perpetuation of itself.