a slop tax?
how to deal with AI's social consequences
I.
What will determine the impact of AI on you and whatever you care about — whether it’s great power competition, the economy, society, education, the lives of those you love — is not how effective the models are at doing tasks once reserved for humans, but how humans decide to use the models. There has been a disturbing silence around this question up until recently — people talked a lot about extinction risk and sci-fi scenarios, bots acing the LSATs, but not enough about what happens to schools and power grids. Or what happens if the mercurial alcoholic defense secretary demands Anthropic create a “War Claude” that automates the taking of human life.
In this post, I want to think about that question and examine an idea whose time has come: a Slop Tax.
Maybe the solution doesn’t look exactly like what my friend Mike Pepi has proposed. (link to Politico piece on it). Maybe there’s some other way to structure an intervention in the political economy of AI that serves to balance out some of its damages and make the whole thing more sustainable. Or maybe the idea of a slop tax isn’t radical enough and we should be blowing up data centers.
II.
I was pondering this because I attended a happy hour given by tech writer Jasmine Sun when she visited Washington, DC. She summed up her impressions with the line “All the money is on one side and all the people are on the other. We aren’t ready for how much people hate AI.” This line is essentially accurate, and it’s far from the only issue in Washington, DC where money and people are on opposite sides.
At this same happy hour, after I’d stumbled out of the cold and into a large basement — one wall of which was filled by a video showing pixelated flames mulling over what may have been AI-generated logs — a woman approached me and asked “which section did you write for?” I explained that I wrote a Substack about memes, and she said, “You didn’t work for The Post? I thought everyone here was from The Post.”
That was the day Jeff Bezos amputated several sections from my hometown paper, and this was the bar where they were all coming together to drown their sorrows. I’d heard of similar things happening after DOGE — morbid parties full of DC people, typically idealistic folks, young and old, who came to this city in order to help the world, and were left out in the cold with nothing but an open tab at Mission. Even if their idealism was of the cringe West Wing cosplay variety, I respected it and grieved the shattering of dreams. Looking over the room lit by the heatless glow of the fake fire, I realized I was one of the bad guys. The fact that my Substack about memes is doing fine as The Washington Post fails says something terrible about civilization.
AI is currently entering our civilization as a synthetic, dazzling image of a fire projected on a wall, and the tech people are saying “look, it’ll cook everything and keep you warm” as you stand wearing mittens holding a raw steak. Meanwhile, you watch those same people drown the embers of the ancestral hearth fire, around which you once gathered with your family while the chowder-pot simmered, with gallons of freezing water.
The fire metaphor has perhaps gotten out of hand here. But we need to figure out some way to make AI into a thing that actually nourishes people, that is actually generative and delivers for society as well as shareholders — a balance which, with many jolts and complicities, companies like The Washington Post had achieved back in the 1980s, when my parents were my age, if you squinted and could accept a dose of manufactured consent.
III.
We lived under a different kind of capitalism back then. One of the points Eric Hobsbawm makes in The Age of Extremes, his history of the “short twentieth century,” is that people in the future will look back at the Cold War and appreciate the nuances between different types of capitalist regime more than people in the twentieth century did. There wasn’t just one capitalist bloc. The types of capitalism that existed at the post-colonial periphery and the center were very different, as were the ways capital was disciplined within countries by various forms of social-democratic institution, like trade unions or the welfare state. The same point could be made diachronically, couldn’t it? Capitalism in the United States in 1960 was very different from what it is in 2025. In 1960, the average CEO-to-worker compensation ratio at American companies was about 20-to-1; by 2018 it was 278-to-1.
Throw in changes to tech, and it’s clear that if there is to be a balance among competing priorities — the welfare of the common person, the productivity of industry, the maintenance of a shared reality — a new settlement is needed. Obviously, such a rebalancing requires reining in the industry and setting actual standards rather than rules dictated by lobbyists. There also needs to be accountability: Sam Altman and Mark Zuckerberg should go to prison. I’m not even sure for what, but these men are clearly the kind of people the world’s folkloric traditions warn us about, demonic and hollow, a threat to the social order. We need deterrence, because right now people look up to these guys and aspire to be like them.
Of course, you can’t really send a person that rich to prison, and everyone knows it. Money has corroded all the civic infrastructure around AI, meaning that it enters people’s lives as a kind of band-aid on a bullet wound. As I wrote in Comparing ChatGPT to McDonald's, the paradigmatic use case of AI right now is not workflow optimization but “gas station fried chicken reply guy” for a business owner in rural Virginia who, for whatever reason, doesn’t want to write themselves. AI serves as a source of cheap and “good enough” intellectual and emotional labor. Since we increasingly fail to provide that stuff to each other through public systems, because those systems have been plundered by rich people, AI plays an important social role. It is to thinking and feeling what McDonald’s is to eating.
For both McDonald’s and ChatGPT, the social cost they inflict becomes the demand for their services. As the education system worsens (in part because people use AI to cheat), as people get lonelier (in part because everybody’s gooning to AI chatbots), as the climate crisis intensifies (aggravated by AI’s energy-intensive development), as jobs are eliminated from the economy (in part by AI), and as income inequality accelerates (turbocharged by tech billionaires), a lot of people will be dislocated. Free or low-cost AI will be the infrastructure they fall back on, just like McDonald’s fills a gap in America’s social infrastructure…
ChatGPT is the lazy, cheap solution to the social damages created by the system that produced it, just like McDonald’s. The hurt it causes becomes the demand to sustain it. Insofar as it is a tool for inflicting social cost, slop (in all forms) doubles as a tool for creating demand.
I like eating McDonald’s and I like some applications of AI. But when a business builds into and around injustice rather than trying to resolve it, what ends up happening is an institutionalization of the issue. Injustice becomes load-bearing, which is why it’s hard to let go. The whole thing falls down unless something else can take its place.
IV.
Here’s Mike’s idea:
The Slop Tax is a mandatory contribution by AI companies equal to ~1% of their annual market capitalization, directed into an independent, publicly managed Cultural Trust Vehicle…
The tax revenue reclaims value from the artificial overproduction of platform slop and redirects it into individuals, institutions, and projects committed to cultural work that prioritizes human creativity over its computational imitation. This restores balance to what has heretofore been a one-way extraction…
The Cultural Trust Vehicle (CTV) Mike proposes, if funded by just 1% of the market capitalization of AI companies (not their profits, since those are negligible by comparison), would likely be the best-funded piece of public cultural infrastructure in American history. He estimates an annual value of around $200 billion, orders of magnitude higher than the budgets funding NPR and PBS today. The CTV would use this money to fund programs and make grants — paying wages, providing bedrock operational support, and keeping theaters, studios, and educational programs open.
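For what it’s worth, the $200 billion figure implies a combined AI-sector market capitalization of roughly $20 trillion. Here’s a back-of-envelope sketch of that arithmetic; the company names and valuations below are purely illustrative assumptions of mine, not figures from Mike’s proposal:

```python
# Back-of-envelope for a 1% slop tax on market capitalization.
# All valuations here are illustrative assumptions, not real or official figures.
valuations_usd = {
    "hypothetical_chipmaker": 4.0e12,
    "hypothetical_cloud_giant_a": 3.5e12,
    "hypothetical_cloud_giant_b": 3.0e12,
    "hypothetical_model_labs": 1.5e12,
    "everyone_else": 8.0e12,
}

SLOP_TAX_RATE = 0.01  # 1% of market cap, per the proposal

total_cap = sum(valuations_usd.values())
annual_take = total_cap * SLOP_TAX_RATE

print(f"combined market cap: ${total_cap / 1e12:.1f} trillion")  # $20.0 trillion
print(f"annual CTV funding:  ${annual_take / 1e9:.0f} billion")  # $200 billion
```

At those assumed numbers, 1% of market cap does yield $200 billion a year; the real base would depend on how “AI company” gets defined and how valuations are assessed.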
As a sentimental “New Deal Democrat,” I am inspired by the idea of a Works Progress Administration for the 21st century. I love art and know many artists. The vibes are impeccable.
But this isn’t just sentiment: I don’t think the arts are frivolous in any way, shape, or form. If we’re living through a crisis of loneliness, bad mental health, and plummeting media literacy, then the arts are the number-one thing that can help solve those problems.
The arts have been under-emphasized because people think of art as “soft,” and they have a twentieth-century conception of art, seeing it as a product they buy rather than an experience they have. We forget what art can really do on a local, nonprofit, and peer-to-peer basis. Hollywood, big publishing, music labels, universities, and galleries became very powerful brokers of art in the 1900s, so there really was no place for a purely public option or something like a CTV. But now their models are all broken, so we should look for new possibilities that might better fulfill the ethical, spiritual function of art.
There’s also a quality-of-life argument here. We want to live in a world that can produce good movies, murals, theatrical productions, paintings, books, and music. People love that shit. And our current system is simply not set up to allow for a healthy arts scene.
Another idea would be to broaden the scope of the CTV to include two other segments which overlap with the arts (and are equally imperiled by AI): journalism and education. The traditional model for fact-based journalism is broken and the media companies are not solvent. We are reading news that is of lower quality and lower ambition than the news we were reading ten years ago. Semafor has to go beg the Saudis for money and CBS has to whore itself out to oligarchs. There must be a better way, and maybe the slop tax is that way.
Similarly, with education, people bemoan declines in test scores and literacy. A way to change that would be, first of all, to give more money to schools, students, and teachers at all levels from Pre-K to PhDs. But a deeper way to change things would be to incorporate schools into communities and cultural life a bit more, to rethink categories and curricula (like the obsessive focus on standardized testing) which were arguably never up to snuff but in the age of AI are even more absurd. If we understand things like education, a healthy middle class, and public health as inputs into progress — a key piece of the puzzle, alongside stuff like energy and research, that allows for innovation and productivity — then the CTV is a vehicle for funding these things with more efficacy than the tech companies themselves can.
Another argument is financial: if the CTV is a tax on the valuation of, and investment in, AI, then it can act as a check on bubble-inflating investment. It imposes a small cost on irresponsible, unrealistic allocations of capital: maneuvers like borrowing $100 billion from your chip manufacturer so that you can buy more chips from them, routed through an SPV so the debt doesn’t show up on your balance sheet or hurt your credit rating, which is what allows you to borrow so much in the first place and inflate a bubble that, when it pops, will impose a much bigger cost on all of us.
Lastly, you need something like a slop tax because otherwise the tech companies are poisoning the well they drink from. They and the AGI they may produce probably want to exist in a context where the rule of law is a thing. They want a stable, predictable business environment instead of one linked to the whims of crazy people and the size of bribes you offer their subordinates. The world they are building now is not a stable and lawful one. They need some kind of an arbiter, some kind of robust public sphere that is independent of them, some kind of structure for accountability when they mess up or break rules. They may not think they need it, but they do.
Creating such an infrastructure should be as much a part of the conversation as scaling and alignment. You could build all the data centers in the world, but if society falls apart it will mean nothing and it will help nobody. You could talk all the effective altruist philosophy you want into Claude, but if the core thing it ends up doing is monetizing despair then the longtermist shit just doesn’t matter.
A slop tax would be a good start.




