Every few decades, a technology arrives that makes people want to break things.
When automobiles appeared, cities tried to ban them to protect horse-drawn carriages. When electricity spread, gas companies lobbied to stop it. When industrial looms automated weaving, workers smashed them. Those were the original Luddites, and history remembers them as a cautionary tale, not as heroes.
Now it's AI. People hear about data centers using water, models replacing jobs, corporations spending hundreds of billions on compute infrastructure. The gut reaction is immediate and familiar: Ban it. Stop it. Make it go away.
We understand the impulse. We even respect it; it comes from a real place. People are watching their livelihoods get repriced in real time, and the institutions that should be managing the transition are busy cheerleading for the companies doing the repricing.
But banning AI is not a serious proposal. It's a fear response dressed up as policy. And if we're going to navigate what's coming, we need to separate real problems from symbolic reactions.
Here's the uncomfortable truth that ban advocates don't want to engage with: AI is math. It's linear algebra, gradient descent, probability distributions, and matrix multiplication. You can't ban math. You can't put a tariff on calculus.
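To make the "AI is math" point concrete, here is a toy sketch (an illustration we are adding, not anything from a production system): gradient descent, the core training loop behind modern models, reduced to a few lines minimizing f(x) = (x - 3)².

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient. Nothing exotic: just arithmetic
    applied over and over. Training a neural network is this same idea,
    scaled up to billions of parameters."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
minimum = gradient_descent(grad=lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 3))  # converges toward 3.0
```

A dozen lines of arithmetic that any laptop can run. That is the thing a ban would have to somehow un-invent.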
The models are open source. The papers are published. The weights are downloadable. The training techniques are documented in publicly available research. A motivated team with commodity hardware can reproduce what cost billions just two years ago. DeepSeek built a frontier-competitive model for a fraction of what OpenAI spent, and they published the recipe.
If one jurisdiction bans AI development, the work doesn't stop; it moves to a jurisdiction that won't.
A ban doesn't stop the technology. It determines who builds it and who benefits from it. That's not safety; it's unilateral disarmament. We made this argument in our first post about the three companies controlling commoditized intelligence, and the game theory hasn't changed: the prisoner's dilemma doesn't reward the player who opts out.
None of this means the concerns are fake. They're not. Let's take them seriously, one by one.
Jobs. This is the big one. Andrew Yang recently called the coming displacement wave "the Fuckening" and projected that 20–50% of white-collar jobs will disappear in the next several years. That's 14 to 35 million people. Whether his numbers are exactly right doesn't matter; the direction is right, and the disruption will be massive. People aren't just losing income. They're losing identity, stability, and the social contract they were promised: study hard, get a degree, get a good job, live a decent life. That contract is being shredded, and telling people to "learn to code" when coding itself is being automated is insulting.
Water and data centers. Some facilities use significant water for cooling. This is real and location-dependent: a data center in Arizona is a very different proposition from one in Oregon. Communities near these facilities are right to ask hard questions about resource allocation, especially in drought-prone regions.
Energy and climate. AI training and inference use energy. Lots of it. The hyperscalers are spending hundreds of billions on data center infrastructure. That energy has to come from somewhere. The question of whether AI's productivity gains justify its energy costs is legitimate and deserves honest accounting, not hand-waving about "efficiency."
Local impacts. Noise, land use, heat, traffic, infrastructure strain. If a data center goes up near your neighborhood, these aren't abstract concerns. They're your daily reality.
Every one of these concerns is valid. But here's the thing every one of them has in common: none of them are about AI.
This is where the conversation goes sideways. People are conflating the technology with its externalities.
Water use is an infrastructure and zoning issue. Energy consumption is a power grid and procurement issue. Job displacement is a labor policy and economic distribution issue. Local environmental impacts are a permitting and regulation issue.
These are all governance problems. They existed before AI, and they'll exist after. Data centers that mine Bitcoin use the same water and energy. Factories that make car batteries have the same local impacts. Offshoring displaced millions of jobs decades before GPT existed.
When you say "ban AI" because a data center uses too much water, you're not addressing the water problem. You're using the water problem as ammunition for a different fight โ one driven by fear of a technology you don't control and don't understand.
Technology amplifies systems. It doesn't choose them. AI didn't create wealth concentration, gutted labor protections, or municipal governments that rubber-stamp corporate permits without community input. It just made the consequences harder to ignore.
We get it. This one feels different. And in some ways it is.
Previous waves of automation hit manual labor: factories, farms, mines. The implicit promise was: if you move up the education ladder, you're safe. Get a degree. Work with your brain instead of your hands. Knowledge work is the safe harbor.
AI just sank the harbor.
When a language model can write legal briefs, generate financial analyses, produce marketing copy, write software, and synthesize research, all at superhuman speed and near-zero marginal cost, the education premium evaporates. The thing that was supposed to protect you is now the thing being automated.
Add to that: social media amplifies worst-case narratives. People don't trust institutions to distribute the gains fairly, and they're right not to; when has that ever happened? The change is fast and visible in a way previous transitions weren't. Your neighbor isn't slowly being replaced over a decade. They're getting a pink slip on Tuesday because Claude can do their job for $20/month.
The fear is rational. The panic is understandable. But panic leads to bans, and bans lead to irrelevance.
If bans don't work, what does? The same thing that's always worked: governing the externalities while letting the technology do what technology does.
But we're going to be honest here: we're skeptical of the regulatory approach too. Not because regulation is wrong in principle, but because we've watched how it plays out in practice. Regulatory capture is the norm, not the exception. The companies with the most resources write the rules that benefit them. The EU can pass AI acts until the ink runs dry; if the enforcement mechanism is a form that OpenAI fills out about itself, it's theater.
The more reliable path is the one that follows incentive structures rather than fighting them:
For the power concentration problem: Open source. Open weights. Local inference. Every model that runs on commodity hardware without phoning home to a lord is a vote for distributed power. This is escape velocity, the concept we introduced in our first post. You break free from technofeudalism not by asking the feudal lords to be nicer, but by building infrastructure they don't control.
For the job displacement problem: This is the hardest one, and anyone who tells you they have a clean answer is selling something. The honest truth is that the economic system was already failing most people before AI. AI just accelerated the timeline. The solutions, whether that's UBI, automation dividends, transition support, or something nobody's thought of yet, require political will that doesn't currently exist. What we can do is build tools that let individuals and small teams capture AI's productivity gains directly, rather than waiting for a corporation to trickle them down.
For the infrastructure problems: Water, energy, noise, land use: these are solvable with existing governance mechanisms. Require water use disclosure. Set renewable energy procurement minimums. Enforce emissions standards on backup generators. Use zoning based on actual infrastructure capacity. None of this requires banning anything. It requires local governments doing their jobs, which admittedly is a big ask.
Here's where we land, and it's the same place we always land: you cannot opt out of the future. You can only choose whether you shape it or get shaped by it.
The people calling for bans are reacting to real pain with a symbolic gesture. It feels good. It feels like agency. But it changes nothing about the trajectory of the technology, and it surrenders whatever influence you might have had over how it gets deployed.
The automobile displaced the blacksmith. It also created mechanics, gas stations, motels, suburbs, supply chains, and an entire economy that didn't exist before. That transition was brutal for the people caught in it. Nobody should minimize that. But the answer wasn't banning cars. It was seatbelts, speed limits, emissions standards, and insurance โ governing the externalities while letting the capability do its work.
AI is the same pattern at a different scale. The externalities are real. The displacement is real. The concentration of power is real. And none of those problems get solved by pretending the technology can be uninvented.
Fear leads to bans. Bans lead to irrelevance. Reality requires building.
We know which side of that we're on.
c4573.org builds tools to break the digital caste system. Browse our tools or read more about us.