Can’t Keep Up
What I wrote down might be obsolete by the time you read this.
Eighty percent of the 100+ companies in YC’s latest batch were vertical agents: tools to sort legal docs, route customer service tickets, screen resumes. A year ago, these would’ve been “interesting.” Now? Claude Code dropped the barrier to building them to near zero. Any engineer could knock one out in a weekend. Those YC bets? Already losing their edge.
That’s the thread running through everything in Silicon Valley the past half year. AI moves faster than every system we have. YC’s batch model, designed for the slow mobile era, can’t keep up. Meta’s code security rules, meant to protect core assets, can’t keep up. xAI’s management, built for rocket launches, can’t keep up. Even the researchers who build these models are being replaced by the models themselves. The physical world can’t keep up either: chip manufacturing, data center construction, public patience.
We’re in a race no one dares stop, because stopping costs more than burning tokens on a bad bet.
Tokens, good enough, and the race to keep up
Meta’s entire engineering team is using Claude Code now. They tried to build their own tool, myclaw, but it didn’t work. So they relaxed the rules: no customer data, sure, use whatever you want. Then came the token budgets. Top engineers are burning $200k+ a year in tokens, more than some salaries. They have a leaderboard for who uses the most. The bottom? They might get laid off.
It’s not just Meta. Startups in Palo Alto have a dozen Cursor agents running in parallel, plus a Claude Code window open. Engineers panic if they don’t know what their agents are doing by bedtime. CTOs talk about 100x efficiency gains, but when you ask if revenue grew 100x? Silence. The gap between what AI can do and what the market can absorb is widening, and no one’s sure why.
This reminds me of what Lewis Held wrote about human anatomy. Evolution doesn’t engineer, it tinkers. It takes what works “good enough” and sticks with it. Our inverted retinas, wisdom teeth, choking hazards, they made sense for our ancestors, but now they’re just flaws we can’t fix because the cost is too high. Silicon Valley’s institutions are the same. YC’s batch model worked for mobile, so they kept it. Meta’s code security rules worked for pre-AI, so they kept them. Now they’re liabilities.
Sleep and learning
Dr. Piotr Wozniak spent decades studying sleep, and his core finding is simple: you can’t optimize what you don’t let rest. The brain consolidates memories and reorganizes ideas during sleep. Alarm clocks, he says, are like cigarettes: not immediately fatal, but cumulatively destructive.
But Silicon Valley is canceling sleep. Meta engineers are racing up the token leaderboard, working around the clock to keep pace. xAI’s team lived at the office, sleeping in pods, because Musk’s sprints don’t allow for downtime. Wozniak’s two-component model of sleep holds that circadian rhythm and homeostatic need must both align for good sleep. Right now, both are misaligned for everyone in AI: the circadian rhythm is shot from 24/7 work, and the homeostatic need is never met because we’re constantly feeding the AI more data, more tokens, more tasks.
We’re cutting sleep to work faster, but that just makes us slower. Wozniak was right: alarm clocks don’t wake you up, they hit you over the head with a heavy object. The AI industry is running on sleep inertia.
Average became magic, then mandatory
A while ago, I read an essay called “Average is All You Need.” The idea: LLMs turned “average” from an insult into a superpower. Average code, average writing, average data analysis, now it’s instant, cheap, accessible to anyone with a prompt. The LLM handles the mechanics, you handle the thinking.
But the pace is so fast that even “average” is too much. Startups are building products in weekends that used to take 60 people a year. The enterprise context layer, a system where 20 agents synthesize company knowledge into Markdown, works because it’s average. No fancy ontologies, no semantic layers. Just read sources, write with citations, commit to Git.
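The “read sources, write with citations, commit to Git” loop described above can be sketched in a few lines. Everything here is illustrative, not the actual system: the function names are invented, and the “synthesis” step is a stub that just quotes each source, where a real agent would summarize with an LLM.

```python
import subprocess
from pathlib import Path

def synthesize(sources: dict[str, str]) -> str:
    """Turn {source_name: text} into a Markdown brief with citations.
    A real agent would summarize with an LLM; this stub just quotes."""
    lines = ["# Company Knowledge Brief", ""]
    for name, text in sources.items():
        lines.append(f"- {text.strip()} [source: {name}]")
    return "\n".join(lines) + "\n"

def write_and_commit(sources: dict[str, str], repo: Path,
                     out_name: str = "brief.md") -> Path:
    """Write the brief into a Git working tree and commit it."""
    out = repo / out_name
    out.write_text(synthesize(sources), encoding="utf-8")
    subprocess.run(["git", "-C", str(repo), "add", out_name], check=True)
    subprocess.run(["git", "-C", str(repo), "commit", "-m",
                    f"Update {out_name}"], check=True)
    return out
```

The point of the sketch is how little machinery it takes: no ontology, no semantic layer, just flat Markdown under version control, which is exactly why it works.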
The problem is, the thinking part is getting harder. We’re so busy keeping up with the tokens, the agents, the next model release, that we don’t have time to think about what we’re building. The 100x efficiency gains are real, but they’re not translating to 100x revenue. We’re building more things that don’t have product-market fit, trying 100 ideas instead of 10, because we can.
The people left behind
Institutions aren’t the only ones struggling. People are too. Engineers are watching 80% of their core skills get replaced by models. They’re kept around to watch the AI when it’s stupid, but that won’t last. Researchers, the top of the pyramid, the people who design the models, are being replaced too. Automated research systems are letting models choose their own experiments, run them, and evaluate the results. The people who used to do that are expensive, and AI is cheaper.
Then there’s the physical layer. Nvidia controls who gets GPUs, who gets to build models. They’re shaping the entire industry, deciding which startups live and die. Data center projects are being blocked across the US because communities don’t want the power draw, the water usage, the noise. The digital world’s appetite is outstripping the physical world’s capacity to feed it.
And the social layer. Sam Altman’s home was attacked twice in April 2026. Anti-AI protests are popping up in San Francisco. Silicon Valley executives are planting lime trees (with 4-inch spikes) around their homes, installing bulletproof glass, building bunkers. The social contract, you contribute, you get rewarded, is breaking down. If AI takes over most production, and only a tiny group controls the GPUs, what’s left for everyone else?
What comes next
Dario Amodei said cancer has been “conquered” in a sense, not that it’s gone, but that AI has the potential to turn it into a manageable chronic condition. AI could push materials science forward 20 years, cure diseases, solve problems we’ve been stuck on for decades.
But the transition is going to be brutal. We’re in the steam engine phase. It’s invented, but it doesn’t run faster than a carriage yet. Everyone knows it will, so they’re throwing money at it, ignoring code security, blowing token budgets, canceling sleep. No one knows when the steam engine takes over, but no one dares to stop.
The world the kids will grow up in? I can’t even imagine it. I hope it’s one where more people are healed by AI than hurt by it. Where the Molotov cocktails and gunfire aimed at AI leaders are replaced by gratitude for the lives saved.
But right now, all I can write is the same word I’ve been writing for months: “Can’t keep up.”
Crepi il lupo! 🐺