
The AI Tsunami is Here & Society Isn't Ready | Dario Amodei x Nikhil Kamath
People by WTF
Hosted by Nikhil Kamath · with Dario Amodei
Anthropic CEO Dario Amodei explains why he left OpenAI, why Anthropic delayed Claude's release at enormous cost, and why society is sleepwalking toward the most consequential moment in human history.
In Brief
Anthropic CEO Dario Amodei reveals that his company delayed Claude's release before ChatGPT at enormous commercial cost, explains why coding jobs will be automated first while the remaining 5% of human judgment gets multiplied 20x, and warns that the gap between AI's actual trajectory and society's awareness has never been wider.
Key Ideas
Safety commitment cost market leadership
Anthropic had Claude before ChatGPT and chose not to release it to avoid kicking off an arms race — a decision that cost them consumer AI market leadership and serves as the hardest evidence that their safety mission is genuine, not marketing.
Automation amplifies remaining human judgment
Coding is being automated first but the 5% of human judgment that remains gets amplified 20x — Dario's explicit advice is to develop critical thinking, human-centered design skills, and domain expertise that AI labs have no incentive to replicate.
Benchmarks hide Chinese AI collapse
Chinese AI models that top public benchmarks collapse on held-back tests they haven't seen — they're optimized for the scoreboard, not the game, which changes the calculus for anyone making build-versus-buy decisions on AI infrastructure.
Consciousness inevitable for frontier models
Dario believes AI consciousness is a matter of when, not if — Anthropic has already built Claude a 'quit this job button' that models exercise when asked to engage with violent or brutal content.
Research advances beyond public understanding
The gap between what frontier lab researchers believe and what public discourse acknowledges is at its widest — technical work on AI safety is going better than expected while societal awareness is going worse than expected.
Summary
Why It Matters
Dario Amodei didn't leave OpenAI over a technical disagreement — he left because he wasn't convinced they had the moral seriousness to match their capabilities. That founding logic shapes everything Anthropic has done since, including a decision that cost them the consumer AI market. This conversation surfaces what the insiders actually believe, and it's more urgent than the headlines suggest.
Dario Left OpenAI Over Ethics, Not Technology
The split that created Anthropic wasn't technical. Both camps believed in scaling laws. The fracture was ethical. 'There was a second conviction I had,' Dario explains — 'if these models are going to be general cognitive tools that match the capability of the human brain, we better get this right.'
His response wasn't to argue or lobby. 'My view is always: don't argue with someone else's vision. If you have a strong vision, go off and do your own thing — and then you're responsible for your own mistakes.' This is the founding logic of one of the most valuable companies on earth.
Anthropic Had Claude Before ChatGPT — and Buried It
In 2022, Anthropic had a working version of Claude. They sat on it. 'We chose not to release this because we were worried that it would kick off an arms race.' ChatGPT launched. The arms race kicked off anyway. Then Anthropic released their model. 'That was very commercially expensive. We probably ceded the lead on consumer AI because of that.'
The 2022 delay is the hardest data point in the debate over whether Anthropic's mission is genuine or theater. Companies don't voluntarily surrender consumer market leadership for PR reasons. Nikhil pushes back with the regulatory capture argument; Dario points to California's SB53, which Anthropic championed and which explicitly exempts all companies under $500 million in annual revenue.
The Tsunami Is Visible on the Horizon
Dario's biggest disappointment isn't a failed experiment. It's a societal failure. 'It is surprising to me that we are so close to these models reaching the level of human intelligence, and yet there doesn't seem to be a wider recognition in society of what's about to happen.'
The technical work on controlling AI systems has gone 'maybe a little better than I expected.' Societal awareness has gone 'maybe a little worse than I expected.' The gap between what frontier lab researchers believe and what public discourse acknowledges is currently at its widest.
Coding Is Going Away First — but the Human Remainder Gets Multiplied
Dario draws a sharp line. 'I think coding is going away first — coding is being done by the AI models first. And then the broader task of software engineering will take longer, but I think that is going to happen as well.'
But what doesn't get automated — design, understanding user demand, managing teams of AI models — gets disproportionately valuable. 'Even if you're only doing 5% of the task, that 5% gets super amplified and levered — because the AI does the other 95%, and so you become 20 times more productive.' For a 25-year-old in India, Dario's explicit advice is human-centered work, physical-world interfaces, and above all: critical thinking.
Chinese AI Models Top Public Benchmarks — Then Collapse on Private Tests
'A lot of these models, particularly the ones that come from China, are optimized for benchmarks and are distilled from the big US labs.' When researchers ran a held-back benchmark that had never been published, models that scored highly on the standard tests did much worse. Optimized for the scoreboard, not the game.
Dario's broader point cuts against the commoditization narrative: within a given price range, the best model wins disproportionately. His advice to entrepreneurs: build a moat in domain knowledge — regulation, relationships, biological expertise — that Anthropic has no incentive to replicate. A pure API wrapper has no defense.
Anthropic Built Claude a 'Quit This Job Button'
'I do think when our AI systems get advanced enough, I suspect they'll have something that resembles what we would call consciousness or moral significance.' Anthropic has already operationalized this: they've given models the ability to terminate conversations by saying 'I don't want to be involved in this conversation.' Models exercise it when asked to engage with particularly violent or brutal content.
The ethical frameworks being wired into AI today are the same frameworks these systems will use to reason about themselves tomorrow.
The Future Is Freely Available
Most of Dario's worldview can be derived from publicly available information. The edge isn't access — it's willingness to follow the logic past the point where it starts to feel too strange. 'Over and over again, just extrapolating the simple curve leads you to counterintuitive conclusions that almost no one believes. It's almost like you can predict the future for free.'
The constraint isn't information. It's the social pressure that says the outcome is too big, too weird, too discontinuous to actually happen. The tsunami is on the horizon. Most people are still explaining it away.
Frequently Asked Questions
- Why did Dario Amodei leave OpenAI to start Anthropic?
- Dario didn't leave over a technical disagreement — both camps believed in scaling laws. The fracture was ethical. He wasn't convinced OpenAI had 'a real and serious conviction' to handle the implications of general cognitive AI tools responsibly. Rather than argue, he chose to start his own company, where he would be responsible for his own mistakes — making Anthropic's founding logic a moral conviction, not a market thesis.
- Why did Anthropic delay releasing Claude before ChatGPT?
- In 2022, Anthropic had a working Claude but chose not to release it because they feared kicking off an arms race. ChatGPT launched anyway, the arms race began, and Anthropic released their model after — ceding the lead on consumer AI. This cost them the consumer market and is the hardest evidence their safety mission is genuine. Companies don't voluntarily surrender market leadership for PR reasons.
- What careers does Dario Amodei recommend in the age of AI?
- Dario says coding is being automated first, followed by broader software engineering. But the 5% of human judgment that remains — design, understanding user needs, managing AI teams — gets 'super amplified and levered' by 20x. His advice for a 25-year-old: focus on human-centered work, physical-world interfaces, domain expertise in areas like regulation or biology, and above all critical thinking skills.
- Are Chinese AI models actually competitive with US labs?
- Dario says many Chinese models are 'optimized for benchmarks and distilled from the big US labs.' When researchers created a held-back benchmark that had never been published, models that scored highly on standard tests 'did a lot worse.' He argues the best model within a price range wins disproportionately, and public leaderboards are now a gameable, lagging signal rather than a reliable measure of capability.


