
Dario Amodei: "We are near the end of the exponential"
Quotes & Clips
9 clips
Anthropic's revenue has grown 10x annually, hitting $9-10B in 2025
"But what we've seen from the beginning, you know, at least if you look within Anthropic, there's this bizarre 10x-per-year growth in revenue that we've seen. Right? So in 2023, it was, like, zero to $100 million. In 2024, it was $100 million to a billion. In 2025, it was a billion to, like, $9 or $10 billion."
The country of geniuses in a data center is one to three years away
"So on the ten years, I'm, like, you know, 90%, which is about as certain as you can be. I think it's crazy to say that this won't happen by 2035. I have a strong view, 95%, that all this will happen in ten years. I think that's just a super safe bet. And then I have a hunch, and this is more like a fifty-fifty thing, that it's gonna be more like one to two, maybe more like one to three."
Diffusion is real but faster than any previous technology
"I think diffusion is very real, and it doesn't exclusively have to do with limitations of the AI models. Again, there are people who use diffusion as kind of a buzzword to say this isn't a big deal. I'm not talking about that. I think AI will diffuse much faster than previous technologies have, but not infinitely fast."
Buying too much compute can bankrupt you if revenue forecasts miss by a year
"And so I could buy a trillion dollars, actually, it would be, like, $5 trillion of compute, because it would be a trillion dollars a year for five years. Right? I could buy a trillion dollars of compute that starts in 2027. And if my revenue is not a trillion dollars, if it's even $800 billion, there's no force on earth, there's no hedge on earth, that could stop me from going bankrupt if I buy that much compute."
Authoritarianism becomes morally obsolete in the age of AGI
"What I actually believe could be the case is that dictatorships become morally obsolete. They become morally unworkable forms of government, and the crisis that that creates is sufficient to force us to find another way. I just wonder if it will motivate new ways of thinking about how, with the new technology, to preserve and protect freedom."
Federal AI moratorium for ten years is reckless given the timelines
"The thing that was being voted on is: we're going to ban all state regulation of AI for ten years, with no apparent plan to do any federal regulation of AI, which would take Congress to pass, which is a very high bar. Given the serious dangers that I lay out in 'The Adolescence of Technology' around things like biological weapons and bioterrorism, autonomy risk, and the timelines we've been talking about, ten years is an eternity."
Claude Code emerged organically from internal Anthropic developer use
"Yeah. So it actually happened in a pretty simple way, which is we had our own coding models, which were good at coding. And around the beginning of 2025, I said, I think the time has come where you can have nontrivial acceleration of your own research. And then this thing, which I think might have been originally called Claude CLI, and then the name eventually got changed to Claude Code internally, was the thing that kind of everyone was using, and it was seeing fast internal adoption. And I looked at it, and I said, probably we should launch this externally."
The most consequential AGI decisions will be made in two-minute hallway conversations
"So one of my worries, although it's also an insight into kind of what's happening, is that some very critical decision will be some decision where someone just comes into my office and is like, 'Dario, you have two minutes. Should we do thing A or thing B?' Someone gives me this random half-page memo and is like, 'Should we do A or B?' And I'm like, 'I don't know. I have to eat lunch. Let's do B.' And that ends up being the most consequential thing ever."
Anthropic's culture is held together by Dario's biweekly Vision Quest talks
"One, I write this thing called the DVQ, Dario Vision Quest. I wasn't the one who named it that. That's the name it received, and it's one of these names that I tried to fight, because it made it sound like I was, like, going off and smoking peyote or something. But the name just stuck. So I get up in front of the company every two weeks. I have, like, a three- or four-page document, and I just kind of talk through three or four different topics about what's going on internally."