"So even if the AI does develop quite quickly, in its purely cognitive sense, I don't think robotics will be at the point at which it could be a plumber. And then even when that is possible, I think it's going to take quite a while before it's price competitive with a human plumber, right? And so I think there are all kinds of work which is not purely cognitive that will be relatively protected from some of the stuff. The interesting thing is that a lot of work which currently commands very high compensation is sort of elite cognitive work. It's people doing, I don't know, sort of high-powered lawyers that are doing complex merger and acquisition deals across the globe and people doing advanced stuff in finance."
"the majority of the code at Anthropic is written by AI systems like Claude. Well, all of us in the room thought by April maybe 100% of the code is, which led us to say, what are we doing? And I think that we invented a guild system where we would sit around analyzing and critiquing the code that Claude writes and verifying that it's correct."
Brain hardware is dwarfed by data center potential
"The human brain is a mobile processor. It weighs a few pounds. It consumes, I think, around 20 watts. If you compare that to what we see in a data center, instead of 20 watts, you could have 200 megawatts. Instead of a few pounds, you could have several million pounds. Instead of 100 hertz on the channel, you can have 10 billion hertz on the channel. Instead of electrochemical wave propagation at 30 meters per second, you can be at the speed of light, 300,000 kilometers per second. In terms of energy consumption, space, bandwidth on the channel, speed of signal propagation, you've got six, seven, maybe eight orders of magnitude in all four dimensions simultaneously."
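A quick back-of-envelope check of the four ratios quoted above. All figures are the speaker's approximations; the "few pounds" and "several million pounds" are taken as roughly 3 and 3 million here purely for illustration:

```python
import math

# Speaker's approximate figures: data center vs. human brain.
ratios = {
    "power":        200e6 / 20,   # 200 MW vs. ~20 W
    "mass":         3e6 / 3,      # ~3 million lb vs. ~3 lb (assumed)
    "clock rate":   10e9 / 100,   # 10 GHz vs. ~100 Hz neural firing
    "signal speed": 3e8 / 30,     # speed of light vs. ~30 m/s nerve conduction
}

for name, r in ratios.items():
    print(f"{name}: ~10^{round(math.log10(r))}x")
```

The ratios come out to roughly 10^7, 10^6, 10^8, and 10^7, which matches the "six, seven, maybe eight orders of magnitude" claim.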
"So I think a good metaphor for the housing market is musical chairs. Like, the people who already own houses are sitting in their chairs and whenever there's a new spring housing market, some people get up and they move and they select a new chair. And if you're not adding more chairs in, you can't get first-time homebuyers into the market so easily."
Non-experts grasp AI capability faster than specialists
"In some ways, I actually think many people in the general public are ahead of the experts, because I think there's a human tendency. If I talk to non-tech people about current AI systems, some of the people say to me, oh, well, doesn't it already have like human intelligence? It speaks more languages than me. It can do math and physics problems better than I could ever do at high school. It knows more recipes than me. I was confused about my tax return and it explained something to me, or whatever. In what way is it not intelligent? But often people who are experts in a particular domain, they really like to feel that their thing is very deep and special and this AI is not really going to touch them."
Universities must rethink every department for AGI
"I gave a talk to the Russell Group Vice-Chancellors. So in the UK, the Russell Group is the group of top universities. I said to them, look, this AGI thing is coming, and it's not that far away. In 10 years, we're going to have it. And it's going to start being able to do a significant fraction of all kinds of cognitive labor and work and things that people do, right? We actually need people in all these different aspects of society and how society works to think about what that means in their particular area. So we really need every faculty and every department that you have in your university to take this seriously and think, what does it mean for education? What does it mean for law? What does it mean for engineering?"
"I do think that the shape of this in the future is that parts of AI need to become actually a true utility, where you would expect things like cyber defense capabilities to be something that you provide at cost, at what it costs you to provide, with no margin. ... You need to proliferate those into society without charging society for it, or you end up in a really bad incentive structure."
Build AI ethics through chain-of-thought reasoning
"You might say, for example, I don't know, lying is bad, right? So we're not going to lie. But you could be in a particular situation where, I don't know, you know, there's some bad people coming to get somebody. And if you tell a lie, you can save their life. And then the ethical thing to do is maybe to lie. And so the simple rule is not always adequate to really make the right decision. Sometimes you need a little bit of logic and reasoning to really think through."
Define AGI by adversarial testing for failure cases
"If it passes that, I would propose we then go into a second phase, which is more adversarial. And we say, okay, it passed the battery of tests, so it's not failing at anything in our standard collection of however many thousands of tests or whatever we have. Now, let's do an adversarial test. Get a team of people, give them a month or two or whatever. They're allowed to look inside the AI, they're allowed to do whatever they like. Their job is to find something that we believe people can typically do, and it's cognitive, where the AI fails at. If they can find it, it fails by definition."
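The two-phase definition above can be sketched as a tiny decision procedure. Everything here — the test battery, the red-team search function, the time budget — is a hypothetical stand-in for illustration, not an actual benchmark:

```python
# Sketch of the proposed two-phase AGI evaluation:
# Phase 1: a standard battery of tests; Phase 2: an adversarial search
# for any typical human cognitive task the AI still fails.

def passes_minimal_agi(agent, standard_tests, find_failure, budget_days=60):
    # Phase 1: the agent must pass every test in the standard collection.
    if not all(test(agent) for test in standard_tests):
        return False
    # Phase 2: a red team with full (white-box) access gets a time budget
    # to find one cognitive task typical people can do but the agent cannot.
    # A single counterexample means it is not AGI, by definition.
    return find_failure(agent, budget_days) is None

# Toy usage with stand-in tests and red-team results:
battery = [lambda a: True, lambda a: True]
print(passes_minimal_agi("toy-agent", battery, lambda a, d: None))  # True
print(passes_minimal_agi("toy-agent", battery,
                         lambda a, d: "fails long-horizon planning"))  # False
```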
"By April 2027, AI systems should be able to do tasks that might take a person a hundred and fifty hours. Now, what is that? That's almost a month's worth of work, which requires strange things to happen in the economy."
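The arithmetic behind "almost a month's worth of work", assuming a standard 40-hour week:

```python
HOURS_PER_WEEK = 40            # assumed full-time schedule
task_hours = 150               # the April 2027 task-length figure quoted above

weeks = task_hours / HOURS_PER_WEEK
print(f"{task_hours} hours = {weeks} working weeks")  # 3.75 weeks, just under a month
```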
Prioritize future residents over current homeowners
"My frustration with the way that housing works in this country is that it's usually determined by local residents, usually people who are homeowners themselves, who go to their local town halls or local city halls and advocate for less housing, because they don't wanna see their neighborhoods change ... I wish that the focus was less on what's best for current residents and what's best for the residents who might live here five, ten, fifteen years from now."
"So my definition of AGI, or sometimes I call minimal AGI, is an artificial agent that can at least do the kinds of cognitive things people can typically do. And I like that bar because if it's less than that, it feels like, well, it's failing to do cognitive things that we'd expect people to be able to do. So it feels like we're not really there yet. We're not there yet, and it could be one year, it could be five years, I'm guessing probably about two or so."
"I think if you end up in a world where you have a closed-loop production system with just machine to machine to machine to machine, and then people buy stuff. People need money; that's well established. So you need to tax the robots and AI companies significantly, and you need to somehow find a way to reallocate money from this machine economy to the human economy."
"In the last five years, there's been a lot of movement to eliminate single-family zoning, which is this idea that you can only build one house on one plot of land in a neighborhood. ... The YIMBYs seem to be winning this ideological fight, which is really encouraging to see because it is based on economic principles, supply and demand."