
Shane Legg

1 episode · 8 quicklets

Quotes & Clips from Shane Legg


Minimal AGI is roughly two years away

“So my definition of AGI, or sometimes I call minimal AGI, is an artificial agent that can at least do the kinds of cognitive things people can typically do. And I like that bar because if it's less than that, it feels like, well, it's failing to do cognitive things that we'd expect people to be able to do. So it feels like we're not really there yet. We're not there yet, and it could be one year, it could be five years, I'm guessing probably about two or so.”

— Shane Legg, co-founder of Google DeepMind

Define AGI by adversarial testing for failure cases

“If it passes that, I would propose we then go into a second phase, which is more adversarial. And we say, okay, it passed the battery of tests, so it's not failing at anything in our standard collection of however many thousands of tests or whatever we have. Now, let's do an adversarial test. Get a team of people, give them a month or two or whatever. They're allowed to look inside the AI, they're allowed to do whatever they like. Their job is to find something that we believe people can typically do, and it's cognitive, where the AI fails at. If they can find it, it fails by definition.”

— Shane Legg, co-founder of Google DeepMind
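The two-phase procedure Legg describes can be sketched as a small program. This is purely an illustrative sketch of the idea, not anything from DeepMind: `Agent`, `AdversarialTeam`, and every method name here are hypothetical placeholders.

```python
class Agent:
    """Toy stand-in for an AI system under evaluation."""
    def __init__(self, fails_on=frozenset()):
        self.fails_on = fails_on

    def attempt(self, task):
        # Succeeds on every task except those it is built to fail.
        return task not in self.fails_on


class AdversarialTeam:
    """Red team with white-box access, abstracted as a pool of
    human-typical cognitive tasks it can probe the agent with."""
    def __init__(self, known_tasks):
        self.known_tasks = known_tasks

    def find_failure(self, agent, days):
        # Time budget ('a month or two') is abstracted away here.
        return next((t for t in self.known_tasks if not agent.attempt(t)), None)


def minimal_agi_test(agent, battery, team):
    # Phase 1: the agent must pass the whole standard battery of tests.
    if not all(agent.attempt(t) for t in battery):
        return False
    # Phase 2: adversaries hunt for any cognitive task that people can
    # typically do but the agent fails. Finding one fails it by definition.
    return team.find_failure(agent, days=60) is None
```

The key design point in the quote is that phase 2 is open-ended: the red team is not limited to the fixed battery, so passing is defined by the absence of any discoverable counterexample, not by a score.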

Build AI ethics through chain-of-thought reasoning

“You might say, for example, I don't know, lying is bad, right? So we're not going to lie. But you could be in a particular situation where, I don't know, you know, there's some bad people coming to get somebody. And if you tell a lie, you can save their life. And then the ethical thing to do is maybe to lie. And so the simple rule is not always adequate to really make the right decision. Sometimes you need a little bit of logic and reasoning to really think through.”

— Shane Legg, co-founder of Google DeepMind

Brain hardware is dwarfed by data center potential

“The human brain is a mobile processor. It weighs a few pounds. It consumes, I think, around 20 watts. If you compare that to what we see in a data center, instead of 20 watts, you could have 200 megawatts. Instead of a few pounds, you could have several million pounds. Instead of 100 hertz on the channel, you can have 10 billion hertz on the channel. Instead of electrochemical wave propagation at 30 meters per second, you can be at the speed of light, 300,000 kilometers per second. In terms of energy consumption, space, bandwidth on the channel, speed of signal propagation, you've got six, seven, maybe eight orders of magnitude in all four dimensions simultaneously.”

— Shane Legg, co-founder of Google DeepMind
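The four ratios in the quote can be checked with a few lines of arithmetic. The inputs are the approximate figures Legg cites, with "a few pounds" and "several million pounds" taken as roughly 3 and 3 million for illustration:

```python
import math

# Approximate figures from the quote: human brain vs. a large data center.
brain = {"power_w": 20, "mass_lb": 3, "clock_hz": 100, "signal_m_per_s": 30}
datacenter = {"power_w": 200e6, "mass_lb": 3e6,
              "clock_hz": 10e9, "signal_m_per_s": 3e8}

for dim in brain:
    # Nearest order of magnitude of the data center's advantage.
    magnitude = round(math.log10(datacenter[dim] / brain[dim]))
    print(f"{dim}: ~10^{magnitude} advantage")
```

On these assumed figures the advantages come out to 10^7 (power), 10^6 (mass), 10^8 (clock rate), and 10^7 (signal speed), matching the quote's "six, seven, maybe eight orders of magnitude".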

Plumbers are safer from AI than lawyers

“So even if the AI does develop quite quickly, in its purely cognitive sense, I don't think robotics will be at the point at which it could be a plumber. And then even when that is possible, I think it's going to take quite a while before it's price competitive with a human plumber, right? And so I think there are all kinds of work which is not purely cognitive that will be relatively protected from some of the stuff. The interesting thing is that a lot of work which currently commands very high compensation is sort of elite cognitive work. It's people doing, I don't know, sort of high-powered lawyers that are doing complex merger and acquisition deals across the globe and people doing advanced stuff in finance.”

— Shane Legg, co-founder of Google DeepMind

Non-experts grasp AI capability faster than specialists

“In some ways, I actually think many people in the general public are ahead of the experts, because I think there's a human tendency. If I talk to non-tech people about current AI systems, some of the people say to me, oh, well, doesn't it already have like human intelligence? It speaks more languages than me. It can do math and physics problems better than I could ever do at high school. It knows more recipes than me. I was confused about my tax return and it explained something to me or whatever. In what way is it not intelligent? But often people who are experts in a particular domain, they really like to feel that their thing is very deep and special and this AI is not really going to touch them.”

— Shane Legg, co-founder of Google DeepMind

Universities must rethink every department for AGI

“I gave a talk to the Russell Group Vice Chancellors. So in the UK, the Russell Group is the top universities. I said to them, look, this AGI thing is coming, and it's not that far away. In 10 years, we're going to have it. And it's going to start being able to do a significant fraction of all kinds of cognitive labor and work and things that people do, right? We actually need people in all these different aspects of society and how society works to think about what that means in their particular area. So we really need every faculty and every department that you have in your university to take this seriously and think, what does it mean for education? What does it mean for law? What does it mean for engineering?”

— Shane Legg, co-founder of Google DeepMind
