"I think that was a statement that was really heard loudly. And I remember, I probably got home around 5 a.m. or something, went to sleep, and I woke up like forty-five minutes later and I checked Twitter, and I saw that Ilya had posted and had signed the petition. And it said that he wanted the company to come back together. And that was this real moment of relief. I felt so much gratitude that it just felt like, okay, we can put this back together."
"I got a text saying, can we hop on a video call? So I hopped on the video call. I noticed that it was the board, minus Sam, who were on there. I was told that the board had decided that Sam would be removed, and effectively the message that I got was the same messaging that was in the public post. And I asked if I could have any more information. I was told no, not right now. Right after I hung up the call, I talked to my wife and I said, I gotta quit. And she said, I agree."
"We learned very early on with GPT-3, we got to see very concretely what it's like to deploy something. We spent a lot of time thinking about what all the misuses of GPT-3 could be, what the ways it could go wrong were. We thought about misinformation, we thought about these kinds of grand pictures. And you know what the number one misuse of GPT-3 was? It was medical spam, advertising different drugs to people. It's not something we ever would have thought of as a problem."
"Where should young people be investing today? Well, I really think leaning into this technology is going to be a critical skill, just really understanding how you get the most out of AI. Because we're all heading to a world where we're managers of agents, and soon maybe the CEO of an autonomous AI corporation. Just imagine if you had the workforce of a 100,000-person company all at your disposal, operating on your behalf."
DeepMind's dominance made starting OpenAI seem nearly impossible
"It was very much the case that Google DeepMind was the 10,000-pound gorilla in the field. They just had lots of capital. They had the track record. This was before AlphaGo; AlphaGo came out a couple months later, but it wasn't a surprise. The momentum was very clearly there. And so the question of whether it was really possible to build something independent and new? It wasn't obvious."
The world is heading to a compute-constrained future
"I would say that we, in general, are heading to a compute-constrained world. If you think about the amount of value that these models can produce for someone, it's extreme. If you just wanted enough compute for one GPU for every person in the world, you're talking eight billion GPUs. We are not on a trajectory to build anywhere near that level of compute."
"It's hard to know what percent of the code is not written by AI; it's a vanishing fraction. At the actual writing of code, the AI is currently much better than humans, given the right context, given the right structure. Now, there are parts of the actual structure of the code that our human experts are still much better at: thinking about how the modules should be laid out, how the pieces should work. But the actual writing of code is essentially all AI now."
Ilya's departure was the only moment Greg wanted to quit
"That was an intense experience to go through, and an intense experience to come back to. And honestly, just one of the hardest moments for me at OpenAI was when Ilya left. It was maybe the only moment in OpenAI's history where I felt like I didn't want to do it anymore. I think I needed some time to find my way back to remembering why I was doing this, and why it was so important, and why it was worth the pain."
The Napa offsite produced OpenAI's three-step technical plan
"And so we set up a thing in Napa, and I actually made t-shirts. There were no official offers. No one had joined. We didn't have a structure. We had nothing. We just had an idea. We had a vision. We had a mission. And we flew people out. We drove up to Napa together, and it was an amazing day. The ideas were flowing. We came up with what I would really say is almost the technical plan that we have pursued for the past ten years. Number one, solve reinforcement learning. Number two, solve unsupervised learning. And number three, gradually learn more complicated, quote-unquote, 'things.'"
"In the words of Ilya: Ilya always says that you have to suffer, right? If you're not suffering, you're not building value. And I think there's deep truth to it. This picture of suffering was something that we thought about throughout the course of OpenAI, because we had so much uncertainty from the beginning. Is this thing going to work? And there are many reasons why it might not work, why it should not work, why you could even say it cannot work."
OpenAI hid reasoning traces to preserve interpretability
"We had this insight when we first developed the reasoning paradigm: it gives us an interpretability mechanism we had not been anticipating, because you can really read the model's thoughts. You can see exactly how it got to an answer. Now, the problem is that if you train the model to have a chain of thought that looks good, then you lose all the faithfulness. So we made an early decision to say we want to avoid any temptation to train these chains of thought to look favorable."
"One of my friends was describing how his sister was describing this app that she really wished someone had created, that she had this picture of exactly what she wanted. And he, in the meanwhile, was typing into Codex, and then pushed enter. And a few hours later he shows her this app and she's like, wait, what? What is this? Where did this thing come from? Who built this? And he said, you did."