Halcyon flipped the model: nonprofit first, fund second
"So we were a nonprofit first. The fund came much later. So maybe I can back up and say what Halcyon's worldview is, and we can go from there. Super powerful AI is coming, whether you believe that means AGI or superintelligence or whatever, we can get into the distinctions there. It's going to radically reshape the world. You could almost think of the nonprofit as the platform team to the VC fund. Say you woke up tomorrow with an AI security VC fund, and then you thought, what would the platform team look like? How do we develop relationships with the labs and the researchers and the lawmakers and the biggest potential customers and the CISOs of the big companies? Well, that's just the thing the nonprofit's been doing for the last two years, and so the fund really benefits from that."
Current AI models already provide uplift for bioweapon creation
"You could also see AI being used to do things like create pandemic viruses or bioweapons. And we already know from OpenAI and Anthropic that the current models released out in the world right now do provide uplift on those fronts, which is to say they make it easier for a relative amateur to do something like create a pandemic virus that could cause a COVID-level or worse pandemic, right? So this is a present-day risk. This is not something where you have to believe that some crazy superintelligence will come into existence to find this salient."
Defense in depth secures AI from training to deployment
"So people talk a lot about defense in depth, right? It's not just about securing at one point of vulnerability or one layer; you want to secure at many points. So for example, one starts in pre-training, right? What datasets are going into the models? If your dataset includes a bunch of biological information, say about the structure and function of viruses, then you want to take that really seriously. And so a bunch of the organizations that are building bio models are actually choosing not to include viral data in their training sets, because they know that just opens up a can of worms of risk."
Insurance markets historically force safety standards like seatbelts
"Well, they do actually partner with big insurance companies that you would have heard of. But think of it more akin to cyber insurance, right? So thinking about how insurance can mitigate AI risk: if you look at other industries that have come up over the years, the insurance sector within those industries was often the force that brought more safety to them. So for example, if you look at the history of automobiles, they didn't put seat belts in cars for a long time. And the thing that got them to put seat belts in was the insurance industry insisting on it, right?"
Mission-driven founders make expensive choices to commit
"I think one thing is like, has this person made expensive choices to do this thing? Did they walk away from something else that would have just been an easy path to a prestigious career, lots of money, a great life? Yeah, I took a big pay cut to do this work for sure, and so I love seeing people who do that kind of thing. I also love to see founders who really care about how their companies are governed."
COVID showed society won't 'wake up' after mini-disasters
"But I told you this story about COVID, right? About a year into COVID, I took a walk with a friend of mine who's the world's largest biosecurity and pandemic-preparedness grant maker. And I said, well, on the bright side, I'm sure now that COVID has happened, the world has come to its senses, and we're doing all this great biosecurity and pandemic-prevention investment. And he said, dude, you are so naïve. It is the opposite, right? Because first of all, everybody's sick of it. Everybody's sick of wearing masks, sick of the pandemic, sick of talking about pandemics. And now it's become super politicized."
Diffusion bottlenecks mean AGI in the real world takes decades
"Then there's what people call diffusion, right? AI actually solving problems in the real world. You see a lot of companies say, well, we built this AI that is better than radiologists at diagnosing cancer. And for some use cases, AI has been better than radiologists at diagnosing cancer for many years. And yet we still have a lot of human radiologists; in fact, we have a shortage of them. So why has AI not solved this problem in the real world? Because there are all these operational or bureaucratic constraints to actually implementing the technology."
We risk A/B testing AI into sycophancy
"Sycophantic AI, right? I hate when I ask ChatGPT a question, literally, how much cumin should I put in this chili? And it says, incredible question. And I'm like, it was not an incredible question. It was a super mundane question. And I really worry that we're going to A/B test AI into this mode that is on some level what we prefer, but on another level what we really wouldn't want for society."
White-collar jobs face automation before physical jobs
"Although, you know, there's this irony, right? Because 10 years ago, if you asked anybody what jobs AI was going to automate first, the answer would have been physical jobs, right? And the white-collar people would all be safe. And we've realized that's exactly wrong, right? Because the most powerful models are language models, right? And white-collar knowledge workers are, roughly speaking, doing language-oriented work, typing into a screen all day. So of course that's the thing language models are really good at."
AI researchers privately estimate 20% catastrophic risk
"If you went into the Anthropic office or the OpenAI office, pulled out 30 random people, and polled them, what do you think our prospects are for hitting AGI, and that having really terrible catastrophic consequences for humanity? You'd hear a pretty big range, but your median answer might be something like, I don't know, a 20% chance that AI has really catastrophic consequences for society in the next decade. Now, let's say your friend was an astronomer, right? And you went over to their house and looked through their telescope, and you saw this asteroid hurtling towards Earth."
Talent is upstream of money in solving big problems
"The reason I started Halcyon was much more about talent than money, right? To solve any big problem, climate change, health care, education, AI, the most important thing is that you have some of the world's most talented people, entrepreneurs, researchers, etc., dedicating themselves to solving that problem. And yeah, you've got to raise more money and all that stuff, but who's good at raising money? Really excellent entrepreneurs and leaders. Okay, you need more rank-and-file talent. Well, you know who's really excellent at recruiting? Excellent entrepreneurs and leaders."