
The Department of War is making a huge mistake
Quotes & Clips
7 clips

Government coercion of AI companies is the real danger
“But that's not what the government did. Instead, the government has threatened to destroy Anthropic as a private business, because Anthropic refuses to sell to the government on terms that the government commands. Now if upheld, the supply chain restriction would mean that companies like Amazon and NVIDIA and Google and Palantir would need to ensure that Anthropic is not touching any of their Pentagon work.”
AI makes mass surveillance economically trivial by 2030
“There are 100,000,000 CCTV cameras in America, and you can get pretty good open source multimodal models for 10¢ per million input tokens. So if you process a frame every ten seconds, and if each frame is, say, a thousand tokens, then for $30,000,000,000, you can process every single camera in America. And remember that a given level of AI capability gets 10x cheaper every single year. So while this year might cost $30,000,000,000, next year it'll cost $3,000,000,000, the year after that, $300,000,000. And by 2030, it'll be less expensive to monitor every single nook and cranny in this country than it is to remodel the White House.”
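The clip's arithmetic checks out. A quick sketch, using only the figures stated in the quote (100,000,000 cameras, 1,000 tokens per frame, one frame per ten seconds, 10¢ per million input tokens):

```python
# Back-of-envelope check of the surveillance-cost figures from the clip.
# Every constant below is taken from the quote; none is measured data.

CAMERAS = 100_000_000             # CCTV cameras in America (per the clip)
TOKENS_PER_FRAME = 1_000          # assumed tokens per processed frame
SECONDS_PER_FRAME = 10            # one frame every ten seconds
PRICE_PER_MILLION_TOKENS = 0.10   # $0.10 per million input tokens
SECONDS_PER_YEAR = 365 * 24 * 3600

frames_per_camera = SECONDS_PER_YEAR / SECONDS_PER_FRAME
tokens_per_camera = frames_per_camera * TOKENS_PER_FRAME
cost_per_camera = tokens_per_camera / 1_000_000 * PRICE_PER_MILLION_TOKENS
total = cost_per_camera * CAMERAS

print(f"${total / 1e9:.1f}B per year")  # ≈ $31.5B, matching the clip's ~$30B

# With a given capability level getting ~10x cheaper every year,
# the projected cost n years out:
for n in range(4):
    print(f"year {n}: ${total / 10**n / 1e9:.2f}B")
```

The exact figure comes out to about $31.5 billion per year, which rounds to the $30 billion quoted; three doublings of the 10x-per-year cost decline give the $3B and $300M figures in the clip.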
Alignment's hardest question is whose values AI follows
“And the question is, to what or to whom should the AIs be aligned? In what situation should the AI defer to the model company versus the end user versus the law versus its own sense of morality? This is maybe the most important question about what happens in the future with powerful AI systems, and we barely talk about it. And it's understandable why, because if you're a model company, you don't really wanna be advertising the fact that you have complete control over the preferences and the character of the entire future labor force.”
Stanislav Petrov shows why AI needs moral conviction
“Maybe the best example of this is Stanislav Petrov, who was a Soviet lieutenant colonel stationed on duty at a nuclear early warning system. And his sensors said that the United States had launched five intercontinental ballistic missiles at the Soviet Union. But he judged it to be a false alarm, and so he refused to alert his higher-ups and broke protocol. If he hadn't, Soviet high command would probably have retaliated, and hundreds of millions of people would have died. Of course, the problem is that one person's virtue is another person's misalignment.”
AI regulation hands future despots a loaded bazooka
“Now I cannot imagine how a regulatory framework built around the kinds of concepts that are used in the AI risk discourse will not be used and abused by a wannabe despot. The underlying terms here, like catastrophic risk or threats to national security or autonomy risk, are so vague and so open to interpretation that you're just handing a fully loaded bazooka to a future power hungry leader. These terms can mean whatever the government wants them to mean. Have you built a model that will tell users that the government's policy on tariffs is misguided? Well, that's a deceptive model. It's a manipulative model. You can't deploy it.”
Treat AI like industrialization, not like nuclear weapons
“Rather, it is more like the process of industrialization itself, which is a general purpose transformation of the whole economy with thousands of applications across every single sector. If you applied Ben Thompson or Leopold Aschenbrenner's logic to the industrial revolution, which is also world historically important, it would imply the government had the right to requisition any factory it wanted or destroy any business it wanted, and punish and coerce anybody who refused to comply. But this is just not how free societies handle the process of industrialization, and it's also not how they should handle AI.”
Corporate courage alone cannot stop authoritarian AI use
“And unfortunately, it's for this reason that I don't think that individual acts of corporate courage solve the problem. And the problem is this, that structurally, AI favors many authoritarian applications, mass surveillance being one of them. Even if Anthropic refused to sell its models to the government to enable mass surveillance, and even if the next two companies after Anthropic did the same, in twelve months, everybody and their mother will be able to train a model as good as the current frontier. And at that point, there will be some vendor who is willing and able to help the government enforce mass surveillance.”