Speakers
Mark T. Esper, Partner and Board Member, Red Cell Venture Capital, and 27th U.S. Secretary of Defense
Bonnie D. Jenkins, Under Secretary for Arms Control and International Security, U.S. Department of State
Anja Manuel, Executive Director, Aspen Strategy Group and Aspen Security Forum
Moderator: David Sanger, White House and National Security Correspondent, The New York Times
Full Transcript
David Sanger
Thank you, Niamh. We can always tell who the hardy, true Aspen security aficionados are; they're always the ones who are, you know, left over on Friday morning,
Anja Manuel
For the arms control panel.
David Sanger
Right, for the arms control panel. Right. We're here to cheer you up before Jake Sullivan comes on. So there you go. We're going to talk mostly about new kinds of weaponry, but there's a fascinating overlap with old weapons that have not gone away, treaties that have, and the lessons to be learned from that. So Secretary Esper, maybe I will start with you. At the end of the Trump administration, you saw a Chinese buildup of nuclear weapons happening after decades in which they had kept a minimum deterrent. We heard President Trump suggest a few times that he would not renew the New START Treaty, the last big bilateral arms control treaty we have left, unless China came in; the Chinese said they weren't interested in discussing any of this. So start us off just briefly by telling us what you saw, why the Chinese were building up, and why it is that you all thought the Chinese might be willing to be part of that treaty.
Mark Esper
Yeah, thanks, David. It's a good question, an important question. And as we talked about before, I mentioned this a little bit in my memoir, but clearly we saw a buildup by the Chinese, and now it's become more public that they're expanding their nuclear forces. It was in the few hundreds, if you will, several years ago. We know they're going to go to 500 here soon, 1,000 deployed nuclear weapons by 2030, and by 2035 have around 1,500 deployed nuclear weapons, which, by the way, is about where the United States and Russia are today. So they clearly have a trajectory. We knew what was happening, we knew how, but we didn't know why. And frankly, we still don't know why; I think it's a big, unanswered question. But the pursuit, and I thought it had a lot of merit, was: can we get China into the START treaty? So rather than it being a bilateral treaty between Washington and Moscow, could you expand it to Beijing? And that question came up; I think we ran out of runway to do so. I see a lot of merit if we can, principally because we would learn more about their systems if they had to abide by the same verification, reporting, and other regimes that we have had with Russia now for decades, if you will.
David Sanger
And tell us a little bit about how, in your mind, that would spill over, or wouldn't, to the range of other China-related and Russia-related weapons that we've been concerned about in recent times: anti-satellite, bio, cyber, hypersonics, which could be used with either conventional or nuclear warheads, all of those new technologies. Would you try to sweep them into a similar negotiation? Or do you think that because many of these don't lend themselves to a treaty, you'd have to deal with them differently?
Mark Esper
Well, the first issue you have to confront in any treaty consideration is: is it in our national security interest to do so? And then you have to ask yourself, can we build a verification regime sufficient to give us confidence, or use our own national technical means? But look, I think the difference here is that we have decades of relationship between us and Moscow, both the USSR and later, of course, Russia today, and we understand these things. We have multiple treaties with them, both bilateral and multilateral. They're cheating on most, if not all, but nonetheless there's that relationship. If we could get China into a similar type of relationship, where we have at least some type of understanding of what they're doing, get them into those types of regimes, again, I think it'd be helpful for us to understand more about what they're doing and why.
David Sanger
So let me turn this to you. At the beginning of the administration, they said, okay, we've got to figure out a successor to New START, and you've got to go get the Chinese to sign on to this as well if it's going to be truly useful. Since that time, the Chinese have said we're still not interested in doing this, and we're engaged in a pretty nasty war in Ukraine with the Russians. First of all, do you see any reasonable chance that there will be anything to put together between now and the less than 1,000 days from now when New START expires? And then tell us a little bit about how you're thinking about the China problem, bringing them into this treaty, before we get on to the new technologies.
Bonnie D. Jenkins
Great, thanks for the question. I do want to make one clarification: when we started in the Biden administration, we did not start with the concept that China would have to be part of New START. We started with the concept that we continue to have the treaty and we continue to have bilateral engagement.
David Sanger
And you did renew it in the first month.
Bonnie D. Jenkins
We did renew it in the first month, because we saw the challenges that existed in trying to bring China into that treaty. However, that did not lessen our interest and our desire, which still exists today, to have some kind of a dialogue with China. You know, we don't think it has to be arms control; we want to do something on risk reduction, crisis management, some kind of engagement with the PRC, so that there's an understanding of what's happening, to answer some questions about why they're building up, and just to reduce the miscalculation that could possibly exist, because there's so much question about why they are building. And so we need that kind of connection, that kind of conversation. So that remains something that is very, very important for us. As far as New START right now, as probably everyone knows, Russia has suspended their participation in the treaty, but one of the things they have said is that they are going to continue to abide by the treaty's limits. There are a number of things that they have not done in terms of notifications, and we have taken lawful, reciprocal actions in response. But we are still very interested in New START; we think it's still very much in our interest, and it's still in the interest of Russia. And we still want to continue the implementation of this internationally legally binding document that we have with Russia. However, as you said, we are still thinking about China. We do recognize the buildup that they are doing, and we still want to have some kind of dialogue with them. We continue to press for that.
David Sanger
As your office thinks about which new technologies might be controlled by treaty, and which others are just going to have to rely on other means, norms of behavior, some kind of non-treaty inspections and verification, how do you divide those up? Think of the range of things you have that are now not regulated at all by treaty.
Bonnie D. Jenkins
Yeah, thank you. I'm glad you used the word range, because of the way we are looking at, you know, reducing risk in the international system, which is really what we want to do. We want to prevent crises; we want to prevent miscalculation. And so when we look at these new technologies, we have to think about what tools we need to address them, and they may not be the ones we've used in the past. Legally binding documents are still valued, because we still have the norms; we still have the years of countries abiding by these treaties. The vast majority of countries abide by the different nonproliferation treaties, the Biological Weapons Convention, the Chemical Weapons Convention. These are still important treaties and need to remain valid. However, we have new challenges. We have new domains; we have new emerging technologies. And as a result, we have to think about how we address these, and they don't necessarily fall into areas where you have traditional treaties. We have to develop norms; we have to develop responsible behaviors, guidelines that countries should be looking at in terms of how they develop these technologies, and that will have to include industry and others. So it's a different way in which we have to provide for the underlying thing that we always continue to want, which is to reduce risks in the international environment. We just have to figure out how we do it now, with these challenges.
David Sanger
So Anja, nobody in Silicon Valley that I know thinks more subtly about the intersection of AI with all we've been discussing than you do, and you've written a lot of really fascinating things on this in the past couple of months. So tell us just a little bit about where you think AI intersects with this, and how it fits with some of these more traditional approaches that Bonnie just described.
Anja Manuel
Yeah, thank you, David. And I've got to start with a caveat: I am not an arms control expert. We rest on the shoulders of giants here, Sam Nunn, who couldn't be with us, Joe Nye, Steve Hadley, really the entire Aspen Strategy Group, which has done so much for traditional arms control over decades. But I do work a lot at the intersection of national security and new technology. And, you know, I've been optimistic all week here; get ready to be terrified. So here's how AI intersects with everything else. As these generative AI models get more and more effective, they can also write malicious code. David knows this better than anyone; he's written the book on it. So that's the problem. AI can also help, of course, with cyber defenses, and Palo Alto Networks and other companies out in Silicon Valley are already working on that. So that's problem number one. Problem number two, lethal autonomous weapons. These are the killer drone swarms. They're not quite here yet. The US has really been holding the line on keeping the human in the loop. But in Ukraine, where necessity is the mother of invention, we're very close to having drones that both target and attack without human intervention. You had, just a week or two ago, the first dogfight between two unmanned drones. So this is happening, and it's not controlled. And on bio, it's just new: we used to have CRISPR, where you can change the DNA; now you can actually print synthetic DNA. And how does AI fit into this? Well, researchers in Switzerland did it a couple of months ago; thankfully, they didn't let it out into the wild, but they asked an AI to create more toxic substances. Within a couple of hours, the algorithm came up with 40,000 new, incredibly toxic substances. It invented VX; it invented a lot of other things we don't even know about. So don't sleep well.
Mark Esper
Is this where we’re supposed to feel good about the last session today?
Anja Manuel
But Bonnie and Mark are gonna fix it all.
David Sanger
Yeah, there's a reason that Anja told us we weren't able to have water bottles out here on the stage. So Anja, let me just follow up on that. You just saw today some voluntary guidelines that the major US developers announced at the White House with President Biden, but what you are describing here sounds to me like something that would go well beyond any guidelines that the makers of the tools put together, because anybody could do what was done in that lab, or anybody with a moderate amount of skill. So if you were beginning to think about how we would regulate this, do you have to do it at the level of the people who are putting together the large language models, turning out the generative AI? Do you do it by restricting access to different customers? What are the different models here for us to think about, before we even get to the question of whether treaties would even make any sense in this case, which I think they probably wouldn't?
Anja Manuel
Yeah, I'm going to turn to Mark on that. There was an agreement announced just today by the White House that the seven biggest AI companies in the US are going to accept some voluntary limits. That's a little bit different from what we're talking about here; Mark's a real expert on it. By the way, Arati Prabhakar, who was with us, had to leave early to get this agreement out the door, so it's a good sign that these things are happening. Let me turn to Mark first with that, and then I can talk about the international piece.
Mark Esper
Okay, I'm going to walk us back a few decades to set the stage. When I entered West Point in 1982, we fought in three domains of warfare: air, land, and sea, right? Fast forward 40 years, and I become Secretary of the Army: we are now fighting in five domains of warfare, so air, land, and sea, and then space and cyberspace. And I said during my tenure that I believe the first shot of the next war, with Russia or certainly with China, will happen in either space or cyberspace, or both. And so my view was always that AI was the most important of the 12 critical technologies we had to develop, that we had to master and dominate and retain leverage over, because whoever got there first was going to have that overmatch for years to come. By the same token, we put a lot of money into both space and cyberspace. I stood up Space Force and Space Command, and restructured Cyber Command with additional authorities, money, and so forth and so on, to do what it needed to do in those domains. But at the same time, I recognized the importance of the ethics piece of this, and in 2020, after 18 months of work and a lot of engaging with business, academia, and so forth and so on, I released what were five principles of ethical use of AI, and kind of set the stage for what DOD would do. And so when I saw today's release, the announcement with the White House by the AI companies, it's important, it's significant. It's more bottom-up driven, and a lot of what I read at the top line was consistent with what we did at DOD in 2020. And I think it's important we set those guidelines, we being the United States first and foremost, and then expand them to other Western democracies, so we set those norms and expectations, whether it's about safety, security, effectiveness, and so forth and so on, that will guide us. Because look, AI is an enabling technology for everything else; it's going to affect not just warfighting but how we live our lives, our way of life, everything. And so I think we've got to get those basics right before we continue down this path.
David Sanger
Okay. Anja, do you want to follow up on that? Or Bonnie, do you want to talk a little bit about your declaration, about how we would think about beginning to regulate the use of these military technologies?
Bonnie D. Jenkins
Yeah, so happy to do that, and then I'll turn it over to Anja. Just a couple of things to add to this. I just want to make sure that we understand that AI and all these emerging technologies have both positives and negatives. There's a lot that AI can do in the military space to help with, you know, decision making, identifying targets, objects, whatever. There are a lot of things it can do for the military to help the military do their work. But there's also the possibility of miscalculation and unintended consequences. And in that respect, one of the things that one of my offices has been doing for over a year now is looking at how we set up these norms. We've been talking about how we set guardrails for these technologies that are developing so fast, without enough regulation, I guess you could call it: what are the considerations? What should you be thinking about when you do this work, particularly the military? So we started out following Vice President Harris's statement on not doing anti-satellite tests; my bureau started the declaration, or the resolution, on a commitment not to conduct destructive direct-ascent anti-satellite missile tests, and we were able to get that through the UN, where 155 countries signed on and only nine countries did not. We followed up with a declaration that I was able to announce this past February on, you know, responsible behavior for military use of AI and autonomy. And this provides guidelines for countries to take into account when they are developing AI in the military. And one of the important ones, which is something that the US, the UK, and France have already agreed to, is ensuring there are humans involved in any decision making on nuclear employment. And so this is something that we are currently working on with a number of countries around the world, so that we can actually propose that again and get more countries to sign off. But the bottom line with all of this, and I'm happy to see this going on in the private sector as well, is that we have to develop some guidelines regardless: what are we thinking about when we are developing AI, not just in the private sector, but in the military as well?
Anja Manuel
Can I jump in on Bonnie's point just for a second? I think what you did, Bonnie, you and the State Department, in putting out these AI guidelines, even though they're voluntary so far, and I understand China hasn't totally signed up to them, you got a little bit of criticism in the press: oh, you know, the Chinese will never agree, why are we doing this? I just have to say the administration is exactly right here. The United States is currently ahead in this technology, so it is our role to be magnanimous and to think carefully about how to harness it for the greater good. We've done this before. We were ahead in the nuclear race in 1945. In 1946, the United States government proposed in the UN that nuclear technology would in part be controlled by the UN. Now, did that work immediately? No, we didn't get the first real arms control treaty on nuclear weapons till 1963, and it was limited. But this is not quixotic; it's not just tilting at windmills. We have to start now, and we have to be talking, as Bonnie is, to the Chinese, to the Russians, and to everyone else who's willing to listen.
David Sanger
So let me ask you, actually, let me ask all three of you, but let's start with you, Anja, on this. You wrote a really interesting piece in the FT earlier this year that said the place to start with this is a commitment not to use AI in nuclear decision making. Now, as Bonnie just described it, that is part of the declaration. But if I understand it from talking to people at the Pentagon, they're talking about humans on the loop, not necessarily humans in the loop, and there's a little bit of a distinction there, but it could be a critical one. So Anja, tell us a little bit about how you think this could be a first model, and then we'll get to everything else here.
Anja Manuel
Yeah. And Mark will know all the details of what the Pentagon has actually agreed to and not. This is what we're talking about here: with nuclear weapons currently, you have a very strict protocol for who signs off on what to actually launch nuclear weapons, okay? There is a lot of conversation around, well, are you going to win any nuclear fight if the AI shoots first, if it's an automatic launch? That is totally terrifying. And so what Bonnie and others, and Mark, you did this at the Defense Department, are talking about is saying, let's just have that limit. And interestingly, John Allen, who led Brookings, was here recently, and he has had a quiet, behind-the-scenes dialogue with the Chinese for a couple of years, where they also, in that Track II dialogue, agreed to this. So this is a small starting point, but I think the most important one, because it's one I think we can all agree on.
Mark Esper
You have to be careful with hard and fast rules. So you have to look at different weapons systems, consequences, and so forth and so on. I'm fine with a person on the loop when it comes to examples such as ship self-defense, and we have that now with what's called the Phalanx gun system. It's very clear, it can discriminate what's going on, and so forth and so on. But with regard to nuclear decision making, you want a person in the loop. Now, how do we think about it in terms of decision making? This is where you can leverage AI, and at the firm I work with, Red Cell Ventures, which is AI-based and really heavy into both healthcare and defense, we're thinking about a wide range of issues. But when it comes to nuclear decision making, think about, first of all, accumulating large amounts of data, which is what DOD needs to do. Currently, DOD produces about 20 terabytes of data each day, and yet we're not using most of that. But in the nuclear context, could you put that in a central repository, label it, identify it, and then be able to use it to help tell you what's going on with an adversary? Are they moving toward a nuclear type of option? What are those indicators that can improve our understanding of what the enemy is doing? And if something happens, help us with identification, discrimination, and so forth and so on, to speed our decision making, because in a nuclear conflict you only have so much time to decide what you're going to do or not do. So there are different ways to think about that. It's not just you push the button or not push the button; there's a whole lot of enabling things that can happen well before you get to that point in time.
David Sanger
So Bonnie, Mark's raising a really interesting point, which is that one of the things these enabling technologies do, even if you have a human on or in the loop, is shorten your decision space, right? Because your adversary may be using AI-enabled decision making. So how do you approach this as a problem, as you think about how you would get an agreement to keep AI away from nuclear decision making, and how would you verify that?
Bonnie D. Jenkins
Well, I think one important thing is to continue to work with countries to focus on this particular issue; you want to bring this into these conversations and into these discussions. One of the things I want to highlight is that National Security Advisor Jake Sullivan made a speech earlier, in June, where he highlighted the potential for discussion with the nuclear five on a number of issues, and one happened to be this issue of nuclear decision making and the use of AI in it. And so these types of environments, where you can have these kinds of conversations, are where we try to address these issues, where we try to tackle these kinds of issues. On the verification side, this is one of the reasons why trying to do something like this through a treaty has some limitations: as we continue to develop AI, we have to continue to develop verification possibilities, and we're not there yet. We're not there yet in many respects. I mean, one of the things we're doing at State is funding, through our V Fund, governments working together with NGOs to figure out how we can increase verification by using emerging technologies like AI. So we're hoping there are things we can do to verify all of these types of emerging technologies, but in many ways we're still not there yet, and that's one of the things that puts a limit on how much we can do with the traditional treaties.
David Sanger
So Jake Sullivan's speech on this was really fascinating, and I hope maybe he'll talk about that a little bit when he's up next. We only have two minutes left here, but Anja, I wanted to pick up on Bonnie's point, which is speed, the speed at which we used to think about treaties or even executive agreements. Just think of, say, how long it took to negotiate the Iran deal in the run-up to 2015. It was basically 15 months of my life I'll never get back, right? And I'm sure it felt much longer for the negotiators. But here we've seen a speed of development where it would be ridiculous to think about a treaty process, and it might be ridiculous to think about formal government-to-government agreements, as many of these technologies are in the hands of non-government actors. So how do you begin to think about that?
Anja Manuel
I don't think it's ridiculous at all, but it has to be an all-of-the-above approach, and you're seeing that happen. So you're seeing pretty strict export controls by the United States on the most advanced chips. They're slowing down Chinese development of generative AI models a little bit, but you can only do a bit of that, and it will work in part, for a limited amount of time. So that's not a perfect solution, but it's one solution that we're working on.
David Sanger
And it wouldn't be the chips alone. If you're going to do this for AI, you need to do the chips, you could do the...
Anja Manuel
You could do foundation models; the problem is you can't do it for all algorithms. You can't do it for open source. So, hence, it gets so complicated. It's an all-of-the-above approach, but I do think Bonnie made a really important point in the beginning. You still negotiate the treaties and the executive agreements; just because some people still steal and commit murder doesn't mean you don't have laws against stealing and committing murder. So this is a really important field that's being updated at warp speed for the new technologies we're seeing, but it's far from dead.
David Sanger
Great. Mark, you have something you wanted to add?
Mark Esper
I was going to say, look, AI is with us. It has pros and cons. From a DOD perspective, again, whoever gets there first and dominates is going to be able to control warfare and dominate it for some period of time. So I think we need to embrace that; we shouldn't hold it back. But I do think finding those guardrails, which is what we tried to do a few years ago, is important. Our challenge is, I don't think we're moving fast enough. I think we do have an advantage over the Chinese, but we need to move faster and faster. And frankly, treaties and documents, important as Anja just mentioned they are, won't be able to keep up with that. So we need to think about how you have something that can keep up with the pace of technology.
David Sanger
You're also going to see along the way some areas where defense may benefit for a while, ahead of offense. Cyber is an interesting example of that, because you can imagine AI being very good at picking out patterns that are unusual, to nail intrusions, before you would necessarily see AI designing even better cyber weapons. Well, this is just scratching the surface, but I want to thank you all, and encourage everybody to stay around for Jake Sullivan being interviewed by my good friend Ed Luce. Thank you.