
Power, Security, and Influence in the AI Era

July 18, 2025

Aspen Security Forum

Speakers

David Petraeus, Partner, Chairman of the Global Institute and Chairman of KKR Middle East; Former Director, U.S. Central Intelligence Agency
Anja Manuel, Executive Director, Aspen Strategy Group and Aspen Security Forum; Principal, Rice, Hadley, Gates & Manuel LLC
Kent Walker, President of Global Affairs, Google
Moderator: Mary Louise Kelly, NPR

Full Transcript


Kelly

Good morning, everybody. Welcome, welcome, welcome, welcome. It’s funny, as a journalist, there’s one question I always hesitate to ask, and it’s the “so tell me what happens next, predict the future,” because the only remotely honest answer a person can give is “beats me.” None of us can predict the future. None of us knows exactly what is going to happen, particularly when it comes to tech, to AI, some of the big things that we have already been hitting this morning. So welcome to a panel to whom I’m about to put the question: tell me what happens next, predict the future. I want to start by teasing out some of the themes and questions that have emerged on earlier panels, that we’ve kicked around a little bit but not resolved. And I want to start here, because we have significant military battlefield expertise on stage with a former CENTCOM commander. General, you’re up first: speak to the ways that you see the battlefield being transformed. What opportunities do you see in the air, on the land, on the sea, below the sea?

 

Petraeus

Unmanned, and increasingly it will not just be remotely piloted; it’s going to be algorithmically piloted. And you can predict the future by seeing what is going on in the present, frankly, right in Ukraine. I’m a frequent visitor there. I’ve watched as they formed not just an Army, Navy, and Air Force, but an unmanned systems force, as they have introduced unmanned systems into every single battlefield formation. Every company now has a drone platoon; every battalion, a drone company; every brigade, a drone battalion. And there are other huge regiments that divide up the front lines among them.

To give you a sense of the scale of this: the last time I was there, about two or three months ago, I asked the overall commander of Ukrainian forces, how many drones did you employ yesterday against the Russians? And he said, almost 7,000. 7,000. Do the math. I hope we have more than 7,000 drones in the US military, but it can’t be that many times more. And they’re going to produce 3.5 million unmanned systems. Keep in mind, this is not just in the air, of various types, everything from short-duration suicide drones to much longer-duration strategic drones going in and closing down Moscow’s international airport every other day or so. This is maritime drones. How does a country that has no navy sink a third of the Russian Black Sea Fleet? Aerial drones that find the ships, and maritime drones that sink them. How does a country shoot down aircraft over occupied Crimea? By shooting air defense missiles off maritime unmanned systems. How does a country with $1 million worth of drones, parked outside airfields that have Russia’s strategic aircraft on them, thousands of kilometers apart, damage or destroy $5 to $7 billion worth of strategic aircraft, some of which cannot be replaced?

So you can see that future. And again, right now most of those drones are remotely piloted, but the future of the future is going to be systems that are algorithmically piloted. So if you now turn to a US scenario, our Indo-Pacific commander has publicly described what he wants to do in the Taiwan Strait: 110 miles of open ocean, a very formidable task, and he wants to turn that into, his term, a hellscape. How do you turn that into a hellscape? Unmanned systems underneath the surface of the water, massive numbers of them on the surface of the water, in the air, on the ground, in outer space, the equivalent in cyberspace, the cognitive arena, all of these domains of warfare. And again, increasingly not remotely piloted. So the human in the loop is going to become the human on the loop: in other words, establishing the criteria for the machine, the mission, then the tasks, then what it has to meet. And by the way, AI is going to write the algorithms, with a few bits of input from humans.

 

Manuel

Well, Kent and I, I think, are in heated agreement on this. The Chinese large language models are rapidly catching up, if not already there. I think this is the year that the Chinese achieve parity with us.

 

Kelly

Kent, you agree? 

 

Walker

I think it’s neck and neck. I think a few years ago we would have said we were years ahead. Now I think we’re months ahead, and in some areas they may well be ahead. And to step back, the stakes, as the General started to lay out on the battlefield, are significant. But even more fundamentally, this is a race for geopolitical influence. This is a race for economic leadership around the world. If you go back to what was called the long century, between the French Revolution and the First World War, Great Britain dominated that century because they dominated in steel and coal. The United States dominated the 20th century because we led in mass production and material science. So the question is, who’s going to lead in the 21st century? And the early signs are not as auspicious as we would like. The Australian Strategic Policy Institute does a review every few years of who is leading in critical technologies. They look at 64 different technologies, from batteries to advanced engines to advanced chemistry. In 2003, the United States led in 60 of those 64 categories. Today, China leads in 57 of 64. Now, the good news is we do think that AI is a key element in turning that around, because AI is not just a scientific breakthrough; it’s a breakthrough in how we make breakthroughs. So that’s new generations of material science, that’s quantum, that’s personalized medicine, and many more. But we really have to lean into this and have an affirmative, pro-innovation approach.

 

Kelly

Just to follow up and make this specific. You just said, Anja, you think this might be the year where China achieves parity with the US in large language models. What does that mean? What are the Chinese about to do as well as we do?

 

Manuel

So when you look at the frontier models, as we call them, it used to be that Google, OpenAI, Anthropic, maybe Meta were solidly in the lead. You all have heard about DeepSeek. But behind DeepSeek are probably 15 other very exceptional Chinese models, including Qwen and others. And the problem with this race is, it’s exactly as you said, Kent: you don’t quite know where it leads. It’s not just about having the best model. It’s how do you use it in your society? How do you implement it? And I think here the Chinese are actually eating our lunch. Starting September 1 of this year, every student, K through 12, in China will have AI lessons, age-appropriate. How do you use the model? How do you interact with it? How do you do it ethically? In the US, the Trump administration actually had a pretty good executive order on AI and education, but it’s baby steps. You know, we’re training a couple hundred thousand teachers. We’re starting to think about it. And so think about not just where we are at the frontier of the technology, but how people are actually using that technology in their societies.

 

Walker

Well, let me pick up on that. Sergey Brin, one of our co-founders, has a saying: ideas are easy, execution is hard. And the United States has a history of being the first to invent technologies, but not necessarily the best to deploy them. And that learning by doing, whether it’s televisions or color printers or many other technologies that are now manufactured outside the United States, is a sobering note for us. China has launched its AI Plus program. It is spending hundreds of billions of dollars a year investing in AI. There are 200,000 Chinese companies using AI today, and something like 600 million Chinese report that they are already actively using AI. Now, you know, good on them for taking this seriously, but it’s a real challenge to us. There’s a book just out by Jeffrey Ding called Technology and the Rise of Great Powers, and the secret, he says, is an innovation economy, yes, but a diffusion economy is just as important: get these tools in the hands of the people that need to use them, and that actually makes the tools better.

 

Kelly

Well, and it’s so interesting, because Anja, you just made the point that China is teaching this at an age-appropriate level in schools. I suspect many of my fellow parents in this room would agree that our schools spend a lot more time trying to keep children away from it, because we want them to learn. There are good intentions. But what do you think the policies should be? If we agree that maintaining dominance, that not getting left behind in this race, is a noble goal, what do we do?

 

Manuel

So I have to give the Trump administration an enormous amount of credit here. They came in right away. They have a lot of technologists who’ve taken time out of their jobs in Silicon Valley and elsewhere, who really understand this stuff, and they’re embedded across this administration. David Sacks, the AI czar, says we need to cement the US technology stack around the world, and they’re doing a lot. Here are the good things they’re doing: making it easier to build power plants, referring to Dan’s point again; deregulation; and actually doing a pretty good job, I would say, I don’t know if David would agree, on trying to break through some of the bureaucracy in the Pentagon to get drones and other tech-enabled things in there faster.

 

Patraeus

Trying, trying.

 

Manuel

Trying is the operative word. But there is, I think, a real renewed vigor and energy, and a willingness to beat China, that I haven’t seen in a previous administration.

 

Petraeus

But there are also vested interests in what Senator McCain used to term the military-industrial-congressional complex, each element of which wants to maintain legacy systems, legacy processes, legacy basing structures, legacy maintenance contracts, you name it, and frustrates the efforts of those who know we need to be accelerating our transition. In very simplistic terms, we need to transform our military from a small number of very large platforms that are incredibly capable, exorbitantly expensive, very heavily manned, and increasingly vulnerable, to an extraordinary number of unmanned systems, again, under the sea, surface, air, ground, space, et cetera, which will increasingly be not human piloted or remotely piloted, but algorithmically piloted. Again, we still need some of those platforms. It’s not the entire force; there are other scenarios around the world that call for those. But we’re not remotely making that kind of dramatic move. I would, in fact, submit that the Chinese are going to school more on Ukraine than are we in the United States. We don’t have our vaunted Army lessons-learned team, our joint forces lessons-learned team, on the ground, because, understandably, we’re concerned about boots on the ground and American casualties. But it’s very hard to divine the lessons if you’re not on that ground with the actual units, seeing what they’re doing. You know, the latest innovation in Ukraine, because they can’t work through the jamming: they’re trying to maintain a command-and-control link, a radio link if you will, sometimes using Starlink, and a GPS link. So now what they’re doing is they just spool out a little fiber-optic cable behind the drones. They’re doing this at a scale of thousands. By the way, it makes a mess of the farmers’ fields; now you have to wade through a lot of what is basically fishing line, fiber-optic cable. But the innovation is so rapid that if you’re gone for three months and you come back, you find a whole big breakthrough that they have just actually put onto the battlefield. And they’re doing this at a rate, at a pace, that only a country that has extraordinary IT talent, manufacturing skill, design skills, and is fighting for its very survival could actually do.

 

Manuel

One sentence to double down on what David just said. When I have walked the halls of the Pentagon and had these conversations, until recently it has felt like we are the Titanic: we see the iceberg and we are not turning. And I would say the Trump administration is doing a really good job trying to turn the ship.

 

Petraeus

But we should have sicced DOGE on the military procurement system instead of USAID.

 

Kelly

So last night, I asked Google’s personal AI assistant to help me come up with a good question to put to Kent Walker of Google before a live audience. It came up with four in less than 10 seconds. They were not bad. Then I asked it to tailor a question for an audience particularly interested in national security. It did so, and it actually added a helpful hint.

 

Walker

I can’t wait for what’s coming.

 

Kelly

This one is for me: “When you ask the question, deliver it confidently, and allow for a thoughtful pause before he responds. Good luck.” Here goes. Fellow moderators, take note. Very polite. Mr. Walker, what specific, concrete measures is Google implementing to ensure that increasingly powerful AI capabilities cannot be leveraged by adversarial nation-states or non-state actors in ways that directly threaten US national security interests or global stability? How does Google balance its commercial interests with the imperative to prevent catastrophic misuse? Mr. Walker, have at it.

 

Walker

This is my thoughtful pause, by the way. It’s a very good question. I think Gemini is earning its keep. We talk about being bold and responsible, and the responsible side of it is very much on our minds. We are right now at the Pareto frontier of capability and efficiency of these models, and the rate is increasing at a remarkable pace. We are 300 times, not 300 percent, 300 times more efficient than we were at the state of the art, and that state of the art was just two years ago. That is a remarkable rate of change. In that kind of dynamic environment, we are spending a lot of time building guardrails into our models to minimize the chances that they can be hacked or abused. That’s particularly important as we now get into the agentic era of AI, where these tools will be able to take multi-step actions, and potentially will be extraordinarily useful for all of us in our daily lives, including the scientific community, but also potentially dangerous when it comes to things like chemical weapons, biological weapons, radiological weapons, nuclear weapons. So how do we take steps against that? We build hard guardrails into the model, what are called deterministic guardrails.

 

Kelly

Can I just push you on this, to sharpen your chatbot’s question about how you balance commercial interests against the potential for catastrophic misuse? Take us inside a meeting at Google. Are there conversations where you think, God, this would be so cool, it would make us a lot of money, but boy, could that go really wrong, so we’re not going there?

 

Walker

So we have held off on releasing some models over the years where we have had concerns. To give you an example from a few years back, we had a model that would do lip reading, speech recognition at a distance, and we said, well, wait a second, that could also be misused for surveillance. So we held off publishing a paper around that and only published the part that would be useful for people who have hearing issues, where it’s right up close and you can see the person’s face. Those kinds of back-and-forths happen all the time. We have teams that are focused just on the frontier model safety class of questions. We have red teams that go in and try to break the models in different ways, to make sure that they’re not subject to these abuses. Now, that said, it’s a fast-evolving technology, and nobody is going to be perfect in this area. But we’re devoting a lot of work, ourselves and across the industry through something called the Frontier Model Forum, to try to do cutting-edge research that benefits everybody working in this area, to make the guardrails as strong as they can be and limit the chances of a jailbreak.

 

Kelly

I have a question, and I want all three of you to take this on, lightning round. General Petraeus, you famously asked the “tell me how this ends” question. You were talking about the war in Iraq, but I want you to apply it to this. Because I keep thinking, you built your career in a world, the military, that places a premium on predictability, on the ability to plan. None of us knows, not one of us, how all this is going to go with tech, with AI. How do you think about that?

 

Petraeus

Well, if I just come back to the world that I know best, which is that of the military, and perhaps even intelligence: in the not too distant future, you’re going to see unmanned systems fighting unmanned systems. And they will not be remotely piloted; they will be algorithmically piloted. So it’s going to be AI systems, in a sense, fighting against other AI systems, in the form of unmanned systems, again in all the domains of warfare. And that is really something. It’s your technology fighting their technology, and the human is not in the loop. The human may not even be on the loop all that much, because, indeed, the algorithms increasingly are going to be produced by the AI large language models. And of course, and I’d actually be curious if you agree with this, those models reportedly will be, within two years, at the level of a Nobel laureate. I think right now they’re at the level of a very good graduate student, next year a great PhD, and the year after that, again, Nobel-laureate-level intellectual thinking.

 

Walker

The test we use, from Demis Hassabis, who actually just won the Nobel Prize for some of his work in this area, is: if these models had all the information available to Einstein in 1900, could they come up with a theory of relativity? And we are certainly making progress in that direction.

 

Patraeus

Would it have taken 10 seconds or 20 seconds?

 

Walker

And wait until quantum, by the way. We haven’t really touched on that, but then take this incredible acceleration of computational power. And I should say, quantum is on track: we think that by 2030 we will have working quantum computers, which will more than exponentially increase the rate of AI. AI is making quantum faster, and quantum will make AI faster, so you get this combinatorial loop of innovation. That’s why we are already working on post-quantum cryptography, trying to get ready for that future.

 

Kelly

Five years is not a very long time.

 

Walker

No, it’s not. 

 

Kelly

Anja, tell us how this ends.

 

Manuel

Let me make this specific: give you the dark scenario, and then why that scenario is absolutely not inevitable. So here are the things that could go wrong. A non-state actor now has the equivalent of a PhD in chemistry, biology, and physics sitting on their shoulder if they’re tinkering to make a weapon of mass destruction. That’s possible; it hasn’t happened yet. You could imagine a scenario where, at some point in the next few years, you have a 9/11 of AI, where some bad actor uses it to do harm in the physical world. There is another harm, which is that the AI itself, and a lot of us in Silicon Valley spend a lot of time on this, self-replicates in ways that are deeply harmful, does things we don’t intend (you’re going to give the paperclip example), jailbreaks all of the great safeguards that Google and others are putting in. I would call that a potential Chernobyl of AI, where the technology itself does harm. None of this is inevitable. And by the way, you have to call out the UK here. Rishi Sunak did something amazing: three or four years ago, he started the UK AI Safety Institute. It is completely not woke. They do testing in advance of models being deployed. I think Google and others voluntarily have their models tested for the types of risks we’ve been talking about: cyber, chemical, bio, a model jumping its own safeguards. And there are now 14 safety institutes all around the world. The Trump administration has been a little more quiet about that, but they have not gotten rid of the US one. And whenever we talk to the Chinese about these issues, their scientists are also deeply worried. So I think there is a groundswell here to do something really positive on safety testing before we have a disaster.

 

Kelly

We’re going to have time for maybe one question. But I do want to just flesh out the paperclip example. This is, and credit to you, you are of the mind that there is a very small, but not zero, percent chance that AI will run the world and we will all be paperclips in its service.

 

Walker

So, this is an area we call the alignment problem. You need to make sure that your models are doing what you want them to do, whether that’s ordering one pizza instead of 10 pizzas, or not doing grievous harm. And that’s one of the reasons why this area of AI safety research continues to be extremely important and something we take seriously. That said, coming back to the what-happens-next question: it’s difficult to make predictions, especially about the future. Something I learned from my son, who’s a history major, is something called the aperture of now. We look back at history and we see all these patterns, and it’s so obvious that this led to this, led to this. And then we ask, well, what’s going to happen next? And I don’t know; it’s all completely contingent as it goes through the aperture of now. I would say that, to the extent history is any guide, technology has had a remarkably positive impact on human lives around the world. Human lifespan has doubled in the last 100 years, the average human lifespan around the world. We have cleaner water, we have better food, we have better medical care, and not just people in developed countries, people in the developing world as well. AI is a general-purpose technology. There’s a lovely paper called “GPTs are GPTs”: generative pre-trained transformers are general-purpose technologies. And if we do this right, and we take into account the risks that Anja has laid out, but also the benefits, being able to dramatically make our economies more productive, raise the standards of science, create nuclear fusion, create clean water for people around the world, the upside is really tremendous. So the stakes are extremely high, and we all have a deep responsibility to get it right.

 

Kelly

One quick question, anybody out there?

 

Manuel

Mary Louise, I think we should just stop there.

 

Kelly

Yes, sir, right here. Thanks.

 

Patrick Blott

Patrick Blott from Intermap Technologies; we’re a mapping company. My question is around all of these incredible advances, some of which are happening today on the battlefield in Ukraine, where in some battles 80% of the casualties are inflicted by drones, FPV drones. We just struck Iran, and in the last panel they were talking about potential blowback from that and uncertainty around that. My question to you is: is this filtering into, and being articulated in terms of, our bright red lines, our homeland security, our deterrence? Are these evolutions, which we’re seeing in real time in Ukraine, being adequately articulated from a policy perspective?

 

Kelly

Quick answer, General.

 

Petraeus

I think we’re going to get some wake-up calls in the United States from drone attacks that are carried out. And I think that only then will we truly get serious about having the kind of counter-drone defenses around any large gathering of people, any significant institution, probably prisons, you name it. But I think there’s going to be some of that that will take place over time.

 

Kelly

A reckoning. All right. Thank you. Thanks everybody.

 
