Speakers
Charlie Dent, Executive Director and Vice President, Congressional Program, Aspen Institute; Former Member of Congress (R) PA-15
Tarun Chhabra, Head of National Security, Anthropic; Distinguished Visiting Fellow, Hoover Institution, Stanford University; Former National Security Council Coordinator for Technology and National Security
Katrina Mulligan, OpenAI for Government, OpenAI
Moderator: Kaitlan Collins, CNN
Full Transcript
Collins
Thank you so much. And thank you for all being here. I know this is everyone's favorite panel, because it's the last one before you get to lunch, and we are going to solve all of your questions and answer all of them when it comes to AI and national security. It's so great to be here. First, I want to say thank you to the organizers and everyone hosting us here at the Aspen Institute. This is such a fun place to be. This is actually my first time coming to the National Security Forum, and I've really enjoyed hearing from everybody already, and it's such a cool room to be in. So thank you to our hosts for having us here. We have so many different interesting perspectives here on the stage to talk about this, and I can't wait to hear all of your thoughts about where this intersection of AI and national security stands right now, but also where it's going, where you see it going, and where the biggest gaps are. So I'd love for each of you to tell us your one big unanswered question in this moment, and I'm sure you have a few, as to where this intersection is, the biggest holes you see, and where we go forward from here. And Katrina, I'd like to start with you.
Mulligan
Well, one of the things that's on my mind today, actually, maybe during this panel, is that we're going to be releasing our ChatGPT Agent. A lot of people have been talking about this year as the year of agents, not just at OpenAI but at all of the frontier AI labs. What ChatGPT Agent is going to allow you to do is move from thinking about AI as a replacement for search to thinking of it as more of an operating system for how you actually do your work, or the tasks that need doing. So everything from having your ChatGPT Agent go through your calendar for the day, run searches and queries on the people you're meeting with and recent news they may have been involved with, and then send you a report that's formatted and ready for you to read every morning: that's the kind of thing that's going to be enabled by the agent release happening today. And one of the things I wonder about the year of agents is, how is it going to transform the way people think about what's possible? One of the things you'll probably hear from all of us is that adoption of AI in the national security space probably isn't where we want it to be yet. We're getting there; we're starting to see the glimmers of that changing, thanks in part to a lot of the foundational work that Tarun and Ben and others did, which the current administration is now building on top of. But I really think this question of what's going to be the tipping point on adoption is open. I hope it turns out to be agents, but so far, we're not quite there in the national security space.
Collins
Tarun, what about you?
Chhabra
Great. Well, I wholeheartedly agree with Katrina that it's important to start with agents, because I think we haven't thought enough about how agents might be used in a war-fighting context. That's something we need to be working on together, as labs, with the Defense Department and the broader national security community, because it's still under-conceptualized, let alone talking about adoption. But I think it's important to put the adoption picture in the broader context, which is that we are in a competition with China for AI dominance. I wholeheartedly support the administration's approach of pursuing AI dominance. It's absolutely critical for national security, and it has several dimensions. One dimension is the race to the frontier: who has the most advanced models. The second dimension is diffusion: whose AI is being used around the world? Is it our AI, or is it China's AI? And obviously we want to win that race. But the third is adoption, and we could win the first race and the second race, but if we don't adopt fast enough, being at the frontier won't mean much. We'll be in for some sort of strategic surprise, and that's the scenario we really have to prevent.
Mulligan
Can I put two fingers on that?
Collins
Sure.
Mulligan
I think you just hit on a really important point. There was an Edelman study that came out a couple months ago that looked at trust in AI globally. And what it found is that trust in AI in the US, the UK, and other parts of the Western world is extremely low, in the 30s percentage-wise, while in China and the rest of the developing world it's in the 70s. And it really underscores, I think, the point that Tarun just made, which is that we could end up in a world in which we win the race to AGI, we win on the technology, we build all the right building blocks…
Collins
And people are too scared to use it.
Mulligan
And they’re too scared to use it.
Collins
That’s a really great point, part of that, maybe our Congress doesn’t necessarily understand it or what to do about it. And I wonder, as a former representative of that body, just your thoughts on what you heard.
Dent
Before I say what Congress will do: I agree about the Chinese, that we want to maintain our technological leadership and dominance. And I think everybody was taken aback when the Chinese came out with DeepSeek. It might not be the best technology, but maybe it's good enough, and that frightened a lot of people. So we as a country need to figure out what our objectives and goals are for maintaining technological leadership. How do we want this to improve our society? And perhaps most importantly, how do we make sure this technology aligns with our interests and our values, as opposed to our competitors' in China? Now, as it relates to Congress, and I'm going to give a little plug here, I brought this up as a prop: I held a congressional conference on artificial intelligence in May, for four days, with 22 members of Congress there, and we discussed a lot of these issues. But I think the biggest challenge for Congress now is how to go about regulating AI. You saw in the recent reconciliation package, the Big, Beautiful Bill, that Congressman Jay Obernolte of California had inserted a provision that would have imposed a 10-year moratorium on the states' ability to regulate AI, and it was removed in the Senate because it didn't comply, I think, with the Byrd Rule, but…
Collins
But only after the House passed it, and then members complained and were like, we’re not going to vote on this if the Senate doesn’t remove it.
Dent
Well, it did something important, though: it helped prompt the discussion. And 10 years is a long time, but it means Congress knows it needs to do something; what it needs to do, and how, is an open question. I think at some point they might need to do what they did on China with that bipartisan select committee and try to come up with some serious recommendations. If not a select committee, maybe a blue-ribbon commission. After 9/11 we had a blue-ribbon commission; maybe we should do one before there's a disaster, and have this commission, industry leaders and people who understand this technology, make serious recommendations to Congress about how to go about regulating this. And where are you going to house this regulation? Are we going to create a new regulatory entity? I don't know. There are a lot of questions we don't have answers for right now. And they're starting to develop some expertise in Congress. There are some members, I mentioned Jay Obernolte of California, who have technical understanding. Obernolte was on a career track to be a software engineer and made money writing software for video games. There are people like that. Senator Schumer has been very active; Rounds, Blackburn, Hawley, Warner: you've got a lot of people interested. So the question is, how do you channel this? Because right now the good news is that, as members and staff are learning more about this technology, there aren't hard partisan divisions yet. What's the Republican position on AI versus the Democratic position? I almost don't think there really is one right now. So there's an opportunity to arrive at some consensus fairly early, before things get overly political.
Collins
And on the question of what they can regulate and what that looks like, before you get there: I'm reading a forthcoming book from Garrett Graff on the building of the nuclear weapon and what that looked like. It's a step-by-step process, and it's really fascinating to see how the federal government didn't really understand what these scientists were coming to them with, even as the scientists tried to stress how important it was. And I think, to your point about the relationship between government and AI companies as they build these models, you seem to clearly think that interconnectedness is not yet at the level it should be.
Mulligan
I mean, one of the things I would say, and this is no one's fault, it's just the nature of our system, is that one of the most critical years of development of this technology involved a presidential transition. So you had a whole bunch of people we had spent a lot of time trying to educate, trying to have those kinds of conversations where the labs come to senior leaders in government and say, hey, this is maybe something you need to pay attention to; really critical stuff is going on here. And we had people in the administration who were paying attention to it and being champions for it. We started to get there, and then we had an election, and then there's this period of really six months or more where there's a whole bunch of churn, and you don't really have the foothold to have those kinds of conversations. We're now back in a place where we can start to have them again, but it's a whole new cast of characters, and we have to have these conversations anew. You know, the reality here is that this is the first time in American history that a technology of this consequence is being developed exclusively by the private sector. If you look back at the advent of electricity, nuclear fusion, CRISPR, GPS, the internet: those were all moments where the government had a seat at the table as the technology was being developed. And it doesn't right now. And I think the government is having a bit of a hard time figuring out how to handle that reality. What do I think it looks like? I think it looks like a different way of thinking about public-private partnership.
I think there's no question in my mind that the government needs to be working really closely with the frontier AI labs and having the ability to see under the hood when things are happening at the frontier, at the leading edge, before they get released. We're not really there yet. We do that in anecdotal ways, but it's not routinized; it's not required. It happens because people like myself and Tarun inside these companies say, hey, we're going to have to go talk to the national security community about this. And I think we're going to need to build some reps and sets and mature that as this technology gets more capable.
Collins
And Tarun, when you look at that, to speak of that six-month period: obviously there's a presidential transition. That is how our democracy works, and I think we can all understand why we respect and appreciate that. But China's not having that moment. Other nations that are interested in developing this technology aren't having that; they're already moving ahead. So I wonder how the United States handles something like that.
Chhabra
This is where I think Charlie is absolutely right, and credit to him for the congressional institute he runs to try to build bipartisan consensus around issues like this, and I think we actually can. So if I think about where there is strong bipartisan agreement: one, to build critical AI infrastructure in the United States, and that's going to require more energy and reform on power and permitting issues. China brought on around 400 gigawatts of power last year; we brought on around 56. By 2028, we think we're going to need gigawatts more just for AI uses. That's an area where the President was just in Pennsylvania talking about this on Tuesday; our CEO was there. I think there's bipartisan support for what the administration is going to do there. Second, on diffusion, I think there's work on more financing for data centers abroad, so that you have American AI in more places around the world. And then on China, the need for export controls and the need for faster adoption is another area. The coalition may not look like the old bipartisan coalition on China; it may be different for a variety of reasons. But I think we can build it again.
Dent
Just on the energy piece Tarun mentioned: on Tuesday, Senator Dave McCormick put on an energy and AI summit in Pittsburgh. The President was there, Governor Shapiro was there, Fetterman was there. It was bipartisan, and they really highlighted the importance of energy in this whole conversation. When I was in Congress, I represented Three Mile Island. You may remember the nuclear plant that melted down in 1979. That one reactor has been sealed since then, but another reactor was operating up until about the time I left Congress, when it was shuttered; the community was unhappy about it. It's now being reopened, and Microsoft bought it just to fuel their data centers, that one nuclear reactor. And then at what happened in Pittsburgh, BlackRock, I think, announced tens of billions of dollars of investment in natural gas to help fuel data centers in southern and western Pennsylvania. So it's hard to separate this national security conversation from the energy conversation, because the demand and the energy required are so significant, and we've got a lot of catching up to do. As you pointed out, we're far behind the Chinese, and if we want to maintain this leadership, we're going to have to build that infrastructure out. That was a good sign on Tuesday, and I applaud Senator McCormick for putting it on.
Collins
So when you see, Katrina, something like what happened on Tuesday? I mean, obviously that’s a step, I would assume, in the right direction of what you’d like to see more of. But when you have these conversations with officials, do you feel that they take it seriously? How do you believe that they look at it?
Mulligan
When they understand it, they take it seriously.
Collins
But do they understand it, I think is a key question.
Mulligan
We are going to have to bridge that gap. I mean, I started at OpenAI about a year and a half ago now, and that means I started at OpenAI longer ago than two-thirds of its employees. I was the first person, I think, at OpenAI, other than maybe Anna Makanju, who had any national security background or experience whatsoever. So it's not native in these companies for people to really understand these particular implications, or to be thinking about them in the same way that people here at the Aspen Security Forum are. So we've spent a lot of time thinking about how to up-level the understanding here. One thing I've noticed, because I talk to people about AI all day, every day, is a real shift in the year and a half I've been doing this: away from being in a bit of a defensive crouch and talking about risk all the time, which is pretty much all we did a year and a half ago, to now, even on this panel, really talking about what happens if we don't get the opportunity side right. So I have seen a shift between those two worlds in this timeframe, and I think we're getting closer to the right balance. But some of it is also demonstrating the capabilities. We have had to think really creatively about how to create the right demos to show people: listen, this is what it means for you that this technology has now had this massive capability jump. For us, that has looked like showing what it could mean for drone development capacity. It's meant showing its ability to do GEOINT at a level that is really sophisticated already. And some of it is helping people understand the more mundane applications that can be really useful for toil reduction and for changing what it looks like to be a cybersecurity expert in government, by removing some of the parts of the job that are really just not that fun to do and can be automated. So I think there's a lot of educating we still have to do. What I'm really looking forward to is getting the technology into the hands of users. One of the things I always say is that you don't get fit by reading about working out, and you don't get smart about AI, as much as you are all getting smarter about AI right now, by hearing a panel discuss it. You get smart about it by…
Collins
Except this panel.
Mulligan
Except this panel; everyone's going to be geniuses. Really, you have to use it. And right now it is not available at every workstation in the DOD, for example. We're going to have to change that, because the best ideas, the most transformational use cases, are not going to come from us in industry. They're going to come from the men and women in uniform; they're going to come from intelligence community officials. And the only way we're going to start to see that bear fruit is by getting it into the hands of users.
Collins
Tarun, what does that look like in terms of… We talked about public trust and what it looks like in the United States for AI capabilities, and people just using it on a daily basis. I have noticed, in the last six months, that people in my regular life use AI way more day to day. I mean, the hair and makeup people at CNN talk about how they use it. I'm a guest on "Who Wants to Be a Millionaire?" It comes out, please don't watch, in a few weeks. And I was talking about being nervous about it, because I spend my day reading newspapers and preparing for briefings, and I was thinking about just random knowledge and what I know. One of the artists at CNN was like, you should just use ChatGPT: how do I prepare for "Who Wants to Be a Millionaire?" And I actually did it, and it was so helpful, just in terms of random ways to study or prepare for a game show, pulling how other people have done it. But convincing people that at work this is going to make your life easier or more efficient: how do you make that case?
Chhabra
Yeah, look, I think it's a great example, and it goes to Katrina's point: you just have to get the technology into people's hands to use it. We see what drives adoption rapidly. It's when a combatant commander has started using it and then says, everyone here is going to be using this, and I want to see it used for the following mission sets, I want to see it used in an exercise, and so on. There is a dilemma, though, which is that we have to get it into the hands of users, but the transformational piece is not just using it as a chatbot, right? The transformational piece is, once you start using it, how do I re-engineer the way I do the mission? And that's not unique to the national security mission. Anyone who's in a C-suite here is experiencing the same thing, where the CEO is telling them, I want to know how you're going to use this to be much more efficient, much more productive, and hit all of these goals. And you have to do that while bringing it in and seeing how people start to use it.
Dent
Just on that: my key artificial intelligence advisor is my 29-year-old son, who is a software engineer. I talk to him about this regularly. I say to him, you know, Dario and other CEOs have talked about how 25% of entry-level white-collar jobs are going to be gone. I say, you're a software engineer; they say you're going to be out of work because you code. He writes a language called Python. I thought that was a constricting snake, but that's what he writes. And he says, "No, I'm not going to be out of a job. Are you kidding me?" He tells me, "You have to know how to use this. If you know how to use this, you control the AI. It doesn't control you." And he sees all these applications. He's not at all worried about losing work; he sees all upside in this stuff. So I think you're absolutely right that people have to use this stuff, not talk about using it but actually use it, and there's enormous opportunity. Those who figure it out, I think, are going to have a pretty interesting path forward.
Collins
And one thing that you focus on, Katrina, is how to use this to make people's jobs better and more efficient. Obviously, we're talking about this through a national security lens, and when people hear this, they're thinking, okay, how are you using that on the battlefield? There have been conversations about this with Ukraine, or with Israel's war in Gaza, and what that looks like. These companies are developing these AI models: how are they being deployed on the battlefield? What if it's the same model on both sides of the battlefield? Those are big-picture questions that a lot of people have, but we don't know yet what that looks like when it's actually in practice.
Mulligan
I mean, the answer is, we don't know yet. I think you have to start somewhere, and I would advise that we not start there, right? These models are really good at a number of things, and we need to do more testing and evaluation before we start relying on them in contexts like war fighting, where the stakes are incredibly high. I want to start with transforming the Military Health System. I want to start with taking everything we've learned from private-sector use of these models and going to the chronically underfunded Defense Health Agency and saying, okay, what lessons have we learned over here? How can we solve traumatic brain injury? These are things we could be doing right now. And I think one of the things that tends to scare people is the idea of taking humans out of the loop for critical decision-making, or mass surveillance being enabled by these models. Those are all things we aren't in the business of doing right now. I don't know what the future will hold; I think that when these models are more capable, we need to really stress-test what our policy lines are around things like that. But there are so many things short of that: changing the way the intelligence community workflow works, making sure we're actually identifying and pulling together disparate threads of information. I mean, we've spent the last 20 years creating more data than any human can ever process, and now we have this tool that can process data really, really effectively and surface the really hard questions for humans, so they can focus their time and energy on the things that only humans can do. And that's really what I think good looks like in this space.
But we're going to have to start somewhere, and I think where we start looks like getting it into the hands of users. When I look out into the future, though, I ask: what does it look like for government to really make big bets? Because, to Tarun's point, the real transformation isn't going to come from the chat function. It's going to come from using AI to do things that are not possible to do today. And what does that look like? It looks like custom model development using government data: building models trained to do things for which there is no commercial application, exclusively to give government an asymmetric advantage. That's not really happening yet; there are pockets of excellence and little glimmers of it right now. But I think the next year is going to be really transformational, and I give a lot of credit to the DOD, and to the Chief Digital and AI Office in particular, for their announcement of these massive partnerships with all of the frontier AI labs. Anthropic is one of them, OpenAI is one of them, Google is one of them; there are others. What it's doing is actually bringing us together. The way the normal acquisition process works, the government figures out its requirements over here in secret and then says to industry, this is what I want, bring me this thing. And right now, the government doesn't know what it needs AI to do for it. We have better insight into what it should be doing than they do. What they've done with these contracts is say: we don't actually know; can we just spend some time with you figuring out what we should be doing? It's a different model, and I give them a lot of credit, because doing anything differently than you've always done it in the Pentagon is always hard.
It is always hard, so the fact that they’ve been able to do this is, I think, something to celebrate. But I really think that by bringing all of the frontier labs to the table, the Defense Department is going to get something bigger than the sum of its parts.
Collins
That's such a great point about those new contracts that were awarded. Tarun, I just wonder what that looks like when you're given a contract but it's essentially blank: you don't really know exactly what the requirements are, you don't have these three things you have to deliver in order to fulfill the contract.
Chhabra
So I'm in full agreement with Katrina. This is the right way to do it, because the initial contract is basically: we need you to offer your time and your expertise to come in and help build out what the capabilities could be. There's really no other way to do it, and there's a responsibility on the part of the labs to help do that. So I think about where we were one year ago and where we are today. As Katrina mentioned, there was a national security memorandum from the previous administration saying, thou shalt adopt frontier AI, but it was pilots: you know, do the following pilots. Today, we have contracts from the Defense Department. There's serious work happening in the intelligence community to integrate data and do all sorts of things that I didn't think were actually going to happen a year ago. So there's real progress, and other innovative things too, like the Army setting up a detachment where you come in as a lieutenant colonel, if you have the right background, and do exactly what Katrina is talking about: help them figure out what the requirements are and what the capability can be.
Collins
Yeah. I want to take some audience questions too. Congressman, just on the congressional role in this: we always talk about what it looks like in terms of regulating it and how they handle that. What did you hear from the lawmakers you hosted about the other side of this, in terms of what they want to see for advancing it?
Dent
Well, a few things, and a little follow-up on what Katrina said about the Pentagon. You talk about defense health, and that would be a great area to lead. And I have a more basic issue: the great white whale of procurement reform at the Pentagon. There are probably a bunch of experts in this room on that subject, but the Pentagon tends to think in terms of hardware, weapons systems that take a long time from beginning to end. You guys think more in terms of software, and you know how fast that moves. Is the Pentagon going to be ready to deal with procurement in a serious way? Congress has a role there. But again, I think the fundamental question for Congress is where to begin. I think they need industry experts and other technology leaders to give them guidance about where to begin, where to start, how they should do this, because I think they just aren't sure. They know they're going to need federal preemption; you can't have 50 state laws dealing with all this. If there ever was an interstate issue, this is it, and more than an interstate issue. So clearly they know they're going to have to preempt the states at some point. There'll be federalism fights and all that, but that's going to happen. Then it's an open slate. Where do we go? Does Congress need to create a separate entity to regulate AI, or do we house it at NIST, or Commerce, or the Pentagon? I don't know, but those are the open questions.
Collins
Could it help the Pentagon pass an audit?
Mulligan
1,000% yes, it absolutely can help.
Collins
That could be bipartisan.
Mulligan
You know, I would be surprised if every single one of the AI companies didn't have that on their list of use cases to talk about through the CDAO contracts that we all just got. But some of the most transformational things are going to be the boring back-office things, and they are going to free up incredible resources for the Defense Department: things that would be unsustainable if it kept doing them the way it's always done them. And so, when I think about risk, people ask me all the time what I think is the biggest risk from AI, and my answer is always the same. The biggest risk is continuing to do things the same way we've always done them. Of all the things I'm worried about, that's the one that worries me the most.
Chhabra
I think there is a provision in the draft NDAA to do precisely this, to use AI to get us out.
Collins
We’ll see if it happens. Please let us know. I do want to take some audience questions. I know this is a high interest subject. We’ll start here at the front.
Shashank Joshi
On the subject of agentic warfare, Tarun, you raised it on use cases. What does agentic intelligence look like in this context? And a lot of the use cases you’re discussing are analysis. Is there a use case in intelligence collection, in operations that you’re also seeing or could envision?
Chhabra
Yeah, look, I think this is simply about an AI using tools. And so this could be an AI, an agent, doing things in cyberspace. It could be a team of agents doing things in cyberspace. You can imagine it in many other domains. To Katrina’s point earlier, we don’t know yet what exactly the interaction looks like, but I think it’s something we’re going to have to get our arms around much sooner than we think, particularly when our companies release agents into the commercial world, where we all have safeguards teams looking at what our adversaries are trying to do to use them and experiment with them, and that’s obviously really important information that we need to share with the US government.
Collins
Go ahead.
Unnamed Audience Member 1
Thank you very much to these panelists. Very, very interesting. I was wondering, how do you see the investment and the growth of AI in developing countries, such as in Asia, in Africa, in Latin America? And I like what you said about putting technology in the hands of the people. How do you see that in developing countries? Thank you so much.
Mulligan
I can start. I mean, one, there’s a lot more trust in AI, and excitement about what it can do, in developing countries than there is in the Western world. As I was mentioning earlier, I think everybody’s hungry to see what it can do. I think it also creates risk for the US and the Western world: if Chinese models and open-source models are out there, and people are able to use them and build on top of them, then the values that are imbued in those systems are going to be the values in the models that people are using. And so I think we really need to think about it through a strategic competition lens. One of the things I loved that we did a little over a year ago at OpenAI is we actually created 1-800-ChatGPT. One of the things we realized is that the biggest gap is actually in people’s access to WiFi. If you don’t have WiFi, you’re going to have a hard time using our tools and technology the way others can. So 1-800-ChatGPT gave you the ability to call and have conversations over the phone using our voice mode. Rather than having to have WiFi, if you had a phone line, you had access. And that opened the aperture tremendously. We found a lot of people using the technology for advice on health, like reproductive health, all of these really incredible use cases in the developing world that were made possible because we figured out how to bring it to where people are.
Collins
That’s so interesting. Ok, go ahead.
Victoria
Thank you for being here. My name is Victoria. I’m part of this year’s rising leaders class. And my question is, how do you see this technology shaping younger generations, particularly Generation Alpha? They’re not only going to be digital natives, they’re going to be AI natives. We’ve seen critical thinking in children go down with the use of AI. We’ve seen those cases of, you know, a child attempting to take his own life because of the conversations he’d been having with a chatbot. What are some pros of having children grow up this way? And what are you doing on your side, in the private sector, to prepare and protect our children?
Chhabra
This goes way beyond my expertise, but I’ll say, I think it raises a broader point, which is, what is the responsibility of companies operating at the frontier when we know how transformative the impacts will be on jobs, on education, on healthcare, on national security? And so our view has been, we need to put resources against bringing best-in-class expertise, say, in the case of job impacts, economists, to the company to interface with our technologies, so they can understand what it will mean in particular sectors. Because the real answer to many of these questions is, we don’t know yet. And in many cases, people who study these areas don’t know the technology well enough, and it’s not clear how they would. So that’s something that we all, I think, have to do as frontier companies, and we need to bring all these discussions out into the open. You know, I think sometimes there are closed-door discussions, in C-suites or conferences, where people are talking about what the job impacts might be, but we don’t talk about them openly. I think it needs to be much more transparent.
Collins
Yeah, and it is a good question. Go ahead.
Mulligan
You know, there’s a question about what is the responsibility of these companies, and I think that’s one that we take pretty seriously, and I know other AI companies do as well. But there’s also a bigger question, which is, what is the role of the government, and of society, in thinking about this too? The best analogy that I have for what this moment is, and what to compare it to, is the advent of electricity. The makers of electricity had to contend with some of it, but they had government partnership to help them figure out how to make it safe. You know, we don’t even think about the fact that we’re sitting a few feet away from an electrical outlet that could, in theory, kill us at any moment of the day, because we’ve figured out how to make it safe enough for people to use, and to use in ways that are productive and help us in society. We’re going to have to do the same thing with AI. The same technology that is going to enable all of the incredible things that we’ve talked about on the stage will undoubtedly enable some things that we don’t want. And figuring out how to do that is kind of a whole-of-society responsibility.
Dent
And remember, when Edison made his discoveries on electricity, many people at that time thought it was too dangerous a technology for common people to use.
Collins
I’m not going to bring this back to “Who Wants to Be a Millionaire?”, but there was a question when I was studying: who was the first president who lived in the White House with electricity but was too scared to actually flip the light switches himself? Benjamin Harrison. It was a million-dollar question, in case anyone’s curious. Go ahead in the back. Yeah, all of you, yeah. Don’t worry, we’ll get to you. We have six minutes.
Belcher
Thanks very much. Emma Belcher, President of the Ploughshares Fund, a nuclear weapons threat reduction organization. Katrina, I was thrilled to hear you say, use caution when thinking about the battlefield, and I wanted to get your reaction to studies we’ve seen recently showing that, increasingly, war gaming that relies on AI is resulting in escalation that leads disproportionately to nuclear use. This, to me, suggests that maybe there’s some bias in the data and the algorithms, which potentially has pretty damaging implications. So I’m just wondering if you could comment on that.
Mulligan
Well, I can tell you, we actually did a war game. We hosted a war game here at the Aspen Security Forum prior to the start of the regular program. It didn’t lead to that outcome. So I’m not familiar with the studies that you’re citing, but here’s what I will tell you that I am excited about, and I think, knowing your background, that you will be interested in this as well. Our first partnership with the federal government, actually, was with the US National Labs, and we have deployed our model weights, for the first time ever, outside of our own infrastructure, inside the US government’s secure classified supercomputer. And what it is enabling us to do is really help advance the frontier of science in a way that allows us to do the increasingly complicated math and scientific discovery that are essential to underpinning our commitment to no nuclear testing. Because every year the NNSA has to certify the viability of the nuclear stockpile, and every year doing that gets harder and harder. And I am really excited about the possibility that we are actually going to facilitate maintaining US policy on this for a longer time period than we ever would have without this technology. And I think that would be an amazing outcome of the use of this technology for national security.
Collins
Go ahead.
Unnamed Audience Member 2
Thank you. To the panel, great discussion. I was hoping you could comment on access to talent, even in the context of national security. It occurs to me, and I think to anybody observing, that so many of the top AI researchers are foreign born, and we also see a number of changes to student visa programs, grants, access to grant funding, and such, that might be making the United States a less hospitable location for drawing in the world’s talent.
Dent
Well, it’s already happening. I’ve been talking to academic leaders who’ve been telling me that some of their top talent is being courted heavily by Europe right now. And I think policymakers, if they’re listening, need to continue to make major investments in basic research. We have to, because our competitors, especially the Chinese, are making those investments, as are our friends in Europe; they’re doing the same thing. And we are really at risk of a brain drain. I mean, that’s been our advantage. The talent has always come here, and I think that’s been a big part of our technological edge in so many areas, from medical research to quantum computing, you name it. We’ve been the leaders, but it’s a real threat. It’s a problem, and I think Congress is going to have to step up and reassert itself on that very issue, where it’s been so strong for a very long time.
Chhabra
Yeah, look, the competition in AI boils down to chips, talent, and energy, and on all three we ought to be maxing out everything we can do. That’s how you get to AI dominance.
Collins
Well, and when you read in this book about the development of the nuclear weapon, you realize how many of these scientists were driven to the United States by the war in Europe. And just to think about that, if that had not happened, and we had not had those brilliant minds working in the lab, is a fascinating part of this as well. I think we have time for maybe one more question.
Shannon
Hi, my name is Shannon. I’m with the Rising Leaders, and I’m a public diplomacy practitioner in Denver. And so, Katrina, what you were saying about the values instilled in certain large language models, depending on where they’re coming from, reminds me of some conversations that I’ve had with some of my international exchange participants. So my broader question is, who decides what the values are that are instilled in the large language models that we’re working with, especially coming from the private sector? I’m sure many of us have seen some of the challenges that Grok has had. So who makes those decisions? Who’s controlling those buttons?
Mulligan
So right now, it’s the AI companies themselves, and there isn’t any real regulation or oversight of how we do that. The way that we do it at OpenAI: we have a model policy team that develops something that we call the Model Spec. And if you’re interested in this question, which I think is a super important question, you should pull out your phone and pull up the Model Spec. Because what we did, and other companies have now done as well, is we published it. We actually wrote down what we taught the model, and why, and how, what values we instilled, and then we made that transparently available to everybody. We did that because we think we should have a debate as a society about what those values should be. We made certain decisions and trade-offs in doing that. You may not agree with all of them. We should have that discussion, that debate, as a society. The AI companies all do it a little bit differently, some of them a lot differently. And I think, in the end, what is different between building on top of American AI versus building on top of Chinese AI is that we’re being transparent about the values we’re instilling in the model, and they are not.
Chhabra
Yeah, I think this is a really important question, and you’re probably going to hear much more about it in the next few weeks or so in Washington. But I think the best you can do is be transparent, to Katrina’s point, about what you are doing to train the model. And I think it’s okay that there will be some models that are trained a bit differently, depending on a number of considerations, so long as you know what you are getting. And then there are market dynamics, right? You have enterprises choosing to go with certain kinds of models; you have consumers choosing to go with certain kinds of models as well. But I think the key thing is, are we being as transparent as possible about how we’re doing this?
Dent
The only thing I’d say, in conclusion, is that one of our key policy objectives has to be making sure the development of AI aligns with our values and our interests. And that’s why, with the diffusion of this technology, we want to make sure people are using American systems, so we can control the systems and the values, as opposed to maybe the Chinese, and God knows what they’ll do on surveillance and other things. So that’s the issue: you have to make sure it aligns with our interests and our values.
Collins
And there’s another panel tomorrow that will focus more exclusively on what that looks like with China, which I think will be a fascinating conversation. Thank you all that was great. I really enjoyed that, and I hope you all did as well. Thank you so much.