
The Promise and Peril of Artificial Intelligence

July 19, 2023

Aspen Security Forum

Speakers

Arati Prabhakar, Director, White House Office of Science and Technology Policy

Kent Walker, President, Global Affairs, Google

Heidi Heitkamp, Director, University of Chicago Institute of Politics; Former U.S. Senator for North Dakota

Anja Manuel, Executive Director, Aspen Strategy Group and Aspen Security Forum

Moderator: Ryan Heath, Global Technology Correspondent, Axios

Full Transcript


Ryan Heath  

We are almost at the finish line, so thank you for sticking with us. By the end of these 40 minutes, we hope that you will leave this session feeling smarter about the real risks and the real opportunities related to AI. And I frame it in those terms because, for the last six months, we've had a very exciting AI debate, but it's often been dominated by extreme voices around extreme fears, or sometimes, frankly, unrealistic, utopian visions about what AI can do for us. There is a really messy middle in between, and those are the nuts and bolts that I would love us all to dive into in this session. I made two promises. The first is that if there's anyone in the room who feels they've not been keeping up to speed in the debate, do not worry: this technology is transformative, and it's going to be here forever, so you have not missed much. And number two, everyone I work with at Axios firmly believes that because this will transform all of our lives, we all have a right and a stake and a voice in this discussion. So we really look forward to your questions at the end of the session. Arati, I'm going to dive in with you. You have said that the most important first step in dealing with AI is understanding and managing the risks. Not everyone agrees: Sam Altman, CEO of OpenAI, wasn't sitting there consulting the White House blueprint for an AI bill of rights when he released his ChatGPT product into the world. So what are you going to do? Walk us through what the White House will do to shape this revolution and help embed our democratic values into AI as it develops.

 

Arati Prabhakar  

Thanks, Ryan. Just amazing to be here with all of you. This topic of AI is an active, urgent area of work at the White House, and I appreciate the chance to kick off this discussion. President Biden has been very clear, and many of you must have heard him talk about how we are at an inflection point in history. He very much talks about AI in that context: it is one of the powerful forces of today, and the choices that we make today, including about AI, are going to change the arc of the next few decades. That is why AI is such a high priority in the work that we are doing. And our work starts by recognizing that, because of its phenomenal breadth, this technology is the most powerful technological force of our times. We all know what human history tells us about what happens with powerful technologies: humans are going to use them for good and for ill. And so the approach that we have taken from the White House on artificial intelligence is to say we absolutely want to seize its benefits, but the way to do that is to start by managing its risks, because AI is so broad and its applications are vast. So I'll briefly give you four categories of risk that we think about, because you need to untangle this, right? I'm sure you've heard the cacophonous conversation about AI. These are four broad categories of risk that we are focused on in the work that we're doing at the White House. The first is risks to truth and trust in democracy, because of mis- and disinformation. The second is the broad category of risks to safety and security, everything from self-driving cars to cybersecurity and biosecurity concerns. A third is risks to privacy, civil rights, and civil liberties, including issues of bias embedded in algorithms. And then finally there is a very different category, which is risks to jobs and our economy. I think that starts to give you a sense of how incredibly broad this challenge is with AI. So what you will see from us is ongoing work. The week that I arrived to join the White House in October last year, we released the AI Bill of Rights. That is a lighthouse to steer by, and I think when you're in choppy waters, as we are with AI moving as rapidly as it is, there's no more important time to be clear about your values. So that's an important foundation. You have seen, and you will continue to see, many actions. Today we're working closely with leading AI companies, helping them to step up to their responsibilities. We are working across agencies in government on everything we can do through executive action to get AI on a good track. We are, and will definitely continue to be, working with Congress on a bipartisan basis as they start laying out the legislative agenda that needs to come. And finally, we are working with our international allies and partners. You'll see all of those lines of work proceed. I just want to step back from that to say: we know we are in a time when every nation in the world is trying to use AI to shape a future that reflects its own core values. We could all disagree about many other things, but the one thing I know we agree upon is that none of us wants to live in a future that is driven by technology shaped by authoritarian regimes.
And that is why, at this moment in time, American leadership in the world depends critically on American leadership in AI. That's what we will keep our eyes on as we do our work.

 

Ryan Heath  

Kent, I'll come to you next. Speaking about values, one American value is opportunity, and your job is to make a bunch of money pursuing the opportunities of AI. On the other hand, Google got slammed for not rushing products out the door earlier this year. So how do you walk that tightrope? How do you be a partner to the White House, but still make sure you're innovating and not missing out on those opportunities?

 

Kent Walker  

Yes. So we've talked about the notion of innovating boldly and responsibly, and doing that together, in a way that's inclusive and brings in lots of different views. That's challenging, but we have, I think, 13,000 or 14,000 computer science PhDs who feel that they have a mission to try to optimize the benefits of this technology while minimizing the likelihood that it is misused. For us that breaks down into three categories, many of them parallel to what Arati was talking about. We have the notion of opportunity: the incredible progress we're going to be seeing in science and technology over the next decade. This will unlock tools and accelerate progress in areas like quantum, but also in things that make a difference in people's lives, like personalized medicine, clean water in the emerging world, precision agriculture, and many more. So it's a really exciting time; many people in computer science have never seen something like this in their careers. That has to be balanced with a responsibility agenda, and many of the comments that were made before go exactly to this: making sure that we get fairness right. We've had an ML fairness program at Google since 2014, and our AI Principles since 2018. How do we make sure we are thinking about the ways that AI is going to change the future of work? How do we make sure that we're staying grounded and factual? You've heard about ML, or machine learning, hallucinations; how do we minimize the likelihood of that? So that's a big research agenda; we're working comprehensively on goal alignment, safety, and many other areas. And then, particularly relevant to our conversation today, is security. We have to think about the challenges for cybersecurity, but also potentially the advances in cybersecurity we can have. We've talked about our SAIF approach, the Secure AI Framework, which draws on traditional strengths in zero-trust computing, but also adds threat intelligence from groups like Mandiant and others, and now the notion of red teaming and adversarial review that we've started to work on throughout the industry. How do we make sure that we're fine-tuning these models in a way that minimizes the harms and maximizes the benefits?

 

Ryan Heath  

And maybe, Arati, can I just do a follow-up with you there? Not all AI models are created equal; there are different levels of risk for different use cases, and people have been afraid of letting some of these things out into the wild. I don't want to get into too technical a debate, but when you release a big open-source model, for example, and anybody can use it without really any restrictions, it can get used in a lot of different ways. How worried are you about how some of those models get used, and what are the mechanisms that can put some constraints on how these models get used?

 

Arati Prabhakar  

Right, you're making an incredibly important point. A few months ago, we would have said that progress in AI was purely dependent on more compute, more crunch time; now we're seeing proliferation. When I was a venture capitalist, I would have said the technology is democratizing; when I was in the Defense Department, I would have said it's proliferating. Both of those things are true, and that's the moment we're in; it's a very fast-moving landscape. I want to step back from the specific questions and just note that what I think we all want is for an AI system to be safe and effective before it's released. You know, the horse has to be safe and effective before it's out of the barn, irrespective of whether you're putting it out as an open-source model or as a proprietary model with an interface that people can use. I think we have to stay anchored on this question of safe and effective. And I think we should be clear that we actually don't have tools and methods today to know when an AI model is safe and effective.

 

Ryan Heath  

So by definition they're not safe and effective?

 

Arati Prabhakar  

We don't know, right? And that's a bad place to be, and that's the work we've got to do.

 

Ryan Heath  

Anja, want to jump in?

 

Anja Manuel  

Yeah, I just want to jump in, and really all the questions should go to Arati, because she has the impossible job of trying to figure this out. But for those of you who aren't following this every day: the foundational models that we're talking about, from Google and others, cost $100 million or more to train. It's a huge amount of compute. Those you might be able to regulate, because they're attached to big companies we know. But then there's a proliferation of smaller models, some open source, some not, cheaper to train, and you sort of despair of ever figuring out how to create the standards, and then stick to the standards, for them. Is that fair?

 

Arati Prabhakar  

Yeah, I think that's the dynamic landscape we're in.

 

Kent Walker  

And building on both of those comments, I would say the notion of going case by case, deployment by deployment, application by application is going to be critical, because it's very hard to have a general-purpose, one-size-fits-all standard for evaluation. But we do have decades, centuries, of experience in regulating healthcare, financial services, transportation, etc. And so as we start to fine-tune our models for those specific use cases, we can, I think, develop more benchmarks for evaluation, draw on regulatory expertise, and get to a better outcome that's more fine-grained and more nuanced.

 

Ryan Heath  

Heidi, I'm gonna bring you in now, the voice of the people, a former senator. Why do so many people seem to be so afraid of AI as it has been explained to them this year? And how do we go about creating those building blocks of trust? Because it's not going away, so we're gonna have to find some way to have AI that we can trust.

 

Heidi Heitkamp  

I think, like I like to say, it's kind of the Jurassic Park, Skynet, Terminator thing: when people say there are threats out there, you see Arnold Schwarzenegger coming down and wreaking havoc. And so when we look at it, the problem is, AI has been around a long, long time, and no one is putting this new technology, generative AI, in context. Every time you say "Hey, Google" (I say that because we're here with Google) or "Alexa," you're using an AI kind of mechanism. You're using a digital assistant that is going to help you get information, set your alarm, do whatever you're going to do. But we've been told that this is some new, emerging, scary, Jurassic Park kind of technology, as opposed to an iteration of what we've been doing in the past. Yes, there are new threats. Is this an iPhone moment? Probably. But it is not as scary as what people think. And I think the media has driven this narrative of saying "boo" every time you say AI: be very afraid, it's going to get you, it's going to take your job, it's going to do this. And we know, when you look at technology and innovation over time, that we actually create jobs with innovation, different jobs, and so there's a need for transformative policy that deals with this, and that's absolutely what's happening. But when you look at the sense or the knowledge that people have, this is opaque to them. They don't know what it is. And when you don't know what it is, and you're told it's gonna take your job, and you're told it's going to, you know, make your baby have two heads, you worry, because you don't have a context in which to judge the honesty of it. I just want to say something about my concern. I talked with a senator recently, very highly placed, whose staff put together a deepfake of him saying something he would never say. Does that energize politicians to get involved in the AI debate? You better believe it. And so now they're in this process, trying to understand how that can happen, how you protect against it, and what you do about it. I think there are some really smart, directional things being done in the Senate, being done in the House, being done by the industry. So I think we're moving in the right direction, but we have to ratchet down the rhetoric about being afraid and amp up the rhetoric about how exciting this technology is for human civilization.

 

Ryan Heath  

What do you think of Senator Schumer's plan, Heidi? He announced on Monday that he wants to do a series of AI insight forums to get senators up to speed, so they are more informed before they regulate something that they maybe don't fully understand. Is that a model that can work in broader society? The point is that this is an extremely cross-cutting, complicated transformation of all of our lives. Do we in fact need AI insight forums across the country?

 

Heidi Heitkamp  

I mean, I think, honestly, that there is a range of ability to understand this (I shouldn't say ability; there's probably a range of ability as well) but a range of current understanding of the technology. And I think one of the really good-news pieces of this is the bipartisan concern: the need to do something, but also the willingness to step back and say, I don't know everything I need to know to make the right decisions. They're stepping back. They're going to analyze this. Now, typically what you would see in Washington would be a big food fight over which committee has jurisdiction, whether it's Commerce, whether it's Judiciary, whether it's Treasury, or, I'll give you, Homeland Security. I think what Chuck's trying to do is get out ahead of jurisdictional fights and say: this belongs to all of us. Let's elevate our understanding so that we can have a rational and reasonable debate about the level of regulation. And so I think you should all feel comforted by, number one, the bipartisanship, but also the measured approach that Congress is taking to advancing some kind of framework in which to regulate and to discuss AI.

 

Ryan Heath  

Can I throw in one question for you to consider in the time we have? Two things that were bubbling up there were jurisdiction and know-how. Are we going to end up needing a particular AI agency, or do we in fact need to build up know-how across every agency in order to deal with this?

 

Arati Prabhakar  

I think the place to start is just to recognize how extremely broad the applications of this technology are. And so, to me, it's not a workable model, certainly in the work we're doing in the executive branch, to think that just one action is going to magically make this come out right. You really have to understand the actions as a mosaic and look at all of it. And I see that very much reflected in what I hear when I talk to people on the Hill, to your point about the Majority Leader's sessions. He's run two of them so far. One was a general briefing for senators to learn about the technology, and then I was able to participate as one of five people who spoke on national security; that was the purported topic, AI and national security. This was about a week and a half ago. We ended up covering a lot of territory, not just national security, but the thing I really want to say is that we had over half the Senate in a SCIF, a secure room. Very bipartisan: it was hosted in a bipartisan way, and we had people from both sides of the aisle. It was the second time I had been with a large group of senators talking about AI, and I have to share that I thought the quality of the conversation, the quality of the questions that were being asked, was very good. They were thoughtful. I couldn't tell from the nature of the question which side of the aisle was asking. And it's on an upward slope; a couple of months ago, I think people were still feeling their way. So I think that learning process is underway. And from the White House, while we do what we're going to do from the executive branch, we very much want to maintain that good partnership and get to some good bipartisan solutions.

 

Anja Manuel  

That is virtually unheard of in the US Congress now, so we should give them a round of applause.

 

Arati Prabhakar  

I just want to appreciate that moment. Exactly.

 

Ryan Heath  

Now, it sounds like there's a lot of partnership going on. Kent, I'm gonna assume that you would consider yourself a partner here with the US government exploring AI territory. But not all partnerships are perfect, and there's been a rough ride for big tech in the last few years in Washington. Is there something you'd nominate that you wish Congress or the executive branch were doing in this field to make it a more productive partnership?

 

Kent Walker  

Well, I agree with the last couple of comments: I appreciate the fact that they are taking their time to get up to speed on the technology and to do this in a deliberative way. We've talked about various blue-ribbon commissions, etc.; I think that's a helpful part of the learning process. And there are broad areas of agreement, not only within the United States but also internationally. You have Europe with its AI Act, and you have Brazil, Canada, and other countries that are also introducing new legislation. So, areas like trying to provide more transparency with regard to some of these tools; figuring out what benchmarks make sense in different specific areas; having a risk-based approach, so that if you're looking at very high-risk kinds of applications, one set of rules applies. But as Heidi was saying a moment ago, many of you, maybe all of you, have been using AI for many years if you use Google Search or Translate or Maps or Gmail, and I think most people would say those are relatively lower-risk kinds of applications. So you do need kind of a horses-for-courses approach. And that doesn't come out of bumper-sticker conversation; it comes out of, to your phrase, Ryan, the messy middle: getting people in a room, debating how the trade-offs work, how we draw on different important principles of privacy and non-discrimination, openness and security, and get that right. That requires getting the experts in the room, and I'm hoping that that's the process we're engaging in.

 

Ryan Heath  

Now, we've seen a range of CEOs talk about their willingness to embrace regulation. I haven't necessarily seen any of those CEOs invite your team into their offices to come and look under the hood of their models. Is that something you'd like to do? An invitation from Kent or someone else to say: hang on, we're gonna roll up in San Francisco or Seattle or wherever it is, and come and check out how this stuff is coded?

 

Arati Prabhakar  

I want to step back from that question too.

 

Ryan Heath  

I think you will have to answer it before you can step back.

 

Arati Prabhakar  

Let me put it in the right context, because I think we use the phrase "regulate AI," we use those two words together, but I don't think we have a good model yet of what we're talking about. Where I want to start is to say, again: very broad applications. And it turns out a lot of the harms that we are concerned about happen to already be illegal, right? When you use AI to commit fraud with a voice clone, when you use AI to accelerate your ability to commit cyber crimes, there are so many things that are already not okay. And there, I think, there's a very important issue, which is: the laws exist, but what is our ability to regulate and enforce against those harms as AI accelerates and changes how people commit them? That is the issue there. One very important step in this direction: a couple of months ago, the EEOC, the Consumer Financial Protection Bureau, the FTC, and the Department of Justice issued a joint statement simply reminding people that these things are still illegal if you're using AI to do them, that this doesn't let anyone off the hook, and that they would still be enforcing against them. So that's a great example of the kind of step that is essential. And we're gonna need to do some work, because keeping up with that kind of accelerated malfeasance, and being able to spot it when you start seeing new forms of problems, is a scale issue that we're not really ready for. Those are some of the things that we're working on right now, and you'll see more work there; those are some important actions that I think can start getting us to a safer place. The question of what you do about the core technology itself is what people usually want to talk about, and I think that is not yet clear. And again, I want to keep coming back; Heidi, I loved your point about Terminator and Jurassic Park. We're living in a time in which there are a lot of science fiction conversations about AI. There are a lot of philosophy conversations; I sometimes feel like I'm in a freshman dorm room at midnight. Anyone else ever get that feeling? There are marketing conversations. All of those should go on. But if we're going to make sensible policies that actually change outcomes, we're going to stay anchored in: what do the human beings and the corporations do as they decide to build systems? What is the human data that they're being trained on? How do humans and corporations decide to use this technology? And what impact do they have in the real world? If you stay anchored on that, and then you really start working through how we mitigate these risks, then you get to some very practical solutions, and I think that's not bad. That has to be the benchmark against which we weigh any regulatory approach.

 

Ryan Heath  

So is that more important than looking under the hood?

 

Arati Prabhakar  

If that's part of it, then I think it needs to be considered, but the benchmark is always going to be: did we reduce biosecurity risks, did we keep misinformation from spiraling out of control, etc.

 

Kent Walker  

If I could jump in, because I have two points. One, in a sense, we are externalizing some of these models. At DEF CON, coming up, I think, in the next month or so, we and other companies will be doing external red teaming for some of these models, and that's a way of...

 

Ryan Heath  

That's where really sophisticated folks go in and try to break these systems and figure out the ways they could be used.

 

Kent Walker  

Which is a collaborative learning exercise, right? It's not really about who's first and who's best; it's how we collectively learn from that: what kinds of attacks work well, and how do we all build those learnings into our systems? But ultimately, this has to be a layered system of governance, right? You have to have the companies individually taking responsibility for building things in the right way: security by default, security by design, in the way these are created. You need cross-industry groups, whether that's an Underwriters Laboratories or a Good Housekeeping seal-of-approval kind of model, to set the standards; in many cases, those can be faster and more nimble than what governments will be able to come up with, and in some cases provide the starting point for government regulation, to be incorporated into what the law ultimately requires. Obviously, you're going to need forms of government regulation in these different areas, and you will probably need international frameworks to deal with some of these security risks. Those conversations have already started in the G7 Hiroshima process, the OECD, and other multinational fora. So it's a little bit of an all-of-the-above.

 

Heidi Heitkamp  

But we've been talking a lot about government regulation, and there is a whole rule-of-law piece of this that we haven't been talking about. Look at Section 230 of the Communications Decency Act, which basically said: look, these platforms, these great systems that were created, they're going to be like bulletin boards, and we're not going to sue the bulletin board for what's on the bulletin board. Right? So there has been a mammoth ability for growth in this area, free of any kind of interjection of civil liability. That's not true, in my opinion, in generative AI, because a product is being created. And you already see litigation around this area, whether it's violation of copyright, or whether it's inappropriately using my image and my data. And so one of the reasons why you may want to look at regulation: yes, regulation can be a sword, but it's also a shield. It says this is the accepted practice of the industry, therefore we balanced all of these interests, and we're going to give certain levels of protection. So when people talk about regulation, remember that it's not always just a sword. It can in fact be a shield from that other enforcement entity, which is called civil litigation.

 

Anja Manuel  

Can I jump in here? I just want to double down on that; it's such an important point. This is in its infancy, and we were just chatting, getting our act together for this panel beforehand, and I don't think any of us here had a clear view. Should there be an FDA-type institution? Probably not, because it's a general-purpose technology. Should every department within the US government be working on AI? Should there be legislation? Should tort law be applied? This is so new. We have analogies for this kind of stuff from other technologies, but here the analogies are all imperfect, and we don't quite yet know how to apply them.

 

Ryan Heath  

I want to build a bridge between that point, Anja, and your point, Kent, around how we internationalize or create global frameworks here. So I guess I have a two-headed question. The first part is: does the US actually have a second- or third-mover advantage because it hasn't rushed to regulate, behind the EU? China's already got a round of regulation out, and there are extremely high levels of trust in AI across China, actually, not always for the best-intended purposes, I would argue. So perhaps, sort of by accident, we're in a really good position to be more flexible and do the right thing. And then, in terms of internationalizing: we had the Secretary General of the UN yesterday saying it really has to be the United Nations that is the forum for a global body. You've got the UK government organizing an AI safety and governance summit in October. You have processes at the OECD and the Council of Europe, as Kent alluded to. So how do we take advantage of the flexibility that being a little bit slower to market has given the United States, and really amplify that at the global level for everybody?

 

Kent Walker  

I'll jump in quickly and say the race should be for the best AI regulations, not the first AI regulations. This is going to be a long-term technology; we do have a little bit of time to get it right. We have a lot of great achievements to build on. The fact that this is a triumph of the ingenuity that our system has created, and will, I think, result in dramatic improvements in people's standards of living around the world, is something to celebrate. But at the same time, we have to balance that, and if we take six months or a year to figure out what combination of executive orders, legislation, self-regulation, and internationalization makes sense, that's probably a pretty good down payment on the future.

 

Arati Prabhakar  

Yeah, I'll just add that this isn't going to be one and done; you will see waves of action, and I think that's exactly what needs to happen, because as a particular area gets big enough, we can do the good regulations that really get us on the right path. When things ripen, that's when they'll happen. But again, I don't think it's a six-month thing; there are urgent, immediate actions that can be taken right now. I'll give you one, just as an example. We're talking about how much AI is already in our world. Well, Congress has considered privacy legislation and has gotten very close. It's considered legislation to protect our kids. These are harms that are happening in the world today, and the President's been very clear that these are things we need to deal with now. Privacy legislation would be a fantastic step, and something that we continue to work as hard as we can on.

 

Ryan Heath  

Heidi, you sat in on those discussions, because those privacy debates have been going on for years. How likely is it we can get something like that out the door, in your opinion?

 

Heidi Heitkamp  

Well, I mean, I think there needs to be a sense of why we are doing this. Think about that: why are we taking these steps to control privacy? Has there been an abuse of people's data? Have we inappropriately used it? I think one of the reasons why you haven't seen a privacy law is that I don't think the voters and the public are demanding it. You gave a list; I'll give an example of somebody going door to door in a swing district in Pennsylvania. How many of you think voters are going to say AI is my biggest concern in life? They're talking about gun violence. They're going to talk about education. They're going to talk about student loans. They're talking about all the things that affect their lives. And so when you look at privacy and you talk to people on the street, and I would challenge you all to do this, because I did a huge privacy initiative as it related to bank privacy when I was still in politics in North Dakota, if you ask them, they go: oh, I lost that years ago. I don't care. Whatever, I'm not doing anything wrong. So this is not a voting issue; it's not what's going to motivate voters. Jobs are going to motivate voters. That's why you see a lot of attention to job displacement, especially now that the Democratic Party feels like it missed the free trade argument that ended up exporting a lot of jobs, and is not going to make that mistake again. You hear that over and over again within the Democratic Party: let's not be the party that dislocates so many people that we get the blame. And so a lot of this is being driven, especially in an election cycle, by what voters are talking about, and they're talking about jobs and they're talking about security.

 

Arati Prabhakar  

Can I chime in on this? Because this is a great example of how it's so much easier to deal with a threat that's staring you in the face, whether it's fictional or real, one that seems immediately catastrophic. But the privacy erosion that's happened in our society runs so counter to the fundamental liberties in our country, and it's crept up on us. People have traded away privacy for a lot of conveniences, but we are at a point where it is driving addictive behavior online. In my view, it is linked to the polarization that we're seeing. It's linked to the mental health issues that we're seeing. And yet it is so nebulous and so diffuse that I think we struggle to deal with it.

 

Heidi Heitkamp  

I would challenge you on whether that is an access problem or a privacy problem. Is that really a privacy problem? By access, I mean: you hand your kids some kind of tablet because they're crying, and say, entertain yourself, and you don't look over their shoulder. And going back to my example of tobacco, when you look at the tobacco settlement, we may well see the social platforms of today held to that challenge, the civil litigation challenge. But I don't see those as privacy issues nearly as much as I see them as utilization and access issues. And I want to put in a plug: look, I used to pay $20 a month for AOL. That was in the nineties, right? I loved it. I would pay it today. Now I get all this stuff for free, and I'm really happy. And occasionally, you know, if I get an ad for a plus-size blouse because I ordered a plus-size blouse before, I'm okay with that. It might be a little privacy violation, but I'm okay, because I got to use email and I got to use Gmail. And that's how the public looks at it: I'll tolerate some of that invasion, because I get some goodies out of this. I'm not paying Google for what they give me. I'm not paying Meta for what they give me.

 

Arati Prabhakar  

And I'll just say, I think it's not just about five-year-olds getting access to tablets, because teenagers are in a mental health crisis that is partly fueled by this addictive behavior, and I think we have to see how these things relate together. So we can agree to disagree on this.

 

Ryan Heath  

The effects of a lack of privacy differ according to where it comes from. And so this might seem like a strange jump into a new topic, but I knew I wanted to bring you in, Anja, and join up those thoughts to China, because there isn't really a lot of privacy when it comes to how AI is rolled out and used in China, and it gets used for more nefarious purposes than blouse advertising. You talk to a lot of Chinese entrepreneurs, and we heard the Chinese ambassador today say that AI is an area for potential positive cooperation. How do you and the people you talk to see the development of AI in China?

 

Anja Manuel  

Thank you for asking that. I find that the conversation around national security and AI in Washington is very much animated by: what's China doing? Do we need to run faster because they're doing it, or do we need to restrict, and how do we do this? Here's what I see on the ground in China. Elsevier, which is an independent organization that measures this kind of thing, says that four out of the 10 top companies in the world working on AI are now Chinese. A lot of the cutting-edge research papers on AI are coming out of China. You can debate whether they're all the highest quality, whether they're quite as good as the US; unclear. We clearly lead the world in foundational models, the large language models that we've all been hearing so much about, but they're getting better. Kent and I were joking last night, and we're not really joking: Ernie, Baidu's language model, is actually pretty good, and every month it's getting better and better. When I go to Washington, there seems to be a sense, at least in some parts, that we can export-control our way out of this problem. I don't think that's going to work. We put really harsh restrictions on semiconductors in place in October. Some of that is absolutely right: some on chips, some on the equipment that makes the chips. We doubled down again with tougher restrictions on some of the NVIDIA chips just a couple of weeks ago. Clearly, we have to do a little bit of that, but let's not have any illusions that that's really going to slow things down. So in a way, we've got to find a way to move faster, and when I talk to Chinese entrepreneurs, they're energized. Let me just say two more things. There are these kinds of memes out there: well, the Chinese have shut down their own entrepreneurs and now they can't do anything, they're not allowed to; or, the Chinese are afraid of AI and have already regulated anything that harms the Chinese Communist Party. I don't see that. I see much more of the Chinese government having created clear lanes for their entrepreneurs: what's okay and what's not okay. And what's okay is surveillance-related AI, visual recognition, object recognition, autonomy, and they're excellent at those things. So that's kind of how I see the space.

 

Kent Walker  

If I could just pick up: I think this notion of global competition is a real one, and ultimately, again, for the Aspen Security Forum, national security is underpinned by economic security and productivity. We think these will be incredible tools for the leadership of countries that do this right: implement AI in ways that are trusted by their populace, make their workers more productive. This is not like globalization, where we were losing American jobs overseas; this is about making American workers even more productive than they've ever been before. So if we approach it with that lens, I think there's an awful lot of good investment we need to make to ensure we land in a place that everybody can get behind.

 

Ryan Heath  

I think it's time to bring you, the audience, into this discussion. But I want to start with a question to all of you, rather than taking them from you individually. So, a hands-up question: who among you thinks that we collectively, as a society, have done enough to make AI understandable and accessible and representative of all of us? Who thinks that we've done that? There we go, no hands. We have work to do. We're going to go to the woman with the wine glass, and there was a gentleman at the back here in a dark polo shirt or T-shirt.

 

Audience question  

Okay, my name is Sharon, and I have eight children. I went to a very disturbing lecture, and I'm sure some of you were there, where they explained how AI affects young children in particular. They also explained that they really don't want Google pushing ahead with ChatGPT-style tools, because everything is getting ahead of itself. At this point, with voice recognition, I can get a call saying one of my eight children needs to be ransomed, and the sample sounds like the child speaking. There's too much that is not regulated that is really important to our everyday life. I understand everything you're saying; I understand everything about how it's going to do this and do this and do that. However, it takes so much away from the next generations, and things are being done that are really very, very harmful, and there is no control on that, even though people are petitioning to pause it. Because it doesn't go step by step; it leaps ahead. And you're not going to be able to control it if you're not getting ahead of it.

 

Ryan Heath  

So you'd prefer a more proactive approach to the regulation? Is that where this is headed?

 

Audience question  

How could you release the next thing? It's going to be so far ahead that, yes, the young children will use it, and they'll do everything with it, and we're not going to know, because I think of what my grandchildren will be doing at two in the morning. And I think this has to be addressed, because I think it's a problem.

 

Ryan Heath  

I bet you're not alone, Sharon. Any reactions up on the stage?

 

Anja Manuel  

There's a whole session on deepfakes tomorrow, so we'll get to that specific issue.

 

Kent Walker  

When we talk about the responsibility agenda, starting with the companies, the industry, and government, this is exactly right. But I do think that part of the answer to the problem is in the technology: how do we build in safeguards and guardrails to minimize abuses? There are challenges with regard to the proliferation of some of these tools, and with how we strike the right balance between open sourcing and keeping security right. This is a widely shared concern; we need to make sure we get it right.

 

Arati Prabhakar  

I'll just try to say: I've heard the very rosy descriptions of what's possible, and I very much hear what you're saying. I think this is one of the fundamental quandaries of this. Anytime you have such a powerful technology, it's a raw force, and you really have to be able to keep the bright and the dark in your head at the same time, and work towards both of them: mitigating one and achieving the other simultaneously. I think that's part of what we're all struggling with.

 

Heidi Heitkamp  

This won't make me very popular, but you'll never regulate your way out of this problem. Yes, this has to be regulated, but you cannot count on the government alone to protect your children. That's just not going to work; it's never worked before. You can't count on the government to take every drug off the street. You can't count on the government to make sure that there are no risks out there for your kids. And so I get what you're saying, and I know this keeps people up at night, but we have to have a partnership between families and this technology, and the technology could in fact lead to a cure for childhood cancer. So we've got to socially balance these things and go back to responsible usage of the products that are being created. And I would also say that one of the ways we've done this in the past is through civil litigation, when you create the risk. So let's say you have a small startup company. It's not going to be Google, because Google has reputational risk; they're gonna listen to you and want to fix your problem. Then there's somebody who's got what they think is just a great product that they're gonna want to deploy, and I could point to some historical examples. How do we control them? Where are they getting their money? What's the risk for the people who are lending or giving them the money to create these products? How do we create monetary risk for people who create dangerous products? That's not going to be done by regulation alone. It's going to be done in courts of law, in places that understand the rule of law.

 

Ryan Heath  

Most definitely one venue for that. The gentleman in the yellow.

 

Audience question  

Hi, I'm Daniel, founder of KIND and the Lubetzky Family Foundation. I'd love Kent's, and the other panelists', feedback on what lessons we can draw from Section 230 as it applies to today's reality. Because when I look back, of course it served to encourage a lot of innovation, but neither self-regulation nor government regulation has prevented massive misinformation, disinformation, and abuse. When I think of it, it's not just, you know, my grandmother coming up with silly conspiracy theories. There are people with hundreds of millions of followers that say things that, if you were traditional media, you could be held accountable for; even with strong protections for freedom of speech, there are at least some protections and some liability. But Google can publish and disseminate stuff, Twitter can empower someone with 100 million followers, and there's no accountability whatsoever. There's also a huge conflict of interest, because Google today monetizes false advertising. Just a couple of days ago, somebody used my likeness to sell pills or gummies to lose weight. And we had to talk to people, and they were responsive, but you have to play whack-a-mole, rather than Google and all the other social media and internet companies enforcing their policies to prevent the false advertising that happens every single day, with all of these people misusing the likeness of others and making money.

 

Ryan Heath  

The clock is definitely running down, and I want Kent to have an opportunity to respond, seeing as you were named.

 

Kent Walker  

Thank you. Thanks for the question. Look, we would argue that the internet, on balance, has been extraordinarily positive for the American economy and culture, and for people around the world. And part of that involves some challenging misuses and abuses, although the industry as a whole has gotten dramatically better. An example was given earlier: probably eight years ago, one view in 100 on YouTube violated our policies; today it's one view in 1,000. And that's in part because we're using these AI tools to do pattern recognition, identify problems, and nip them in the bud before they can proliferate. So there's an awful lot of good, and there are learnings from that experience. Now, it's odd that the advances in AI, which are fundamentally scientific, we tend to think of through the lens of social media; we could as well think of them through the lens of mRNA vaccines and the other scientific advances that we've seen. So, on balance, we absolutely recognize the challenges. We have to stay on top of it; we have tens of thousands of people trying to do just that, and we're hoping AI will turbocharge those efforts. But don't lose sight of all the benefits for distribution of and access to information around the world. The fact that a billion people have come out of extreme poverty in the world, an achievement unprecedented in the history of mankind, is in part because of the proliferation of technology and access to information. So it's a hard balance to strike, but again, if we do it together, we think we can get it right.

 

Heidi Heitkamp  

Having led the only amendment to Section 230, I can tell you it's a real challenge, partly because the courts have, I think, misinterpreted 230 and allowed nefarious things, illegal things, to happen, and then shielded those illegal things using 230. So I would argue part of this is court misinterpretation.

 

Anja Manuel  

Can I end on a positive note? I really want Arati to have the last word, because she's leading on all this, but we've heard a lot of doom and gloom. There are real worries here, but there are also amazing positive things that are going to come out of this technology. I just want to give a shout-out to Ylli Bajraktari, who's been working on this at the Special Competitive Studies Project. He and many others have been pushing something that we haven't talked about here at all, which is a national AI research resource. One of the things that's been happening here is that Google, Meta, OpenAI, all of these guys, it's our companies innovating and racing ahead. That's great, but academics can't keep up. They don't have the compute, and they don't have one big resource, one large language model, that they can use for non-commercial things. And so I know Ylli and others, a lot of people, have been pushing for this, and I know you've been doing a lot of work on it. This is one of the many ways that we can harness AI for good.

 

Arati Prabhakar  

I'll just finish where I started. This is a time when American leadership in AI is essential to the way the future is going to unfold. We are so lucky to have innovators in this country who are driving this technology, but it is not just market leadership; it's the choices that we're going to make to navigate all of the issues that we just discussed. So I hope everyone here will stay engaged, because it's going to take everybody to get there.

 

Ryan Heath  

You have been an extremely engaged audience, and I know we probably could have kept this going for a lot longer. I think some of us will be happy to take questions when we come off the stage, and others will need to get on to their next engagement, so we'll cut it off there. I'm sorry we didn't get to everybody, but thank you all for sticking with this discussion to the end of the day. And thank you to all of our panelists.
