James M. Lindsay - Senior Vice President, Director of Studies, and Maurice R. Greenberg Chair
Ester Fang - Associate Podcast Producer
Gabrielle Sierra - Editorial Director and Producer
Andrew Reddie - Associate research professor of public policy, University of California, Berkeley
Transcript
LINDSAY:
Welcome to The President's Inbox, a CFR podcast about the foreign policy challenges facing the United States. I'm Jim Lindsay, Director of Studies at the Council on Foreign Relations. This week's topic is the impact of artificial intelligence on warfare.
With me to discuss how AI is revolutionizing warfare is Andrew Reddie. Andrew is an associate research professor of public policy at the University of California, Berkeley's Goldman School of Public Policy. He is also the founder and faculty director for the Berkeley Risk and Security Lab. His work focuses on cybersecurity, nuclear weapons policy, war gaming, and emerging military technologies. Andrew, thank you for coming on The President's Inbox.
REDDIE:
Thanks so much, Jim.
LINDSAY:
Let's begin, Andrew, if we can, at the forty-thousand-foot level. When we talk about integrating artificial intelligence or AI into military operations, what exactly are we talking about? And is it all that new?
REDDIE:
So despite, I think, what you're going to be seeing in the media, in reality, the integration of these tools into military planning and military operations is certainly more of an evolution than a revolution. In fact, a lot of these tools have been used for decades for data analysis applications, from pattern recognition to anomaly detection. And of course, over the last ten to fifteen years, they have increasingly been used for back-end functions inside the military, so things like predictive logistics, predictive maintenance, those decision support applications.
What's increasingly invoked today are conversations about how some of the latest and greatest technologies might impact decision support. And of course, there are various views on this topic from those that think that that capability is going to induce a great deal of harm, and then, others who are quite excited about how some of these tools can really improve the efficiency, the speed, and the scale of the military as they seek to kind of perform their operations.
LINDSAY:
Okay. Andrew, let me ask you to unpack that because we're doing a little bit of military, Pentagon speak, and one of the things that is well known about Pentagon jargon is it can really obscure some of what's going on here.
REDDIE:
This is true.
LINDSAY:
When you use the phrase, it's an anodyne phrase, decision support, what are we talking about here? Give me sort of a better sense of what that means in practice.
REDDIE:
Yeah, absolutely. I mean, any type of military decision is likely to have some sort of data underlying it. So whether that's situational awareness, the awareness of what an adversary is doing in a particular geography, you have a series of sensors that are picking up all of that data; they can be satellite-based, they can be ground-based. And ultimately, that's going to be pulled into an operator who's going to be sitting in front of a terminal, adjudicating that data, carrying out analysis, and then supporting their superior officers as they make decisions in that context.
Now, what we've been discussing in the AI space is where we can start to fuse some of those data sources together. So previously, for example, satellite data might be kept separate from ground-based radar data or human intelligence data. And now, we're thinking about, well, how do you pull all those data streams together and try to get, "better decisions"?
Now, in reality, when we say, "better," in scare quotes, we're talking about faster decisions, potentially more efficient decisions, and then, decisions that are kind of based on larger pools of data given that, particularly in the American context, we tend to have silos, if you will, of various different types of intelligence. And that's kind of reflected in our intelligence apparatus where, for example, you have the CIA primarily supporting the collection and the analysis of human intelligence. The National Security Agency, NSA, really focused on signals intelligence. And of course, we know from various different commission reports, following various crises, that we're not the best at sharing. And so, really what we're talking about here is how do you kind of pull all that data together so that you get better decisions on the backend?
Of course, what's changing here is not, "Hey, look, before, we had military decision makers with no data backing up their decisions," it's just a different type of way of getting that information to the commander that's making all sorts of decisions, not necessarily all of them about warfighting or kinetic. You've got lots of decisions that have to be made about, like I said before, doing logistics, trainings, maintenance of capability.
And so, some of those failure modes can be quite mundane. For example, if I have my maintainers arriving at a foreign military base two days before the material that we use to retrofit a transport vessel, I'll get yelled at, but ultimately, nobody's going to pass away or be a casualty because of that decision. Of course, as we start to talk about other applications in other contexts, whether it be drones or others, that kind of failure mode obviously becomes more significant.
LINDSAY:
Okay, so artificial intelligence contributes to the military's ability to process information and theoretically allows you to combine lots of different information streams much more rapidly than a human being could.
REDDIE:
Absolutely.
LINDSAY:
But there's also talk about using AI, Andrew, in terms of things like target selection. I know there was controversy earlier this year over Israel's use of a program called Lavender to make target selection. So help me understand what the thinking is or the progress is on that score.
REDDIE:
Yeah, I mean, indeed it's one of the more "exciting" applications of artificial intelligence technologies in this space, but again, an evolution rather than a revolution. So the historians among your listeners will know that previously we had fairly static targeting plans. So for example, SIOP-62 during the Cold War, where our operators here in the United States would have a preselected set of targets that they would hit in the eventuality of a nuclear crisis, and there was no flexibility in that targeting plan.
And so, really, what's happened over time is that the president has sought increased flexibility and more options as they're thinking about those targeting plans. And so, you move from having one preselected target to having three options to having targets based on some sort of baseline in terms of, well, if the crisis looks like this, these are the sets of targets that you, the missileer or you, the bomber pilot or you, the submarine captain, that's where you're going to actually carry out your operation. And so, the integration of these new tools, if you will, kind of expands that optionality.
And of course, it can be used for maximizing or minimizing any particular variable of interest. And so, in reality, the problem with Lavender is not the fact that it's AI, it's not the AI-ness that's problematic, it's the conversation that we're having around the thresholds for the selection of the military target. And indeed, some of these tools could be used for "good" as well, insofar as one of the things you might want to do with some of these targeting algorithms is decrease the likelihood of civilian casualties, for example. So same technology deployed in a slightly different way, trying to minimize or maximize a different variable, has an incredibly different impact. And so, for me, looking at the policy conversation around that particular use case, it's not really the AI-ness that's the problem, it's kind of where they're thresholding the choice of target.
LINDSAY:
Andrew, I'd like to focus on this issue of who actually is using AI to inform their military operations. Obviously, we referred to the United States using it in a wide variety of applications, and the Israelis have. Who else, that we know of, is using AI?
REDDIE:
I mean, it's all your usual suspects, and again, it's kind of in gradations of use, if you will. There's no kind of push to deploy this capability widespread for making kinetic decisions or-
LINDSAY:
Wait. I got to stop you right there. When you say kinetic decisions, I know that's Pentagon lingo, what does it mean in the real world?
REDDIE:
A decision that could actually ultimately lead to casualties on the part of the adversary, so when you're talking about launching a payload, warhead, or...One of the things that actually complicates this conversation a little bit is that there's a tendency, particularly in the governance conversation, to pretend that we actually haven't deployed these types of systems when that's in fact not true. So SIPRI, one of the think tanks out in Europe, does some really good work counting the numbers of autonomous systems that are already deployed, and they reckon that there's something on the order of fifty-five to eighty, and kind of the best example of that type of system is a theater-based missile defense architecture, which of course, wouldn't work if they had a human sitting on the loop. Now the problem is that a missile defense architecture is observationally equivalent to a missile that could be used for offensive purposes as well.
LINDSAY:
Okay. And just to get some clarity there, Andrew, we were talking about AI and now you introduced the term autonomous systems, are they different words for the same thing? Different things?
REDDIE:
It depends on whose definition you like. There is no settled definition of AI.
LINDSAY:
You do have a PhD.
REDDIE:
Yeah, there is no settled definition. From my perspective, I think it kind of is a gradation in terms of where the human falls inside the decision-making process. And I think that's kind of the appropriate way of thinking about it. Of course, the U.S. Defense Department has language around human-out-of-the-loop, human-in-the-loop, and then, in between, human-on-the-loop, and that's kind of how they think about that particular type of gradation.
Ultimately, when you've got autonomous systems, the human's entirely out of the conversation. And indeed, when we think about various different examples of drone warfare, for example, we'll have a drone pilot piloting multiple aircraft at the same time, and what's happening is that they're sending drones to target entirely autonomously. And then, when the drone's ready to do something kinetic, that is to say right payload on target, that's when the operator will actually pick it up and then make that determination about whether to use the capability.
LINDSAY:
Okay. I want to come back to this issue of human at the helm, as you've spoken about before, Andrew, but I just want to talk a little bit more about the capabilities that other countries have in this space. Obviously, the countries that come to the fore would be China-
REDDIE:
China and Russia.
LINDSAY:
... Russia, Iran, perhaps, North Korea. Do we have a good sense of what their capabilities are, how far they have gone in using artificial intelligence in their military? What guidelines or red lines they're observing or not observing?
REDDIE:
Yeah. So by virtue of some of the documents that were collected after the fall of the Soviet Union, we have a fairly decent sense of how the Russians think about this technology. So one of the systems that you'll often hear about in conversations around military AI integration is the Perimeter, Dead Hand system that the Soviets were supposed to have developed in the 1980s that, ultimately, in the event of a nuclear war, would've launched a nuclear weapon in the absence of a human making that decision. And if you believe in nuclear deterrence, that can be a really useful thing because then your adversary will be deterred from trying to perform a decapitation strike. Here in the United States, we've had-
LINDSAY:
Let me just ask you a question about that, Andrew. Is the Dead Hand system dead or has that system been revised or advanced under the Putin regime?
REDDIE:
To the best of our knowledge, it's been revised and is in some semblance of use. And so, there's a fair amount of hand-wringing about the degree to which it was entirely human-out-of-the-loop in the first place. So lots of open debates among those that spend a lot of time studying Russia. But among the various different exotic systems that Putin seems to deploy in the nuclear sphere, it's hardly a surprise that you're hearing reports that they're kind of bringing these systems back.
And of course, in the U.S. we're having the same conversation about whether we ought to be deploying it too. There's various different articles in the likes of War on the Rocks where they're saying, "Hey, the U.S. should develop this type of system too." It makes me very uncomfortable, but it's a conversation that's out there.
LINDSAY:
Okay. Again, I want to come back to this topic, but if you just give me a bit more on what we know about China's efforts, perhaps the Iranians or North Koreans.
REDDIE:
Yeah, so the Chinese, it appears so. Their strategy documents have called for what's called the intelligentization of their military capabilities. And of course, the great virtue that the Chinese state has in this space is that it has access to a great deal of data, because the separation between their government-military complex and their private industry is not nearly as sharp as it is here in the United States. So lots of conversations about how ultimately they're able to leverage that data and deploy new types of tools.
Now in reality, and we can look to various different other examples of what warfare seems to be looking like in 2023, 2024, it doesn't seem to be moving the needle in terms of them making decisions about the Taiwan Straits crisis, for example, but I think the odds are good that they're using these tools in very similar ways to the United States.
The North Koreans and the Iranians are, unsurprisingly, far less developed, but it's certainly something that you'll see in various policy documents coming out of, well, the Iranian case at least. Learning anything in the open about what the North Koreans are doing is very difficult, and so, that's kind of more postulation.
LINDSAY:
Andrew, do we have a sense of what the barriers to entry might be for countries to adopt AI and integrate it into their militaries? I mean, when we look back at nuclear weapons, the origins of the nuclear age, one of the characteristics of it was that the barriers to entry were very high. It really took a great deal of state wealth and capacity to be able to build nuclear weapons. Is that the case with artificial intelligence and automating weapons systems or are we moving to a position where you could buy this stuff off the shelf and anyone with a high school degree could begin to do it?
REDDIE:
So the barrier is certainly lower. That said, I can send my students to Doe Library on Berkeley's campus, and they can also learn, at least in theory, how to build an atomic bomb, so-
LINDSAY:
But they don't have the machine tools to do so. They don't have access to the-
REDDIE:
Exactly. The precursors. Exactly. And so, here, what we're talking about when we say AI really matters. And so, if we're talking about AI applications at the edge, not foundation model type tools, very, very easy. So it's deploying the data sets that you have against your particular use case and you're off to the races. If you're talking about the creation of foundation models that might be useful in a military context, there, the limiting factor, if you will, is really compute. And so, that's why here in the United States, you'll see lots of conversations about export control and trying to stop the travel of Nvidia GPUs to China. Now, of course, wherever the export control bar is set, Nvidia will stay right below that bar and try to get as much of the market as they possibly can. But really, compute has been the limiting factor. And of course, on the other side of it, for the foundation model use case, the training data is also a limiting factor as well.
LINDSAY:
And I would assume, from the vantage point of Washington, DC, it matters less whether a military is able to master AI to do its logistics more efficiently than whether AI is used in targeting or decision-making on the use of weapons. Is that a fair assessment?
REDDIE:
It is. Although, there are also some really interesting conversations about when you want your adversary AI systems to actually work fairly well. And so-
LINDSAY:
Tell me about that.
REDDIE:
Yeah, it's very similar to the conversations we were having about nuclear technical assistance to Pakistan in the 1990s. And so, if you are worried about an adversary deploying a system that's unstable and does things that are unexpected, you're increasing the likelihood of potential accidents. And this is something that actually keeps me up at night. So if you have an adversary who calls the hotline and says, "Hey, look, the system is not behaving as we would expect," and it ultimately still does something to an ally or a partner, what does that mean for how the United States is going to respond in terms of Article 5 commitments if it's a NATO country? And so, you can create all of these scenarios, which is what we do with our war-gaming work, all of these scenarios that can quickly spin out of control. And that's something that we worry about quite a lot.
And so, when we talk about testing and evaluation tools, one of the discussions is actually, can we create a global framework for the sharing of those testing and evaluation tools such that we're able to raise the bar in how these systems are being used? Of course, so long as it doesn't give away any of the secret sauce about how these models are actually behaving. In all the conversations that I've been having with the various different AI companies, they're fairly certain that you can share some of these best tools and methods without giving anything away. And so, insofar as these systems are going to be used no matter what, one of the arguments is, well, let's share what we can so that everybody's doing it as safely as possible, and that, of course, includes our adversaries. And you'll see that reflected in some of the conversations. So the Bletchley Park Summit, for example, that the United Kingdom put together in November, it included the Chinese and the Americans as a part of that broader AI safety discussion.
LINDSAY:
And I should note your reference to Pakistan and nuclear weapons in the '90s is the concern that if a country gets nuclear weapons, you want to make sure that those nuclear weapons can't be easily stolen or accidentally used. And that creates some dilemmas for policymakers, obviously, particularly if you just spent years trying to tell that country not to acquire the technology.
Let's go back to the question of the human at the helm or the person-in-the-loop, the variety of formulations there. Why do we focus so much on it, Andrew, when there's a lot of evidence, as you suggested, that human beings actually aren't terribly good at making decisions? Or maybe a better way to put it is that there are known biases in human decision-making. The one that comes up most in domestic circumstances, when we're talking about artificial intelligence, is driverless cars: when there's an accident, everybody gets greatly concerned about why we were relying on this technology, when the reality is that human beings get in cars and crash them every day and no one notices. So help me think through the issues raised by automation, artificial intelligence, and relying on autonomous systems.
REDDIE:
I love this point. And so, this actually came up in one of the war games we carried out for the Department of State about a year ago, where one of the players said, "Hey, look, there's no guarantee that my decision is going to be any better than this AI decision support tool." And ultimately, they're absolutely right. I think one of the things that makes us feel comfortable about a human making a decision is that, ultimately, liability and responsibility live somewhere that we can grasp on to. Of course, the autonomous vehicle scenario is kind of interesting because it would appear that, at least statistically, those vehicles crash less than vehicles that are driven by human beings. And so, ultimately, the conversation that we have to have is about whether the juice is worth the squeeze: you lose the ability to point to somebody responsible, but you're getting some efficiency gains somewhere else.
You'll also see this come up in the discussions around using these AI tools to try to give decision makers more time. So in things like nuclear crises, for example, one of the things that we keep trying to do is buy the president more time to make decisions. Now, of course, I can make a good decision in a short period of time, and I can also make a bad decision over a very long period of time. And in reality, the relationship between time and the goodness and badness of decision-making is contingent. But that said, that's kind of where we kind of have this false sense of, "Well, if I give the president lots and lots of time, the decision is going to be better on the back end."
LINDSAY:
Yeah. And there's also, obviously, the pressure of making decisions under time constraints, but there's also the issue of getting good information, and the theory, at least, is that AI can generate better information than an individual would, much more rapidly. Again, I know there's a lot of talk about how AI can improve diagnoses in medicine because it can simply outperform human beings, but again, you still get back to this question: if you turn it over to a human being, they may have more information, but it doesn't mean they'll make a better decision.
But there seems to be an awful lot of focus about the issue of autonomy and nuclear weapons in particular. I note that earlier this month, a State Department official called on China and Russia to follow the United States in saying that any use of nuclear weapons will be made by a flesh and blood human being. Now, help me understand that. Is that actually U.S. policy? And how much actual autonomy would a human decision maker have in such a situation given time constraints and the like, responses from China and Russia, if at all?
REDDIE:
Yeah, so there's a lot to unpack here. I mean, a lot of this conversation is driven by...So the U.S. is in the middle of a modernization cycle. It's modernizing all three parts of our triad, so the boomers, the bombers, and the ICBMs.
LINDSAY:
Air, land and sea as we say.
REDDIE:
Yes, exactly, air, land and sea. And as part of those conversations, STRATCOM Commander Hyten in the late-2010s really made an argument that we ought to be modernizing our nuclear command and control architecture at the same time because ultimately, that network knits all of these capabilities together and is a single point of weakness. And there were various conversations about the cyber threats to that particular network. And as a part of that conversation, there was this AI-NC3 integration discussion. Now, as I mentioned before, we've already integrated various AI tools in parts of the nuclear command and control architecture, particularly for early warning. So that's the pattern recognition, anomaly detection that I talked about before.
And then, the conversation became, well, how much of the communications can I cut the human out of? Because ultimately, the argument for AI tools in this space is that you're getting so much sensor data there's no way that a human operator could actually adjudicate all of it. And so, instead of having a single human operator sitting over the signals intelligence and one over the image intelligence, now you fuse them all together and you get to skip over that analyst level and get it straight into the decision maker. Of course, the problem with that is that you kind of lose the context that that individual might have to help the decision makers adjudicate whether they ought to be doing something or not.
And that's really the thing that we're trying to make sure that the Russians and the Chinese keep as part of their nuclear decision making doctrines. And so, the U.S. perspective is that there's a lot of value to be had from a human ultimately making the decision based on the information that they have available, whether to go or no go.
The kind of apocryphal concern is really evident in a video that was launched by the Future of Life Institute called TLDR, where they were showing operators making decisions about performing a particular military action coming straight out of an AI model. So for example, it was if China does this in Taiwan, then I respond in this way. And what they were showing was the human operators just basically rubber-stamping the decisions that the AI recommenders were giving to them. And so, that's kind of what we worry about in this space. And so, the U.S. has said, "Hey, we're not going to introduce autonomy to making nuclear decisions, so you, Beijing or Moscow, ought not to as well." Now of course, that cuts against the Perimeter, Dead Hand capability that we talked about before.
LINDSAY:
Andrew, let me draw you out on that because a lot of the discussion about applying AI assumes that AI is going to give you better information faster so that you can increase the chances that humans, if they are in the loop are going to make better decisions. But I've read a lot about large language models, AI being subject to hallucinations and actually providing the wrong information. Do we have a sense of how likely that is to happen? I mean, how do you rely on a system for life and death decisions when there is a probability that it could give you exactly the wrong information?
REDDIE:
Yeah, so it's a great question, Jim. I think one of the arguments that the AI companies are making is that they're getting better at controlling some of those hallucinations and getting past them moving forward. But really, the problem is a little bit more first order than is often described in that kind of use case. We don't have training data for making nuclear decisions. Certainly as a political scientist, I would be very uncomfortable with the idea that policymakers are going to be using the Correlates of War data set or the Militarized Interstate Dispute data set to be making these types of decisions. In the absence of empirical training data, they might seek to create it synthetically. And so, that's where you bring in computer-based modeling and simulation or some of the war-gaming work that certainly we do. But again, I would be very uncomfortable if we're going to be making nuclear decisions based on inputs that are derived from any particular war game scenario as well.
And so, we have a first order problem, which is that in some of the nuclear use cases for actual decision making, there is no training data that's going to be relevant to making that type of determination. Indeed, if I built you a model in January of 2022 around the conditions under which military exercises lead to conflict, my model would not have predicted that the Russians would've invaded Ukraine because we've had thousands of military exercises, let's say, bounded over the last decade, that did not lead to conflict. Of course, our intelligence agencies ultimately made the determination that this time was different, and they had various reasons for doing so, for example, the blood banks being full on the Belarusian border. And so, they were able to kind of make that prediction, but an AI model would not have made that prediction.
LINDSAY:
So Andrew, let's talk about efforts to regulate the reliance on artificial intelligence in military operations. And obviously, it's not going to be eliminated, as you pointed out, countries have been relying on artificial intelligence for years to guide military operations and decision-making. There was a meeting in late March in Washington that produced something known as the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. Perhaps you can tell us a little bit about what that was. And are there ongoing efforts among the major powers to try to create rules of the road for reliance on AI?
REDDIE:
Perfect. Yeah, I'm glad you spelled out the Political Declaration's full name, so I'll just call it the Political Declaration from here on out. It's an unwieldy title. So in fact, that Political Declaration really started about a year earlier, and I think it's a little bit of a model for how the U.S. ought to be thinking about regulation in emerging technology spaces moving forward. And so, the way that this effort started was in a conversation that was happening internal to government between the State Department and the Defense Department. They basically crafted a unilateral declaration regarding how the U.S. military viewed the use of AI tools in warfare. And of course, this Political Declaration built directly upon the Department of Defense's own AI principles. And those principles are exactly what you would expect: you're trying to use these capabilities as ethically as possible, trying to make sure that they're trustworthy, deployed against appropriate use cases, and don't have the potential for significant failure modes.
And so, taking the DOD's AI principles that came out of the JAIC, the Joint AI Center, and moving them over to the Political Declaration conversation, the State Department then took it and started to open it up to other countries for signature, and of course, all of the likely parties that you'd expect among the U.S. partners and allies glommed onto it and have taken part in various meetings.
And I think one of the really interesting things about the declaration is that it's actually matured and changed since those other countries have become a part of it. And the meeting in March was the first meeting of all the states parties that have signed on to that agreement. Of course, a lot of European countries, allies and partners across the globe from the U.S., were a part of that conversation. I believe there are over fifty signatories at this point.
And so, it really has been moving the ball forward in a way that the more "traditional" UN processes haven't. So there's a group of governmental experts at the UN focused on lethal autonomous weapons that hasn't really moved much, just by virtue of the fact that there's a lot of disagreement between, again, the likely suspects, China and Russia on the one hand and the U.S. and its partners and allies on the other, about what AI even might be. There's a lot of conversation about definitional issues. And so, the Political Declaration is really exciting because it moves the ball forward in a way that those broader global governance conversations haven't been able to yet.
LINDSAY:
Andrew, what do we know about conversations between Washington and Beijing? When President Biden met with President Xi on the sidelines of the APEC Conference in San Francisco last November, among other things, they agreed to start intergovernmental dialogue on AI. Late last month, Secretary of State Tony Blinken said that the first high level talks on artificial intelligence are going to begin in the coming weeks. What are they going to be talking about? And when are they actually going to start?
REDDIE:
Yeah, so like you mentioned, they are supposed to be happening at some point in May of 2024. And it's been a long time since November. I had some conversations with my State Department colleagues in January saying, "Hey, look, where's this meeting? When's it going to happen? And what's going to be inside of it?" I think that, ultimately, the agenda is not in the open. What I hope is discussed are some of the conversations around use cases where, ultimately, both sides believe it's inappropriate to kind of use these capabilities. And also, is there a path forward on kind of sharing some of the technical tools to drive down risk where we can? So what does red teaming look like across both countries? What does testing and evaluation look like across both countries? How are we thinking about the incorporation of foundation model capabilities inside of the militaries of each of the two countries as opposed to kind of AI at the edge or some of those more non-problematic AI applications that we mentioned before? So those are the types of things that I expect to be discussed.
But I think that's really important because in other venues, there's a lot of pessimism about the degree to which Beijing and Washington are able to talk to one another. Nuclear arms control is a really obvious example. But I think what this conversation demonstrates is that there are still places where erstwhile adversaries can come together and cooperate on a proximate problem of concern.
And again, I think one of the things that's really important is that the Chinese were actually in the room for the AI Safety Summit in the UK and are a constituent in trying to make sure that these capabilities are going to be safe. It's also worth noting that the Chinese have their own version of the Political Declaration that they have with their own kind of partners in the region, more or less following the lines of the Belt and Road initiative in terms of membership. And so, I think there are common interests in trying to make sure that these capabilities are safe. Now, of course, that doesn't necessarily mean that both sides aren't competing with one another over the capability.
LINDSAY:
Point taken, Andrew. I suspect this is an issue that's going to bubble along for years to come. For anyone who wants to have a better understanding of AI and its military applications, are there a few things that you might suggest they check out and read?
REDDIE:
Yeah, absolutely. So I'll plug CFR's own Foreign Affairs. My colleague at Stanford Jackie Schneider just wrote a wonderful piece that kind of got to grips with some of the dangers associated with AI military integration. I will say that in a lot of our work, whether it be war-gaming work or survey experiments, there are real questions about the appetite for actually integrating some of these tools to make kinetic decisions. But that's definitely a really fantastic starting point. And of course, on our own end, we have various different conversations that are published in Lawfare with colleagues from CNAS, really focused on what the future of red teaming might look like, why it's so important to engage in these governance conversations as well. So those are the places that I suggest that people look.
LINDSAY:
I'll make sure those pieces show up in the show notes for the episode. In the meantime, we're going to close up The President's Inbox for the moment. My guest has been Andrew Reddie, an associate research professor of public policy at the University of California, Berkeley's Goldman School of Public Policy. Andrew, thank you very much for taking the time to chat with me.
REDDIE:
Thanks so much, Jim.
LINDSAY:
Please subscribe to The President's Inbox on Apple Podcasts, YouTube, Spotify or wherever you listen. And leave us your review, we love the feedback. You can email us at [email protected]. The publications mentioned in this episode and a transcript of our conversation are available on the podcast page for The President's Inbox on cfr.org. As always, opinions expressed on The President's Inbox are solely those of the host or our guests, not of CFR, which takes no institutional positions on matters of policy.
Today's episode was produced by Ester Fang, with Director of Podcasting Gabrielle Sierra. Bryan Mendives was our recording engineer. Special thanks go out to Michelle Kurilla for her research assistance and to Justin Schuster for his editing assistance. This is Jim Lindsay. Thanks for listening.
Show Notes
Mentioned on the Episode
Alan Hickey, Andrew Reddie, Sarah Shoker, and Leah Walker, “New Tools Are Needed to Address the Risks Posed by AI-Military Integration,” Lawfare
Max Lamparth and Jacquelyn Schneider, “Why the Military Can't Trust AI,” Foreign Affairs
“Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy,” U.S. Department of State