James M. Lindsay, Senior Vice President, Director of Studies, and Maurice R. Greenberg Chair
Ester Fang - Associate Podcast Producer
Gabrielle Sierra - Editorial Director and Producer
Jessica Brandt
Transcript
LINDSAY:
Welcome to The President's Inbox, a CFR podcast about the foreign policy challenges facing the United States. I'm Jim Lindsay, Director of Studies at the Council on Foreign Relations. This week's topic is artificial intelligence in the 2024 U.S. presidential election.
With me to discuss how AI might affect the 2024 U.S. elections is Jessica Brandt. Jessica is the policy director for the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, where she is a fellow in the Foreign Policy Program's Strobe Talbott Center for Security, Strategy, and Technology. Before joining Brookings, she was the head of policy and research for the Alliance for Securing Democracy and a senior fellow at the German Marshall Fund of the United States. Jessica has written extensively on foreign interference in U.S. politics and on the implications of emerging technologies for liberal democracies. This episode is part of the Council on Foreign Relations' Diamonstein-Spielvogel Project on the Future of Democracy. Jessica, thank you for joining me.
BRANDT:
Thanks so much for having me.
LINDSAY:
We are now less than fourteen months out from the U.S. general election, which will select not just the next president, but a third of the Senate, all of the U.S. House, and governorships and state legislatures across the country. Now, it's well known that U.S. elections have been particularly fractious in recent years, and into this mix suddenly comes artificial intelligence. Does the appearance of AI on the scene, Jessica, suggest we might repair those divisions or deepen them?
BRANDT:
I think we're going to deepen them. I mean, I think the explosion that we've seen in generative AI is going to exacerbate or turbocharge some of the challenges we've been grappling with, as you alluded to, for some time. So I see sort of four big buckets of challenges. One is deepfake technologies, which will give actors the ability to create realistic audio, video, and images. I think that could be used to do things like manufacture an October surprise, or fuel election conspiracies, right?
We know in 2020, images of supposedly discarded ballots were used in this way, for example, or we could get audio of a candidate saying something objectionable about another party. We talk a lot about deepfakes of the major presidential candidates, and of course, that's an option, but there are a lot of ways that manufactured content could be used to shape the reality of our election. And I would say the targets aren't just going to be candidates. I think journalists are important targets too. Prominent journalists are, I would say, institutions of our democracy in their own right, and therefore maybe targets especially of foreign actors.
LINDSAY:
So we have four buckets, one of which is deepfakes. What would the other three buckets be?
BRANDT:
There is the sheer volume of unique content, which could be used to overwhelm systems that take input from the public, things like notice-and-comment processes, but also just the inboxes of our elected representatives, and can make it really hard to determine what is real and warrants a response. So it's partly that it could sway candidates' and elected officials' sense of where the public is, but it's also that it could just make it hard for democratic governments to be responsive to their citizens, and that's a big problem, especially in the contest between authoritarians and liberal democracies. Maybe we can get to that.
Also here, I would say A/B testing messages, right? If it's almost costless to create messages, then you can create spaghetti and throw it at the wall and just test and test and test to see what works, so it's not the volume of content alone, but the ability to make it more persuasive, to refine the message to figure out what works, and it's just going to be a lot harder to detect this stuff because one of the easy ways that we found some of the original kind of bot content was that it was copy and paste. It was the same thing over and over and over again, but if we can be endlessly unique about it, then that'll be much harder. So that's the second bucket, the volume of content.
Then, I guess related to that, as I've alluded to, is just personalization and persuasiveness, right? Imagine much, much better phishing attempts, right? That's how the Russians got in in 2016. I'm thinking also about chatbots that could ... What happens if you're picking up the phone and you're talking to a computer on the other end, and it's listening to what your concerns are, and then spitting back messages that are responsive to your concerns? Right? This is something we haven't grappled with yet, but it's a plausible reality, and then just better targeting of vulnerable voters.
LINDSAY:
That sounds like the ultimate robocall if it can actually respond to what you're saying.
BRANDT:
Totally, content that's personalized, and persuasive, and real-time, and also virtually unidentifiable, right? One of the issues here is that if this conversation is happening on the phone and nobody knows that it's happening, how can candidates push back against messaging that's mischaracterizing their ... So we kind of operate on this marketplace of ideas model, but it doesn't really work if the marketplace isn't wholly open.
LINDSAY:
If it's not in the open market, but in the underground, you won't notice it.
BRANDT:
Exactly. And also, think about this happening in closed WhatsApp groups, on Signal channels, and in other places where the democratic model of the contest of ideas is really challenged. So that's the third bucket, the personalization and persuasiveness. Then, I think the most important one is just nihilism about the existence of objective truth. Once we live in a world where we feel like we can't trust what we see with our own eyes, because we know about deepfake videos and audio and everything can be dismissed as a deepfake or as AI-generated content, or we feel as though we can't rely on trusted sources of information because they might be manipulated, that really erodes what I see as one of the foundations of our democracy, which is that democracies depend on the idea that the truth is knowable.
Citizens can discern it, they can use it to govern themselves, and if we erode that, or in the case of the actors that I study, especially Russia and China, if they're able to erode that, I think it really has important implications for our democratic foundations and our position in this broader contest that I've gestured at.
LINDSAY:
Jessica, let's dive into those four buckets a little bit because I think there's a lot there. I think one question that might immediately come to mind, particularly about deepfakes, is the fact that information has been misconstrued before. You offered up the example back in 2020 of photos allegedly showing ballots that were being mishandled. Now, those weren't deepfakes. They were just photos that fed a narrative that, at least some people wanted to believe. So will deepfakes really be that different than what we've seen in the past?
BRANDT:
It's a great question. I think this is why I said a little earlier that this turbocharges challenges we're already grappling with, more so than creating a new category of problem. Think about what we call shallowfakes, right? Just slowing down a video of Nancy Pelosi to make her appear as though she was intoxicated, or speeding up that Jim Acosta video, right? That's not even a deepfake, and as you've said, just Photoshop, or even just photos taken out of context, can create a false impression. So I think you're right that when we talk about deepfakes, we're not necessarily talking about a challenge of a new kind, but it's at a different order of magnitude.
As I said, I think we spend a lot of attention thinking about deepfake videos of the major candidates. I think those would be likely to be debunked very, very quickly, and so I'm much more concerned about these kind of lower-level, or what appear to be lower-level, targets, such as election officials. You could see a clip of an election official, say a reported leak of a phone conversation, suggesting that there's been malfeasance, as part of a broader information operation, and then an army of folks on Twitter, or whatever succeeds it, spinning up messages that we see across platforms. It's about the way that information travels across our information environment, and not just platforms, but news outlets, et cetera. I will say the nihilism-about-the-truth problem feels very, very real to me. Again, it's not a problem that we're unaccustomed to.
We have that problem already, but there, I do think that deepfakes, and just the mere understanding that a deepfake is a distinct possibility, will exacerbate that problem whether or not the technology is used. There was a lot of conversation in 2020 about perception hacking, which was this technique of relying on the anticipation that manipulation might happen to claim that it has, whether or not you've actually manipulated an election, and I think there's something kind of analogous here, right?
And there's a good reason why the Russians used that approach. They used it in 2016 too, right? They, I think, understand that you don't have to actually manipulate an election at scale in order to claim that you did and create that perception, and the perception alone is damaging. Policymakers face very real challenges around trying to inform the public about the possibility of manipulation without feeding into this broader ecosystem of mistrust. So to get back to your original question, I think the biggest impact of deepfakes will be this kind of nihilism about the truth, which I think is ultimately so damaging.
LINDSAY:
I take your point, Jessica, that in some respects, this is about throwing sand into the gears of democracy, creating questions, creating doubts so people can't be sure what the truth is, and I also take the point that timing could be critical in the use of some of these technologies. You mentioned the October surprise. I've been around a long time, so it seems to me almost every presidential election, someone is talking about an October surprise, when the opposing camp is going to do something that could change the course of the election. The point here being that you could have something that looks pretty credible that you couldn't debunk quickly enough. I would imagine this would be an even bigger issue if you have people interfering in "smaller" races, so to speak, smaller in air quotes: mayoral races, maybe a contested House seat, which isn't going to bring the focus of The New York Times, the Washington Post, the Wall Street Journal, and the rest of the media complex. By the time the truth gets its shoes on, the lie has already traveled the world several times.
Is there any potential, Jessica, for technology to be a solution here? Recently, Google DeepMind has claimed that it's developed a technology that would create an unalterable watermark indicating that content is AI-generated, so you could look at something and say, "Ah, that's not a true event, because it was created; we can see the watermark." Do you see technology as potentially providing a solution here, whether on deepfakes or on large language models?
BRANDT:
I would say, "Yes, and." There's definitely an important conversation underway about content provenance techniques, and there are companies that are working on this, sort of public private partnerships and various consortia, and I think it can be an important measure. So I certainly want to see efforts to innovate in that space to continue, but in some respects, it's a little bit of like a cultural adoption problem. It's not just that we need the technology, we also need sort of widespread uptake of this technology. We both need many platforms to kind of create the architecture where it could be used, and then we also need people to be kind of literate, media literacy efforts that would help people to understand what they're seeing. Broadly, I think this is the right approach or the kind of approach that I'm in favor of, which is helping people to understand the content that they encounter online so they can make their own judgments. When we think about kind of content moderation approaches, this is a sort of transparency enhancing approach that, I think can be useful.
On the other hand, I think right now, these technologies aren't great at identifying generated content, and so my worry is that if we label some of this content and not other content, we're inadvertently blessing the content that does not carry labels, and so if we're missing more than we're catching, there might be a perverse effect to labeling. Then, also, what are we talking about when we talk about AI-generated content? I think this is a problem that's solvable, but we're not really talking about using Photoshop to make yourself look better in a photo you post online. We're talking about things that are wholly generated. There's a gradient of content here.
Context matters a great deal, and so I think there's a whole bunch of questions that aren't really technical questions about whether you can identify manipulation or alteration, but about whether this is the kind of alteration we're talking about, and how we help people to understand the difference between tweaking a photo in Photoshop and something that's wholly generated with manipulative intent, something likely to suppress voters' intent to vote and create all kinds of other election-related problems.
LINDSAY:
Obviously, as we're talking about this, we seem to be talking a lot about deepfakes, but I want to go back to your point about being able to use AI to generate large amounts of personalized responses to people's queries, and that raises a whole different set of issues. It's not necessarily misleading to try to find a more persuasive way to reach the audiences you're pursuing. I will note that the Democratic Party has already started running tests that use AI to try to write effective fundraising messages. Obviously, if you're in American politics, you're hoping to raise money so that you can win campaigns. So I'm not sure how you deal with that issue.
BRANDT:
Yeah. I mean, you could also imagine trustworthy sources of election information making robocalls that get people the right information about where they can vote. So these are just tools, and they can be ... It depends a great deal on the hands in which they're put, and so I think you're right. We're going to see all kinds of political actors, whether they're issue advocacy groups, campaigns, or government institutions. They're all going to be using these tools, some for good and some less so.
I mean, I think this points toward the need for a whole-of-society approach. As I've mentioned, I think measures that help us to restore transparency and give context are helpful. So I'd like to see things like the FEC making sure that its disclosure requirements for political ads cover paid influencers that might be using generative AI, and we could ask social media companies to verify the authentic accounts of trustworthy sources of election information. This is something I've been arguing for, actually, for quite some time, but I think in an environment where we expect just a morass of information, some credible, some not, helping people to know where they can go for a trustworthy source is really vital. CISA, I think, could be helping election officials to-
LINDSAY:
CISA is?
BRANDT:
The Cybersecurity and Infrastructure Security Agency. It's within the Department of Homeland Security, and it is well-positioned to equip election officials on a wide variety of the kinds of challenges that they might face. I think they're perfectly positioned to be resourcing election officials on these issues, helping them to better defend against advanced phishing techniques and all of the rest. So that's the direction I think we should be thinking in: "How do we build a framework that is resilient, knowing that these tools are coming and that they'll be adopted by a wide variety of actors toward very different ends?"
LINDSAY:
Jessica, I want to pick up on your point about the distinction between tools and actors. Obviously, tools can be used for good purposes, and they can be used for bad purposes. Let's spend a little bit of time talking about who might use them. We've already begun that conversation in part, but I'm wondering whether there aren't norms that would inhibit major political actors and mainstream institutions from misusing AI in the ways we've already suggested, given the price they might pay, or fear they might pay, if they're found out.
I'll note that earlier this year, GOP presidential candidate Governor Ron DeSantis of Florida posted AI-generated images of former President Donald Trump hugging and kissing Tony Fauci. Obviously, for many Republicans, particularly members of the Republican base, Tony Fauci is not a popular person, but Governor DeSantis got a lot of backlash for that in his campaign. Do you think norms could hold up, in part, against misuse of AI technology?
BRANDT:
I do. I think norms are incredibly important here. I mean, even ... When I think about election related disinformation challenges in a pre-generative AI era, we want our candidates to say, "I will not accept or use weaponized information in my campaign." It's not something that the federal government or platforms can make happen by waving a wand. It requires actors that are central to that undertaking to commit not to doing it, and so I do think, to your point, the public can play a role in sort of imposing a cost on those, especially political actors who are irresponsible in their use of these technologies.
There are always going to be people, always going to be actors, whose market differentiator is going around or flouting some of these norms. So I don't expect that norms alone are the solution, but as you said, can they be part of it? I think they're a very important part of the solution, because all of the ideas that I threw out just a minute ago have holes in them, so we need all these layers working together.
LINDSAY:
Well, Jessica, you're quite right that norms don't hold up against bad actors, because bad actors are more than willing to break norms, and indeed, when we're talking about foreign election interference, breaking norms may be the whole point. So sort of walk us through how we should think about the potential for hostile countries. And we're probably here thinking Russia, we're thinking China, we're thinking Iran, North Korea. How might they use AI to disrupt, or interfere, or muddle in U.S. elections?
BRANDT:
Yeah, it's a great question. Just to link this back to what you just said about norms, I think for Russia and for China in particular, some of this is about making the world safe for their own illiberal practices, legitimating their uses of digital technology, right? I don't think they seek a world converted to their way of doing business, but they want the world to be safe for it.
LINDSAY:
And having chaos in the United States serves their geopolitical interests.
BRANDT:
I think it serves Russia's goals. With Russia and China, there are important overlaps, but there are also important distinctions, right? Russia is a declining power, and I think it's seeking to compensate for its relative weakness by disrupting, to use your word, the partnerships, institutions, and democratic political processes of its competitor states. It wants to do that right now, and it doesn't care about attribution, because if we're talking about Russia, if we're talking about them, it actually makes them important and-
LINDSAY:
They want to be part of the conversation.
BRANDT:
Exactly, which I think points to some challenges that we might face with mitigation efforts, because they're just not going to be sensitive to attribution. And I think this is why their activities are destructive. The chaos is the point: if we're distracted and we're divided, we're not playing a more forward-leaning role in the world that might run contrary to Russia's interests.
China, on the other hand, is a rising power. It has a lot to lose from the exposure of its destabilizing activities. It does seek a stable order. It just doesn't want an order that we lead, or it wants an order that's more favorable to its way of doing business, and so it's very happy to capitalize on Russia's chaos operations, but chaos is very much not the goal for China. And where Russia does not care how people view Russia, the goal of its operations is never to make you think positively about the Kremlin or about Russia, China very much cares about its image, right? The goal of its information operations is to present itself as a responsible global leader and an attractive alternative to the United States as a hegemon, and so that's why I don't think ... You haven't seen China ... In 2020, they considered but decided against election interference operations. That's just not their game.
LINDSAY:
Well, it's not their game in the United States. They've interfered in elections and politics around the world.
BRANDT:
Sure.
LINDSAY:
Our neighbor to the north, Canada, and Australia. I take your point that the Chinese don't want to get caught, but China certainly benefits.
BRANDT:
It's a more subtle system of inducements and kind of co-opting political leaders and swaying public opinion. Russia doesn't care about convincing us of any one opinion, but just making democracy appear feckless and ineffective. China does care about convincing us of pro-China positions.
LINDSAY:
Yeah, but I would imagine Beijing is perfectly fine if the United States can be shown to be divided and feckless, 'cause it plays into their argument that the United States is not a reliable actor, it's not a reliable partner, and China, with its new model, can actually deliver benefits that the declining, decaying Western powers, led by the United States, can't. That's their characterization, not mine.
BRANDT:
Couldn't have said it better myself. I think they come in behind. They benefit from ... There's a lot of debate about whether Russia and China are coordinating. I think that a little bit misses the point, because, especially in the information environment, they don't have to be intentionally, explicitly coordinating for their actions to be compounding and to have an accelerating impact, and so I think ...
Think about Russian and Chinese messaging around the Ukraine crisis, for example. Why is it that we saw China parrot Russian messaging when it came to laying the blame at the feet of NATO and the United States? Because, again, as you say, their interests are common, they share these targets, and they share these near-term goals, but they've declined to endorse Putin's invasion wholesale, because there are reasons why that's challenging for their vision of sovereignty. And why is it that we saw China amplifying the conspiracy theory around the Fort Detrick lab, all of the biological weapons conspiracy theories? It's because it served China's interests in diverting blame for its early mishandling of the pandemic, right? It wants to exacerbate skepticism about just these sorts of labs for those reasons, and so I think they have different goals, they work together where it's convenient for the two parties, but working together doesn't necessarily mean explicit, formal coordination, just sharing these targets.
And as you said, China's very happy to kind of come in behind and capitalize on Russian messaging about the decay of the West, and China, in particular, tries to cloak itself in the language of democracy. Its "whole-process democracy" framing is ... We don't see Russia really do that. So you can imagine that China could use AI tools to make it appear as though an army of netizens agrees with pro-China positions on Xinjiang or other issues, where I don't think you'd see Russia do that. You could see Russia, as I said, overwhelming in a destructive way, kind of spamming inboxes of elected representatives or notice-and-comment processes, stuff like that.
That doesn't feel to me like a Chinese game. China would be happy for Russia to do it, but I don't know that they'd do it themselves. I mean, I could be wrong. All of this is conjecture. One thing I'd say, we haven't talked about, Russia is very, very good at reading the societies that it targets, and it finds the gaps and seams or the-
LINDSAY:
So it's looking for fissures?
BRANDT:
Yeah.
LINDSAY:
It wants to find a fissure and stick a wedge in and make it bigger.
BRANDT:
Yes, stick a finger in our eye. China's kind of bad at reading target societies. Better at reading the societies closer to home, but it's not great farther abroad. That's why there's been kind of a backlash to wolf warrior diplomacy. It's not clear that China's efforts are really working to its advantage, and I guess one thing I think could potentially be transformative or at least impactful about the generative AI wave that's coming, and AI generally, is that if China can use sentiment analysis tools to better read the societies that it's trying to reach, and then pair that with, as I've said, A/B testing, where they can just create a ton of messaging, they might get better.
LINDSAY:
And A/B testing is when you have, let's say two different headlines and you see which one resonates more.
BRANDT:
Gets more clicks.
LINDSAY:
The one that gets more clicks is the one you go with. So you're always trying to find out, "What is the right button to push to get maximum response?"
BRANDT:
Exactly, or trying to figure out, "What platform is going to get ..." We have this message that we want to convey about a certain political issue, maybe one that's a hot, contentious issue in the 2024 election. "What platform is the best platform to generate engagement around these issues?" or, "What audience is going to pick this up and retweet it, or share it, or ..." So it's not just the volume alone, but what that enables you to do. But generally speaking, China's a rich country. Decreasing the cost is helpful, but they're going to find the resources to do what they want to do.
LINDSAY:
The Chinese Communist Party can find money in the coffers to pay for these sort of efforts if it so desires.
BRANDT:
Yeah.
LINDSAY:
We know this is going to be an issue in the 2024 election, Jessica, because we've seen increased efforts in prior elections to interfere, to meddle, not necessarily to pick a particular candidate. It may simply be to sow confusion, to undermine trust, sort of the glue that holds a democracy together. What is it that the U.S. government has been doing to prepare for this moment? You've mentioned one agency that has already been set up, but can you survey what's being done at the federal level? I'm not sure what's been done at the state and local level. Are we making adequate preparations?
BRANDT:
Yeah. I think we've come a long way since 2016, when we were really caught flat-footed. I mean, we've seen efforts by the federal government to resource election officials at the local level and to build a more coordinated threat picture. So for example, within ODNI, the Office of the Director of National Intelligence, they have just stood up the Foreign Malign Influence Center. It's kind of the equivalent of the National Counterterrorism Center, and it will knit together the analytic picture on foreign malign influence.
I think seeing across the full threat picture is really important, because these operations kind of pick, as you've said, at the seams. And also, one thing we didn't talk about: we spent most of our time talking about AI's impact on the information environment, but there are many other ways that authoritarian actors try to interfere in democracies, and so seeing that full picture, I think, is important. The FBI has set up a body that looks at this. So I think the government is doing a better job both at equipping itself to see the picture and at communicating with important audiences, including the public. Here, I would say the government's strategy of intelligence disclosures around the launch of the Ukraine conflict is a really great example of how our government, I think, understands that the information domain is among the most consequential terrain that Putin's contesting, and so using these kinds of disclosures to get ahead of Putin and to complicate his efforts to muddy the waters and to use a false flag to justify an invasion, I think, really shaped public perceptions of the conflict in a way that's been durable.
We have a much clearer sense that Russia's the aggressor and Ukraine's the defender than we did in 2014. And it's not that our government, I think, knew more this time around than then, but that it was much more effective at communicating with the broader public. I think a side effect of that was just moving fence-sitters off the fence, creating public support for a stronger response, not just here, but in Europe. So those are some of the places where I see activity: within government, between government and the public, and also just better coordination and more conversation underway with both researchers and platforms.
I would also say, in the example I just gave you about the start of the Ukraine crisis and that information strategy, no way would that have worked if there hadn't been a vibrant open-source intelligence community of researchers not affiliated with government who were able to verify the disclosures that government was making, right? So you had private companies with satellite images giving them to investigative journalists who were on the ground and working together. They could corroborate, "This body was here, and it got moved there," and it helped to give credibility to government pronouncements that I don't think would've been trusted, especially after the intelligence failures around the Iraq war.
LINDSAY:
Well, that obviously gets into a whole different set of concerns that I think many Americans might have, that their own government could use AI and information technology to mislead them or persuade them, and that's, I think, a complicated subject. I mean, you already see right now that a fair number of Americans don't trust the messages they're getting from Washington, whether it's governed by Republicans or by Democrats.
BRANDT:
Yeah. I mean, there are so many ways to take that question. I think this is the point that I was making about nihilism about the existence of objective truth and the paralysis that that causes within a democratic society, and so it's not a problem that we can wave a wand and fix. I don't think it's a problem that can be fixed, but a condition to be managed. Not to be Pollyannaish about it, but I think it's incumbent upon all of us.
Our democracy is only as strong as we make it, and it's incumbent on all of us to lower the temperature of the debate, to instill a healthy respect for one another, and, where possible, to behave responsibly online, which is not to say that I think the whole responsibility for solving this problem falls to the level of the individual user. I mean, especially when we're talking about Russia and China, and Iran and other actors, we're talking about going up against the well-resourced intelligence services of adversary states. So I don't think that media literacy alone is a reasonable place to land, but it is a component, because I think our polarization is the number one obstacle to overcoming foreign interference. It provides the fodder on which so many of these operations rely, and it makes it harder for us to do the things we need to do, to get our house in order.
LINDSAY:
Well, foreign intelligence services are clearly taking advantage of pre-existing divisions in the U.S. political system, and that gets back to your point about the Russians reading the societies they target, trying to find cracks and fissures, and then exploiting them.
I'm just curious, Jessica, I take your point that at the end of the day, the weight can't be entirely on individual citizens in democracies to be able to stop misuse of AI and misinformation, but do you have any advice for people listening to our conversation about what they can do to sort of minimize the chances that they can be misled by this technology, because it is quite impressive? I mean, just learning that AI can get a small clip of your voice, and then build up realistic dialogues that could fool your closest friends is pretty chilling.
BRANDT:
Yeah. I mean, I think too much skepticism is a bad thing, but healthy skepticism is healthy, and so I think, especially when it comes to, for example, accessing election information, before you're listening to or taking your cues from posts from friends online, seek out trusted sources of information, authoritative news outlets or your local election board. Those are the kinds of places where you can rely on the information that's provided. Then, as I've said, if content makes you angry, or if you notice an emotional response, just take a beat and think about what the intent behind that content is, and maybe don't play a role in furthering it, 'cause I think we would all benefit from, as I said, a healthy respect for one another and a rise in the quality of our political debates, 'cause our democracy is as strong as we make it.
LINDSAY:
On that wise note about taking deep breaths and a moment to reflect, I'm going to close up The President's Inbox for this week. My guest has been Jessica Brandt, policy director for the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution, where she is a fellow in the Foreign Policy Program's Strobe Talbott Center for Security, Strategy, and Technology. Jessica, thank you very much for joining me for a very informative conversation.
BRANDT:
Thanks again. I enjoyed it.
LINDSAY:
Please subscribe to The President's Inbox on Apple Podcasts, Google Podcasts, Spotify, or wherever you listen, and leave us a review. We love the feedback. The publications mentioned in this episode and a transcript of our conversation are available on the podcast page for The President's Inbox on cfr.org. As always, opinions expressed in The President's Inbox are solely those of the hosts or our guests, not of CFR, which takes no institutional positions on matters of policy.
Today's episode was produced by Ester Fang, with Director of Podcasting Gabrielle Sierra. Special thanks to Michelle Kurilla for her research assistance. This is Jim Lindsay, thanks for listening.