Asher Ross - Supervising Producer
Markus Zakaria - Audio Producer and Sound Designer
Molly McAnany - Associate Podcast Producer
Yoel Roth - Visiting Scholar, University of Pennsylvania
Transcript
Hello everyone, it's our final episode of the season and we have absolutely loved having you all along for the ride. The Why It Matters team wishes everyone happy holidays and a happy new year, and we look forward to seeing you back in the spring for a whole new round of episodes! Now, on with the show.
https://youtu.be/ALjRSf13OX0?si=w2q_fg8bhJC9rdsF&t=7
DW News: From shaping the future of search engines to driving the next era of warfare, AI is fueling a new industrial revolution.
https://youtu.be/G3Ov4lXIJ1E?si=abi9HC8FLZDyg1ER&t=121
ABC News: You’ve got a fake image of the pope. Fake images of President Joe Biden. At some point if those proliferate at a much greater scale, won’t that confuse people about what the truth is?
https://youtu.be/r5OLEM0rsjk?si=kET12xfRXt5zx24k&t=23
Reuters: It’s one of my areas of greatest concern. The more general abilities of these models to manipulate, to persuade, to provide sort of one on one interactive disinformation.
Since it splashed into public view last year, artificial intelligence has raised serious questions about the future. The future of work, medicine, and even the survival of the human race itself. But as we look ahead to these long-term issues, AI is poised to play a significant role in something far more immediate: the global elections of 2024.
2024, which we are calling the year of elections, could mark AI's first tangible test: a powerful technological tool in the hands of billions of individuals as well as foreign governments, many with strong incentives to deploy AI-powered disinformation for their own advantage.
More than half of the world's population, across 78 countries, will be eligible to go to the polls next year. And while the elections won't all be equally free and fair, the leaders who emerge from them will be making critical decisions about a series of existential global issues: climate change, democracy, conflict, and the regulation of artificial intelligence. The stakes are high, the threats are real, and nobody knows what's going to happen.
I’m Gabrielle Sierra and this is Why It Matters. Today, my chat with two experts, Kat Duffy and Yoel Roth, in which we explore the phenomenon of this supercharged election year.
Yoel ROTH: I feel like 2023 was this really defining moment where we all learned to panic about the long-term possibilities of AI.
This is Yoel Roth, a visiting scholar at the University of Pennsylvania who previously led the Trust and Safety team at Twitter, working on election security and content moderation issues.
ROTH: This idea that a few years from now, a super intelligent AI could conquer humanity and end the world as we know it. And like yeah, that's scary and I worry about that, but when I look ahead to 2024, the thing that worries me more is the fact that three billion people around the world are going to be voting. We have a year that is jam-packed with elections, and we're seeing the intersection of novel technologies and all of the existing threats that elections have faced for years. I'd like us to focus a little bit more on the grounded realities of how AI can disrupt elections and democratic deliberation. And I think there's quite a bit to worry about there too.
Kat DUFFY: Yoel, I could not agree more in terms of the need to focus on immediate risks.
And this is Kat Duffy, our own senior fellow for Digital and Cyberspace Policy here at the Council. Last year Kat also ran a task force on the emergence of the trust and safety space.
DUFFY: One of the things I talk about a lot is the fact that right now we're living with AI, we're living in this sort of post-market, pre-norms space. And that's a really wonky way of saying that a whole market for AI tools and software and applications, the commercialization of that has exploded. And all of these new models have come out that tools can be built on that operate in different languages, that offer different capacities, different audiovisual capacity. So you have this incredible market that is available to the public, and there aren't really many constraints on what that is allowing. At the same time, we don't really have any laws. We also don't have any societal norms. We don't have any community definitions. We don't have any sort of common understanding of how it's okay to use those tools and when it's not okay to use those tools. And so you have the sort of traditional confusion that comes with an emerging technology, but now you also have the fact that we have 78 countries with national level elections next year. Everyone is very well aware of the fact that the U.S. is having an election, but we also have countries like India, Mexico, South Africa, Taiwan, really important elections that are going on in the UK, the EU Parliament. And so the way that all of these different and new tools can be used to confuse elections, to create distrust in elections, can be used for nefarious purposes, and our inability at the moment to really forecast exactly what that will look like, how it's going to play out, or how we can prevent it, that is one of my greatest areas of focus for the coming year.
ROTH: We're also at a moment where there's a real contest of ideas about how to govern these technologies, and that's part of the stakes for the elections that are coming up next year as well. We have different folks across Europe, the United States, Asia, everywhere, who are debating the right ways to govern social media, the right ways to govern AI. And when voters go to the polls next year, part of what they're going to be choosing is a regulatory direction for these technologies for the future of how we manage climate change, the future of how we think about democratic integrity and election security around the world where there are very different perspectives, the stakes couldn't be higher. And it's all happening at a moment, where as you said, Kat, we don't have clear norms, we don't have a clear path forward, we don't have an aligned vision of what this future is actually going to be.
A few weeks ago, European Union lawmakers became the first to reach agreement on comprehensive AI legislation, with requirements like watermarking AI-generated content. But these rules won’t be enforced for years, and the rest of the world is even further behind. Without those norms, things like deepfakes and AI-powered phishing are already flooding the internet with content intended to influence voters. But just as AI shapes the 2024 elections, the leaders elevated in those elections will in turn shape the regulation and development of AI and its role in our society. They’ll take power at what feels like a global inflection point, as destabilizing conflicts continue to erupt around the world and as climate change becomes increasingly visible, with extreme weather wreaking havoc.
DUFFY: Simultaneously, we have 50 different global AI governance initiatives happening in some form or fashion. Some of those are smaller groups like the G-7 Hiroshima Process. Others are the Digital Belt and Road initiative AI Governance Process that China announced, which includes 155 countries. So, when we think about global technological governance over the next three to five years, the elections in this coming year are defining the governments that are sitting at those tables and the governments that are making those decisions. And all of that is happening in a year where we have seen consistent democratic decline globally and the increase of autocracy and illiberal democracies globally as well.
We don’t have to look far from home to see evidence of democratic decline. In 2020, President Donald Trump falsely claimed that the election was stolen from him, feeding extremism that culminated in the January 6 riot at the U.S. Capitol.
Gabrielle SIERRA: Is the United States ready for another election muddied by disinformation? Because AI could make all of that so much worse.
ROTH: I feel like we've gone through one of these cycles recently enough that the damage and the blast radius of going through another messy election so soon could be considerable. I'm referring here to the 2016 elections, where I think we came to awareness, especially in the West, of the risks of foreign influence on democratic elections and the ways that social media could be weaponized in those contexts.
https://youtu.be/ZusqgWUNFG4?si=SIdtxJNXQJPxQbAT
Channel 4 News: In a U.S. election like no other, made-up news took center stage.
https://youtu.be/NWcWtHYZtz0?si=yZbjw1e3tiO92FMq&t=25
ABC News: And we’re learning disturbing new details about just how pervasive the Russian attack on the 2016 election actually was. The new senate intelligence report says the Russians likely attempted to find weaknesses in the voting systems of all fifty states.
https://youtu.be/tW-dg_IU3uM?si=Xuiz1yUHMMinlyx0&t=25
PBS NewsHour: Users shared false stories like this one about Pope Francis endorsing Donald Trump or Hillary Clinton selling weapons to ISIS hundreds of thousands of times, even more than real stories.
ROTH: In 2020, especially in the United States, I think we learned hard and painful lessons about democratic backsliding and about the ways that domestic violence and extremism can lead to real harm and can imperil the democratic process. None of those factors have gone away. They're chronic illnesses, we haven't cured them. Foreign disinformation is a factor. Extremism remains a factor. Democratic backsliding remains a factor.
https://youtu.be/mc7_GzMOodY?si=etfhFFtsy_wxgq9Y&t=3
CNN: Stop the steal, stop the steal, stop the steal.
https://youtu.be/mc7_GzMOodY?si=3STnH-wFy-crZId-&t=7
CNN: Trump is still your president.
https://youtu.be/mc7_GzMOodY?si=5xtXizFXyMMcPI4E&t=39
CNN: The ballots that you said you saw lying around the place, or in trash cans or whatever, where are you hearing that from? The videos are going viral everywhere. I’ve seen ‘em on TikTok, I’ve seen ‘em on Facebook.
https://youtu.be/jWJVMoe7OY0?si=q6300ugtsBuhIvbn&t=126
The New York Times: This will be their destruction.
https://youtu.be/jWJVMoe7OY0?si=qqGFp0iEQRj6Bb7h&t=131
The New York Times: What happened next was chaos, insurrection, death.
ROTH: And now, we have a bunch of technological unknowns, new and emergent systems that are going to influence and potentially contribute to all of that, and it's all happening all at once. And so I would say it's not so much that, "We have a new thing and maybe we'll get it wrong and we'll get a version 2.0 next time." It's more like, "We have an accumulation of a lot of different risks and they just keep intensifying."
DUFFY: I think that's absolutely right. And I would also say that where we have seen the greatest impact of a lot of bad information in electoral cycles is in sowing distrust in the electoral process itself. And if one cannot believe that one's vote has been counted, that one has participated in a free and fair electoral process, one cannot believe that one is living in a democracy.
Since the release of generative AI tools last year, the internet has become rife with debate about authenticity. Deepfakes, including the image for this very episode, which is a composite of AI-generated and real photos, pose a serious threat to democracy because they can lead people to believe in falsehoods. But the opposite is just as dangerous. In a world of plentiful deepfakes, voters may become skeptical of everything they see - particularly real footage that is inconvenient to their political views or ideology. For example, a New York Times analysis of social media discourse following Israel’s invasion of Gaza found extensive evidence of people claiming that real images of casualties were fake, and then using that claim to further their arguments. This presents serious threats for democratic societies that depend on consensus, trust, and shared information.
DUFFY: And so the more that it becomes inexpensive and fast and easy to sow distrust in a process, the more that we suggest to citizens around the world that in fact maybe they're not living in a democracy or maybe it's not possible to have a democratic or representative government. And it's that distrust that also, I think, is deeply concerning to me, because it's aligned with a broader distrust that we're seeing in information and facts writ large. Right? This concept of the liar's dividend is that, if everything is fake, nothing can be true. And if nothing can be true, everything can be fake.
In some countries, these risks have already materialized. In Slovakia, fake videos of candidates, containing hate speech and disinformation, circulated ahead of the country’s recent national election, offering a glimpse of how AI deepfake campaigns could become a part of political reality next year. While there is no way to quantify AI’s impact on the election, journalists noted that the deepfakes favored the talking points of the populist party that ultimately won the most votes.
Similarly, AI-generated deepfakes of newscasters in Bangladesh, reportedly costing as little as $24 a month, accused U.S. diplomats of interfering in the country’s upcoming January election - which experts say is likely to be stage-managed in favor of the incumbent.
But again, we don’t need to look that far from home to see how AI could alter the electoral landscape. Here in the United States, a recent Axios poll found that half of Americans are now worried AI misinformation will impact next year’s election, and a third say that AI will make them less trusting of the results.
ROTH: I think we're still at the starting line of building an empirical understanding of how exactly this works. But there's some things that I find a little bit concerning that we already know. For example, researchers at the Stanford Internet Observatory carried out a study of whether AI-generated propaganda could be more persuasive to people than human-generated propaganda. And what they found was that in certain circumstances with appropriate tuning, yes, AI-generated propaganda could actually convince people more effectively than something written by a human.
Last year, pro-China bot accounts posted videos on social media that appeared to show American newscasters touting Chinese Communist Party positions on issues like U.S. gun violence. But the newscasters weren’t real - they were generated by a British AI company called Synthesia. Tailoring such deepfakes to appeal to specific individuals’ interests is a technique known as microtargeting - and generative AI has made it much easier and cheaper. In fact, on its website, Synthesia even said that its process is “as easy as writing an email.”
ROTH: That's really worrisome.
SIERRA: Yeah.
ROTH: It suggests that one of the things we were terrified about in 2016, vis-a-vis Cambridge Analytica, this notion of supercharged data-driven microtargeting, actually now might be a thing that AI enables. And we have a study for the first time that suggests that AI can make propaganda more convincing to folks. And so I'm concerned, but I would qualify that concern by saying we don't actually yet know how transformative AI is actually going to be in these contexts and whether it's going to be effective, more effective than a human baseline, at changing what voters believe and what they ultimately do.
DUFFY: And I would add onto that, that part of the challenge here, and Yoel knows this literally better than anyone, is that the content that is surfaced and the content that people see will be highly dependent on whatever digital platform they are using to get their information. And we don't have uniform standards, and we don't have uniform resources across those different platforms to equalize that playing field. And so if you are living on Telegram or on TikTok, you, in some respects, may be living in a different reality than someone who is getting served up their content by, let's say, Instagram or Facebook. Or if you're in a private WhatsApp chat, you're likely going to be getting served up very different content than if you are in like a private Signal chat, right? And what we've consistently seen is that access to all of those other types of content is then sowing distrust in credible independent media because, you know, they may not be telling the whole story. And so there's also this fact that right now the global community is living across an increasingly large patchwork of different digital platforms, all of which have their own standards, their own incentives, their own drivers, their own revenue models. And so even within a community or within a family, you could be having a very different experience of what information is real or is true, based on the way that you're getting served that information.
ROTH: The researcher Renee DiResta, who's at the Stanford Internet Observatory, has coined a brilliant term to describe this: "bespoke realities." She talks about the ways that, increasingly, the mediated environments that we are in lead us down these very different paths. And the result of that, ultimately, is an erosion of our ability to be a part of a democratic populace. The consequence of a bespoke reality is that you end up living in a very different world than your fellow citizens. And that seems to challenge some of the basic concepts underpinning deliberative democracy.
DUFFY: And Gabby, just to bring it back to the sort of AI question and why AI may be a game changer here. For me, it's really about the speed, the scale, and the diminished cost of pushing out information that is audio, that is visual, that is audiovisual, and that is multilingual. And those are all things that the current AI models power in a way that was harder before. Like it was always easy to modify text, it was then pretty easy to modify photos, audio was technically harder, and video was really pretty hard. And so you could still do it, but it took a really high-level capacity. It took training, it took solid resources. Now, it's going to be available for like $1.99 in an app store. So the first thing is that you can produce sort of what used to be nation state-level types of disinformation, and you know middle schoolers will be able to do that in short order if they want to, just as an experiment. And so again, part of it just becomes about sowing confusion and sowing distrust and the ease with which it is possible to do that if you have a vested interest in messing with someone's election.
SIERRA: It sounds like you're partially talking about just content creators, right? And from 2016, I think a lot of us have been used to thinking about disinformation as something that, like, foreign governments do, you know, Russia, et cetera. But why would regular people want to use AI to exploit the elections?
ROTH: I think actually, this comes back to what you were saying, Kat, about the liar's dividend and about the sort of erosion of our ability to trust anything that we're seeing. And I think we're going to start running into some of those same risks around the content that we are exposed to vis-a-vis AI. If you encounter something that challenges your worldview, what's to stop you from saying, "I think this is a deepfake. I don't think this person actually said that"? You can imagine something like the Access Hollywood tape in 2016 coming out today and being dismissed as a fabrication. That undermines our ability to have these really critical debates from a shared foundation of fact.
DUFFY: You know, Yoel, adding onto that. I also think another component here is the creator economy and how it works, and the thought that you could generate now something that is the equivalent of that 2016 tape, have it be completely false, go out, circulate it, and earn a lot of money from the clicks that you are getting on that. And so there can also be purely economic motivators here in sowing really bad information that have nothing to do with political malevolence or political influence and have everything to do with the fact that there's a profit motive. So it can be just as simple as you've got a farm essentially of people who are churning out this content in different countries, we've seen Southeast Asia for example, be an epicenter of a lot of this, and make enough money to make a very good living in their country. And it's just about some other country, there's no political malevolence there. It's just a gig economy.
ROTH: Again, not to take us back in time, but for all that we're really worried about AI in the present, these aren't new dynamics. In 2016, one of the driving forces of interference in American elections was a bunch of teenagers in Macedonia who made a whole bunch of money by publishing pro-Trump articles on fake websites. The term fake news emerged as a way to describe teenagers making a quick buck by manipulating social media. These are the same dynamics we should expect to see today. Right, again, they are chronic security challenges, and those challenges can evolve over time. Maybe today the Macedonian teenagers will use ChatGPT to write their articles, but they were writing them by hand before, and the economic incentives to do that are just as present today as they were in 2016.
To be clear, Donald Trump did a lot to popularize the term “fake news.” And while individual actors can now more easily use generative AI tools to create content and exploit political polarization for profit, foreign governments are still in the game. After all, Russia played a big role in facilitating the spread of misinformation in the 2016 election - something we covered in an episode in our first season.
And this time around, China is taking a page out of Russia’s playbook. Meta has already removed almost 5,000 fake Facebook accounts that the company said China was using to impersonate Americans and rile up voters ahead of next year’s election.
DUFFY: One of the things I'm concerned about for next year is that we have a lot more to worry about. We don't necessarily have a commensurate increase in the resources or the capacity to do the research that is needed to actually understand how this is playing out. We are also seeing partisan attacks on the information researcher community that has driven much of our understanding of how these dynamics play out, both in the United States and abroad. And the platforms don't have unlimited resources to look at every single country. So if that's not a high priority market for that platform, the U.S. government and independent researchers can play a really fundamental role in surfacing that information and helping the platforms figure out that they need to respond. That particular cycle of communication and collaboration is far more endangered going into next year than it was in previous elections. And I think it is changing the stakes.
ROTH: We're also seeing platforms back away from these critical efforts. 2023 has been a year of cutbacks and layoffs across the tech sector, but those layoffs have hit trust and safety teams especially hard. And that's especially true of the teams responsible for working on misinformation and election security issues. And so the question is, where do we go from here? In an environment where everybody across the board is pulling back and we've started to migrate to services with more inherent vulnerabilities, it seems like we're headed for a bit of a perfect storm.
SIERRA: Okay, of the more than 70 countries having elections, which are not all democratic, which should we be keeping an eye on?
ROTH: India has always been a particularly challenging country for election security efforts on social media and on the internet because of precisely how diverse and heterogeneous it is. It's a country of many very different states with different political stakes, and that makes engaging with questions of disinformation quite challenging. We've seen a number of attempts in the past to manipulate elections in India across social media. For example, when voters go to the polls in India, after they vote, their fingers are painted with something that indicates that they have already voted. And on Twitter, we saw, in previous major elections in India, a campaign to try to persuade people that the paint that was used contained pig's blood, which appeared to us to be a clear attempt to disenfranchise Muslim voters, who potentially would not want paint containing pig's blood to be applied to their fingers. That was untrue. And as a platform, Twitter intervened and applied fact checks and removed a number of those posts. And so going back to our discussion about the resourcing that platforms and tech companies are investing in this work, one of the things I really worry about is that it takes a team of specialists time to adjudicate every single one of those claims. Multiply that by hundreds or thousands of claims of deepfakes and cheap fakes and manipulated audio, and then multiply that by 70 different elections happening concurrently, as the teams responsible for doing this work have been cut back below their previous levels. I think we have a recipe for disaster.
DUFFY: And I think you can look at Mexico as well, we’re talking about longstanding problems and longstanding trends where AI exacerbates the risks and exacerbates the threats. And so I think Mexico is another good example of a fraught and unsafe environment that then becomes open to even greater fragility and even greater exploitation on this front. And when you look in the U.S. then at a bill that would provide aid to Ukraine being held up because of disagreement over our border policy, it also makes you think very carefully about what the ramifications are of the Mexican election vis-a-vis the U.S. political reality and how that then plays out in other continents. So there's a chess game at play here on the foreign influence operation that I think can be hard to track, but that is a very legitimate risk.
In a previous episode of Why It Matters, we talked about India’s backsliding democracy and how problematic disinformation was there several years ago. Today, not only has the quality of misinformation campaigns and deepfakes greatly improved, but the volume of such disinformation is now much higher because of AI.
And by the way, it’s not just India and Mexico. The dozens of elections next year will determine the leaders of governments influential in both regional and global dynamics, and some will occur in countries that are already experiencing democratic backsliding. There are also few issues more global than AI governance - which many experts have compared to nuclear weapons in terms of the international cooperation required.
https://www.youtube.com/watch?v=zgKxpUNUpmY&t=377s
Lex Clips: I grew up in the 70s and 80s where the nuclear doom, a lot of adults really had existential threat, almost as bad as now with AI doom. They were really worried.
https://www.youtube.com/shorts/WstkiHTzYCA
Elon Musk/@MarketingTutorship: Mark my words, AI is far more dangerous than nukes. The danger of AI is much greater than the danger of nuclear warheads, by a lot.
SIERRA: What about the good old U.S. of A.? Are U.S. elections better protected against AI than other countries’?
ROTH: In many ways, I think American elections are going to represent the best-case scenario of what platform responses to adversarial activity look like next year. The problem is, most of the people voting next year aren't going to be Americans, but we're going to see American companies, primarily based here in California, continuing to invest heavily in American elections to stave off regulatory pressure and advertiser pressure that tends to be disproportionately focused on the United States. And that's a real risk globally. I think even Americans are going to face challenges. It's going to be extremism, it's going to be the impact of synthetic and AI-generated content, of foreign adversaries targeting these countries, and platforms are going to focus their increasingly scarce resources on where they're based and where they feel the most acute regulatory threats. And unfortunately, that's going to disadvantage voters in India, in Taiwan, across Southeast Asia and Latin America, where the threats exist, arguably where the threats are even more significant, but there are fewer of those protections, because in the eyes of primarily American tech companies, there just isn't as much of an incentive to focus resources there.
SIERRA: Okay, so Americans are better protected against electoral threats than other countries, but are there any risks distinct to the United States?
ROTH: We do face particular headwinds in this country related to how big and diverse, and consequently how localized a lot of these discussions tend to be. And they're happening at a moment where local media is the weakest it has ever been, and arguably lacks sustainable funding structures to create the kind of local press that is essential for filling in the information gaps that social media's failures can create. Researchers have talked about this phenomenon of data voids, this idea that when something happens and people turn to the internet to find information, what they are met with is a void. AI is going to be great at filling that void. It's going to fill it quickly, it's going to sound convincing because ChatGPT is really good at writing convincing prose, and the problem is it just might not be true.
It’s not too late - at least in the United States - for government officials and companies to take actions to protect elections from the brewing AI storm. In October, President Joe Biden signed an executive order requiring the Department of Commerce to develop rules for watermarking AI content - rules that are yet to be published. Until then, social media companies could require political advertisers to disclose their use of AI - which some, including Meta, have already done. Meanwhile, AI could also facilitate some improvements.
DUFFY: So the other things that are going to be really interesting are like, to what degree is fact checking much faster now? To what degree can you be listening to a live debate between candidates and almost instantaneously determining whether or not what they were saying is in fact accurate or not, and how you get that information out in a faster way, the channels for doing that, the audiences that you could reach with that, the different ways that you can communicate it? I think that's really interesting. I have a middle schooler, and when I listen to my middle schooler and their friends talking about different types of environment, that generational divide is significant, and they have a vastly different take on what is trustworthy and what is not. And they're highly skeptical, and I think they're skeptical in good ways and in healthy ways. And so it's possible, too, that we're going to hit some critical threshold of distrust and exhaustion stemming from it, that we actually get a reemergence of interest and demand for carefully vetted, truly credible information.
SIERRA: We’ve touched on this, but really, the leaders we elect in this upcoming cycle, how much say will they have over decisions about AI governance?
ROTH: I think the foundation of how leaders are going to engage with and regulate AI stems from how informed they are about it and who is informing them. I think the best thing we can do as citizens is demand that our leaders shy away from knee-jerk, headline-driven pushes to regulation, and instead assemble diverse groups of experts who can weigh in on these technologies. And I'll emphasize diversity here, right? It can't be the same group of primarily white, primarily male, primarily wealthy Silicon Valley CEOs talking about these technologies, we need to bring in particularly the women of color who have been warning about the harms of AI for years. There is expertise on this that exists, those voices need to be elevated, and we need to make sure that regulators are hearing from them. Not just about these long-termist, alarmist concerns about AI, but about the ways that AI can be used for discriminatory and harmful and dangerous purposes right now. There are people studying this and writing about it. We should demand that our representatives listen to them, rather than listening to the loud voices coming from Silicon Valley, talking about the things that primarily serve the interests of Silicon Valley companies.
DUFFY: At the geopolitical level, this is a bit of a different story. It's really been interesting in AI governance, this tension between great powers competition on the one hand, and the fact that we are also inhabiting what a lot of people in foreign affairs are referring to as a multipolarity space, where African countries, Latin American countries, southeast Asian countries, are not just aligning with one side or another side. This isn't like a Cold War where we have these hard blocs. Countries are going to, I think, swing in different ways and through different governance models based on what makes sense for them. What isn't going to change there, what hasn't changed there, is the human capacity inside of those governments to have the governmental officials with the requisite expertise to sit in a governance space or in a governance room and have political power or be able to really roll out what that governance is in their own countries. And so there is going to be a lot of catching up to do on the global governance side, in terms of different governments having the capacity that they need, both to balance power, but also to have the technical expertise to weigh in on these conversations and truly inform them. And so there's a lot of work to be done there, and the technology and its expansion is far outpacing those other elements.
This semester we had such wonderful interns and we are so sad to see them go. Here they are to read us out for our season finale...
Rhea BASARKAR: Thanks Gabby! For resources used in this episode and more information, visit CFR.org/whyitmatters and take a look at the show notes. If you ever have any questions or suggestions or just want to chat with us, email us at [email protected] or you can hit us up on Twitter at @CFR_org.
Kalsey COLOTL: Why It Matters is a production of the Council on Foreign Relations. The opinions expressed on the show are solely those of the guests, not of CFR, which takes no institutional positions on matters of policy.
BASARKAR: The show is produced by Asher Ross and Gabrielle Sierra. Our sound designer is Markus Zakaria. Our associate podcast producer is Molly McAnany. Production and scripting assistance for this episode was provided by Noah Berman. Our interns this semester are me, Rhea Basarkar...
COLOTL: ...and me, Kalsey Colotl. Robert McMahon is our Managing Editor, and Doug Halsey is our Chief Digital Officer. Extra help for this episode was provided by Mariel Ferragamo. Our theme music is composed by Ceiri Torjussen. You can subscribe to the show on Apple Podcasts, Spotify, YouTube or wherever you get your audio. For Why It Matters, this is Kalsey.
BASARKAR: And this is Rhea, signing off.
BASARKAR/COLOTL: See you soon!
Show Notes
Around half of the world’s population will cast their vote in national elections next year. As governments around the world prepare to host potentially world-changing elections, they must now consider a new threat: artificial intelligence.
Individuals and foreign governments alike could be incentivized to use AI to influence this massive slate of elections. AI-aided disinformation could be particularly dangerous in countries like India and Mexico, where democracy is already backsliding; even in countries where elections are unlikely to be free and fair, authoritarian leaders could use AI to manipulate public opinion. Meanwhile, the leaders elected next year will contend with a slew of global issues, including worsening climate change, a series of wars new and old, and the rise of AI itself. As international rules governing AI remain sparse, the leaders who emerge in 2024 will have a huge say in the regulation of AI across the globe. Not only will AI influence next year’s elections, but these elections will influence the future of AI.
From CFR
Anu Bradford, “The Race to Regulate Artificial Intelligence,” Foreign Affairs
Ian Bremmer and Mustafa Suleyman, “The AI Power Paradox,” Foreign Affairs
From Our Guests
Kat Duffy, Liana Fix, Will Freeman, Matthew Goodman, and Zongyuan Zoe Liu, “Visualizing 2024: Trends to Watch,” CFR.org
Yasmin Green, Andrew Gully, Yoel Roth, Abhishek Roy, Joshua A. Tucker, and Alicia Wanless, “Evidence-Based Misinformation Interventions: Challenges and Opportunities for Measurement and Collaboration” [PDF], Carnegie Endowment for International Peace and Princeton University
Read More
Robert Chesney and Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review, University of Texas at Austin School of Law, and University of Maryland Law School
Renee DiResta, Matthew Gentzel, Josh A. Goldstein, Micah Musser, Girish Sastry, and Katerina Sedova, “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations,” Stanford Internet Observatory
Watch and Listen
“Confronting Disinformation in the Digital Age,” CFR.org
“Elections in the AI Era,” CFR.org
“AI’s Impact on the 2024 U.S. Elections, With Jessica Brandt,” The President’s Inbox
*Disclaimer: The image for the episode includes content generated by artificial intelligence (AI).