Sebastian MALLABY: AI is going to influence everything. It’s probably the biggest event in human history since the Industrial Revolution.
Janet HAVEN: I’m really much more concerned about how automation will impact society at large.
Gabrielle SIERRA: Do you feel good about where we’re heading with AI?
MALLABY: Maybe we need to invent a new word here which is kind of a combination of frightened and excited. I’m fri-sighted.
Welcome to Part II of our exploration into the world of AI. In Part I we learned that AI presents nearly unlimited possibilities for scientific progress. And that it also, theoretically, presents unlimited dangers.
Regardless of which view you hold, the fact remains that the technology is evolving really quickly.
All of this sets up a classic situation in which regulation will play an essential role. Every country will have to decide for itself how much to limit the potential harms of AI, even if those regulations could also slow its development amid fierce international competition. And the nature of the world we live in 10 or 20 years from now could hang in the balance. What path will the most powerful governments choose?
I’m Gabrielle Sierra, and this is Why It Matters. Today, how the world might regulate AI before it begins regulating us.
SIERRA: Can you give us just a quick lay of the land about AI regulation? You know, what exists currently?
HAVEN: So at this point, in terms of actual law that is specific to AI, I think the answer is either nothing or close to nothing.
Back from Part I of this two-part episode is Janet Haven. She’s the Executive Director of the nonprofit research group Data & Society. She’s also a member of President Joe Biden’s National Artificial Intelligence Advisory Committee, though it’s important to note that her comments in this episode don’t reflect the views of the committee itself.
HAVEN: This is an industry that has been remarkably unconstrained from the outset. One of the really clear, interesting tropes that has come through is the widespread agreement among policymakers that leaving social media essentially unregulated in the early 2000s was a huge miss and a big mistake.
MALLABY: I think if the objective of the regulation is not to suppress AI, but rather to disclose the identity of an AI that does something, so you know it's machine-done, and maybe to include some explainability and confidence-level disclosures in the AI, then those sorts of regulations can create safety, can create confidence, can create good use of AI, whilst not trying to eliminate AI. I think if you try to eliminate it, that's a fool's errand.
Also returning to the episode is Sebastian Mallaby. He’s the senior fellow for international economics at the Council on Foreign Relations. He’s currently writing a book on artificial intelligence.
MALLABY: It's just too useful, and the upsides of AI are so compelling that if you try to eliminate it, A, you are shooting yourself in the foot, and B, it'll be used somewhere else and you'll fall behind, and that's not a good outcome. So countries should want to be part of AI. They should want to embrace AI. They also just should be thoughtful about actively managing the downside. I think the key is yes to regulation, but just make sure it's the right kind.
SIERRA: You know, we did an episode on the three global internets: China's heavily restricted system, the U.S.'s sort of free-for-all system, and the EU's regulated system with a focus on privacy. Do you think that we'll see similar splintering in the AI space?
MALLABY: Yes. I mean, I think the Chinese will go their own way and the rest of the world will develop its own set of rules. And it'd be great to think that they can be unified. But given the current geopolitics, it doesn't seem very likely.
There’s a lot at stake for U.S. companies as lawmakers decide how to regulate. The government knows that there’s serious money to be made from innovation, and the United States has typically been light handed in its regulation of the tech industry, which is a major pillar of its economy. On the other hand, the risks of AI are unique and considerable, and American lawmakers are increasingly aware of what can go wrong when tech companies are left to regulate themselves.
HAVEN: I think it's really easy to assume that people who are talking about regulation, or people who are in my position who study, you know, the harms and the implications of these technologies in society, are coming from an anti-tech perspective. It's important to say that that isn't the case. And I think that what we have the opportunity to do at this moment is not just to regulate, but to consider how we create a different kind of social contract that opens up space to imagine what AI in the public interest looks like, what it could be for our society, for the United States, globally, that is truly beneficial. But I don't think it's easy or obvious how to do that. And so the whole frame of the recent congressional hearing that we saw with Sam Altman testifying was, "We messed this up with social media, so now we need to get it right with AI." You know, which is interesting because it does suggest that there's some momentum to take meaningful steps.
https://www.youtube.com/watch?v=fP5YdyjTfG0
Sam Altman: My worst fears are that we cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening. But we try to be very clear eyed about what the downside case is and the work we have to do to mitigate that.
Okay, let’s stop for a second and just digest. AI is a world-changing technology, and so far there is no corresponding regulatory agency or body of law governing its use in the United States. That’s wild! But that doesn’t mean that the government has been sitting on its hands entirely.
HAVEN: One of the policy tools that came out last October from the Office of Science and Technology Policy is the Blueprint for an AI Bill of Rights. And what that laid out was five core principles of protections that Americans should expect when interacting with an AI system. Those five principles, just very briefly, are safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and one that I think is really intriguing, human fallback and intervention. So, essentially, if you receive a decision or an output from an AI system that you don't understand or that you disagree with, you should have a right to interact with a human, which echoes the European Union's GDPR position on human intervention.
GDPR - or the General Data Protection Regulation - is the world’s strongest data security and privacy law, implemented by the European Union in 2018. One part of this legislation requires human intervention in some automated decision making. The U.S. doesn’t have anything like it - but the AI Bill of Rights could change that.
HAVEN: One of the things that I think is really important that we're seeing happen is that, through an executive order called Further Advancing Racial Equity, a number of agencies are flexing the powers they currently have to more actively regulate around AI. That comes particularly through existing civil rights law - of course, in the United States we have civil rights protections, and those are applicable to AI. So we're talking about the use of AI systems in the criminal legal system, for instance, predictive policing and risk assessment scoring in courtrooms. Those use algorithmic decision-making systems, in some cases AI, and have been shown to be ineffective in some cases, harmful in others, and, again, to increase inequality. So that is one area where we can see civil rights law coming into play. Access to housing is another area. The FTC is, of course, looking at antitrust and anti-monopoly action, and they're particularly looking at it through the lens of commercial surveillance and data protection. So looking at it very much from the beginning of the chain - where the data is coming from, how we are collecting data. And there is a theory around one particular governance tool, the idea of data minimization: the collection of the minimal amount of data, rather than where we are right now, which is maximal data collection. And also limiting the retention of data and limiting the selling of data through data brokers. So the whole question of how data is regulated goes beyond the narrow bands of privacy.
SIERRA: Is there a way to put the genie back in the bottle, or is this a situation like nuclear technology, where there's just no way to get everyone to drop a dangerous technology all at once?
MALLABY: I think the nuclear analogy is terrific. I mean, nuclear is also a technology that has enormous destructive potential, but also, if you think about civilian nuclear power, quite big upsides. And I think with nuclear, the difference is that it was produced entirely by governments at the beginning, whereas AI is being produced entirely in the private sector. However, there is an encouraging similarity between the two, which is that the number of teams in the world that are really cutting edge in AI is not all that big. There's a whole different debate about what happens if this is open-sourced - then it becomes very cheap. Once you have the open-sourced algorithmic code, it fits on a thumb drive, and somebody else can plug that thumb drive into a laptop and start running generative AI models, because somebody else has trained it.
Open source refers to code that is published publicly and can be modified and distributed by anybody. This can be a good thing. More than half of all academic papers studying machine learning - aka the process that pushes AI toward higher capabilities - have relied on open-source software. But while open source allows a larger group of people to help refine AI, it also allows access to those who would use it for destructive purposes.
MALLABY: That's a caveat that points to one policy prescription: we shouldn't be allowing the open-sourcing of these models. And since there are only a small number of companies that are big enough to train the models, those who do train them need to hear soon from their governments, "You're not allowed to open-source it," because that is sort of letting the genie out of the bottle, and that's probably not a good idea.
Although the United States has yet to pass laws that specifically regulate AI, there are laws aimed at fueling its development.
In 2022, President Biden signed the CHIPS and Science Act, which invested billions in U.S. semiconductor manufacturing; Washington has also imposed export controls on advanced chips and pressed allies to restrict similar sales to China. Chipmakers have benefited from the AI boom as well - as ChatGPT soared in popularity in the first half of 2023, Nvidia, the leading U.S. designer of AI chips, roughly tripled in value, reaching a $1 trillion valuation.
Progress on making AI safe, however, has not materialized as quickly.
SIERRA: Okay, what are some other holes in AI regulation?
HAVEN: We need to worry about data privacy. In this country, we do not have a comprehensive federal data privacy law, which is extraordinary and leaves us open to enormous vulnerabilities. Also, you know, at the most basic level, people often don't know when an AI system is being used. That creates huge risk. And so we need approaches to understanding when an AI system is being used, so that people understand what they're interacting with. We do not at this point have a national AI strategy, and that means that if we end up in a room with a bunch of other countries and are trying to hammer out some kind of agreement, we are not drawing on a base that is solid. I think that we need to really understand the value of international agreements on emerging issues.
Alright so then it pays to see what other countries are up to. Last week, the EU got one step closer to passing the EU AI Act, a framework they first proposed in 2021 that could be finalized by the end of this year.
https://www.youtube.com/watch?v=I7EaKAdqvgs
Associated Press: We can build responsible AI for the systemic risks that this can entail, but also thinking of everyday citizens, consumers, businesses, institutions.
https://www.cnn.com/2023/06/15/tech/ai-act-europe-key-takeaways/index.html
CNN: This is huge and this regulation by the EU could act as a model for lots of other governments who are also trying to figure this out right now.
The act would create a suite of AI rules, including the disclosure of copyrighted material and restrictions on AI surveillance. Firms that don’t comply with its regulations would face steep fines: up to 6% of their global annual revenue. This proposal has already made waves across the AI industry and could even prevent some AI tools from being available in the region.
MALLABY: There's a bit of a sort of unifying thing where the Europeans write the regulations, but then they get adopted elsewhere as well, because they just become accepted as best practice. There's less gridlock. Weirdly, in Europe, you know, there are 27 countries in the European Union, but they seem to be able to get to a consensus on internet regulation better than the United States can.
HAVEN: The EU is far ahead of the United States in developing actual legislation. So the EU AI Act uses a framework for regulating AI that is risk based. It says essentially that we need to assess the societal risk of a particular instance of AI in order to decide how we regulate it.
Some European countries have gone even further. In April of this year, Italy briefly banned ChatGPT, saying that it collects too much user data. And recent reports show that Germany, France, and Ireland could soon follow in Italy’s footsteps. And of course, China is a story unto itself.
https://youtu.be/ej-8pn9nXwY
CNBC Television: One of the key future battles with China is over artificial intelligence but one former Pentagon expert says it’s already over, and China has won.
https://youtu.be/F0dd_Vm7wXA
MSNBC: China has already declared its intent to become the global leader in AI by 2030 as the U.S. and European Union fight to hold their ground.
https://youtu.be/uaB5VJpX_dM?t=153
CNBC International: Beijing may also be interested in “monitoring or regulating” such AI products to make sure “they are not being used in ways that threaten national security or social stability.”
China has framed AI innovation as a national priority, with the potential for AI development to add $600 billion to its economy annually. But, in contrast to the United States, China is keeping tight control over AI development, as it has in the past over other industries.
HAVEN: From a geopolitical perspective, there's definitely this narrative that the United States is in an AI Cold War with China, that we are in this potentially existential race. And existential not in the sense of snuffing out humanity, but existential from a national security and international sovereignty perspective. That narrative is absolutely shaping the ways in which we approach regulation in this country.
This past April, the Chinese internet regulator released a draft regulation on generative AI, like chatbots. In general, the proposal focuses on transparency, anti-discrimination, and respect for intellectual property, though these rules would not constrain the Chinese government itself. The regulation also stipulates that output from generative AI must “reflect the core values of socialism,” a constraint that could shape how their AI develops.
SIERRA: Are we going to have to rely on each country to make its own regulations? I mean, is there a possibility of a global regulatory body?
HAVEN: You know, there is a possibility of a global regulatory body.
SIERRA: There's always a possibility really, yeah.
HAVEN: There have been calls from multiple places for an international regulatory body. That strikes me as really difficult to achieve if we don't have a clear sense of our own regulatory structure, but critically our own values in regulating AI.
MALLABY: People talk about having an equivalent to the IAEA, the International Atomic Energy Agency, which tries to set rules around the use of nuclear power and sort of supervise and prevent proliferation through inspections and so forth. And I think that sort of international body would be a good addition to the global system. I think, though, that one thing which is obvious for anybody who studies international relations is that you tend to get to results through a combination of supranational and national and private sector initiative. And no one body is going to be a silver bullet. So you're going to need national regulation as well. And you're going to need a lot of industries that think actively about developing best practice and self-regulate. I think the good thing here is that there already is conversation going on between industry and governments. There's conversation between companies, although they are competitors, they also talk about safety and ethics and responsible AI with each other.
SIERRA: Do you think that countries can or should push to limit private development? Sort of force everyone into this Manhattan Project model where everything is done under the eye of the government?
MALLABY: I think we're just not in a place where it's credible to believe that the government can take over all AI development. That would take such a radical act of nationalization and disruption. I think that's probably too extreme. But I think what you can do, and what's kind of happening anyway is that there will be a huge amount of dialogue between the AI developers and people in government. And people in government are already going to the obvious big developers, Microsoft, Google, et cetera, and saying, “You know, listen, we know that when you publish AI research papers, these are downloaded in vast numbers in China. Just be a bit selective about what you're publishing, a bit careful.”
SIERRA: I've lived through a lot of the emergence of the internet in my life, and I don't really recall a moment where regulators or “good guys” seemed to be ahead of “bad guys” or the rogue actors. Do you think that this will be the case with AI as well? Has it been the case with AI as well?
HAVEN: There's a lot of literature and research on what's called the pacing problem, which is exactly what you're talking about. That technological development tends to outpace regulation, but in many ways that's less about the regulation of the bad guys and more about the regulation of the guys, of the companies who have the greatest access to data, compute, talent, and money, and who are, you know, falling outside of regulatory controls. So I would back up and say I think that we have a pacing problem with, you know, regulating theoretically well-intentioned companies or, you know, capitalist markets. That said, we know that humans will use technical systems in all kinds of surprising ways, both, you know, wonderful and entertaining and also nefarious and adversarial. And so, I think regulating with the idea that we're only regulating the good actors or the sort of known actors and not considering the adversarial uses of AI would be a huge mistake.
SIERRA: One major concern that people have is that very soon we won't know if a piece of content: writing, news, music, et cetera, was created by AI. Do you think there should be a rule that AI always has to tell us that it's AI?
MALLABY: Yes. I think there's a question about how you enforce the rule if some of the AIs are based in Russia or wherever. But here again, I think the optimistic vision is that, you know, the responsible players in AI are going to include what they call watermarks in the content that is generated - computer-generated music, computer-generated images, computer-generated text - all of these things will have signatures. So you, as a normal internet surfer, could see a great picture on some internet platform and then click, you know, an information button and ask, who generated this? Was it a human? Was it a machine? And if it's a machine, the machine would reveal its identity to you. I think that's a super useful protection against deepfakes. The risk is, of course, that responsible creators of AI will do that, and irresponsible people will be offshore and they won't, and the bad actors will use the irresponsible versions of AI. So then comes the question, what do you do about that? I think this is all something that's being discussed and worked out in real time, but those are some of the indications of where we might find a solution.
There’s just so much to consider when we look to the future of AI. And while the United States may be ahead on the development, we’re way behind on the regulation. The rules and guidelines we establish today are likely to have profound impacts on the future of society. Getting AI right could be a very good thing, and getting it wrong could be a very bad thing indeed. Almost everyone on earth has an interest in getting it right.
SIERRA: What makes a responsible AI?
MALLABY: Part of it is designing it in such a way that humans feel agency and feel that they can understand what the AI is doing. There's a whole field of trying to think of ways to reduce the black-box element of artificial intelligence, so that explainability is improved. So when you're given an output, you might be given, A, a confidence level around how confident the algorithm is that it got that right, and, B, maybe some indication of how it got to that answer. There's a whole field now called alignment within computer science, which is to align the machines with what humans want from machines. And again, this is the sort of thing which gives me some optimism, that the people building this stuff care about making it good.
HAVEN: I think that there is a very strong desire to see a check on what feels like a runaway industry that has amassed an enormous amount of power and money and decision-making ability to set the agenda on a technology that will impact us all. And I think that we need to think about the AI moment in that way, not as a moment of clamping down and, you know, shrinking opportunity, but as actually a moment of hugely increased opportunity to think about the ways in which we can use a technology like this to mitigate climate change, to find new energy sources, to solve seemingly intractable health problems. And I think that is entirely possible, but I also think that it requires societal investment and agency and shaping that we don't have right now.
For resources used in this episode and more information, visit CFR.org/whyitmatters and take a look at the show notes. If you ever have any questions or suggestions, or just want to chat with us, email us at [email protected], or you can hit us up on Twitter at @CFR_org.
Why It Matters is a production of the Council on Foreign Relations. The opinions expressed on the show are solely those of the guests, not of CFR, which takes no institutional positions on matters of policy.
The show is produced by Asher Ross and me, Gabrielle Sierra. Our sound designer is Markus Zakaria. Our associate podcast producer is Molly McAnany. Our interns this summer are Isabella Quercia and Jiwon Lim. Special shout out to Noah Berman for his production help on this two-parter episode.
Robert McMahon is our Managing Editor, and Doug Halsey is our Chief Digital Officer. Extra help for this episode was provided by Mariel Ferragamo.
Our theme music is composed by Ceiri Torjussen. We’d also like to thank Richard Haass, Jeff Reinke, and our co-creator Jeremy Sherlick.
You can subscribe to the show on Apple Podcasts, Spotify, Stitcher, YouTube or wherever you get your audio.
For Why It Matters, this is Gabrielle Sierra signing off. See you soon!