An expert on the computer chips powering the AI revolution weighs in on how the technology might affect us

Earlier this year, the Chinese company DeepSeek announced that it could perform AI tasks at a fraction of the cost of competing models like OpenAI's ChatGPT. It also censors its responses to questions about Chinese politics.
That raises many questions about the future of AI. What are the production, technological, and political challenges facing AI development? Will AI live up to its promise of making life easier and better, or will it destroy jobs and potentially run amok? How will geopolitics affect AI development, and what’s been the result of U.S. efforts to build homegrown advanced computing chips for AI?
To answer these questions, Michael Klein, William L. Clayton Professor of International Economic Affairs, recently spoke with Chris Miller, professor of international history and author of Chip War: The Fight for the World’s Most Critical Technology, which won the 2022 Financial Times Book of the Year Award, for an EconoFact podcast, from which this is adapted. Both are on the faculty of The Fletcher School.
Michael Klein: DeepSeek, a Chinese AI company, seems inexpensive to use. Is it?
Chris Miller: A broader question is whether DeepSeek is charging at cost for using the model. Is it charging with a bit of a margin? Or is it even offering use below cost to win market share?
The strongest version of the argument is that DeepSeek might be doing to AI what China’s already done to other industries like solar panels: provide subsidized production, lower cost to win market share, and imperil the business models of foreign firms. I think that’s an interesting issue to track.
It might be related to the fact that when DeepSeek first launched, demand for its services was so great that it couldn't serve everyone and had to shut off access to non-Chinese users. That again suggested they weren't pricing at a level they could sustain in the long run.
I’ve read that if you ask DeepSeek questions about the Tiananmen Square massacre, it doesn’t seem to answer those. Is that correct?
That’s true. DeepSeek, like a fair number of Chinese models, is censored for political reasons. I think we shouldn’t, though, conclude that it’s not a capable model. I think the censorship is probably fairly narrow. It won’t talk about Xi Jinping. It won’t talk about Tiananmen Square. It’s cautious in talking about Winnie the Pooh. But that doesn’t mean that for 99% of use cases, it won’t give you a pretty good answer.
Still, it seems like there’s something Orwellian about this—do you agree?
Yes. I think we’re entering a new era in which we no longer search for information on the internet. We ask the internet and these AI models to give us information. This is a particular challenge in authoritarian countries with censorship, like China, where you’re going to have the government actively saying to companies, do this, don’t do that.
There are other issues, too. Unlike Google, which gives you 10 different results when you search for something, ChatGPT gives you a single answer; it will only give you two if you explicitly ask for two.
I think this will pose an interesting set of dilemmas and trade-offs for us as a society, as we increasingly learn to trust AI models as arbiters of truth. And I think we’ll have to collectively develop an ability to parse what models are telling us and learn to assess when they’re likely to give us accurate versus inaccurate answers.
Is it conceivable that this kind of thing, with national politics influencing AI responses, will happen in the U.S.?
There’s inevitably going to be a relationship between the politics of the day and the outputs of AI models. That’s because humans play a role in training AI models, in helping models prioritize what to focus on. They play a role in the quality and safety evaluations that are done at the end of the training process.
So to the extent that you prioritize or deprioritize anything, that will be present in the model. I think you see this from both sides of the political spectrum. The right alleges that models are being too politically correct, for example. I think you’re seeing now, with political change in the last couple of months, concerns on the left that you’ll have models being biased in a more right-wing direction.
I think there’s no single way to be sure models aren’t biased. They’re inevitably biased by the information you train them on, which is in turn a feature of the choices that humans make when they set up the training. That speaks to the benefits of having competition—multiple models to choose from.
But it also speaks to the importance of teaching individuals to be discerning users, just like we want people to be discerning readers of the news, discerning browsers of the internet. We’ll also need to become discerning users of models—to learn what we should trust and what we should not.
What do you think the economic landscape will look like as AI becomes more pervasive?
One of the key areas of uncertainty in answering this question is the rate of change. We've always had technology improving, and different ways to measure it. If AI improves at roughly the rate that computing technologies in general have improved over the past half century, we shouldn't expect its social or labor market implications to be any more disruptive than theirs were.
Given that assessment, I think we’re probably overestimating the scale of labor market turnover or turmoil that we’re going to see. There are lots of examples from economic history of expectations of large-scale unemployment that weren’t matched in reality.
My favorite example is employment in bank branches. When the ATM was first invented, there were widespread predictions that banks would need fewer employees, because they wouldn’t be handing out cash. Today, there are more people working in banks, because they’re selling mortgages and other types of products.
The concern I have is if we have much more rapid progress in AI than prior trends, it’ll be harder to digest the labor market impact. But that could be a good scenario as well as a bad scenario.
If we reach an age of rapid productivity growth, that will create much more wealth. And then the question is just, can it be allocated in a way that addresses the labor market impact that’s created? But right now I don’t think we see evidence that we’re having this extraordinary acceleration in productivity growth. It’s certainly not present in economic data.
For now, I think the most reasonable assumption to make is that technology will continue to improve. We’ve reached a new phase of computing technology, which is AI. It’s going to be one of the key drivers of productivity improvements—but not a revolutionary one, or at least not any more revolutionary than smartphones or PCs or mainframe computers were in their day.
There’s also the deeper concern that in a world with pervasive AI, there could be apocalyptic possibilities. As a historian, what do you think about that?
I think it’s impossible to fully reject or disprove apocalyptic scenarios. But I’m struck by our ability over time to develop institutions to manage risks in other technologies. The fact that we’ve managed to live in the world for 70 years without nuclear weapons use, even though we have nuclear weapons, I think is a positive sign.
I’m much more worried that we slow down AI progress by wrongly fixating on the worst-case scenario. I think that there’s so much concern in the AI community about safety—and safety is important—that we aren’t looking enough at quality, which is even more important.
We should be focusing just as much on making sure we use AI in ways that improve outcomes in health care and in the provision of consumer and government services, for example. I worry much more about our slow adoption than about AI displacing or replacing humans, or causing us to go extinct.
With the Taiwan Semiconductor Manufacturing Company making 99% of the advanced chips that are used for AI, how does the fact that the Chinese Communist Party views Taiwan as a renegade province and threatens its current independence affect AI going forward?
There's a really direct relationship between peace in East Asia and progress in AI, because of the way supply chains are structured today: tech companies couldn't get the chips they need without access to Taiwan. It is at the absolute epicenter of AI supply chains. We often think of AI models as being disembodied, something that exists on the internet. But in fact, the physical products that make AI possible depend on supply chains that, to a very large extent, trace back to a single country in East Asia, Taiwan.
This is a key challenge to the future of the AI industry. It’s already the case that over the past couple of years, leading AI labs have struggled to get access to enough of the GPU chips and the high bandwidth memory needed to build AI servers and train and deploy AI systems.
How about shifting the industry to the United States? The 2022 CHIPS and Science Act was meant to bring microchip manufacturing to the United States. How successful has it been so far, and what are its prospects for success?
The goals of the CHIPS Act were to boost investment in chip-making facilities in the United States, and to encourage cutting-edge R&D in chip-making technologies in order to guarantee the U.S. technological lead in as many segments of the chip supply chain as possible.
We have seen a dramatic increase in investment in chip-making facilities in the U.S. over the past couple of years, relative to prior trends. That's directly due to the subsidies and tax credits that Congress offered via the CHIPS Act.
But there are some challenges in ensuring that these dollars invested result in the kinds of output you'd want to guarantee your economic security. One of the challenges is cost. It's some 30% more expensive to make chips in the U.S. than in Taiwan. So even if you're investing more, you're getting fewer chips per dollar of investment than you would in Taiwan.
Plus, there are many different types of chips, and the most cutting-edge, most advanced ones are still made only in Taiwan. Companies like Apple and Nvidia source some chips from the U.S., but they still source the most important chips from Taiwan.
In terms of cutting-edge R&D, I think we’re still in the early stages of the CHIPS Act beginning to set up the pipelines of talent from academia to industry. But I’m relatively optimistic that these institutions will have a major impact in solving the key challenge in the chip industry, which is moving ideas from the lab and universities towards the fabrication facilities, where they’re brought into production.