U.S. technology firms have spent an estimated $400 billion this year on AI infrastructure. That spending is projected to increase to $3 trillion worldwide by 2028—raising questions about whether such investments can possibly pay off and about the nature of the technology itself.
Will the technology shift the balance between capital and labor? Could it lead to runaway growth of the economy? And is AI a fundamentally extractive technology?
Those are just a few of the questions that came up in my recent conversation with FP economics columnist Adam Tooze on the podcast we co-host, Ones and Tooze. What follows is an excerpt, edited for length and clarity. For the full conversation, look for Ones and Tooze wherever you get your podcasts. And check out Adam’s Substack newsletter.
Cameron Abadi: AI is often described by economists as a “general-purpose technology.” What are the economic implications of that designation?
Adam Tooze: This is a great place to start, I think, because it takes us to the heart of how economists are trying to figure out how AI relates to the economy and to the familiar history of economic development and technological change. "General-purpose technology" is a phrase that economists came up with to describe, in some ways, a puzzle: technologies whose immediate impact on the economy is sometimes subtle and hard to detect, yet which over time become indispensable and almost impossible to imagine the world without. The sort of thing we're talking about is steam power or electrification or the internal combustion engine or semiconductors. And the idea is that AI might be that kind of technology.
So the list is short. The question with AI is not so much is it a general-purpose technology, but is it something more than just a normal general-purpose technology? Is it something hyper? Is it, in some senses, the end point of all technological development because it's a technology about technology? Because what we're doing here is applying technology to thinking, which is the source of technology. And so that then raises a bunch of other questions. I think most economists are agreed that this is a normal general-purpose technology. The question is, could it be some sort of unprecedented general-purpose technology? If it were, it would again impact the very heart of economic thinking about technological change.
And this goes to a weird aspect of economic thinking about economic development. If you look at the standard mid-20th-century neoclassical growth models, in which capital and labor are combined to produce output, which grows over time because labor becomes more productive as you add more capital, or machines become more productive because you have more labor, those models don't actually predict sustained growth. They predict convergence to some level of GDP, after which growth subsides. And the way that technology figures in standard growth models is as the factor that enables capital and labor together to be increasingly productive over time. This is the notion of total factor productivity.
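To make that logic concrete, here is a stylized textbook version of the growth model being described; the Cobb-Douglas form below is a standard illustration, not a formula taken from the conversation:

```latex
% Stylized neoclassical production function: output Y from capital K and
% labor L, scaled by total factor productivity A (the "technology" term).
\[
  Y = A\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1 .
\]
% Because \alpha < 1, capital runs into diminishing returns: accumulating K
% alone pushes the economy toward a steady state rather than sustained growth.
% Long-run growth in output per worker has to come from growth in A.
```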
The real kicker here is that if AI turns out to be an R&D-enhancing technological innovation, like the invention of the big lab, for instance, then all of a sudden we might actually see not just a one-off period of improved growth and higher GDP, which is what we've seen with general-purpose technologies in the past, but a quantum leap into an era in which, because we can think faster, we can innovate faster, and so the extra juice that innovation adds to economic growth is itself growing over time.
And so rather than having diminishing returns in research, which is ultimately the biggest nightmare of conventional economic thinking—that the low-hanging fruit have all been exhausted—we would, in fact, be in an era in which, through AI, we can in fact increase the pace at which we innovate progressively. So we could leap, quite suddenly, to growth rates of 20 percent per annum in a sustained way. Because we’re not just getting smarter, we’re getting smarter at getting smarter. There is a second level. And that’s really what the debate in AI is about: Is AI a normal general-purpose technology, or is it this other type of break in historical development?
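For a sense of scale, the compounding arithmetic behind a sustained 20 percent growth rate (an illustration of the figure mentioned above, not a projection) works out as follows:

```latex
% Doubling time at 20 percent annual growth, and cumulative growth over 20 years:
\[
  t_{\text{double}} = \frac{\ln 2}{\ln 1.2} \approx 3.8 \text{ years},
  \qquad
  1.2^{20} \approx 38, \qquad 1.02^{20} \approx 1.5 .
\]
% At 20 percent a year, output roughly doubles every four years and is nearly
% 40 times larger after two decades, versus roughly 1.5 times larger at the
% 2 percent rates advanced economies are accustomed to.
```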
CA: Yeah, I suppose any time the prefix "hyper-" is being invoked, the potential for something ominous is there. But to get grounded again for a second: It does seem like a lot of these conversations about the future of AI are extrapolating from its existing growth, but I wonder whether the trajectory of AI improvements is itself changing. In some sense, is it already slowing down? That seems to be the suggestion from some of the most recent AI services that have been released. Are we already seeing diminishing marginal returns from this technology?
AT: Well, this is a really hot item of debate within the AI community. And I'm struck, as a novice reading my way into this, by the intensity of this debate. The question is whether the scaling laws apply, or whether we're going to hit a wall. Those are at least two of the terms being used to discuss this. So the idea of the scaling laws was launched by a 2020 paper from OpenAI, which showed that model performance on language-modeling tasks improves smoothly with more data, more parameters, and more compute. And this is the simplest kind of vision of how to expand an AI system and the functioning of the model: simply add more of those three components. And the debate within the AI community is not simply about those laws as descriptions of AI as a technology, but as management principles. If you are OpenAI, is the smart thing to do simply to scale up those three inputs to the model? More parameters, more data, and more compute—is that the way in which you grow? And there has been a series of arguments within the community about trying to optimize that. So, if you like, a technology of the technology describing AI's growth. Can you add more data relative to the parameters that you're trying to model? Is the most sensible thing really simply to double down on number-crunching capacity? This is one of the arguments that's going on.
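As a rough illustration of what "scaling smoothly" means, a scaling law is typically written as a power law relating loss to model size (or data, or compute). The sketch below is a toy example; the constants are placeholders of roughly the order reported in the scaling-laws literature, used here purely for demonstration:

```python
# Toy illustration of a scaling law: language-modeling loss falling as a
# power law in model size N (number of parameters). The constants below are
# placeholders, not fitted values from any particular paper.
N_C = 8.8e13   # hypothetical "critical" parameter count
ALPHA = 0.076  # hypothetical scaling exponent

def predicted_loss(n_params: float) -> float:
    """Predicted loss for a model with n_params parameters: (N_C / N) ** ALPHA."""
    return (N_C / n_params) ** ALPHA

if __name__ == "__main__":
    for n in (1e8, 1e9, 1e10, 1e11, 1e12):
        # Each tenfold increase in parameters buys a smooth, predictable
        # reduction in loss -- no sudden jumps and, on this view, no hard wall.
        print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")
```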
And the next argument is, if you do optimize, do you still ultimately hit a wall? And very senior people like Meta's chief AI scientist are convinced that even if it's not a wall that we're headed toward, there is, at least on the current configuration of AI models, a series of constraints which mean that simply trying to estimate more parameters with more data and more computing power is not going to allow the industry, and the models as they're currently set up, to jump into a qualitatively new type of AI. And he is arguing that the fundamental problems are that you're not incorporating in the data anything other than text. This is fundamentally a text-based form of modeling. You're not retaining enough data as memory, so you're not building up enough experience within the models. And the agenda of simply ramping up more and more computing power is not going to allow you to break free of these constraints.

And so to double down on this point, this question you're asking as a kind of academic question has, within the industry itself, turned into essentially a kind of management question. If it is true that we face diminishing returns, rather than just being able to scale up, what should we do? And that then informs corporate strategy, with players like Meta trying to diversify the range of data that they feed their models with, for instance, so as to give them more real-world relevance, or shifting the models from simply estimating predictive algorithms—so what you're trying to do is predict the most likely next token, the most likely next word—to trying to get the models to think in terms of actually constructing images of the world, models of the world. Not just, as it were, individual predictions of what comes next in the sentence against the backdrop of all the other sentences which the model is processing, but: here is my vision of what the world is, so as to be able, in a more, if you like, normatively grounded or empirically grounded way, to move to more intelligent statements and predictions and interactions with the world. So rather than a bit-by-bit, sentence-by-sentence, question-answer interaction, to move toward a kind of Weltbild, a kind of vision of the world as mirrored within the AI. So these are the kinds of conversations which, again, are not abstract academic conversations. They are shaping the investment strategies and billions of dollars of investment in firms like Meta.
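To ground the distinction being drawn here, the sketch below shows, in toy form, what "predict the most likely next token" amounts to: counting which word tends to follow which in a corpus and emitting the most frequent successor. Real models learn these conditional probabilities with neural networks at vast scale; this is only a minimal illustration of the objective, not of any production system:

```python
from collections import Counter, defaultdict

# Minimal next-token predictor: tally which word follows which in a tiny
# corpus, then predict the most frequently observed successor.
corpus = "the cat sat on the mat the cat slept on the sofa".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or a placeholder."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

if __name__ == "__main__":
    # "cat" follows "the" twice, versus once each for "mat" and "sofa".
    print(predict_next("the"))
    # A world-model approach, by contrast, would try to represent the cat,
    # the mat, and their relations, not just the statistics of the sentence.
    print(predict_next("cat"))
```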
CA: Is it useful to think of AI as a fundamentally extractive or exploitative technology, damaging the planet and producing new inequalities?
AT: I appreciate the polemical force of this kind of argument. In some ways, extraction is perhaps less ethically sound as a mode of economic activity than production, or manufacturing, or maybe even commerce. But it's difficult to see why that makes AI distinctive. The phrase from Kate Crawford's 2021 book is that AI is neither artificial nor intelligent; it's made from natural resources and human labor. Which is, I think, as much as to say that artificial intelligence is in the world. Of course, it is not just about intelligence as a disembodied thought process but is fueled by material inputs. And we're going to need a lot of energy, and water for cooling, and some land to install these facilities on.
I think there are two aspects of this where it's worth digging in a bit more and where it could be illuminating. The first is that it reminds us that labor is involved. People might think that's strange because, first and foremost, we think of AI as displacing labor. But the mind-boggling fact is that none of these models would operate in the vaguely human way that they do unless they were anchored in something that I understand the industry calls "ground truth." Ground truth is a phrase they've taken from digital geography, from GPS. A ground truth is a data point in the GPS system which isn't just something you've seen from a satellite or constructed through extrapolation, but one where some person has been on the ground to confirm that the mountain is where it is said to be, or that the building is where it's supposed to be. It's been checked out by a person. And the human-like quality of the [large language] models that we know is in large part owed to, and essentially depends on, massive amounts of coding of texts, where basically you're associating the letters C-A-T with the word cat and then associating it with an image.
And basically the big firms have employed millions of people—it's hard to get a precise estimate, but we're talking about a large fraction of the transient online digital-services workforce doing this. And it's basically just going through and anchoring the AI's manipulation of symbols in relatively crude chains: c-a-t, furry, animal, sits on people's laps, pet, that kind of chain. Now the model can eventually pick up the way those chains appear, but at some point the whole thing has got to be anchored in a ground truth, which a human puts in. And for that, there is a huge amount of underpaid human labor being mined, being used.
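As a hypothetical picture of what that anchoring work produces, a single piece of labeled "ground truth" might be recorded as something like the dictionary below; all field names and values here are invented for illustration:

```python
# Hypothetical shape of one human-supplied "ground truth" annotation record.
# Field names and values are invented for illustration only.
annotation = {
    "item_id": "img_000123",
    "media_type": "image",
    "label": "cat",                            # the letters C-A-T tied to the thing itself
    "attributes": ["furry", "animal", "pet", "sits on laps"],
    "annotator_id": "worker_4821",             # the human in the loop
    "review_status": "verified",               # double-checked by a second person
}

# Millions of records like this anchor the model's statistical associations
# in judgments that an actual person made.
print(annotation["label"], annotation["attributes"])
```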
And then the other thing that's been used is, of course, the huge volumes of text, which the companies, in what is surely one of the most gigantic acts of—well, they would insist of course it's all fair use, but a whole series of lawsuits that have been brought against Meta, against Apple, and against other players in this space allege on the contrary that the intellectual property of Reuters, the Encyclopedia Britannica, and Disney has been plundered for the purposes of constructing these models. And there is, for those of us who have a bibliophile tendency, the nightmarish vision of one firm buying millions of books, literally stripping the pages out, and constructing a physical graveyard of books that could be churned through digital scanners and then fed into their system.

And whereas with the Google Books Project fairly serious copyright issues were raised immediately, and everyone understood that this was a major struggle over IP, the folks who have done AI a decade later have just barreled into this, gone ahead and done it, and are now waging the legal fight through courts in the United States on a case-by-case basis. And so far, they are winning. They're winning the cases on the whole because the judges are so impressed by the creative use being made that they are labeling it fair use. That, I think, is the most obvious element where we're really talking about extraction. In some cases, you're really treating a book as though it were a raw material.
CA: Henry Farrell, a political scientist at Johns Hopkins University, has been describing AI as a “cultural and social technology,” suggesting it should be thought of as something that will fundamentally reorder our social and cultural life, in the way that markets are a technology, or bureaucracies are a kind of technology, or print for that matter is a technology that has reordered society. Do you find that plausible, and if so, what aspects of social life are potentially subject to change by AI? Could our fundamental relationship to language or creativity be changed by this technology?
AT: Yeah, I think this is exactly the right approach. The premise should be: Look, let's take all of the silly, kind of sci-fi anthropomorphist nonsense off the table here. This is not a human being, and this is not human intelligence. But what it clearly is, is an amazing statistical, indexing, searching, generative—it's a symbol-generating mechanism, right? It's a bit like synthesizer music or something like that. And we've not had one as powerful as this before, but we have had symbol-generating mechanisms before. And they're very powerful. And they change societies in lots of different complex ways. And they definitely change our relationship to language, in the same way as the development of writing changed people's relationship to language, and the development of computers has, and the development of the printed book and the page, and so on.
So I think this is exactly the way to think about this. And to take off the table certainly at this stage, you know, all of the really kind of obscure metaphysical arguments about whether this is general intelligence or really human or whether it—of course it doesn’t. But can a novel, for instance, induce feelings in you? Can you fall in love with a character in a TV series? Of course you can. It’s happened to most of us in our lifetimes, right? So could AI, if it has the power to, in a very sympathetic way, continuously affirm various things in you or spot patterns in your speech pattern or your thoughts and then sympathetically render them back to you, could it induce different types of emotions in you? Of course it can. That’s the scary thing about symbol systems, whether it’s music that can make us cry or feel very sexy or very excited or battle cries that will take you over the top and to your death whilst cheering, like these mechanisms work. And this is a very powerful engine for generating those kinds of effects.