Behold the decade of mid tech!
That is what I want to say every time someone asks me, “What about A.I.?” with the breathless anticipation of a boy who thinks this is the summer he finally gets to touch a boob. I’m far from a Luddite. It is precisely because I use new technology that I know mid when I see it.
Academics are rarely good stand-ins for typical workers. But the mid technology revolution is an exception. It has come for us first. Some of it has even come from us, genuinely exciting academic inventions and research science that could positively contribute to society. But what we’ve already seen in academia is that the use cases for artificial intelligence across every domain of work and life have started to get silly really fast. Most of us aren’t using A.I. to save lives faster and better. We are using A.I. to make mediocre improvements, such as emailing more. Even the most enthusiastic papers about A.I.’s power to augment white-collar work have struggled to come up with something more exciting than “A brief that once took two days to write will now take two hours!”
Mid tech’s best innovation is a threat.
A.I. is one of many technologies that promise transformation through iteration rather than disruption. Consumer automation once promised seamless checkout experiences that empowered customers to bag their own groceries. It turns out that checkout automation is pretty mid — cashiers are still better at managing points of sale. A.I.-based facial recognition similarly promised a smoother, faster way to verify who you are at places like the airport. But the T.S.A.’s adoption of the technology (complete with unresolved privacy concerns) hasn’t particularly revolutionized the airport experience or made security screening lines shorter. I’ll just say, it all feels pretty mid to me.
The economists Daron Acemoglu and Pascual Restrepo call these kinds of technological fizzles “so-so” technologies. They change some jobs. They’re kind of nifty for a while. Eventually they become background noise or are flat-out annoying, say, when you’re bagging two weeks’ worth of your own groceries.
Artificial intelligence is supposedly more radical than automation. Tech billionaires promise us that workers who can’t or won’t use A.I. will be left behind. Politicians promise to make policy that unleashes the power of A.I. to do … something, though many of them aren’t exactly sure what. Consumers who fancy themselves early adopters get a lot of mileage out of A.I.’s predictive power, but they accept a lot of bugginess and poor performance to live in the future before everyone else.
The rest of us are using this technology for far more mundane purposes. A.I. spits out meal plans with the right amount of macros, tells us when our calendars are overscheduled and helps write emails that no one wants. That’s a mid revolution of mid tasks.
Of course, A.I., if applied properly, can save lives. It has been useful for producing medical protocols and spotting patterns in radiology scans. But crucially, that kind of A.I. requires people who know how to use it. Speeding up interpretations of radiology scans helps only people who have a medical doctor who can act on them. More efficient analysis of experimental data increases productivity for experts who know how to use the A.I. analysis and, more important, how to verify its quality. A.I.’s most revolutionary potential is helping experts apply their expertise better and faster. But for that to work, there have to be experts.
That is the big danger of hyping mid tech. Hype isn’t held to account for being accurate, only for being compelling. Mark Cuban exemplified this in a recent post on the social media platform Bluesky. He imagined an A.I.-enabled world where a worker with “zero education” uses A.I. and a skilled worker doesn’t. The worker who gets on the A.I. train learns to ask the right questions and the numbskull of a skilled worker does not. The former will often be, in Cuban’s analysis, the more productive employee.
The problem is that asking the right questions requires the opposite of having zero education. You can’t just learn how to craft a prompt for an A.I. chatbot without first having the experience, exposure and, yes, education to know what the heck you are doing. The reality — and the science — is clear that learning is a messy, nonlinear human development process that resists efficiency. A.I. cannot replace it.
But A.I. is a parasite. It attaches itself to a robust learning ecosystem and speeds up some parts of the decision process. The parasite and the host can peacefully coexist as long as the parasite does not starve its host. The political problem with A.I.’s hype is that its most compelling use case is starving the host — fewer teachers, fewer degrees, fewer workers, fewer healthy information environments.
I have seen this sort of technological Catch-22 in higher education before. Academia is a major institutional client for technology solutions. Schools helped Zoom beat Skype during the Covid-19 pivot to remote learning. Once upon a time, schools also helped the flagging Apple shore up its bottom line while it found a consumer market for its devices. The technology revolutions that come for America’s workplaces have usually come through mine first.
Despite our reputation, most of the academics I know welcome anything that helps us do our jobs. We initially welcomed A.I. with open arms. Then the technology seemed to create more problems than it solved. The big one for us was cheating.
Every day an internet ad shows me a way that A.I. can predict my lecture, transcribe my lecture while a student presumably does something other than listen, annotate the lecture, anticipate essay prompts, research questions, test questions and then, finally, write an assigned paper. How can professors out-teach an exponentially generative prediction machine? How can we inculcate academic values like risk-taking, deep reading and honesty when it’s this cheap and easy to bypass them?
Academics initially lost our minds over the obvious threats to academic integrity. Then a mysterious thing happened. The typical higher education line on A.I. pivoted from alarm to augmentation. We need to get on with the future, figure out how to cheat-proof our teaching and, while we are at it, use A.I. to do some of our own work, people said. Every academic friend of mine has now encountered a letter of recommendation or a research peer review that was obviously written by A.I. Its wide adoption — and its midness — is threatening to topple an already fragile but important model of peer-reviewed research, deliberate scholarship and well-educated expertise. Which is just what we need in the post-fact era: less research and more predicting what we want to hear.
This isn’t the first time institutions pivoted from concern to tech acceptance. The same thing happened in the 2010s with massive open online courses, or MOOCs. Tech evangelists promised that we would not need as many professors, for one expert could teach tens of thousands online! But MOOCs were a mid technology that could barely augment, much less replace, deep expertise. Receiving information is not the same as developing the facility to use it. That did not stop universities from downsizing experts or from making online videos. Now MOOCs have faded from glory, but in most cases, the experts haven’t returned.
A.I. is already promising that we won’t need institutions or expertise. It does not just speed up the process of writing a peer review of research; it also removes the requirement that one has read or understood the research it is reviewing. A.I.’s ultimate goal, according to boosters like Cuban, is to upskill workers — make them more productive — while delegitimizing degrees. Another way to put that is that A.I. wants workers who make decisions based on expertise without an institution that creates and certifies that expertise. Expertise without experts.
That tech fantasy is running on fumes. We all know it’s not going to work. But the fantasy compels risk-averse universities and excites financial speculators because it promises the power to control what learning does without paying the cost for how real learning happens. Tech has aimed its mid revolutions at higher education for decades, from TV learning to smartphone nudges. For now, A.I. as we know it is just like all of the ed-tech revolutions that have come across my desk and failed to revolutionize much. Most of them settle for what anyone with a lick of critical thinking could have said they were good for. They make modest augmentations to existing processes. Some of them create more work. Very few of them reduce busy work.
Mid tech revolutions have another thing in common: They justify employing fewer people and ask those left behind to do more with less.
If you want to see the actual revolutionary use case for A.I., don’t look to the biological sciences or universities. Look at Elon Musk’s so-called Department of Government Efficiency, which has reportedly considered using A.I. to help it find waste. Whether workers and their work are wasteful is a subjective call that A.I. cannot make. But it can justify what a decision maker wants to do. If Musk wants waste, A.I. can give him numbers to prove waste exists.
A.I. may be a mid technology with limited use cases to justify its financial and environmental costs. But it is a stellar tool for demoralizing workers who can, in the blink of a digital eye, be categorized as waste. Whatever A.I. has the potential to become, in this political environment it is most powerful when it is aimed at demoralizing workers.
This sort of mid tech would, in a perfect world, go the way of classroom TVs and MOOCs. It would find its niche and mildly reshape the way white-collar workers work, and Americans would mostly forget about its promise to transform our lives.
But we now live in a world where political might makes right. DOGE’s monthslong infomercial for A.I. reveals the difference that power can make to a mid technology. It does not have to be transformative to change how we live and work. In the wrong hands, mid tech is an antilabor hammer.