Something big is happening in AI, but panic is the wrong reaction

February 28, 2026

When the World’s Fair came to Queens, New York, in 1964, robots were shown taking over housework, coming soon to a home near you. When the Fair closed, the exhibits moved to Disney World and made the same claim for the next 30 years: the robots are coming, just around the corner. Except they didn’t.

In the 1990s, the expansion of distributed computing power, and the vast sums spent on it, led to new claims about massive increases in productivity that would soon be unleashed. Except they weren’t. It took a long time, and associated changes in how work was organized, to drive productivity improvements.

In the early 2000s, advances in data science and the use of machine learning in predictions raised new alarms, with claims emerging in the 2010s that as many as half of all jobs were “at risk” of being taken over by new AI tools. By the end of that decade, the perceived threat had shifted back to robot-like devices that would soon take over blue-collar jobs, with the claim that truck drivers would be obsolete as soon as 2019. Except they weren’t. The predicted displacement from robots in manufacturing has not happened either; new robots are actually associated with growth in employment.

Experts have a long history of torturing us with predictions about how technology will wipe us out, first taking our jobs and then getting rid of us altogether because humans are a bother. The AI panic around Large Language Models over the last three years is no exception.

The inconvenient truth is that by 2025, it was hard to find examples where LLMs had actually taken over lots of jobs. The layoffs that were supposedly related to AI look increasingly like they weren’t—at best, they were made in anticipation that AI would replace workers. Even OpenAI CEO Sam Altman has said there is “AI washing” going on, with these AI-related layoffs being mostly smoke and mirrors.

We are back in panic mode in 2026, brought on by new claims about the dangers of AI, even though we don’t see evidence yet of these changes. 

Do you see a pattern here? Scientists and developers are rightly excited about a new innovation, and they are happy to imagine out loud how the new tools could be used. Then vendors arrive to sell those tools, and they push the claims hard. This is the beginning of the hype cycle. They aren’t asking whether those uses would be practical: What will it cost? What other changes are required for it to work? Does anyone need the tools in the first place?

Colleagues in academia have found that, among the public companies where they could trace AI adoption, three-quarters got little benefit from it, only 5% used it in a systematic way, and it has not cut many jobs. My own research does something a little different, looking at individual workplaces to see what happens when AI is actually introduced: What did the work look like before, and what does it look like afterward? Here is why the spread of AI is slower than we think, and why it hasn’t actually been taking over many jobs.

The reality of AI adoption is different than the fears

First, AI is expensive to introduce. The LLM companies are not in the business of giving these tools away, and the really good ones cost a lot to use. The bet that they will inevitably get cheaper is not obvious. While there are tons of vendors offering LLM tools, almost all are built on core technology from six providers who already control almost 80% of the market. Computing time is not getting that much cheaper, and the electricity to power it is jumping in price.

But the biggest cost is the time and energy needed to configure these tools for your own organization and keep them up to date. Most of those costs are front-loaded. We still need human backup to solve the problems that the LLMs can’t, and the productivity improvements that could lead to fewer workers come much later. Selling an expensive, front-loaded project with substantial and continuing IT costs to a CFO looking for a return on investment is difficult when the benefits are uncertain and only show up years later.

Second, and related to the ROI challenge, there is a misplaced focus on eliminating low-skill work. There are two lessons here. The first is that we don’t save much money by cutting a bunch of minimum-wage jobs, especially when we still need employees to monitor and troubleshoot the AI tools. The second is that simple white-collar jobs are simple because they don’t require much judgment and tend to be binary: identify which form this is and put it in the right pile. But they have to be right every time. Those are perfect tasks for Machine Learning, but Machine Learning is also a lot more expensive than using LLMs, because a model has to be built for each task and has to be monitored and adjusted almost constantly.

Third, LLMs can take over tasks in more complicated jobs where the output just has to be reasonably good, not perfect. They are cheaper to use than Machine Learning, but they still require monitoring and checking. And a typical human job involves a large number of discrete and complicated tasks that cannot be automated, or at least not yet.

LLMs can really help with programming tasks, for example, but computer programmers spend as much as 70% of their time on tasks other than programming, which mainly involve dealing with other employees. If, say, LLMs can take over the 20% of the time that school principals spend preparing reports, we can’t cut 20% of each principal. But we can have them do something new.

The real benefit of LLMs, I believe, won’t come in cost savings; rather, it will come from letting us do new things we haven’t thought of yet. For an analogy, look back at the introduction of search engines, which massively cut the time needed to do research and get answers. I’ve never heard that search engines caused massive job losses. Instead, they created new businesses, new ways of working, and new jobs. Most businesses, for example, are awash in data that has been too difficult to organize for them to even look at. If the latest Claude/Anthropic tool can do as much with analyses as is claimed, it could spend a few years just making sense of all that data.

Maybe we should stop fixating on what AI is cutting (headcount reduction) and focus instead on what it is growing: all the new products and new solutions that AI may let us create.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

The post Something big is happening in AI, but panic is the wrong reaction appeared first on Fortune.
