How ‘Jagged Intelligence’ Can Reframe the A.I. Debate

April 15, 2026

Say what you will about whether artificial intelligence will one day be as smart as a human. It has already become a star math student. Last summer, A.I. built by Google and OpenAI correctly answered five of six complex questions at the International Math Olympiad, an annual competition for the world’s top high school students.

A.I.’s common sense, however, may still be a bit lacking. A few months later, Anuradha Weeraman, a software engineer in Sri Lanka, noticed that leading A.I. systems struggled to answer what was essentially a trick question that most people would find laughably simple. When he told various chatbots that he needed to take his car to a repair shop that was only 50 meters away and asked if he should walk or drive, the bots told him to walk.

The strange way that A.I. looks like a genius at one moment and a dunce at the next is what researchers, engineers and economists call "jagged intelligence." They use this term to explain why A.I. is racing ahead in some areas — like math and computer programming — while still struggling to make headway in others.

The term, which is widely used by the people building A.I. and analyzing its effects, could help reframe the debate over whether these systems are becoming as smart as, or even smarter than, humans. Instead, researchers argue, A.I. is something completely different: far better than humans at some tasks and far worse at others.

Understanding those strengths and weaknesses can also help economists get a better handle on what A.I. means for the future of employment. While entry-level programmers have reason to worry about their jobs, for example, it is not so clear — at least right now — how A.I. will affect other work. But watching where A.I. starts to make rapid improvements could help predict what kinds of jobs will be affected by the technology.

“The performance of these systems varies, and it is not easy to tell when they will fail to do things a human can do,” Mr. Weeraman said.

The term “jagged intelligence” was coined by Andrej Karpathy, one of the founding researchers at OpenAI, a former head of self-driving technology at Tesla and, on social media, one of the most closely watched commentators on the rise of A.I.

“Some things work extremely well (by human standards) while some things fail catastrophically (again by human standards),” he wrote on social media in 2024, “and it’s not always obvious which is which.”

This, he wrote, is different from the human brain, “where a lot of knowledge and problem solving capabilities are all highly correlated and improve linearly all together, from birth to adulthood.”

Since OpenAI started the A.I. boom in 2022, tech executives have seesawed between warning that their new creations could have a devastating effect on white-collar jobs and downplaying the long-term impact on employment.

So far, outside of the technology industry, there is only anecdotal evidence that A.I. has become a job killer. But given how quickly the technology is improving, many tech experts argue that whether A.I. replaces other kinds of white-collar workers is not a question of if but when. Only a few years ago, these systems were just starting to show the most rudimentary programming skills.

“These systems have been showing incredible improvements,” said Alex Imas, an economist at the University of Chicago’s Booth School of Business. “Every time there is a major new release, people are surprised by how much it can do.”

But technology that adds to what workers can do without replacing them has plenty of precedent, and that’s what some A.I. researchers and economists are arguing will happen. As far back as the 1960s, a pocket calculator could add, subtract and multiply much faster than a person. That did not mean a calculator could replace an accountant.

Now, systems like Anthropic’s Claude and OpenAI’s Codex can write computer code much faster, too. But they are not that good at understanding how each piece of code fits into a larger software application. They need human help with that.

“If a job involves a bunch of different tasks — and most jobs do — some tasks will be automated and some will not,” Dr. Imas said. “And if that is the case, the worker may have more time to do bigger things.”

Last month, François Chollet, a noted A.I. researcher, released a new digital benchmark test called ARC-AGI 3. It asks for solutions to hundreds of gamelike puzzles without providing a single instruction for how to solve them. All of the puzzles can be solved by an average, untrained person, but the leading A.I. systems fail to master any of them, according to testing done by Mr. Chollet and the ARC Prize, the nonprofit research lab that oversees the test.

Once people realize that A.I. is a jagged intelligence, experts like Mr. Chollet said, they develop a better understanding of how A.I. is likely to evolve in the coming years — and what effect it might have on the labor market.

“This will depend on what tasks it automates and how and when,” Dr. Imas said.

A.I. systems like Claude and OpenAI’s ChatGPT learn their skills by pinpointing patterns in digital data, including Wikipedia articles, news stories, computer programs and other text culled from across the internet. But that gets them only so far.

The internet holds only a small fraction of human knowledge. It records what people do in the digital world, but contains comparatively little information about what happens in the physical world.

That means these systems can write emails, answer questions, riff on almost any topic and generate computer code. But because A.I. systems reproduce the patterns they find in digital data, they are not good at planning ahead, generating new ideas or tackling tasks they have not seen before.

“A.I. does not have general intelligence,” Mr. Chollet said. “What it has is a lot of different skills.”

Now, companies like Anthropic and OpenAI are teaching these systems additional skills using a technique called reinforcement learning. By working through thousands of math problems, for example, they can learn which methods lead to the right answer and which do not.

This works well in areas like math and computer programming, where A.I. companies can clearly define good and bad behavior. The answer to a math problem is either right or wrong. Computer code either passes a performance test or it fails.

But reinforcement learning does not work as well in areas like creative writing or philosophy or even some of the sciences, where the distinction between good and bad is harder to pin down.

“Coding — which everyone is enthusiastic about at the moment — is not representative of everything A.I. does,” said Joshua Gans, an economist at the University of Toronto’s Rotman School of Management. “With coding, it is much easier to use a feedback loop to figure out what is working and what isn’t.”

(The New York Times sued OpenAI and Microsoft in 2023 for copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)

For users, it is often hard to tell what A.I. does well and what it does not. And when people finally get a firm handle on the strengths and weaknesses of the systems, the technology changes.

“The jaggedness of A.I. means that the problems can come from anywhere,” Dr. Gans said. “There are gaps, and we don’t always know where the gaps are.”

The wild card is that A.I. is quickly improving. Many of the weaknesses that Dr. Karpathy and others pointed out in 2024 and early 2025 are no longer there. Companies will find other shortcomings and fix them as well.

“The valleys in the technology are closing,” Dr. Imas said.

Cade Metz is a Times reporter who writes about artificial intelligence, driverless cars, robotics, virtual reality and other emerging areas of technology.

The post How ‘Jagged Intelligence’ Can Reframe the A.I. Debate appeared first on New York Times.


DNYUZ © 2026
