OpenAI on Friday unveiled a new artificial intelligence system, OpenAI o3, which is designed to “reason” through problems involving math, science and computer programming.
The company said that the system, which it is currently sharing only with safety and security testers, outperformed the industry’s leading A.I. technologies on standardized benchmark tests that rate skills in math, science, coding and logic.
The new system is the successor to o1, the reasoning system that the company introduced earlier this year. On a series of common programming tasks, the company said, OpenAI o3 was more than 20 percent more accurate than o1, and it even outperformed the company’s chief scientist, Jakub Pachocki, on a competitive programming test. OpenAI said it plans to roll the technology out to individuals and businesses early next year.
“This model is incredible at programming,” said Sam Altman, OpenAI’s chief executive, during an online presentation to reveal the new system. He added that at least one OpenAI programmer could still beat the system on this test.
The new technology is part of a wider effort to build A.I. systems that can reason through complex tasks. Earlier this week, Google unveiled similar technology, called Gemini 2.0 Flash Thinking Experimental, and shared it with a small number of testers.
These two companies and others aim to build systems that can carefully and logically solve a problem through a series of steps, each one building on the last. These technologies could be useful to computer programmers who use A.I. systems to write code or to students seeking help from automated tutors in areas like math and science.
With the debut of the ChatGPT chatbot in late 2022, OpenAI showed that machines could handle requests more like people, answering questions, writing term papers and generating computer code. But the responses were sometimes flawed.
ChatGPT learned its skills by analyzing enormous amounts of text culled from across the internet, including news articles, books, computer programs and chat logs. By pinpointing patterns, it learned to generate text on its own.
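The idea of learning to generate text by pinpointing patterns can be illustrated with a toy example. The sketch below is a bigram model, vastly simpler than the neural networks behind ChatGPT, and the tiny corpus is hypothetical; it shows only the core notion of counting which word tends to follow which and then sampling from those counts.

```python
import random
from collections import defaultdict

# A toy corpus standing in for the internet-scale text ChatGPT analyzed.
corpus = "the cat sat on the mat and the cat slept".split()

# Record, for each word, the words observed to follow it (the "pattern").
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:          # dead end: no observed successor
            break
        words.append(rng.choice(options))
    return " ".join(words)

text = generate("the", 6)
```

A real system replaces the lookup table with a neural network over far longer contexts, but the principle is the same: patterns in the training text drive the choice of each next word.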
Because the internet is filled with untruthful information, the technology learned to repeat the same untruths. Sometimes, it made things up — a phenomenon that scientists called “hallucination.”
OpenAI built its new system using what is called “reinforcement learning.” Through this process, a system can learn behavior through extensive trial and error. By working through various math problems, for instance, it can learn which techniques lead to the right answer and which do not. If it repeats this process with a very large number of problems, it can identify patterns.
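The trial-and-error loop described above can be sketched in miniature. The example below is a hypothetical illustration, not OpenAI's method: an agent tries three candidate "techniques" on practice problems (their hidden success rates are made up here), mostly exploits whichever looks best so far, occasionally explores, and gradually learns which technique leads to the right answer.

```python
import random

# Hidden success rate of each candidate technique (assumed for illustration).
SUCCESS_RATES = {0: 0.2, 1: 0.5, 2: 0.9}

def train(trials=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy trial and error: estimate each technique's success rate."""
    rng = random.Random(seed)
    counts = {t: 0 for t in SUCCESS_RATES}
    values = {t: 0.0 for t in SUCCESS_RATES}   # estimated success rates
    for _ in range(trials):
        # Mostly exploit the best-known technique; sometimes explore others.
        if rng.random() < epsilon:
            t = rng.choice(list(SUCCESS_RATES))
        else:
            t = max(values, key=values.get)
        # Attempt a problem; reward 1 if the technique produced a right answer.
        reward = 1.0 if rng.random() < SUCCESS_RATES[t] else 0.0
        counts[t] += 1
        # Incremental average: nudge the estimate toward the new outcome.
        values[t] += (reward - values[t]) / counts[t]
    return values

estimates = train()
best = max(estimates, key=estimates.get)
```

After enough trials the estimates converge toward the true rates, so the agent settles on the technique that most often yields the right answer, which is the pattern-identification step the paragraph describes.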
Though systems like o3 are designed to reason, they are based on the same core technology as the original ChatGPT. That means they may still get things wrong or hallucinate.
The system is designed to “think” through problems, breaking each one into pieces and looking for ways to solve it. That can require far more computing power than ordinary chatbots need, which can also make it expensive to run.
Earlier this month, OpenAI began selling OpenAI o1 to individuals and businesses. One service, aimed at professionals, was priced at $200 a month.
(The New York Times sued OpenAI and Microsoft in December, alleging copyright infringement of news content related to A.I. systems. The companies have denied the claims.)