In the wake of the disruptive debut of DeepSeek-R1, reasoning models have been all the rage so far in 2025.
IBM is now joining the party with the debut today of its Granite 3.2 large language model (LLM) family. Unlike standalone reasoning models such as DeepSeek-R1 or OpenAI's o3, IBM is embedding reasoning directly into its core open-source Granite models. It's an approach that IBM refers to as conditional reasoning, where step-by-step chain of thought (CoT) reasoning is an option within the models (as opposed to being a separate model).
It’s a flexible approach where reasoning can be conditionally activated with a flag, allowing users to control when to use more intensive processing. The new reasoning capability builds on the performance gains IBM introduced with the release of the Granite 3.1 LLMs in Dec. 2024.
IBM is also releasing a new vision model in the Granite 3.2 family specifically optimized for document processing. The model is particularly useful for digitizing legacy documents, a challenge many large organizations struggle with.
Another enterprise AI challenge IBM aims to solve with Granite 3.2 is predictive modelling. Machine learning (ML) has been used for predictions for decades, but it hasn’t had the natural language interface and ease of use of modern gen AI. That’s where IBM’s Granite time series forecasting models fit in; they apply transformer technology to predict future values from time-based data.
“Reasoning is not something a model is, it’s something a model does,” David Cox, VP for AI models at IBM Research, told VentureBeat.
What IBM’s reasoning actually brings to enterprise AI
While there has been no shortage of excitement and hype around reasoning models in 2025, reasoning for its own sake doesn’t necessarily provide value to enterprise users.
The ability to reason in many respects has long been part of gen AI. Simply prompting an LLM to answer in a step-by-step approach triggers a basic CoT reasoning output. Modern reasoning in models like DeepSeek-R1 and now Granite 3.2 goes a bit deeper by using reinforcement learning to train and enable reasoning capabilities.
While CoT prompts may be effective for certain tasks like mathematics, the reasoning capabilities in Granite 3.2 can benefit a wider range of enterprise applications. Cox noted that by encouraging the model to spend more time thinking, enterprises can improve complex decision-making processes. Reasoning can benefit software engineering tasks, IT issue resolution and other agentic workflows where the model can break down problems, make better judgments and recommend more informed solutions.
IBM also claims that, with reasoning turned on, Granite 3.2 is able to outperform rivals including DeepSeek-R1 on instruction-following tasks.
Not every query needs more reasoning: why conditional thinking matters
Although Granite 3.2 has advanced reasoning capabilities, Cox stressed that not every query actually needs more reasoning. In fact, many common types of queries can be negatively impacted by more reasoning.
For example, for a knowledge-based query, a standalone reasoning model like DeepSeek-R1 might spend up to 50 seconds on an internal monologue to answer a basic question like “Where is Rome?”
One of the key innovations in Granite 3.2 is the introduction of a conditional thinking feature, which allows developers to dynamically activate or deactivate the model’s reasoning capabilities. This flexibility enables users to strike a balance between speed and depth of analysis, depending on the specific task at hand.
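The idea can be sketched in a few lines. This is an illustrative toy, not IBM's actual API: the `build_prompt` and `needs_reasoning` functions, the `<think>` control string and the keyword heuristic are all hypothetical, standing in for whatever routing logic a deployment would use to decide when to pay for extra reasoning.

```python
# Illustrative sketch of conditional reasoning dispatch (hypothetical, not
# IBM's actual API). One model serves both modes; a per-request flag decides
# whether the prompt asks for an internal chain of thought before answering.

def build_prompt(query: str, thinking: bool = False) -> str:
    """Assemble a chat prompt, optionally enabling step-by-step reasoning."""
    system = "You are a helpful assistant."
    if thinking:
        # Hypothetical control string; real deployments would toggle this via
        # the model's chat template rather than raw prompt text.
        system += " Reason step by step inside <think>...</think> before answering."
    return f"<system>{system}</system>\n<user>{query}</user>"

def needs_reasoning(query: str) -> bool:
    """Crude heuristic router: only spend extra compute on complex queries."""
    complex_markers = ("why", "how", "plan", "debug", "prove", "compare")
    return any(marker in query.lower() for marker in complex_markers)

# A simple factual lookup skips the reasoning preamble entirely.
query = "Where is Rome?"
prompt = build_prompt(query, thinking=needs_reasoning(query))
print("<think>" in prompt)  # → False
```

In practice the routing decision could come from the application (a flag on the request) rather than a keyword heuristic; the point is that the reasoning cost is opt-in per query, not baked into the model.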
Going a step further, the Granite 3.2 models benefit from a method developed by IBM’s Red Hat business unit that uses something called a “particle filter” to enable more flexible reasoning capabilities.
This approach allows the model to dynamically control and manage multiple threads of reasoning, evaluating which ones are the most promising to arrive at the final result. This provides a more dynamic and adaptive reasoning process, rather than a linear CoT. Cox explained that this particle filter technique gives enterprises even more flexibility in how they can use the model’s reasoning capabilities.
In the particle filter approach, there are many threads of reasoning occurring simultaneously. The particle filter is pruning the less effective approaches, focusing on the ones that provide better outcomes. So, instead of just doing CoT reasoning, there are multiple approaches to solving a problem. The model can intelligently navigate complex problems, selectively focusing on the most promising lines of reasoning.
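The pruning step above can be sketched as a weighted resampling over candidate threads. This is an assumed reconstruction of the mechanics — the details of Red Hat's method are not public — and the thread texts and scores below are invented for illustration.

```python
import random

# Toy sketch of particle filtering over reasoning threads (assumed mechanics;
# the actual IBM/Red Hat method is not public in detail). Each "particle" is a
# partial line of reasoning paired with a score estimating how promising it is.

def resample(particles, k, rng=random.Random(0)):
    """Keep k threads, sampled in proportion to their scores, so weak lines
    of reasoning are pruned and strong ones are duplicated and explored."""
    total = sum(score for _, score in particles)
    weights = [score / total for _, score in particles]
    return rng.choices(particles, weights=weights, k=k)

# Four hypothetical candidate threads with made-up promise scores.
threads = [
    ("try algebraic simplification", 0.9),
    ("enumerate cases by brute force", 0.1),
    ("look for a symmetry argument", 0.7),
    ("guess and check random values", 0.05),
]

# After resampling, high-scoring threads dominate the surviving population,
# and compute is concentrated on the most promising approaches.
survivors = resample(threads, k=8)
print([text for text, _ in survivors])
```

Repeating this prune-and-extend loop at each reasoning step is what distinguishes the approach from a single linear chain of thought.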
How IBM is solving real enterprise use cases for documents
Large organizations tend to have equally large volumes of documents, many of which were scanned years ago and are now sitting in archives. All that data has been difficult to use with modern systems.
The new Granite 3.2 vision model is designed to help solve that enterprise challenge. While many multimodal models focus on general image understanding, Granite 3.2’s vision capabilities are engineered specifically for document processing — reflecting IBM’s focus on solving tangible enterprise problems rather than chasing benchmark scores.
The system targets what Cox described as “irrational amounts of old scanned documents” sitting in enterprise archives, particularly in financial institutions. These represent opaque data stores that have remained largely untapped despite their potential business value.
For organizations with decades of paper records, the ability to intelligently process documents containing charts, figures and tables represents a substantial operational advantage over general-purpose multimodal models that excel at describing vacation photos but struggle with structured business documents.
On enterprise benchmarks such as DocVQA and ChartQA, IBM Granite vision 3.2 shows strong results against rivals.
Time series forecasting addresses critical business prediction needs
Perhaps the most technically distinctive component of the release is IBM's "tiny time mixers" (TTM), specialized transformer-based models designed specifically for time series forecasting.
However, time series forecasting, which enables predictive analytics and modelling, is not new. Cox noted that for various reasons, time series models have remained stuck in the older era of ML and have not benefited from the same attention as the newer, flashier gen AI models.
The Granite TTM models apply the architectural innovations that powered LLM advances to an entirely different problem domain: predicting future values based on historical patterns. This capability addresses critical business needs across financial forecasting, equipment maintenance scheduling and anomaly detection.
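The shape of the task is simple even if the models are not: a window of historical values goes in, a horizon of predicted values comes out. The sketch below is a deliberately naive stand-in — a least-squares linear trend, not a Granite TTM — used only to illustrate that interface.

```python
# Illustrative stand-in for the time series forecasting interface: a history
# window in, a horizon of predicted values out. The real Granite TTM models
# are pretrained transformers; this toy fits a least-squares linear trend
# purely to show the shape of the task.

def forecast(history: list[float], horizon: int) -> list[float]:
    """Fit a linear trend to the history and extrapolate `horizon` steps."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) \
        / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n + i) for i in range(horizon)]

# Demand rising linearly by 2 per period; predict the next three points.
history = [10.0, 12.0, 14.0, 16.0, 18.0]
print(forecast(history, horizon=3))  # → [20.0, 22.0, 24.0]
```

A pretrained forecasting model replaces the trend fit with learned temporal patterns, but exposes essentially this same history-in, horizon-out contract.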
Taking a practical enterprise-focused approach to gen AI
There is no shortage of hype, and vendors are all claiming to outdo each other on an endless array of industry benchmarks.
For enterprise decision-makers, taking note of benchmarks can be interesting, but that’s not what solves pain points. Cox emphasized that IBM is taking the ‘suit and tie’ approach to enterprise AI, looking to solve real problems.
“I think there’s a lot of magical thinking happening that we can have one super intelligent model that’s going to somehow do everything we need it to do and, at least for the time being, we’re not even close to that,” said Cox. “Our strategy is ‘Let’s build real, practical tools using this very exciting technology, and let’s build in as many of the features as possible that make it easy to do real work.’”
The post IBM Granite 3.2 uses conditional reasoning, time series forecasting and document vision to tackle challenging enterprise use cases appeared first on VentureBeat.