Google’s Gemini 3 is here — and you can start playing with it right away.
The search giant said its new AI model, which will be widely available on Tuesday, makes a “massive jump” in reasoning, is more creative than Gemini 2, and is better at combining text, images, and video.
For Google, the weight of expectations hangs heavy on Gemini 3. After GPT-5’s modest arrival earlier this year, the pressure has been on for Google to deliver something much better. It’s also an opportunity for Google to reassert itself as an AI leader and cement a turnaround that’s been three years in the making.
Google DeepMind CEO Demis Hassabis described the latest updates to Gemini as going from “simply reading text and images to reading the room.”
Here are the key things to know about Gemini 3.
It’s more visual — and better for learning
Google said Gemini 3 is an improvement in both reasoning and multimodal abilities. Put those together, and you get something better at explaining and visualizing ideas. “It doesn’t just process text or images separately,” said Tulsee Doshi, product lead for Gemini, during a roundtable with reporters this week. “It actually understands the nuances across them to convert information into the medium that actually makes most sense for you.”
Google says this is strengthened by the model’s coding abilities, meaning it can create a presentation or an interactive graphic to explain a complex idea. Google DeepMind’s CTO, Koray Kavukcuoglu, said he thinks this will be a big deal not just for coders, but for students and anyone trying to use AI as a learning tool.
It’s coming to search right away, if you pay for it
Google’s flexing its distribution advantage by rolling out Gemini 3 to search on launch day — with a catch. Out of the gate, any US users who pay for Google’s Pro or Ultra Gemini tiers will see a new “Thinking” option in Google Search’s AI Mode, which will use Gemini 3 if you choose it. Google says it will soon make Gemini 3 in search available to all.
The new model should perform better searches by breaking down your query into even more pieces, Google said. Robbie Stein, VP of Search, said that people have been asking Google more complex questions as it’s rolled out new AI capabilities. Gemini 3 will also be capable of building more visualizations and interactive graphics right onto the AI Mode search page.
Google claims Gemini 3 is its ‘most factual model’ to date
Google is going out of the gate with Gemini 3 “Pro” (whether a standard non-Pro Gemini 3 will follow at some point is unclear). It’s pushing its smartest model out first this time, and it claims it is the company’s “most factual” model to date.
AI companies often boast about how their models outperform others on various benchmarks. These aren’t always super useful to the average person, and the real test for Google will be how users receive and use its new model. However, one notable figure Google shared was that Gemini 3 scored 37.5% without tool use on Humanity’s Last Exam, a test for AI models comprising 2,500 questions across a wide range of subjects. Doshi said this makes Gemini 3 better at solving math and science problems “with a very high degree of reliability.”
It has agents that might finally be useful
Agentic AI has so far been fairly primitive, but Google is now launching the next step, simply called “Gemini Agent,” which it says will be able to carry out multi-step tasks from within Gemini. Google is still calling this “experimental,” but said it will be much more useful and able to interact with various Google apps for tasks such as managing your Google Calendar or rearranging your Gmail inbox.
One of Google’s big visions is to create a universal AI assistant, and these agentic features are part of that plan. The more it knows about you, the more it can do by itself. Google says you’ll be able to do things like ask the agent to research an upcoming vacation based on information in your emails and find a suitable rental car for your trip, all with little human input.
Other unique properties are emerging, and it can ‘vibe code’ like crazy
Josh Woodward, Gemini’s app lead, said the new model has some “latent capabilities,” hidden abilities that are only now starting to emerge. One of those is creating generative interfaces using something called “dynamic view,” where you might ask Gemini for information about a famous historical figure, and it can generate a fully interactive website with clickable widgets and tabs.
This is because Gemini 3 is much better at coding, Google said. The company is tapping those advantages with a new platform it’s launching called Antigravity, which takes vibe coding to another level by having an autonomous agent perform most of the work for you.
Google gave a demo of an agent building an interactive flight-tracking app with just a prompt. The agent can also create progress reports along the way and a final walkthrough report.
“I think for everyone in software engineering we realized that LLMs, large language models, have really fundamentally changed how people code and how we build software, how we bring ideas to life,” said Kavukcuoglu.