On Tuesday, Google unveiled Gemini 2.5, a new family of AI reasoning models that pause to “think” before answering a question.
To kick off the new family, Google is launching Gemini 2.5 Pro Experimental, a multimodal reasoning model that the company claims is its most intelligent yet. The model is available starting Tuesday in Google AI Studio, the company’s developer platform, as well as in the Gemini app for subscribers to Gemini Advanced, the company’s $20-a-month AI plan.
Moving forward, Google says all of its new AI models will have reasoning capabilities baked in.
Since OpenAI launched the first AI reasoning model, o1, in September 2024, the tech industry has raced to match or exceed its capabilities. Today, most major AI players have reasoning models that use extra computing power and time to fact-check and reason through problems before delivering an answer.
Reasoning techniques have helped AI models achieve new heights on math and coding tasks. Many in the tech world believe reasoning models will be a key component of AI agents, autonomous systems that can perform tasks largely without human intervention. However, these models also cost more to run than their conventional counterparts.
Google has experimented with AI reasoning models before, releasing a “thinking” version of Gemini in December. But Gemini 2.5 represents the company’s most serious attempt yet at besting OpenAI’s o series of models.
Google claims that Gemini 2.5 Pro outperforms its previous frontier AI models, as well as some of the leading competing models, on several benchmarks.
To start, Google says Gemini 2.5 Pro ships with a context window of 1 million tokens, allowing it to process roughly 750,000 words in a single go, longer than the entire “The Lord of the Rings” book series. Soon, Gemini 2.5 Pro will support 2 million tokens, doubling its input length.
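Because the limit is measured in tokens rather than words, developers calling the model through the Gemini API can count tokens up front before committing to a very large request. Below is a minimal sketch using the google-generativeai Python SDK; the model identifier, the input file, and the GEMINI_API_KEY environment variable are illustrative assumptions, not details confirmed by Google.

```python
import os

import google.generativeai as genai

# Configure the SDK with an API key generated in Google AI Studio.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Hypothetical experimental model ID; check Google AI Studio for the
# exact identifier once the model is listed there.
model = genai.GenerativeModel("gemini-2.5-pro-exp")

with open("long_document.txt") as f:
    document = f.read()

prompt = f"Summarize the key arguments of this document:\n\n{document}"

# count_tokens verifies the input fits in the 1M-token window before
# spending compute on a full generation request.
total = model.count_tokens(prompt).total_tokens
if total <= 1_000_000:
    response = model.generate_content(prompt)
    print(response.text)
else:
    print(f"Prompt is {total:,} tokens; trim it to fit the 1M window.")
```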
Google has not yet published API pricing for Gemini 2.5 Pro but promises to share more details in the coming weeks.