Google launches Gemma 4, calls it its most intelligent open model: what it is and how to use it

FP Tech Desk April 3, 2026, 07:43:54 IST

Google introduces Gemma 4, pitching it as its most capable open model yet. According to the company, the model is designed to bring advanced AI features to both data centres and everyday devices.

Google unveils Gemma 4, claims it to be the most intelligent open model yet

Google is doubling down on open AI development. At a time when most powerful models remain tightly controlled, the company has released Gemma 4, a new generation of open large language models that it claims delivers cutting-edge performance with far less computing power.

Developed by Google DeepMind, Gemma 4 is positioned as a flexible AI toolkit for developers, researchers and businesses alike. Unlike earlier versions, these models are released under the Apache 2.0 licence, meaning they are truly open source and can be freely modified, deployed and integrated across use cases.

With this move, Google appears to be betting that openness, efficiency and versatility will define the next phase of AI adoption.

What is Gemma 4, and what can it do?

Rather than a single model, Gemma 4 arrives as a suite of four models designed for different environments, from high-performance servers to compact edge devices.

At the top end are the 26B and 31B models, built for powerful GPU infrastructure such as the Nvidia H100. The 26B model is optimised for lower latency by activating only a portion of its parameters during inference, making it more efficient in real-time applications. Meanwhile, the 31B variant is designed for maximum performance, deploying its full parameter set to deliver higher accuracy and reasoning ability.
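The efficiency trade-off described above can be illustrated with a toy sketch: a dense pass touches every parameter block for every token, while a sparse, partial-activation pass routes each token to only a few blocks. This is a conceptual illustration only; the expert counts, routing and cost figures below are invented for the example and do not reflect Gemma 4's actual architecture.

```python
# Toy illustration of dense vs. sparse (partial-activation) inference.
# The "experts" and costs here are invented for this sketch; they do
# not describe Gemma 4's real internals.

NUM_EXPERTS = 8        # parameter blocks available in the model
ACTIVE_EXPERTS = 2     # blocks actually engaged per token when sparse

def dense_cost(tokens: int, cost_per_expert: int = 1_000_000) -> int:
    """A dense model touches every expert for every token."""
    return tokens * NUM_EXPERTS * cost_per_expert

def sparse_cost(tokens: int, cost_per_expert: int = 1_000_000) -> int:
    """A sparse model routes each token to a small subset of experts."""
    return tokens * ACTIVE_EXPERTS * cost_per_expert

tokens = 512
print(dense_cost(tokens))                         # full parameter set engaged
print(sparse_cost(tokens))                        # only a fraction engaged
print(dense_cost(tokens) // sparse_cost(tokens))  # 4x fewer operations
```

The sparse variant does a quarter of the work per token in this toy setup, which is the intuition behind "lower latency by activating only a portion of its parameters".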

On the other end of the spectrum are the E2B and E4B models. These lighter versions, with two and four billion parameters respectively, are tailored for mobile devices, IoT systems and even standard home computers.

Despite their smaller size, Google says they can operate entirely offline with near-zero latency, thanks to collaboration with hardware partners like Qualcomm and MediaTek.

Across the board, Gemma 4 models share a robust set of capabilities. They support advanced reasoning, enabling multi-step problem solving and structured planning. They are also designed for agentic workflows, meaning they can autonomously interact with tools, APIs and external systems to complete tasks.
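The idea of an agentic workflow, where the model decides which tool to call and the result is fed back into its reasoning, can be sketched in a few lines. The dispatcher and stub tools below are invented for illustration and are not part of any Gemma API; in a real system the "decision" would come from the model's output rather than being hard-coded.

```python
# Minimal sketch of an agentic tool-calling step. The tool registry and
# the model's "decision" are hard-coded stand-ins for real LLM output.

def get_weather(city: str) -> str:
    # Stub for an external weather API.
    return f"Sunny in {city}"

def calculate(expression: str) -> str:
    # Stub calculator; eval() is unsafe outside a toy example.
    return str(eval(expression))

TOOLS = {"get_weather": get_weather, "calculate": calculate}

def run_agent_step(decision: dict) -> str:
    """Dispatch a model-chosen tool call and return its result."""
    tool = TOOLS[decision["tool"]]
    return tool(decision["argument"])

# A real agent loop would parse this decision from the model's reply.
result = run_agent_step({"tool": "calculate", "argument": "6 * 7"})
print(result)  # "42"
```

A full agent would repeat this dispatch in a loop, appending each tool result to the conversation until the model signals the task is complete.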

The models include native support for multimodal inputs, handling images, video and, in smaller variants, audio. This allows them to perform tasks such as optical character recognition, chart analysis and speech understanding. Context windows are also significantly expanded, reaching up to 256K tokens in larger models, which allows entire documents or code repositories to be processed in a single prompt.
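To get a feel for what a 256K-token window means in practice, here is a rough fit check using the common rule of thumb of about four characters per token for English text. The ratio is a heuristic, not Gemma 4's actual tokenizer; exact counts require running the model's own tokenizer.

```python
# Rough check of whether a document fits in a large context window.
# CHARS_PER_TOKEN is a crude rule of thumb for English prose, not
# Gemma 4's real tokenizer.

CONTEXT_WINDOW = 256_000   # tokens, per the larger Gemma 4 models
CHARS_PER_TOKEN = 4        # heuristic ratio

def estimated_tokens(text: str) -> int:
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(text: str) -> bool:
    return estimated_tokens(text) <= CONTEXT_WINDOW

doc = "word " * 100_000          # ~500,000 characters of text
print(estimated_tokens(doc))     # ~125,000 tokens
print(fits_in_context(doc))      # True: well within a 256K window
```

By this estimate, roughly a megabyte of plain text, on the order of a long novel or a sizeable code repository, fits in a single prompt.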

Another standout feature is offline code generation, which enables developers to write and test code without needing an internet connection. Additionally, the models are trained across more than 140 languages, expanding their global usability.

Google claims Gemma 4 can outperform models up to 20 times larger, highlighting what it calls a leap in “intelligence per parameter”.

How to try it?

Gemma 4 is already accessible through multiple platforms. Developers can experiment with the models via Google AI Studio, where they can test prompts and build applications directly in the browser.

For those looking to deploy locally or customise the models, Gemma 4 is available for download on platforms such as Hugging Face, Kaggle and Ollama.
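For local use, the workflow would look something like the commands below. Note the model tags are assumptions for illustration; the exact published identifiers should be checked on each platform's Gemma listing.

```shell
# Illustrative only: the model tags below are hypothetical. Consult the
# Ollama library and Hugging Face model pages for the real identifiers.

# Pull and chat with a local copy via Ollama (hypothetical tag):
ollama run gemma4

# Or fetch the weights from Hugging Face (hypothetical repo id):
huggingface-cli download google/gemma-4-e2b
```

Once downloaded, the smaller E2B and E4B variants are the ones intended to run on consumer hardware.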

This broad availability, combined with its open licensing, makes Gemma 4 one of the most accessible high-performance AI model families to date. Whether running on a cloud server or a smartphone, Google’s latest release signals a shift towards AI that is not just powerful, but also portable and open.