Google is giving its Gemini chatbot a serious boost. The company has unveiled a new experimental tool called Personal Intelligence, a feature designed to make your AI assistant far more intuitive by pulling together information from across your Google apps.
Whether it’s surfacing a forgotten photo from a birthday last year or connecting an email thread to a YouTube video you watched, Gemini’s latest update is built to understand context like never before.
Announced in a blog post by Josh Woodward, Vice President of Google Labs and Gemini app lead, the feature marks a major step in Google’s race to make its AI more reasoning-driven and personal. “Gemini now understands context without being told where to look,” Woodward said, highlighting the company’s focus on making the assistant genuinely helpful, not just reactive.
Google unveils Personal Intelligence: How to use
Rolling out this week, Personal Intelligence is initially available to Google’s AI Pro and AI Ultra subscribers in the United States. It’s accessible through the Gemini app on the web, Android, and iOS, and works with all Gemini models via the model picker.
Google says this limited rollout will help it test and refine the feature before bringing it to more countries and, eventually, the free tier. It’s also coming soon to AI Mode in Google Search.
Importantly, this beta is only for personal Google accounts; business, enterprise, and education accounts are sitting this one out, at least for now.
To turn it on, users can open Settings in the Gemini app, tap Personal Intelligence, and then tap Connected Apps.
Once enabled, Gemini can tap into your Gmail, Google Photos, and other connected services to deliver tailored responses and proactive suggestions.
For instance, you could ask Gemini to “find the receipt for that camera I bought last summer,” and it might pull the email confirmation from Gmail and cross-check it with a related photo of the product. Or it could remind you about a recipe video you watched that ties in with a grocery list in your notes.
Taking aim at Apple’s ecosystem
If all this sounds familiar, it’s because Google’s move comes hot on the heels of Apple’s own big AI reveal. Cupertino’s upcoming Apple Intelligence promises deep integration across iPhone apps, from helping you write messages to creating images and parsing context within your device.
In a curious twist, Apple has also chosen Google as a partner for powering some of those AI features, including the much-anticipated Siri overhaul due later this year. That means the two tech giants are both competitors and collaborators in the AI race.
Still, Google’s Personal Intelligence feels like a direct shot at Apple’s tightly woven ecosystem, especially for users who already live in Google’s app universe. By making Gemini a central hub that connects Gmail, Photos, Calendar, and beyond, Google is positioning itself as the first to truly stitch your digital life together.
But, what about privacy?
Google insists that your data isn’t being used to directly train its AI models. In the blog post, Woodward clarified that Gemini doesn’t access your Gmail inbox or Photos library for training purposes.
Instead, only limited information, such as the prompts you type and the model’s responses, is used to improve functionality over time.
Still, he acknowledged that the beta version isn’t flawless. “Gemini may struggle with timing or nuance, particularly regarding relationship changes or your various interests,” he admitted. For sensitive subjects like health, Gemini “aims to avoid making proactive assumptions” but “will discuss this data with you if you ask.”
That said, the feature is, by design, built to comb through your data in order to function. Even though it only does so once you enable it, privacy remains a big open question for such a personalised AI assistant.