Highlights from Google I/O 2025: AI Innovations and Exciting Features
Anticipation for Google I/O 2025 ran high, especially since the company had already held a separate event dedicated to its Android announcements. That left the nearly two-hour keynote free to center almost entirely on artificial intelligence, and the sheer volume of AI news was striking.
Not every AI announcement carries the same weight, though. Some updates are aimed at enterprise and developer audiences, while many features are headed to consumer devices soon. This roundup focuses on the updates that are available now, expected shortly, or slated for a future rollout.
Gemini Live Launching on iPhone
Earlier this year, Google introduced Gemini Live for Android through the Gemini app, letting users share their camera feed or screen and get real-time answers to their questions. As of today, that capability extends to the Gemini app on iPhone as well. Regardless of your platform, the ability to share what you see with the AI is now at your fingertips.
AI Mode: Redefining Google Search
Since March, Google has been testing AI Mode in Search, which turns querying into a more comprehensive, conversational experience. The feature lets users combine several questions into a single, multi-part inquiry. According to Google, AI Mode breaks your request down, searches the web for the most relevant information, and, in theory, compiles a complete report with answers, source links, and images.
AI Mode is slated to be available for all users—not just those in testing—over the upcoming weeks. Additionally, fresh AI Mode features were unveiled during I/O, enhancing the overall experience.
Cram Multiple Searches into One
One standout is Deep Search, which runs many more searches on a given query and generates a “fully-cited, expert-level report” for users. The feature holds real promise, but it is prudent to verify what it tells you, since AI models are known to hallucinate false information. AI Mode will also gain Gemini Live access, letting users bring their camera or screen into a search.
Utilize “Agent Mode” as a Personal Assistant
Project Mariner is also being integrated into AI Mode. This feature offers “agentic capabilities,” meaning users can delegate tasks to the AI. For instance, you can instruct AI Mode to find “affordable tickets for this Saturday’s Reds game in the lower level,” and it will handle the searches and form completions for you. This capability extends to event ticketing, restaurant bookings, and scheduling local appointments.
During the presentation, Alphabet CEO Sundar Pichai showcased Agent Mode by requesting assistance in finding an apartment with specific criteria. Gemini initiated a search, browsed real estate sites, and efficiently arranged a tour.
AI Mode will also utilize users’ previous search histories to provide more tailored outcomes, including localized recommendations for future trips and options based on established preferences, such as favoring outdoor dining when searching for dinner plans.
Introducing New Features for Workspace
During the I/O event, Google revealed several new capabilities for Gemini, some of which will enhance the Workspace suite.
A major highlight is the introduction of Personalized Smart Replies in Gmail. Unlike the existing smart reply feature, this improvement leverages user data to craft responses that mimic individual writing styles, aiming to include all pertinent questions or comments related to the email. While the utility of AI as a primary communicator raises questions, this feature is anticipated to roll out later this year, initially for paying subscribers.
For users on paid Google Meet plans, live speech translation will begin rolling out today. This feature offers real-time dubbing in a chosen language during calls, functioning as an instant translator. For instance, if you speak English while your counterpart speaks Spanish, you’ll hear them in Spanish first and then receive the English translation from an AI voice.
Experiment with “Try It On”
Google aims to cut down on clothing returns with a feature called “Try It On,” which uses AI to show how you might look in various apparel. This is not merely a concept; “Try It On” is rolling out now to Search Labs users. For details on how to use the feature, consult our comprehensive guide.
Android XR
As anticipated, Google also shared updates regarding Android XR, the platform designed for glasses and headsets. While some previously disclosed features were reiterated, a few intriguing functionalities were demonstrated live.
For instance, those using future glasses equipped with Android XR will have access to an unobtrusive HUD displaying content from images to messages and Google Maps. The potential for augmented reality in Google Maps is particularly captivating for navigating unfamiliar cities. A live demonstration featured speech translation, which showcased real-time onscreen translation as presenters conversed in different languages.
While no specific timeline has been established for public trials of Android XR, Google indicated collaborations with Warby Parker and Gentle Monster to create glasses incorporating this technology.
Introducing Veo 3, Imagen 4, and Flow
This year’s I/O also marked the unveiling of two cutting-edge AI generation models: Imagen 4 (for images) and Veo 3 (for video).
Imagen 4 significantly enhances image quality, boasting greater detail compared to its predecessor, Imagen 3, and the improvements in text generation are particularly noteworthy. For instance, if the model is prompted to create a poster, the resulting text is expected to be both relevant and stylistically consistent with the request.
The event commenced with videos generated by Veo 3, underscoring the company’s pride in this video generation model. Although the output is vibrant and detailed, it still displays some of the common issues associated with AI-generated video content. Another exciting component is “Flow,” which serves as an AI editing tool that collaborates with Veo 3 for video production. This allows creators to string together clips like any other non-linear editor, alongside features for customizing camera movements in each shot.
The technology is impressive, but its practical appeal remains uncertain: even when built with familiar editing techniques, AI-generated films may not find an audience eager to watch them.
Veo 3 will be exclusive to Google AI Ultra subscribers, while Flow will have limited availability for AI Pro subscribers utilizing Veo 2.
New Features in Chrome
Chrome users can expect two updates following Google I/O. The first brings Gemini directly into the browser, so you no longer need to visit the Gemini site separately. The second, launching later this year, will let Chrome automatically update old passwords for you, provided the websites themselves support it.
A New Subscription Model for AI Access
Finally, Google has introduced new subscription tiers for accessing AI features. Google AI Premium is transitioning to AI Pro, maintaining most of its prior offerings while additionally granting access to Flow and Gemini within Chrome, at a monthly cost of $20.
The newly launched subscription, Google AI Ultra, comes at a premium price of $250 per month, providing users with everything included in Google AI Pro, with elevated limits across all AI models, including Gemini, Flow, Whisk, and NotebookLM. Notably, it includes access to Gemini 2.5 Pro Deep Think, the latest advanced reasoning model, Veo 3, Project Mariner, YouTube Premium, and 30TB of cloud storage. An intriguing package indeed.