Mega Launch Week: Gemini, Claude, and more
Launch week introduced Google Jules, OpenAI Codex, Claude Code, and more
TL;DR: Google I/O launch week turned out to be a great week for AI, headlined by announcements from Google, Anthropic, and OpenAI.
AI’s Supersonic Week: A Use-Case-Centric Breakdown
This past week, the AI landscape shifted again, with advancements across several domains: code generation, image generation, video generation, and even hardware.
Code Generation: Advancements in AI Programming Assistants
The realm of AI-assisted coding has seen notable progress with Google's Jules, Anthropic's Claude Code, and OpenAI's Codex.
Google Jules is an asynchronous, agentic coding assistant that integrates directly with your codebase or repository. Because it uses Gemini 2.5 Pro with a long context window, it stands a better chance of performing well on larger codebases. Like the others in this category, it can plan, reason, and produce a diff of the changes it made for you to review.
Claude Code with Opus 4 & Sonnet 4: Anthropic's latest models have achieved state-of-the-art results on coding benchmarks such as SWE-bench (72.5–72.7%). These models demonstrate sustained performance on long-running tasks, maintaining focus over extended periods. However, unlike Codex, Claude Code is not a cloud-based agent; it is a local agent that runs on your machine. The pro is that you can chat and iterate on the code quickly, since it does not need to rebuild a sandbox for each conversation. The con is that you need to be at your desk to use it.
OpenAI Codex: Codex is a cloud-based software engineering agent designed to automate common development tasks. Integrated into ChatGPT, Codex operates in secure sandbox environments, handling tasks like writing code, debugging, and generating pull requests. Because it runs in a sandbox outside your local development environment, you can ask it to do things while on the move, but it also means you need to upload your secrets, environment variables, and similar configuration to ChatGPT.
These advancements are great for the software engineering landscape: multiple organizations are pushing the boundaries of AI-driven code generation well beyond VS Code plugins.
Image & Video Generation: Enhancing Creative Capabilities
AI models are increasingly capable of generating high-quality visual content:
- Veo 3: Google’s latest video generation model can produce 4K videos with synchronized audio, including speech and ambient effects, based on text prompts. The accompanying tool, Flow, allows filmmakers to iteratively steer output using text, shots, and mood boards.
- OpenAI Sora: Sora remains a benchmark for physical realism; many of the images on this site were created with Sora.
- Imagen 3: Google’s updated image generation model offers improved fidelity and prompt controllability, narrowing the gap with competitors like Midjourney, Sora, and DALL·E 3.
These tools are democratizing content creation, enabling users to produce professional-grade media with minimal resources.
Hardware & Ambient Agents: Integrating AI into Daily Life
AI is transitioning from software to integrated hardware solutions:
- Android XR Glasses: Demonstrated at Google I/O, these lightweight headsets offer real-time translation and Gemini overlays, providing "heads-up answers" without the need for a phone.
- Project Astra: Google’s research prototype can utilize a phone’s camera to remember context and perform actions across the Android UI, indicating a shift from chat-based agents to integrated operating layers.
- "io" Device (OpenAI × Jony Ive): OpenAI’s acquisition of Jony Ive’s startup, io, for $6.5 billion aims to develop a design-led pocket AI companion, targeting the shipment of 100 million units. This device aspires to be a screen-free, context-aware assistant, marking a significant move towards ambient AI hardware.
While early attempts like Humane’s AI Pin faced challenges, the continued investment and innovation in this space suggest a promising future for AI-integrated hardware.
Note: This overview is based on developments up to May 24, 2025.