At the I/O 2025 conference, Google officially introduced Gemma 3n, its latest open AI model, tailored for seamless performance on smartphones, laptops, and tablets. Designed to run efficiently even on devices with modest hardware, Gemma 3n can process text, audio, images, and video locally, eliminating the need for cloud connectivity and ushering in a new era of on-device AI.
According to Google executive Gus Martins, Gemma 3n is built on an architecture similar to Gemini Nano and features breakthrough Per-Layer Embedding (PLE) technology. Thanks to this innovation, models with over 5 billion parameters can run with just 2 GB to 3 GB of RAM, enabling powerful AI experiences even on low-resource devices, particularly across Android and Chrome platforms.
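A rough back-of-envelope calculation shows why a 5-billion-parameter model would normally strain a 2–3 GB budget, and why techniques like quantization plus PLE's offloading of per-layer embedding weights matter. The figures below are illustrative estimates, not Google's published numbers:

```python
def model_memory_gb(params_billion: float, bits_per_param: int) -> float:
    """Estimate raw weight storage in GB for a model of the given size."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# Full 16-bit weights: far beyond a phone's spare RAM.
print(model_memory_gb(5, 16))  # 10.0 GB

# 4-bit quantized weights already approach the 2-3 GB range;
# PLE further shrinks the *resident* footprint by keeping
# per-layer embedding parameters out of fast accelerator memory.
print(model_memory_gb(5, 4))   # 2.5 GB
```

This is only a sizing sketch for the weights themselves; real on-device memory use also depends on activations, the KV cache, and the runtime.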
Testing by AINavHub shows that Gemma 3n achieves 90% accuracy when generating descriptions from 1080p video frames or 10-second audio clips, while consuming significantly less memory than Meta's Llama 4. This on-device performance is expected to accelerate the spread of edge AI, with major potential in fields like education, accessibility, and mobile content creation. Google developed Gemma 3n in partnership with leading hardware providers such as Qualcomm, MediaTek, and Samsung.
Emphasizing responsible and safe AI practices, Google stated that all Gemma models undergo rigorous security and ethical reviews. Starting today, Gemma 3n is available as a preview to developers and is set to become widely accessible on Android and Chrome platforms later this year.