Audio-first and display-equipped smart glasses could turn ordinary eyewear into an everyday assistant
Google is preparing to launch its first generation of AI-powered smart glasses in 2026 — a move that could reshape wearable computing by blending artificial intelligence, augmented reality and everyday eyewear.
The new glasses — born from collaborations with Warby Parker, Gentle Monster and Samsung — will reportedly come in two variants. One model prioritises a “screen-free” experience: audio-first, with built-in cameras, microphones and speakers to let the user interact with Google’s AI assistant Gemini hands-free. The other version will include an in-lens display, giving wearers discreet access to navigation prompts, translation captions, contextual notifications and more — all without pulling out a phone.
Google’s return to smart eyewear marks a significant shift. After Google Glass stumbled over a decade ago, criticised for bulky hardware and privacy concerns, the company is now aiming for a subtler design, with Android XR as the underlying platform and modern AI delivering real-world utility.
Analysts and industry watchers say the timing is strategic: the market for AI and AR glasses is heating up, with interest in smart eyewear rising as consumers seek alternatives to constantly reaching for their smartphones.
If successful, Google’s AI glasses could accelerate a shift from screens in hands to information in sight — making navigation, real-time language translation, “always-on” assistance, and even photography as natural as blinking. But like all wearables, they will face scrutiny around privacy, usability and battery life.