AI Beyond the Screen

Last year I made the case that AI Browsers are the new AI revolution. This week the story continues, because a wearable revolution is coming. CES 2026 made it clear that the future of AI will include your voice, your face, and your physical environment: smart glasses saw 250% growth in 2025, and audio-first devices are proliferating.

But wearable AI hasn't been successful yet, and memory is one of the key reasons why. Understanding how AI uses "context" is essential to understanding that failure. So I've added a new section to the newsletter, Foundations, that explains the building blocks of AI technology. This week's Foundations entry covers context.

With wearables, "my glasses are always listening" changes everything from privacy to workplace norms, customer engagement, and how AI can actually help. Wearables won't replace screens, but they will create a powerful new integration surface for AI in your life.

Wearable "second brains" are shifting AI from screens to your body, offering real-time translation, voice notes, and decision support through discreet glasses, rings, and clip-ons.

The AI industry is moving toward an audio-first future where voice interaction replaces screens. Companies are integrating advanced audio AI into glasses, search results, and more.

A lack of persistent memory may be one of the biggest reasons wearables have failed so far, but a solution is emerging.
The primary interface for AI may be shifting from screens to the human body through a diverse array of wearable devices, including glasses, rings, and clip-on "second brains". At CES 2026, vendors focused on discreet, ambient AI that provides real-time translation, summaries, and decision support without requiring users to pull out a phone or sit at a computer. The shift spans multiple form factors, each targeting different use cases: smart glasses lead with visual recognition and translation, ear-worn devices excel at transcription and ambient note-taking, smart rings capture quick voice memos, and clip-on pendants automatically generate meeting summaries. These devices emphasize phone-free connectivity through built-in eSIM, multi-model AI systems, and intentional rather than continuous activation, in the hope of making the technology socially acceptable. Strategic Insight: The rise of "always-on" sensors brings significant privacy challenges. The trade-off between functionality and privacy will be a critical inflection point, and we will likely see that divide reflected in adoption rates as the technology evolves.
The tech industry is betting on an "audio-first" future where voice interaction replaces screens. OpenAI has unified multiple teams to overhaul its audio models, which will reportedly handle conversational overlaps and interruptions the way a real conversation does, capabilities today's models can't manage. The broader industry movement includes Meta's Ray-Ban glasses using five-microphone arrays to create directional listening devices and Google transforming search results into conversational audio summaries. The Practical Angle: Voice should be treated as a primary interaction mode for search, commerce, and customer service, as audio interfaces may soon rival or exceed screen-based engagement. View Article →

The AI wearable market has become a graveyard of failed products. Setting aside the social friction, the biggest failure point isn't hardware or AI capabilities. It's memory and context. Without durable memory layers, AI wearables become expensive cameras that occasionally answer questions. Memories.ai is developing a system that converts continuous video into structured, searchable memory frames to address this problem (a rough sketch of what such a memory layer might look like appears below, after the Foundations note). Strategic Take-away: Whether for a wearable or in your day-to-day work, before deploying AI, ask whether it can maintain the context your work requires, not just whether it can complete isolated tasks. View Article →

Quick Hits

FDA revises AI medical device oversight
The FDA loosened regulations for wearables, noting that blood pressure and glucose tracking can now be offered as "wellness features". This enables faster innovation but raises questions about safeguards.

Voice assistants find their next act
Siri, Alexa, and Google Assistant are transforming from simple command executors into conversational AI agents that manage smart home ecosystems, handle multi-step tasks, and influence household decisions.

Foundations

Context Windows: AI's Working Memory
Context windows determine how much an AI model can "remember" at once. Larger context windows enable longer conversations, but compute costs grow steeply (roughly quadratically for standard attention) as the window expands. Research also shows that models perform better when relevant information appears at the beginning or end of a prompt, with performance degrading for information buried in the middle of long contexts.
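To make the context-window idea concrete, here is a minimal, purely illustrative sketch in Python. The window size and the word-count "tokenizer" are toy stand-ins, not any vendor's real limits or tokenizer; the point is only that once the token budget is exhausted, older turns silently fall out of the model's working memory.

```python
# A minimal sketch (not any vendor's actual API) of how a fixed context
# window forces an assistant to drop older conversation turns.
# Token counts are approximated by word counts purely for illustration.

from collections import deque

CONTEXT_WINDOW_TOKENS = 50  # hypothetical, kept tiny for the demo


def rough_token_count(text: str) -> int:
    """Crude stand-in for a real tokenizer: one token per word."""
    return len(text.split())


def fit_history(turns: list[str], budget: int = CONTEXT_WINDOW_TOKENS) -> list[str]:
    """Keep the most recent turns that fit inside the context budget.

    Anything older silently falls out of the model's "working memory",
    which is why an assistant without an external memory layer forgets.
    """
    kept: deque = deque()
    used = 0
    for turn in reversed(turns):  # walk newest to oldest
        cost = rough_token_count(turn)
        if used + cost > budget:
            break  # older turns no longer fit in the window
        kept.appendleft(turn)
        used += cost
    return list(kept)


if __name__ == "__main__":
    history = [
        "User: My badge code for the lab is 4417, please remember it.",
        "Assistant: Noted, I'll remember your badge code.",
        "User: Summarize this morning's standup in three bullets.",
        "Assistant: 1) Demo slipped a day 2) API freeze Friday 3) Hiring open.",
        "User: Draft a polite reply to the vendor about the late shipment.",
        "Assistant: Here's a draft reply you can send...",
        "User: What was my badge code again?",
    ]
    window = fit_history(history)
    print(f"Turns kept: {len(window)} of {len(history)}")
    # If the first turn was truncated, the badge code is simply gone.
    print("Badge code still in context:", any("4417" in t for t in window))
```

Running the demo shows the badge code from the first turn is no longer in context by the time the user asks for it, which is exactly the failure mode the wearable articles above describe.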
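Returning to the memory-and-context item above, here is a minimal sketch of what an external memory layer might look like. This is hypothetical and is not Memories.ai's actual design or API; MemoryFrame, MemoryStore, and the keyword search are illustrative stand-ins. The shape of the approach is what matters: distill a continuous stream into small, timestamped, searchable records rather than asking the model to hold everything in its context window.

```python
# A hypothetical "memory layer" sketch, assuming nothing about any real
# product: the raw audio/video stream is distilled into small, timestamped,
# searchable records instead of living inside the model's context window.

from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class MemoryFrame:
    """One structured record distilled from the continuous stream."""
    timestamp: datetime
    caption: str                              # short description of what happened
    tags: list[str] = field(default_factory=list)


class MemoryStore:
    """Append-only store with naive keyword search; a real system would
    use embeddings and vector search instead."""

    def __init__(self) -> None:
        self._frames: list[MemoryFrame] = []

    def add(self, caption: str, tags: list[str] | None = None) -> None:
        self._frames.append(
            MemoryFrame(timestamp=datetime.now(), caption=caption, tags=tags or [])
        )

    def search(self, query: str) -> list[MemoryFrame]:
        q = query.lower()
        return [
            f for f in self._frames
            if q in f.caption.lower() or any(q in t.lower() for t in f.tags)
        ]


if __name__ == "__main__":
    store = MemoryStore()
    store.add("Parked on level 3, spot B42", tags=["car", "parking"])
    store.add("Sarah agreed to send the contract by Friday", tags=["meeting"])

    for frame in store.search("parking"):
        print(frame.timestamp.isoformat(timespec="seconds"), "-", frame.caption)
```

A production system would swap the keyword match for embeddings and vector search, but the division of labor is the same: the store remembers, and the model only sees the few frames relevant to the current question.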
Industry Developments

Apple Partners with Google for Siri
Apple disclosed a partnership with Google to use Gemini to help deliver “Apple Intelligence” features and finally modernize Siri. The partnership lands in an unusually sensitive area, since Apple and Google are already under heavy antitrust scrutiny for their search relationship.

OpenAI Launches ChatGPT Health
With 230 million users asking health questions weekly, OpenAI launched ChatGPT Health to securely integrate medical records and wellness apps. Developed with 260+ physicians, the feature isolates health data and excludes it from model training.