1,350 points · 8 submissions
with Cursor
Sous is a voice-first cooking companion built on ElevenLabs Conversational AI and developed entirely in Cursor. Search any YouTube recipe, and Sous fetches the transcript, uses Google Gemini 2.5 Flash to extract clean numbered steps, then opens an ElevenLabs agent session pre-loaded with the full recipe context. From that point, your hands never touch the screen: you can seek to any part of the video just by describing what happens there, ask what step you're on, or set timers entirely by voice. Rather than simply reading instructions aloud, the ElevenLabs agent understands the full recipe, tracks where you are in the process, and responds conversationally to whatever comes up while you cook. Built with Next.js, ElevenLabs Conversational AI, Gemini 2.5 Flash, YouTube Data API v3, and Cursor.
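The step-extraction and voice-seeking flow could be sketched roughly as below — a minimal, hypothetical sketch (the function names, regex, and word-overlap scoring are assumptions, not Sous's actual implementation): parse the model's numbered-step output into structured steps, then resolve a spoken phrase to the best-matching step.

```typescript
interface RecipeStep {
  index: number;
  text: string;
}

// Parse lines like "1. Preheat the oven to 200C" into structured steps.
function parseNumberedSteps(raw: string): RecipeStep[] {
  return raw
    .split("\n")
    .map((line) => /^(\d+)[.)]\s+(.*)$/.exec(line.trim()))
    .filter((m): m is RegExpExecArray => m !== null)
    .map((m) => ({ index: Number(m[1]), text: m[2] }));
}

// Pick the step whose text shares the most content words with a spoken
// query, so "the part where you fold the dough" seeks to the folding step.
function matchStep(steps: RecipeStep[], query: string): RecipeStep | null {
  const words = new Set(
    query.toLowerCase().split(/\W+/).filter((w) => w.length > 3)
  );
  let best: RecipeStep | null = null;
  let bestScore = 0;
  for (const step of steps) {
    const score = step.text
      .toLowerCase()
      .split(/\W+/)
      .filter((w) => words.has(w)).length;
    if (score > bestScore) {
      bestScore = score;
      best = step;
    }
  }
  return best;
}
```

In a real agent, the matched step would then drive the video seek and be injected back into the conversation context.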
Submitted 14 May 2026
with v0
Listing is a modern reimagining of Craigslist built for the age of AI. The project explores a simple idea: what if classifieds actually evolved? Built with v0, Listing transforms the traditional static classifieds experience into a cinematic, voice-first marketplace that feels alive, interactive, and intelligent. Users can search naturally using voice, compare listings with AI-generated trust and pricing analysis, discover local events, and even create listings entirely through conversation. ElevenLabs is deeply integrated into the experience through conversational search, spoken AI comparison verdicts, and voice-powered posting workflows that make buying and selling feel effortless and modern. Inspired by nostalgic internet culture and futuristic product design, Listing blends the familiarity of local classifieds with the intelligence and usability expected from today's AI-powered experiences.
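The pricing half of a comparison verdict could look something like this — a hypothetical sketch, not Listing's actual analysis (the median-based comparison and the 10% threshold are assumptions); the returned sentence is the kind of verdict that could then be spoken aloud via text-to-speech:

```typescript
interface ListingItem {
  title: string;
  price: number;
}

// Compare a target listing's price against the median of comparable
// listings and produce a short spoken-style verdict.
function priceVerdict(target: ListingItem, comparables: ListingItem[]): string {
  const prices = comparables.map((l) => l.price).sort((a, b) => a - b);
  const median = prices[Math.floor(prices.length / 2)];
  const deltaPct = ((target.price - median) / median) * 100;
  if (deltaPct > 10) {
    return `${target.title} is about ${Math.round(deltaPct)}% above similar listings`;
  }
  if (deltaPct < -10) {
    return `${target.title} is about ${Math.round(-deltaPct)}% below similar listings`;
  }
  return `${target.title} is priced in line with similar listings`;
}
```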
with Zed
Growing up, I loved memory games: flip two cards, find the pair. Simple but addictive. So I rebuilt it with a twist: not pictures, but sound. EchoJong is a Mahjong-inspired memory game where every move is guided by what you hear. Flip a tile and you might hear a church bell, or an ice cream truck. Listen closely, hold onto it, and find its match. Every audio clip in the game is AI generated using the ElevenLabs SFX API and Music API. Built entirely in Zed, leveraging its AI editing, fast navigation, and integrated workflow to design and develop seamlessly in one place. This is not just a game; it is memory reimagined through sound. Can you remember what you hear?
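The flip-and-match core could be sketched as follows — a minimal, hypothetical sketch (the tile shape and naive shuffle are assumptions, not EchoJong's code): each tile carries a sound clip id rather than a picture, and a pair matches when both flipped tiles reference the same clip.

```typescript
interface Tile {
  id: number;
  soundId: string; // id of an AI-generated audio clip
  matched: boolean;
}

// Lay out two tiles per sound, naively shuffled.
function buildBoard(soundIds: string[]): Tile[] {
  const tiles = soundIds.flatMap((soundId, i) => [
    { id: i * 2, soundId, matched: false },
    { id: i * 2 + 1, soundId, matched: false },
  ]);
  return tiles.sort(() => Math.random() - 0.5);
}

// Flip two tiles by id; mark them matched iff they share a sound.
function tryMatch(board: Tile[], firstId: number, secondId: number): boolean {
  const a = board.find((t) => t.id === firstId);
  const b = board.find((t) => t.id === secondId);
  if (!a || !b || a === b || a.matched || b.matched) return false;
  if (a.soundId !== b.soundId) return false;
  a.matched = true;
  b.matched = true;
  return true;
}
```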
with AWS Kiro
CareRing is a voice-first care companion for elderly parents living alone and their children who are far away. It replaces uncertainty with continuous awareness — using natural voice interactions instead of manual tracking or constant check-in calls. At its core is an AI voice companion powered by ElevenLabs, conducting daily check-ins through natural conversations and timely voice reminders. It tracks medicine adherence, monitors symptoms, and understands emotional well-being — referencing prescriptions, asking about medicines by name, and logging everything automatically. CareRing leverages ElevenLabs capabilities including Conversational AI agents for real-time interactions, dynamic data fetching and logging during conversations, Text-to-Speech for timely medicine reminders, and Instant Voice Cloning so reminders can sound like a loved one. CareRing also digitizes prescriptions using Google Gemini, builds personalized schedules, and uses a decision engine to detect missed doses, unusual responses, or potential health risks — alerting you instantly when something needs attention. Built using Kiro, CareRing integrates the ElevenLabs Kiro Power — enabling dynamic access to voice AI APIs, tools, and best practices directly within the development workflow. Combined with spec-driven architecture, automated hooks for testing and validation, and correctness-driven development, this ensures reliable, production-grade behavior for critical health alerts. CareRing makes care voice-first for parents and data-driven for their children — bridging the gap between distance and real awareness.
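The missed-dose part of the decision engine could be sketched roughly like this — a hypothetical sketch under stated assumptions (the data shapes, hour-based schedule, and two-hour grace window are all illustrative, not CareRing's actual rules): compare the day's schedule against logged confirmations and flag anything overdue.

```typescript
interface Dose {
  medicine: string;
  dueHour: number; // 24h clock
}

interface TakenLog {
  medicine: string;
  hour: number; // when the check-in confirmed it was taken
}

// Return every dose that is past its due time plus a grace window and
// has no confirmation logged at or after the due time.
function missedDoses(
  schedule: Dose[],
  taken: TakenLog[],
  nowHour: number,
  graceHours = 2
): Dose[] {
  return schedule.filter(
    (dose) =>
      nowHour >= dose.dueHour + graceHours &&
      !taken.some(
        (log) => log.medicine === dose.medicine && log.hour >= dose.dueHour
      )
  );
}
```

Each flagged dose would then feed the alerting path — e.g. a text-to-speech reminder to the parent, and a notification to the child.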
with turbopuffer
History is preserved in text—but its sound is lost. Chronoscapes reconstructs historical audio scenes by turning archival documents into music and soundscapes grounded in real evidence. Given an era, event, or location, it retrieves context-rich, sound-relevant passages—news reports, eyewitness accounts, and cultural fragments—using Turbopuffer for fast semantic search across large-scale archives. A sample of the 340GB American Stories collection, comprising millions of digitized historical documents, was used to ground the system in authentic historical material. Chronoscapes then analyzes this content to extract latent sonic signals such as mood, tempo, instrumentation, and environmental texture. These signals are structured and transformed into immersive audio using ElevenLabs, generating both period-appropriate music and environmental sound. For deeper immersion, Chrono Radio turns any historical theme into a continuous broadcast: a living station where AI-voiced DJs introduce each track with context drawn from the archive, weaving music and narration into an unbroken journey through time. A query like “letters home from soldiers in the 1940s and the radio swing music that kept them going” becomes an hour of era-faithful sound, paced and hosted as if heard on the original airwaves. Instead of imagining the past, Chronoscapes makes it audible—bridging archival data and generative audio to bring history back to life.
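The Chrono Radio weaving step could be sketched like this — a minimal, hypothetical sketch (the segment types and intro phrasing are assumptions): alternate an AI-voiced DJ intro, grounded in an archive passage, with each generated track, producing an ordered playlist of segments.

```typescript
interface Track {
  title: string;
  sourcePassage: string; // archival context retrieved for this track
}

type Segment =
  | { kind: "narration"; text: string } // spoken by the AI DJ
  | { kind: "music"; title: string };   // generated track

// Interleave a DJ intro before every track so the broadcast alternates
// narration and music, each intro grounded in the retrieved passage.
function buildBroadcast(tracks: Track[]): Segment[] {
  const segments: Segment[] = [];
  for (const track of tracks) {
    segments.push({
      kind: "narration",
      text: `Up next: "${track.title}". ${track.sourcePassage}`,
    });
    segments.push({ kind: "music", title: track.title });
  }
  return segments;
}
```

Each narration segment would then be voiced with text-to-speech and each music segment generated, before being stitched into the continuous stream.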
with Replit
DayCoach is a voice-powered daily accountability app that adapts its coaching style based on how you've been showing up. Instead of passive to-do lists, you have a real conversation with an AI coach every morning and evening — and the coach updates your task list in real time while you talk. The app uses ElevenLabs Conversational AI to power four distinct coaching personas — Sunny, Coach, Commander, and Champion — each triggered automatically based on your streak and completion patterns. You never pick a coach; the app decides based on your recent behavior. Keep your streak and you get warm encouragement. Miss three days and Commander shows up with no excuses. ElevenLabs handles the full voice conversation, including client-side tool calls that let the coach mark tasks complete, add new ones, update vague ones, or delete them mid-session — all reflected live in the UI. The app is deployed on Replit using autoscale, with a PostgreSQL backend (Neon) and a React frontend served from the same Express server in production.
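The automatic persona switch could be sketched as a pure function of recent behavior — a hypothetical sketch (the exact thresholds are assumptions, apart from the three-missed-days Commander rule the description states):

```typescript
type Persona = "Sunny" | "Coach" | "Commander" | "Champion";

// The app, not the user, picks the coach from streak length and
// recently missed days.
function selectPersona(streakDays: number, missedDays: number): Persona {
  if (missedDays >= 3) return "Commander"; // no excuses
  if (streakDays >= 7) return "Champion";  // celebrate a long streak
  if (streakDays >= 3) return "Coach";     // steady accountability
  return "Sunny";                          // warm encouragement by default
}
```

The selected persona would then configure the ElevenLabs Conversational AI session (voice, system prompt, tone) before the morning or evening call starts.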
with Cloudflare
VitalSync is a real-time, voice-driven emergency coordination system for paramedics and hospital staff. In a trauma emergency, field agents can't type — their hands are on the patient. VitalSync lets paramedics speak patient conditions aloud; Cloudflare Workers AI instantly classifies severity as critical or moderate; hospital coordinators see a live incident board update within one second and assign operating theatres by voice or click — synchronized across every device on the scene. ElevenLabs powers every audio confirmation in the system — patient reported, OT assigned, patient discharged, and critical divert warnings all play back as natural spoken alerts via the Rachel voice. In a high-stress environment where eyes and hands are occupied, voice output isn't a feature — it's the primary communication channel. Cloudflare is the complete infrastructure: Workers handle edge compute with zero cold starts, Durable Objects act as a single authoritative state machine per incident (making split-brain impossible), Workers AI runs LLaMA 3.1 for triage classification entirely within Cloudflare's network, and Pages hosts the frontend — all deployed, no servers managed.
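The single-authoritative-state idea could be sketched as a per-incident state machine — a hypothetical sketch, not VitalSync's code (the status names mirror the audio confirmations above; the transition table is an assumption): a Durable Object holding this state would accept only legal transitions, so two coordinators can never fork an incident.

```typescript
type Status = "reported" | "ot_assigned" | "discharged";

// Legal next states from each status; anything else is rejected.
const transitions: Record<Status, Status[]> = {
  reported: ["ot_assigned", "discharged"],
  ot_assigned: ["discharged"],
  discharged: [],
};

// Apply a transition or throw; a Durable Object would persist the new
// status before broadcasting it (and its spoken confirmation) to clients.
function advance(current: Status, next: Status): Status {
  if (!transitions[current].includes(next)) {
    throw new Error(`illegal transition ${current} -> ${next}`);
  }
  return next;
}
```

Because all writes for one incident funnel through one object, the "split-brain impossible" property falls out of serialization rather than locking.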
with Firecrawl
EventSearch: stop scrolling dead event pages. Just speak. Find concerts, markets, pop-ups, and anything happening near you in real time. Built with ElevenLabs + Firecrawl to surface what's actually happening right now, not what happened last month.
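The "right now, not last month" filter could be sketched like this — a hypothetical sketch (the event shape and 48-hour lookahead are assumptions): after scraping, keep only events whose start time falls inside the upcoming window and drop stale pages.

```typescript
interface ScrapedEvent {
  name: string;
  startsAt: string; // ISO 8601 timestamp parsed from the scraped page
}

// Keep events starting between now and the lookahead horizon.
function upcoming(
  events: ScrapedEvent[],
  now: Date,
  lookaheadHours = 48
): ScrapedEvent[] {
  const horizon = now.getTime() + lookaheadHours * 3_600_000;
  return events.filter((e) => {
    const t = Date.parse(e.startsAt);
    return t >= now.getTime() && t <= horizon;
  });
}
```

The surviving events would then be read back to the user by voice.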
Submitted 7 May, 28 Apr, 23 Apr, 15 Apr, 9 Apr, 1 Apr, and 26 Mar 2026 (remaining submissions).