2,150 points · 8 submissions
with Cursor
There are hundreds of millions of visually impaired people around the world. Things many of us do without thinking aren't that simple for them. Beacon is a voice-first, hands-free mobile app built to help visually impaired users better understand and interact with the world around them.

It runs on a smartphone mounted to a specialized lightweight chest harness and pairs with a Bluetooth remote that lives in the user's pocket and instantly wakes the assistant, turning the system into wearable AI. No screens, no menus, no touching the display. Users simply talk, and Beacon sees the world and talks back in real time.

Beacon delivers real-time scene narration, hazard detection, object recognition, and reading of signs, menus, books, and labels, all spoken aloud naturally through conversation. It also supports turn-by-turn walking navigation, live web search, location-aware weather, and smart-home control entirely through voice. The whole experience is hands-free, with natural speech, sub-second barge-in, and no keyboards, menus, or screen interaction.

Built with Cursor for rapid agentic development, ElevenLabs (Conversational AI Agents, Text-to-Speech, Custom Voice Design, Sound Effects), Gemini as the vision intelligence layer, Firecrawl and SerpAPI for web search, Google Maps + OSRM for navigation, Open-Meteo for weather, and Tuya for smart-home control.

"Beacon. A guiding light for everyday freedom."
Submitted 14 May 2026
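As a rough illustration of the voice-driven tool flow Beacon describes (not the project's actual code), here is a minimal TypeScript sketch of how an agent's tool calls could be dispatched on the phone. The tool names and payload shapes are assumptions; the Open-Meteo forecast endpoint is the public one named in the stack.

```ts
// Hypothetical sketch of wiring a voice agent's client tools.
// Tool names and payload shapes are assumptions, not Beacon's code.

type ToolCall = { name: string; params: Record<string, unknown> };

// Location-aware weather via Open-Meteo's public forecast endpoint.
async function getWeather(lat: number, lon: number): Promise<string> {
  const url =
    `https://api.open-meteo.com/v1/forecast` +
    `?latitude=${lat}&longitude=${lon}&current_weather=true`;
  const res = await fetch(url);
  const data = await res.json();
  const w = data.current_weather;
  return `It is ${w.temperature} degrees with wind at ${w.windspeed} km/h.`;
}

// The agent decides which tool to call; the app executes it and returns
// a short sentence for the agent to speak back to the user.
async function handleToolCall(call: ToolCall): Promise<string> {
  switch (call.name) {
    case "get_weather": {
      const { latitude, longitude } = call.params as {
        latitude: number;
        longitude: number;
      };
      return getWeather(latitude, longitude);
    }
    case "describe_scene":
      // In the real app this would send a camera frame to the vision model.
      return "Scene description is handled by the vision layer.";
    default:
      return `Sorry, I don't know how to ${call.name}.`;
  }
}
```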
with v0
Hacker News (https://news.ycombinator.com/) is the thinking layer of Silicon Valley. Every day, engineers, founders, and researchers gather on a single page to figure out what matters first. It surfaces the ideas shaping the industry before Twitter, newsletters, or the mainstream catches up. But the interface hasn't changed since 2007.

HN++ is a full redesign of Hacker News. The content and product stay the same, but the design is completely reinvented: the experience is rebuilt from scratch with glassmorphism, warm SF-inspired tones, frosted cards, and a calmer reading flow, without losing information density. Three modes shape the product. Highlights surfaces the best stories of the day by topic. Feed is the live firehose of everything happening on HN in real time. Podcast turns the front page into something you can listen to, with a built-in player and daily episodes generated automatically from top stories. Everything streams live from the official Hacker News Firebase API.

The audio experience adds a completely new dimension to HN. Every story includes a Listen button that generates a short narration blending the article with the comment discussion. HN++ Pod is a fully automated daily podcast built from the day's top threads. Voice Search lets you query HN by speaking naturally. HN++ Bot is a conversational agent with live access to Hacker News: ask what's trending, what people are debating, or who's hiring.

Built with ElevenLabs (Text-to-Speech, Scribe v2, Text-to-Dialogue (Eleven v3), ElevenAgents, Music API), Vercel v0 for component generation and easy deployment, Firecrawl for scraping, Gemini as the intelligence layer, Cloudflare R2 for storage, and GitHub Actions for daily podcast automation.

Hacker News has always had the best content on the internet. HN++ gives it a surface worthy of it, and now a voice. HN++: Same firehose, new senses!
Submitted 6 May 2026
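As a small, hedged sketch of the data layer HN++ streams from, the snippet below pulls the current top stories from the official Hacker News Firebase API and assembles a digest string of the kind a Listen-style narration could start from. The helper names are illustrative, not HN++'s code; the API endpoints are the public ones.

```ts
// Minimal sketch: fetch top stories from the public Hacker News Firebase
// API, the same source HN++ streams from. Helper names are illustrative.

const HN_API = "https://hacker-news.firebaseio.com/v0";

interface HNItem {
  id: number;
  title?: string;
  url?: string;
  score?: number;
  descendants?: number; // comment count
}

async function fetchTopStories(limit = 10): Promise<HNItem[]> {
  const ids: number[] = await (await fetch(`${HN_API}/topstories.json`)).json();
  return Promise.all(
    ids.slice(0, limit).map(async (id) => {
      const res = await fetch(`${HN_API}/item/${id}.json`);
      return (await res.json()) as HNItem;
    })
  );
}

// A narration script could then be assembled from the story metadata
// before being handed to a text-to-speech step.
async function buildDigest(): Promise<string> {
  const stories = await fetchTopStories(5);
  return stories
    .map((s, i) => `${i + 1}. ${s.title} (${s.score} points, ${s.descendants} comments)`)
    .join("\n");
}
```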
with Zed
Meet the monkey who can't stop dancing. He's been grooving since before you got here, arms up, maracas shaking, locked to a rhythm he'll never break. Here's the catch: you can move him, but you can't control what he does next.

Rhythm Kingdom is a "split-control rhythm arcade game". You control where the monkey goes, not what he does. Actions are locked to a beat sequencer: eight slots that repeat. You fill them with actions in advance, and when the beat hits, they execute. No take-backs. Jump. Roll. Throw coconuts at enemies. Miss the timing and you die.

All audio, from chill to intense beats, animal sounds, voices, and tribal chants, is generated using the ElevenLabs Music and Sound Effects APIs, with ElevenLabs TTS voicing the game, layered to build the atmosphere. The screen shakes to the beat. Lights pulse. Every action triggers its own audio stem, all locked to tempo: everything perfectly synced, nothing out of time, in a single flow. When everything lines up, it feels like a live performance. You lose the rhythm. You die. You try again.

Built on the Phaser 3 game engine, it runs entirely in the browser: no installs, no backend. Built with Zed, the dev flow stayed fluid and clutter-free. I used Zed's Parallel Agents feature to work across different parts of the game simultaneously, and with MCPs like Context7 keeping everything in sync, nothing drifted. Zed let me move fast without losing control.

"Rhythm Kingdom: Place the runes. Hit the beat. Don't fall behind."
Submitted 29 Apr 2026
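To make the eight-slot mechanic concrete, here is an illustrative TypeScript sketch of a looping beat sequencer: actions are placed ahead of time and executed when their beat arrives. The class and method names are assumptions; the real game runs inside Phaser 3 with audio stems synced to the same clock.

```ts
// Illustrative sketch of the eight-slot beat sequencer described above.
// Names and structure are assumptions, not Rhythm Kingdom's code.

type Action = "jump" | "roll" | "throw" | null;

class BeatSequencer {
  private slots: Action[] = new Array(8).fill(null);
  private step = 0;

  // Queue an action into one of the eight slots ahead of time.
  place(slot: number, action: Action): void {
    this.slots[slot % 8] = action;
  }

  // Called once per beat by the game clock; executes whatever was queued.
  onBeat(execute: (action: Action) => void): void {
    execute(this.slots[this.step]);
    this.step = (this.step + 1) % 8; // loop back to the first slot
  }
}

// Example: a timer firing at 120 BPM (500 ms per beat) drives the loop.
const sequencer = new BeatSequencer();
sequencer.place(0, "jump");
sequencer.place(4, "throw");
setInterval(() => {
  sequencer.onBeat((action) => {
    if (action) console.log(`beat -> ${action}`);
  });
}, 500);
```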
with AWS Kiro
Learning an instrument is one of the most rewarding things you can do. But it is hard. Most people don't quit because they lack talent; they quit because they don't know what they're doing wrong. Beginners start excited, then get stuck. Progress feels invisible. Many quit within a year. Lessons are expensive. The problem is simple: feedback comes too late.

StringIQ changes that. StringIQ is a real-time AI music training system for guitar that gamifies learning by listening to every note, responding instantly, and adapting to your playing. Mistakes are caught as they happen, not after. While it starts with guitar, the system extends to any instrument.

Feedback should not come after you play; it should be heard and felt in the moment. StringIQ builds on this with multi-sensory feedback. You hear the coach. You see guidance. You feel it through ambient lights that respond in real time. Think concert lights, but for home practice. Drift off tempo or hit the wrong note, and the lights turn red. Lock in, and they stay green. No delay. No analysis. Just correction. Your brain links error to signal and fixes it instantly.

Under the hood: advanced digital signal processing coupled with ElevenLabs Agents and TTS delivers real-time coaching, ElevenLabs Voice Design crafted the coach's unique voice, the ElevenLabs Music API generates backing tracks for your sessions, and Tuya controls the smart lights.

StringIQ was built entirely in Kiro: spec-driven development for structure, vibe coding for rapid iteration, agent hooks as quality gates, steering for consistent conventions, and Kiro's ElevenLabs Power for integration. MCPs extended the workflow, with Context7 for persistent codebase context. StringIQ does not just tell you what went wrong. It trains your instincts.
Submitted 23 Apr 2026
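As a hedged illustration of the feedback loop (not StringIQ's actual signal processing), the sketch below maps a detected pitch against a target note and turns the error into a light color. The tolerance value and helper names are assumptions; the cents formula is standard.

```ts
// Hypothetical sketch of the instant feedback loop: compare a detected
// pitch against the expected note and map the error to a light color.
// The threshold and helper names are illustrative, not StringIQ's code.

// Deviation in cents between a detected and a target frequency.
// 100 cents = one semitone; a few cents off is effectively in tune.
function centsOff(detectedHz: number, targetHz: number): number {
  return 1200 * Math.log2(detectedHz / targetHz);
}

type LightColor = "green" | "red";

function feedbackColor(detectedHz: number, targetHz: number): LightColor {
  const toleranceCents = 10; // assumed tolerance before flagging an error
  return Math.abs(centsOff(detectedHz, targetHz)) <= toleranceCents
    ? "green"
    : "red";
}

// Example: target is A4 (440 Hz); a detection at 452 Hz is noticeably sharp.
console.log(centsOff(452, 440).toFixed(1)); // ~46.6 cents sharp
console.log(feedbackColor(452, 440));       // "red"
console.log(feedbackColor(441, 440));       // "green" (~3.9 cents)
```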
with turbopuffer
Every musician has a tune in mind. What if you could search and create music by feel, play a song to create music Shazam-style, fuse genres into entirely new sounds, and transform lyrics into fully composed songs? Meet GrooveForge: THE ULTIMATE AI TOOLKIT FOR ORIGINAL MUSIC CREATION.

GrooveForge empowers musicians to create in seconds through four powerful modes. Vibe Graph lets users search and generate music by mood, emotion, and audio characteristics. Sound Match lets you play a song while GrooveForge extracts its sonic fingerprint and generates something completely original in the same feel: Shazam, but for creation. Text-to-Music lets you type anything, from a mood or genre mashup to an era or even "Metallica meets Taylor Swift"; GrooveForge searches millions of blueprints to find the closest musical DNA and forges an original track grounded in real structure. Lyrics-to-Music transforms your words into fully composed songs by analyzing emotion, themes, and rhythm so that every section fits your lyrics: poetry to production.

At its core, GrooveForge leverages millions of indexed songs and audio blueprints enriched with features that define a track's DNA. Built using datasets including Last.fm, the Free Music Archive, the Million Song Dataset, and MusicCaps, it retrieves and analyzes the closest matches to generate original compositions grounded in real musical structure, ensuring precision, originality, and creative control.

Powered by ElevenLabs for music generation, turbopuffer for lightning-fast vector retrieval, and Google Gemini for multimodal intelligence. Search by Vibe. Generate by Blueprint. Forge Your Sound. 🚀
Submitted 13 Apr 2026
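To ground the "closest musical DNA" idea, here is a toy TypeScript sketch of blueprint retrieval by feature similarity. It uses an in-memory cosine search standing in for what would be a turbopuffer vector query over millions of entries in the real system; the feature names are invented for illustration.

```ts
// Toy sketch of blueprint retrieval: embed a query as a feature vector and
// find the closest indexed tracks by cosine similarity. In production this
// lookup would be a turbopuffer vector query; feature names are invented.

interface Blueprint {
  title: string;
  features: number[]; // e.g. [tempo, energy, valence, acousticness]
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function closestBlueprints(query: number[], index: Blueprint[], k = 3): Blueprint[] {
  return [...index]
    .sort((x, y) => cosine(query, y.features) - cosine(query, x.features))
    .slice(0, k);
}

// Example: a fast, high-energy query against a tiny in-memory index.
const index: Blueprint[] = [
  { title: "mellow acoustic", features: [0.3, 0.2, 0.6, 0.9] },
  { title: "stadium metal",   features: [0.9, 0.95, 0.4, 0.05] },
  { title: "upbeat pop",      features: [0.7, 0.8, 0.9, 0.2] },
];
console.log(closestBlueprints([0.85, 0.9, 0.5, 0.1], index, 2).map((b) => b.title));
```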
with Replit
Does your child spend too much time watching and scrolling? What if screen time could become something they actively create? Powered by ElevenLabs' AI magic, ElevenTales transforms storytelling into a creative, real-time experience where children don't just listen; they co-create.

Stories evolve in real time with rich, dynamic visuals that grow alongside a child's imagination. Kids can choose a storyteller, pick a theme, show a toy, draw something, or invent a character, and instantly the story adapts. Nothing is scripted. It's a live conversation where the AI responds to their ideas, questions, and twists as they happen.

Each storyteller is powered by ElevenLabs as a real-time conversational voice agent, custom-designed and voice-cloned to give every character a distinct personality, with expressive narration and sound effects. At the same time, Google's Nano Banana generates vivid, consistent illustrations for every scene, building an immersive visual world alongside the story.

Built and iterated rapidly on Replit, ElevenTales leveraged quick development cycles and one-click deployments to bring this experience to life. No menus. No interruptions. Just voice, visuals, and imagination flowing together. ElevenTales turns screen time into a space for creativity, curiosity, and expression, where children don't just hear stories; they build them.
Submitted 6 Apr 2026
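As a small, hedged sketch of how scene-to-scene visual consistency could be maintained (not ElevenTales' actual pipeline), the snippet below carries established characters and setting forward into every illustration prompt. All names and fields here are invented for illustration.

```ts
// Hypothetical sketch of keeping illustrations consistent across scenes:
// carry a running list of established characters and the setting, and fold
// it into every image prompt. Everything here is invented for illustration.

interface StoryState {
  characters: string[]; // e.g. "a brave purple dragon named Pip"
  setting: string;      // e.g. "a candy forest at sunset"
}

function buildScenePrompt(state: StoryState, latestTwist: string): string {
  return [
    `Children's book illustration, soft colors, consistent style.`,
    `Setting: ${state.setting}.`,
    `Characters so far: ${state.characters.join("; ")}.`,
    `New moment to depict: ${latestTwist}.`,
  ].join(" ");
}

// Each time the child adds an idea, the state grows and the next prompt
// still describes everything established earlier, so characters keep
// looking the same from scene to scene.
const state: StoryState = {
  characters: ["a brave purple dragon named Pip"],
  setting: "a candy forest at sunset",
};
console.log(buildScenePrompt(state, "Pip finds a talking lollipop"));
```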
with Cloudflare
Vault: A Voice Escape Room. A concept. An experiment. A game. What if you could give a room a voice? A life. A will of its own.

You are locked inside a glass vault, and the walls are already moving. There are no buttons, no UI, only the Vault Guardian, an entity that protects the vault, powered by a live ElevenLabs voice agent. Your voice is the key. The Guardian speaks a riddle, and you must answer out loud. Convince the Guardian you've earned it, and the vault shatters open. Fail, and the walls seal you in.

It is a claustrophobic, suffocating, inescapable experience. The vault grows darker as the walls close in, and the glass begins to creak under pressure: louder, more desperate, more final. Every sound you hear is generated live using ElevenLabs sound effects: the strain of glass, the explosive shatter when you win, or the slow, crushing collapse when you don't. There is nothing else, just the vault, alive around you.

You can try it solo, or bring a friend in co-op. In co-op, the puzzle splits in two, with parallel conversations unfolding at the same time. Each of you holds only one word of the answer, and neither knows what the other has been told. You'll have to piece it together and solve both halves before the walls close in on you both.

Built on ElevenLabs Conversational AI, the experience runs on live, real-time voice agents. Riddles are generated dynamically using Gemini 2.5 Flash, so every session presents a fresh puzzle. Cloudflare Durable Objects power the vault itself: a persistent, stateful process running at the edge. Each room is controlled by its own object, which tracks the wall position, runs the squeeze timer, and broadcasts updates to connected players over WebSocket. There is no client-side simulation: the server moves the walls, and the clients render what they're told. When both words are solved in co-op, the vault opens for both players at the exact same moment.

Your voice got you in. Now use it to get out. You have 100 seconds to live.
Submitted 30 Mar 2026
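As a minimal sketch of the Durable Object pattern described above (one object per room, a server-side timer, WebSocket broadcast), not Vault's actual code, the snippet below shows how a room could own the wall position and push authoritative updates to connected players. Field names and the one-second interval are assumptions.

```ts
// Minimal sketch: one Durable Object per room holds the wall position,
// an alarm ticks the squeeze timer, and every update is broadcast to the
// connected WebSockets. Illustrative only; names and intervals are assumed.

export class VaultRoom {
  private sessions = new Set<WebSocket>();
  private wallsClosed = 0; // 0 = fully open, 100 = crushed

  constructor(private state: DurableObjectState, private env: unknown) {}

  async fetch(request: Request): Promise<Response> {
    // Each player connects over a WebSocket upgrade to this room's object.
    const { 0: client, 1: server } = new WebSocketPair();
    server.accept();
    this.sessions.add(server);
    server.addEventListener("close", () => this.sessions.delete(server));

    // Start the squeeze timer on the first connection.
    if ((await this.state.storage.getAlarm()) === null) {
      await this.state.storage.setAlarm(Date.now() + 1000);
    }
    return new Response(null, { status: 101, webSocket: client });
  }

  // The alarm handler is the authoritative simulation step: the server
  // moves the walls and clients only render what they are told.
  async alarm(): Promise<void> {
    this.wallsClosed = Math.min(100, this.wallsClosed + 1);
    const update = JSON.stringify({ wallsClosed: this.wallsClosed });
    for (const ws of this.sessions) ws.send(update);
    if (this.wallsClosed < 100) {
      await this.state.storage.setAlarm(Date.now() + 1000);
    }
  }
}
```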
with Firecrawl
Founder delusion is real. One conversation could save you from a multi-million-dollar mistake. KillMyStartup is a voice-native AI that plays devil's advocate against your startup idea before investors do. It's not a one-time answer but a real back-and-forth. Using Firecrawl, it pulls real-time competitors, recent funding rounds, and startups that already failed doing exactly this. You talk, it pushes back; you defend, it brings evidence; you pivot, it keeps digging. It continues until your idea breaks under the evidence.

This isn't Google Assistant: it doesn't help you find things; it helps you realize what shouldn't be built. At the end, you get an Autopsy Report with everything cited, something to sit with. There's no complex UI, no over-engineering, and no fluff. It's just you, your idea, and the truth, built for first-time founders before bad ideas get expensive. Visit killmystartup.today!
Submitted 21 Mar 2026
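As a hedged sketch of the research step (not KillMyStartup's code), the snippet below asks Firecrawl's search API for live competitor and failure evidence that an agent could then cite back at the founder. The request and response shape shown here is an assumption about the Firecrawl v1 search endpoint, not a verified contract.

```ts
// Hedged sketch: gather live competitor/funding evidence via Firecrawl
// search to feed the devil's-advocate conversation. The endpoint shape is
// an assumption about Firecrawl's v1 search API, not a verified contract.

interface SearchHit {
  url?: string;
  title?: string;
  description?: string;
}

async function findCompetitors(idea: string, apiKey: string): Promise<SearchHit[]> {
  const res = await fetch("https://api.firecrawl.dev/v1/search", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      query: `startups building "${idea}" competitors OR shut down OR funding`,
      limit: 5,
    }),
  });
  const body = await res.json();
  return (body.data ?? []) as SearchHit[];
}

// The hits (URLs, titles, snippets) become the citations that end up in
// the Autopsy Report after the conversation.
```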