1,000 points · 5 submissions
with AWS Kiro
LyricLingo — Learn Languages Through AI-Generated Songs

**Description**

LyricLingo turns language learning into a musical experience. Instead of flashcards and drills, it generates original, catchy songs tailored to the vocabulary you want to learn — in any language, any genre.

**How it works:**
1. Pick your target language (Spanish, French, Japanese, German, and more)
2. Choose a topic (food, travel, greetings, emotions, business)
3. Select a music genre (pop, reggaeton, hip-hop, ballad, electronic)
4. LyricLingo generates a full original song with AI vocals, where target vocabulary is woven naturally into the lyrics

**Interactive learning features:**
- Karaoke-style synced lyrics — words highlight as the song plays
- Click any word to hear its pronunciation and see the translation
- Vocabulary panel with all target words, translations, and context sentences from the song
- Celebration sounds when you've reviewed all vocabulary

**Why songs work for language learning:**
Research shows music activates multiple brain regions simultaneously — melody aids memorization, rhythm reinforces pronunciation patterns, and emotional engagement dramatically improves retention. LyricLingo combines this science with generative AI to create personalized musical learning experiences that are impossible with traditional methods.
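The karaoke-style sync can be sketched as a timestamp lookup: given word-level timings (which an alignment step in the TTS pipeline would supply), find the word to highlight at the current playback position. All names and timings here are illustrative, not LyricLingo's actual code.

```python
import bisect

def highlighted_word(timings, position_s):
    """Return the index of the lyric word active at `position_s` seconds.

    `timings` is a sorted list of (start_s, word) tuples, e.g. from a
    forced-alignment step. Returns None before the first word starts.
    """
    starts = [start for start, _ in timings]
    i = bisect.bisect_right(starts, position_s) - 1
    return i if i >= 0 else None

# Hypothetical timings for one Spanish lyric line about food vocabulary.
timings = [(0.0, "Quiero"), (0.4, "comer"), (0.9, "manzanas"), (1.6, "rojas")]
print(highlighted_word(timings, 1.0))  # → 2 ("manzanas" is active at 1.0 s)
```

A frontend would call this on each playback tick and apply a highlight class to the returned word index.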
Submitted 23 Apr 2026
with turbopuffer
LinguaQuest — Learn Any Language by Solving a Mystery

You're not a student. You're an investigator dropped into an ancient city at dusk, and the only way to uncover the hidden secret of its bazaar is to speak the language of the locals.

LinguaQuest is a mobile-first language RPG built around a single core insight: the best way to acquire vocabulary is to hear it first — no reading, no grammar drills. Just sound and meaning, the way children actually learn.

**How it uses ElevenLabs:** Every vocabulary word is spoken by a native-quality voice generated on demand via the ElevenLabs TTS API. This isn't pre-recorded audio — it's dynamic voice generation that can scale to any language, any word set. The entire dual-channel mechanic (hear the word → match the image) only works because the audio quality is indistinguishable from a real speaker.

**How it uses turbopuffer:** The full vocabulary is indexed as vector embeddings in turbopuffer. When the game selects the next quest challenge, it runs semantic search against what the player has already learned — surfacing vocabulary that's adjacent in meaning, reinforcing retention through contextual proximity rather than random drill order. turbopuffer also powers the mystery hint system: story clues are retrieved by semantic similarity to the player's current vocabulary state.

**How it uses Claude (via AWS Bedrock):** The narrative layer adapts in real time. If a player hasn't learned a word yet, Claude generates a story hint that routes around it. As vocabulary grows, the mystery deepens.

**Tech:** Next.js · TypeScript · ElevenLabs TTS API · turbopuffer · Claude via AWS Bedrock · Tailwind CSS

**Design philosophy:** Every screen is a scene. Words are collectibles, not homework. The UI disappears into the story.

→ Repo: github.com/levanstein/LinguaQuest
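The adjacency mechanic (pick the next challenge semantically close to what the player already knows) can be sketched with a plain cosine-similarity search. This stands in for the actual turbopuffer vector query; the toy 3-d embeddings and words are made up for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def next_challenge(learned_vec, candidates):
    """Pick the unlearned word whose embedding is closest to the
    centroid of the player's learned vocabulary."""
    return max(candidates, key=lambda item: cosine(learned_vec, item[1]))

# Toy embeddings; a real build would store model embeddings in turbopuffer
# and run this nearest-neighbor query server-side.
learned_centroid = [0.9, 0.1, 0.0]   # player has mostly learned food words
candidates = [
    ("pan",    [0.8, 0.2, 0.1]),     # "bread": near the food cluster
    ("espada", [0.0, 0.1, 0.9]),     # "sword": far from the food cluster
]
word, _ = next_challenge(learned_centroid, candidates)
print(word)  # → pan
```

The same nearest-neighbor query, run over story clues instead of vocabulary, would drive the hint system described above.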
with Replit
REM-MOM is a voice AI application that addresses one of dementia care's most critical challenges: medication non-adherence. 55 million people worldwide live with dementia, and generic reminder apps with robotic voices are routinely ignored by patients. Research shows that familiar voices, especially those of family members, are encoded in deep procedural memory, the kind that dementia erodes last.

REM-MOM uses ElevenLabs Instant Voice Cloning to let family members create AI-powered reminders in their own voice. A 30-second voice sample is enough to generate medication reminders, morning check-ins, and guided memory exercises that sound like the patient's actual child, not a machine. The backend runs on Node.js/Express deployed via AWS, with ElevenLabs handling voice cloning and multilingual text-to-speech synthesis.

The project demonstrates how voice AI can solve a deeply human problem: helping families stay present for the people they love, even across distance and demanding schedules. Technology should bring us closer, not push us apart.
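A minimal sketch of the reminder flow: compose the reminder text, then build the request a backend like REM-MOM's would send to a text-to-speech endpoint using the cloned voice. The actual backend is Node.js/Express; this language-agnostic Python sketch only shows the shape of the data. The endpoint URL and field names are assumptions based on public ElevenLabs docs and should be verified; the voice ID is made up.

```python
import json

def build_reminder(patient_name, medication, time_str):
    """Compose a warm, first-person reminder in the family member's words."""
    return (f"Hi {patient_name}, it's me. It's {time_str}, "
            f"time to take your {medication}. I love you.")

def tts_request(voice_id, text, model_id="eleven_multilingual_v2"):
    """Build the URL and JSON body for a TTS call with a cloned voice.

    Assumed shape: POST https://api.elevenlabs.io/v1/text-to-speech/{voice_id}
    with an `xi-api-key` header; check current ElevenLabs docs before use.
    """
    return {
        "url": f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        "body": json.dumps({"text": text, "model_id": model_id}),
    }

req = tts_request(
    "cloned-voice-abc123",  # hypothetical voice ID from Instant Voice Cloning
    build_reminder("Mom", "blood pressure pill", "9 a.m."),
)
```

The multilingual model ID reflects the multilingual synthesis mentioned above, so one cloned voice can deliver reminders in the patient's native language.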
with Cloudflare
TrustVoice — Real-Time Vishing Detection Platform

**The Problem**

Voice phishing (vishing) attacks grew 442% in H2 2024 (CrowdStrike). Deepfake-enabled vishing surged 1,600% in Q1 2025. Companies lose $40 billion annually to AI-powered voice fraud (Deloitte). Unlike email — which has spam filters, DKIM, and phishing detection — there is no real-time defense layer for phone calls. An attacker calls your finance team, impersonates the CEO, and requests an urgent wire transfer. By the time security reviews the call, the money is gone. The largest single vishing loss on record: $25 million in one phone call.

**The Solution**

TrustVoice intercepts and analyzes phone calls in real time. It transcribes speech, classifies social engineering patterns across 6 threat categories, computes a risk score, and fires instant Slack alerts to security teams — all within seconds of the first spoken word.
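The classify-and-score step can be sketched as below. The six category names, trigger phrases, and weights are invented for illustration; a production system like the one described would use an LLM or trained classifier on the live transcript rather than keyword matching.

```python
# Six illustrative threat categories with example trigger phrases and weights.
THREAT_PATTERNS = {
    "urgency":         (["urgent", "right now", "immediately"], 0.15),
    "authority":       (["this is your boss", "the ceo", "legal department"], 0.20),
    "payment":         (["wire transfer", "gift cards", "invoice"], 0.25),
    "secrecy":         (["don't tell", "keep this between us"], 0.20),
    "credentials":     (["password", "verification code", "mfa"], 0.25),
    "unusual_channel": (["personal cell", "calling from a new number"], 0.10),
}

def score_transcript(transcript):
    """Return (risk_score, matched_categories) for a call transcript.

    Score is the sum of matched category weights, capped at 1.0.
    """
    text = transcript.lower()
    matched = [cat for cat, (phrases, _) in THREAT_PATTERNS.items()
               if any(p in text for p in phrases)]
    score = min(1.0, sum(THREAT_PATTERNS[cat][1] for cat in matched))
    return score, matched

score, cats = score_transcript(
    "This is your boss. I need an urgent wire transfer, don't tell anyone.")
```

Run incrementally on each new transcript chunk, a score crossing an alert threshold would trigger the Slack notification described above.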
with Firecrawl
FireTalk turns any two products into an AI-powered voice debate. Type two product names — like "Notion vs Obsidian" or "ChatGPT vs Claude" — and FireTalk scrapes both websites with Firecrawl (pricing, features, reviews), writes a structured comparison analysis with Claude, and generates a voice debate with two ElevenLabs AI voices arguing the pros and cons out loud.

You get: a full audio debate, a side-by-side comparison table, and a verdict — in under 2 minutes. No more opening 20 tabs and reading 10 blog posts. Just listen.

**Tech stack:** Next.js 16, Firecrawl (multi-page scraping), ElevenLabs TTS (Daniel & Sarah dual voices), Claude via AWS Bedrock, SST on AWS.

**Live demo:** firetalk.comutato.com
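The pipeline reads as three stages: scrape, analyze, speak. A minimal sketch of the last stage, turning comparison points into alternating turns for the two TTS voices. The data shape and phrasing are assumptions for illustration, not FireTalk's code; only the voice names come from the description above.

```python
def debate_turns(product_a, product_b, points, voices=("Daniel", "Sarah")):
    """Interleave pro-A and pro-B arguments into an ordered list of
    (voice, line) turns, ready to synthesize one TTS clip per turn."""
    turns = []
    for pro_a, pro_b in points:
        turns.append((voices[0], f"{product_a} wins here: {pro_a}."))
        turns.append((voices[1], f"But {product_b} counters: {pro_b}."))
    return turns

# Hypothetical comparison points a Claude analysis step might produce.
points = [
    ("a generous free personal tier", "notes stored locally as plain files"),
    ("built-in databases and sharing", "a portable Markdown vault"),
]
turns = debate_turns("Notion", "Obsidian", points)
```

Concatenating the synthesized clips in turn order yields the back-and-forth audio debate, while the same `points` list feeds the side-by-side comparison table.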
Submitted 16 Apr 2026
Submitted 9 Apr 2026
Submitted 2 Apr 2026
Submitted 26 Mar 2026