450 points · 5 submissions
with v0
Duolingo landing page in a Soviet Constructivist propaganda aesthetic — the complete opposite of Duolingo's friendly, rounded, playful brand. Same product. Same content. Opposite design. Blood red, near-black, and aged cream replace the cheerful greens. Condensed Anton type replaces the rounded friendly fonts. Zero rounded corners anywhere. Every illustration regenerated in a flat woodcut style. The friendly owl is now a stern commissar pointing directly at you. "Free. Fun. Effective." becomes "Free. Relentless. Effective." Built entirely in v0 by Vercel. Audio narration powered by ElevenLabs — a single authoritarian voiceover plays from the hero section, delivering the full page as a state mandate. One button. One voice. No friendliness.
Submitted 7 May 2026
with Zed
Say It is a pronunciation game that doesn't let you move forward until you actually say the word right. A word appears on screen — French, Japanese, Arabic, German, or brutally hard English — and the gate stays shut until your mouth gets it right. Stuck? Hit Hear It and ElevenLabs speaks it perfectly. You copy it. You pass. Next word flies in. No waiting, no skipping, just you vs your mouth vs every language on earth. Kids sound ridiculous. Adults sound worse. Everyone wants one more round.
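The gate mechanic could be sketched like this. Everything here is a stand-in: `scorePronunciation` is a toy exact-match scorer (the real game presumably scores actual speech-recognition output), and none of these names come from Say It's code.

```typescript
// Toy scorer: exact match = 1, anything else = 0. The real app's scoring
// of recognized speech would be fuzzier than this.
function scorePronunciation(heard: string, target: string): number {
  return heard.trim().toLowerCase() === target.toLowerCase() ? 1 : 0;
}

// The "gate": the next word only loads once the score clears the threshold.
function gateOpens(heard: string, target: string, threshold = 0.8): boolean {
  return scorePronunciation(heard, target) >= threshold;
}
```

The interesting design choice is that there is no skip path at all: the only escape hatch is Hear It, which gives you a perfect ElevenLabs rendering to imitate rather than a way around the gate.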
with turbopuffer
SoundDropLabs is an AI sound design tool that turns any text description into production-ready audio. Two modes: SFX Mode generates 4 unique sound-effect variations in parallel. Scene Mode takes a scene description and outputs a full 4-layer DAW-style mix (ambience, foreground, background, music) in under 10 seconds. The core is a 4-stage RAG pipeline. Every generation embeds the user's query via HuggingFace, runs semantic search across 26,264 indexed Freesound samples in turbopuffer (~20ms, cosine distance), feeds the 8 closest acoustic neighbors into Gemini 2.0 Flash for prompt enrichment, then hits the ElevenLabs SFX API x4 in parallel. The turbopuffer layer is what makes the generations actually sound grounded. Without it, ElevenLabs gets a vague prompt. With it, the model gets a vivid acoustic description built from real-world reference sounds. Scene Mode runs 4 completely independent pipelines simultaneously via Promise.allSettled. The music layer uses the ElevenLabs Music API for a 30s instrumental. The other 3 layers use the SFX API. Progress streams live to the browser via SSE, so users watch each stage complete in real time. Full pipeline: ~5-6 seconds for SFX, ~8-10 seconds for a full scene. Live demo: https://v0-soundroplabs.vercel.app
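The 4-stage pipeline described above can be sketched as follows. Every function body here is a hedged stand-in, not the project's code: the real app calls HuggingFace for embeddings, turbopuffer for the cosine-distance search, Gemini 2.0 Flash for enrichment, and the ElevenLabs SFX API for generation.

```typescript
// Stage 1: embed the query (stand-in for a HuggingFace embedding call).
async function embed(query: string): Promise<number[]> {
  return [query.length];
}

// Stage 2: find the 8 nearest acoustic neighbors (stand-in for a
// turbopuffer cosine-distance search over 26,264 Freesound samples).
async function semanticSearch(vec: number[], k = 8): Promise<string[]> {
  return Array.from({ length: k }, (_, i) => `neighbor-${i}`);
}

// Stage 3: turn raw query + reference sounds into a vivid acoustic prompt
// (stand-in for Gemini 2.0 Flash enrichment).
async function enrichPrompt(query: string, neighbors: string[]): Promise<string> {
  return `${query} (grounded by ${neighbors.length} reference sounds)`;
}

// Stage 4: fire n generations in parallel; Promise.allSettled means one
// failed call doesn't sink the other takes.
async function generateSfx(prompt: string, n = 4): Promise<string[]> {
  const calls = Array.from({ length: n }, (_, i) =>
    Promise.resolve(`take-${i}: ${prompt}`) // stand-in for an ElevenLabs SFX call
  );
  const settled = await Promise.allSettled(calls);
  return settled
    .filter((r) => r.status === "fulfilled")
    .map((r) => (r as PromiseFulfilledResult<string>).value);
}

async function sfxMode(query: string): Promise<string[]> {
  const vec = await embed(query);
  const neighbors = await semanticSearch(vec);
  const prompt = await enrichPrompt(query, neighbors);
  return generateSfx(prompt);
}
```

Scene Mode presumably just runs four of these pipelines concurrently (one per layer) under one more `Promise.allSettled`, which is what keeps a full scene near the 8-10 second mark instead of four times the single-layer latency.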
with Cloudflare
DOPPEL uses ElevenLabs Instant Voice Cloning + Conversational AI 2.0 + Cloudflare Agents to create an immersive experience where you can talk with who you'll become 10 years from now — in your own voice.
Submitted 2 Apr 2026
with Firecrawl
Afterlife lets you have a real voice conversation with any startup that ever failed, reconstructed from its own history and speaking in first person. Type a name. Firecrawl searches four to five live sources per research phase (Wikipedia, TechCrunch, ProductHunt, founder post-mortems), streaming results in real time. That research builds a detailed AI persona. ElevenLabs Conversational AI gives it a voice and keeps the conversation going, with a live search_web tool available mid-conversation so the agent can answer questions it wasn't briefed on. It never breaks character. When the conversation ends, GPT-4o-mini generates an epitaph: the startup's tombstone inscription. Every failed startup left a story nobody told cleanly. Afterlife doesn't summarize that story. It becomes it. You're not reading a post-mortem. You're in one.
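The mid-conversation search_web tool might be wired up roughly like this. This is a hypothetical sketch: `searchWeb`, `ToolCall`, and `handleToolCall` are illustrative names, and the stubbed search stands in for whatever Firecrawl call the project actually makes.

```typescript
// Shape of a tool invocation coming back from the voice agent (illustrative).
type ToolCall = { name: string; params: { query: string } };

// Stand-in for a live web search; the real version would hit Firecrawl.
async function searchWeb(query: string): Promise<string[]> {
  return [`result for: ${query}`];
}

// Dispatcher the client runs when the agent asks for facts it wasn't
// briefed on; the returned string is fed back into the conversation.
async function handleToolCall(call: ToolCall): Promise<string> {
  if (call.name === "search_web") {
    const hits = await searchWeb(call.params.query);
    return hits.join("\n");
  }
  throw new Error(`unknown tool: ${call.name}`);
}
```

The design point is that the persona built from the initial research phase is frozen, while the tool gives it fresh facts on demand, so the agent can stay in character even when asked about events outside its briefing.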
Submitted 30 Apr 2026
Submitted 13 Apr 2026
Submitted 26 Mar 2026