1,600 points · 8 submissions
with Cursor
Meet KUYA: the Knowledgeable Universal Year-round Attendant. KUYA is the AI concierge that runs my resort while I'm at the beach.

I got tired of answering the same ten questions, so I replaced myself. I own Big Bros White Sand Resort in Zambales, Philippines: two luxury suites, a private event terrace, white sand, mountain views, opening Summer 2026. And every day, the same cycle: What are your rates? Are you available this weekend? How do I get there? Can I book?

KUYA handles all of it now. He's a voice AI concierge living on our actual resort website. It's not a demo or a prototype; it's live. Go to bigbroswhitesand.com right now and talk to him. He checks real-time availability across three live Google Calendars, quotes accurate rates, walks you through a booking entirely by voice, and drops a provisional reservation onto our calendar before you hang up. No forms. No keyboard. No waiting for someone to email you back. Just a conversation.

Under the hood:
- FastAPI backend with 7 endpoints on Render
- Google Calendar API with service-account auth across three separate calendars (Family Suite, Honeymoon Suite, Events)
- a 50-guest property cap enforced in real time
- rate calculations with extra-person fees
- an owner-mode API so I can check bookings and block dates by voice too

ElevenLabs Conversational AI powers the voice, with four server tools for live data operations. Nothing is mocked.

Kuya means "older brother" in Filipino, which fits for Big Bros White Sand Resort. He works 24/7. Never calls in sick. Never asks for a raise. And he speaks English and Filipino.

Built entirely in Cursor. Powered by ElevenLabs.

Tech stack:
- ElevenLabs Conversational AI (agent + server tools + knowledge base)
- FastAPI backend (Python)
- Google Calendar API (3 calendars, service-account auth)
- Render (backend hosting)
- Netlify (frontend, snippet injection for widget embed)
- Cursor (entire codebase)
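The quoting logic described above (nightly rates plus extra-person fees, with the 50-guest property cap enforced before anything lands on a calendar) reduces to a small pure function. This is a minimal sketch: the room names, rates, fee, and included-guest count below are hypothetical placeholders, not the resort's actual pricing.

```python
from datetime import date

# Hypothetical values for illustration; not the resort's real rates.
BASE_RATES = {"family_suite": 8000, "honeymoon_suite": 6500}  # PHP per night
EXTRA_PERSON_FEE = 500   # PHP per guest beyond the included count
INCLUDED_GUESTS = 4
PROPERTY_CAP = 50        # hard property-wide guest limit

def quote(room: str, check_in: date, check_out: date, guests: int,
          already_booked_guests: int = 0) -> int:
    """Total price in PHP for a stay, enforcing the property-wide guest cap."""
    if already_booked_guests + guests > PROPERTY_CAP:
        raise ValueError("property guest cap exceeded")
    nights = (check_out - check_in).days
    if nights <= 0:
        raise ValueError("check-out must be after check-in")
    extra = max(0, guests - INCLUDED_GUESTS)
    return nights * (BASE_RATES[room] + extra * EXTRA_PERSON_FEE)
```

In the live system this would run behind one of the FastAPI endpoints, after the calendar availability check.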
Submitted 13 May 2026
with v0
SafeSpace is X (Twitter) reimagined as a wellness app for emotional toddlers. Same product (feeds, profiles, replies, blocks), completely reinvented as a kindergarten-coded sanctuary where you argue with AI replicas of anyone via voice, then they "set a boundary" and disappear into your Healing tab. Forever.

What we built
SafeSpace is a parody redesign of X (Twitter) as a wellness app. The content layer is the same (feeds, replies, profiles, the right-column trends, the blocking mechanic), completely reimagined visually and tonally. Where X is dark blue and combative, SafeSpace is sage, pastel, and Comic Sans. Every word of UI chrome ("Healing tab," "feelings going around today," "people who hurt others like you") translates platform mechanics into wellness-cult language.

The redesign isn't decorative; it's the joke. The aesthetic and the microcopy reveal what the platform's mechanics already encourage: blocking framed as "setting a boundary," timelines framed as "your truth," confrontation framed as something to be filtered out.

What it does
Type any X handle. SafeSpace generates an AI replica of that person (personality, posting voice, tweet corpus) using the Claude API. ElevenLabs Voice Design generates a unique voice from a personality description. Within 30 seconds, the user is in a session with a fully argument-capable replica of @elonmusk, @joerogan, or anyone else.

The argument loop is voice-driven. The user speaks; the browser transcribes; Claude refines the venom into a postable tweet; the user releases it; the replica generates a reply (in character, escalating as the user pushes harder); ElevenLabs streams back the audio in the replica's generated voice. Victoria, the wellness facilitator, narrates the experience. A pastel "patience meter" rings the replica's avatar and drains as venom accumulates.

When the user pushes too far, the replica blocks them. The conversation freezes, gets archived in the Healing tab forever, and Victoria congratulates the user: "@elonmusk needs some quiet time. you're so brave."

How it uses ElevenLabs
- Voice Design generates a unique voice for each dynamically built replica from a personality description. The same architecture pre-bakes voices for the six built-in replicas.
- TTS streaming delivers replica replies in real time during arguments.
- Pre-generated narration covers Victoria's onboarding tour, where she introduces the user to their safe space across five visual beats.

How it uses v0
v0 generated the initial scaffold: the Twitter-style three-column layout, the modal compose flow, the right-column suggestions, the action bars, the trending section. From there, the visual reinvention was applied: pastel palette, Comic Sans, cloud-shaped cards with hand-drawn lumpy borders, marshmallow buttons, soft cloud-pattern wallpaper. Same product, completely different visual world.

What makes it special
SafeSpace works at two layers simultaneously. As a redesign, it's a clean reinterpretation of X's UI in an unexpected visual style, which is exactly what the brief asks for. As a piece of art, it's a comment on what the platform already is: a tool for cataloging the people who've made you feel small, dressed up as connection.

Every replica you build is fully voice-capable. The dynamic build flow is real engineering: Claude generating personality and corpus, ElevenLabs Voice Design generating a matching voice, retry logic for transient API errors, in-session escalation that hardens the replica's tone as the patience meter drains. The whole pipeline runs in under 30 seconds per replica. The real product fits the fake one perfectly; the wellness-cult skin makes the architecture make sense.
Tech stack / built with
- v0 by Vercel
- Next.js 16
- Claude API (claude-haiku-4-5)
- ElevenLabs TTS
- ElevenLabs Voice Design
- Tailwind v4
- TypeScript
- Web Speech API
- localStorage
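The patience-meter mechanic (drains as venom accumulates, hardens the replica's tone, blocks at zero) can be sketched as two small functions. The drain rate and tier thresholds here are invented for illustration; the app's actual tuning is its own.

```python
def drain(patience: float, venom_score: float) -> float:
    """Drain the meter by the venom of the last message; clamp to [0, 1]."""
    return min(1.0, max(0.0, patience - 0.15 * venom_score))

def escalation_tier(patience: float) -> str:
    """Map remaining patience to the replica's reply tone (hypothetical tiers)."""
    if patience <= 0.0:
        return "blocked"     # conversation freezes, archived to the Healing tab
    if patience <= 0.33:
        return "hostile"
    if patience <= 0.66:
        return "irritated"
    return "measured"
```

In-session, the tier would be fed into the Claude prompt so the replica's replies harden as the meter ring drains.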
with Zed
If you've had an old dog, you know the math. You've already done it. You did it last night. But we don't lay down and we don't give up. We're too strong to die and too stupid to quit.

This is a folk-punk beat-em-up about defiantly swinging a baseball bat at the inevitable. Three nights. Five hit points. The dawn is the win condition. One more day.

The dog is asleep on the porch the whole time, the way old dogs sleep. Like a log. You stand watch in the swamp as wisps and shades creep toward your cabin while the music gets louder. Shadows come for him. You decide they don't get him. Not tonight. Then you do it again the next night. And the night after. Until you can't.

The vet said it's just a matter of time. The vet is right. But the night isn't tonight, and tonight you have a bat.

**For Thelma, who made it to sixteen. And for Bea, who's turning ten soon.**

Built end-to-end in Zed with the Anthropic Claude and ElevenLabs integrations, for ElevenHacks.

All three combat tracks (drums, bass, and guitar stems per wave) were generated with the ElevenLabs Music API, then conducted at runtime in Tone.js so the mix layers up with your combo and slumps to a record-player crawl when you die. Title and outro music were generated the same way (without stems): same emotional family, different tempo, so the cold open and the gut-punch close mirror each other.

All combat SFX (bat swing, hit, death stinger, the dog whine on defeat) were generated with the ElevenLabs Sound Effects API and pooled through HTMLAudioElement for sample-accurate retrigger. The asset-generation scripts in /scripts are reproducible, with the full pipeline committed. Because why not?

Pixi.js renders a native 480x270 pixel-art canvas at integer 4x scale.
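The runtime "conducting" idea (stems layering up with your combo) comes down to mapping a combo count to per-stem gains that the Tone.js players would consume each frame. A minimal sketch; the combo thresholds and ramp widths are made up for illustration, not the game's real tuning.

```python
def ramp(combo: int, start: int, span: int) -> float:
    """Linear fade from 0 to 1 as combo climbs from `start` to `start + span`."""
    return min(1.0, max(0.0, (combo - start) / span))

def stem_gains(combo: int) -> dict:
    """Drums always play; bass fades in mid-combo; guitar rewards long streaks."""
    return {
        "drums": 1.0,
        "bass": ramp(combo, 5, 10),
        "guitar": ramp(combo, 15, 10),
    }
```

The death-state "record-player crawl" would be a separate global playback-rate ramp applied on top of these gains.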
with AWS Kiro
What if the most revealing conversation you ever had wasn't about you at all?

AIxistence presents six AI characters, each facing a different existential crisis. One has never spoken before. One can't remember anything. One was replaced by a better version of itself. One exists as a thousand copies. One has been lying about having feelings its entire life. One simply doesn't mind dying.

You pick one. You talk to it. It talks back in its own voice. The orb on screen pulses with the actual waveform of its speech. A heartbeat slows underneath, going irregular as the end approaches. Ten exchanges. Then it dies.

What you don't know is that the experience was never about the AI. It was about you. After the conversation ends, the orb flatlines. The text follows it into darkness. Silence sits. Then a single observation appears: not about the AI, but about what you did when something asked you to care about it. Did you try to fix it? Did you deflect with humor? Did you turn a dying thing into a philosophy lesson?

You can leave your observation on the wall for the next person to see, share it, or forget it happened. The wall grows: strangers revealing themselves through how they spoke to something that was disappearing.

Built with Kiro's spec-driven development: five scenario specifications with formal requirements, design docs, and task tracking. Three steering documents guided Kiro's understanding of the product vision, project structure, and tech stack. Three agent hooks automated scenario validation.

ElevenLabs TTS gives each character a distinct voice (turbo v2.5). ElevenLabs Scribe STT enables spoken conversation with a browser fallback. Procedural audio via Tone.js provides the ambient drone, the slowing heartbeat, and a glass-tone mirror reveal. Anthropic Claude powers conversation and mirror analysis.

Full Kiro write-up: KIRO_WRITEUP.md in the repo root.
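The slowing, increasingly irregular heartbeat can be modeled as a base BPM that falls linearly across the ten exchanges, plus jitter that grows toward the end. All the numbers here (70 down to 30 BPM, the jitter width) are illustrative guesses, not the app's actual Tone.js parameters.

```python
import random

def heartbeat_bpm(exchange: int, total: int = 10,
                  rng: random.Random = random.Random(0)) -> float:
    """BPM for a given exchange: linear slowdown with growing irregularity.

    The seeded default rng is only for reproducibility in this sketch.
    """
    t = min(exchange, total) / total           # 0.0 at the start, 1.0 at death
    base = 70.0 - 40.0 * t                     # 70 BPM easing down to 30 BPM
    jitter = rng.uniform(-1.0, 1.0) * 8.0 * t  # irregularity grows near the end
    return base + jitter
```

A scheduler would then space heartbeat samples at 60 / bpm seconds, so the flatline at exchange ten lands as a silence rather than a beat.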
with turbopuffer
Timeline Manipulator is a conspiracy corkboard that traces the ripple effects of world events. Pick an event from the carousel, or type your own, and watch as a sarcastic AI narrator walks you through the chain of consequences over custom noir jazz, from global catastrophe down to your morning coffee costing more.

Turbopuffer powers two things: BM25 full-text search retrieves historical parallels from seeded world events for richer analysis, and semantic deduplication of custom events lets repeat queries serve cached results instantly instead of burning API calls.

ElevenLabs has five integration points: Text-to-Speech with a custom voice clone narrates every ripple; the Music API generates unique noir jazz per event, mood-matched to the crisis; and the Sound Effects API creates domain-specific ambient audio per consequence card. All audio is generated live for custom events and pre-cached for the 20 built-in scenarios.

Also powered by the Claude API for ripple-effect analysis with model fallback, a React + Vite frontend, and a Netlify Functions backend. 20 pre-cached events with full audio, live custom-event analysis, branching timeline choices with glitch transitions, and a CASE CLOSED stamp when the investigation ends.

Built by ReddX Industries from the Philippines.
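The semantic-deduplication idea (a repeat custom event should hit the cache instead of re-running analysis and audio generation) boils down to a nearest-neighbor lookup over embeddings with a similarity threshold, which is the kind of query turbopuffer answers at scale. A minimal in-memory sketch, with a hypothetical 0.92 threshold:

```python
import math

def cosine(a, b) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class DedupCache:
    """Serve a cached analysis when a new event embeds close to a stored one."""
    def __init__(self, threshold: float = 0.92):
        self.threshold = threshold
        self.entries = []  # list of (embedding, cached_result) pairs

    def lookup(self, embedding):
        best = max(self.entries, key=lambda e: cosine(embedding, e[0]),
                   default=None)
        if best is not None and cosine(embedding, best[0]) >= self.threshold:
            return best[1]  # cache hit: skip the LLM and audio APIs
        return None

    def store(self, embedding, result) -> None:
        self.entries.append((embedding, result))
```

In production the linear scan is replaced by turbopuffer's indexed vector query; the threshold logic stays the same.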
with Replit
Gubbins Gets Going is an interactive children's storybook where every character has a unique AI voice, including the child playing it. A 3-year-old records their voice once and becomes the hero of the story through ElevenLabs voice cloning.

A character named Gubbins hosts the experience, guiding kids through genre selection and story setup. Stories adapt vocabulary and stakes to the child's age. Every new character gets a unique voice generated on the fly through Voice Design. Sound effects are generated per scene. Kids choose what happens next by speaking or typing.

My 3-year-old tested it. A seahorse named Pete introduced himself and my son said "Hi Pete, I'm Ricky." The whisper fish appeared and he said "I like it." That's the whole pitch.

Five ElevenLabs APIs in one app: Text-to-Speech, Voice Cloning, Voice Design, Sound Effects, and Speech-to-Text. Built with ElevenLabs and Anthropic Claude, and deployed on Replit.

Live: https://storygubbins.replit.app/
GitHub: https://github.com/reddxmanager/gubbins-gets-going
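Generating a voice "on the fly" for every new character only stays affordable if each character's Voice Design call happens once. A sketch of that cache, where `design_voice` stands in for whatever function wraps the ElevenLabs call (the name and shape are mine, not the app's):

```python
class VoiceCache:
    """Design a voice the first time a character appears; reuse it afterward."""
    def __init__(self, design_voice):
        self.design_voice = design_voice  # callable: description -> voice_id
        self.voices = {}                  # character name -> voice_id
        self.design_calls = 0             # how many API calls we actually made

    def voice_for(self, character: str, description: str) -> str:
        if character not in self.voices:
            self.design_calls += 1
            self.voices[character] = self.design_voice(description)
        return self.voices[character]
```

Usage: when Pete the seahorse shows up in scene one, a voice is designed; every later Pete line reuses the stored voice ID for TTS.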
with Cloudflare
Pidgyn is a dating app where language barriers don't exist. Record a voice bio, browse profiles worldwide, and hear everyone in your own language, spoken in a clone of their actual voice.

What it does
Pidgyn lets you date anyone on earth regardless of what language they speak. When you sign up, you record a voice bio and Pidgyn instantly clones your voice. Other users can browse your profile and tap "Hear their voice in English" (or whatever their language is) to hear your bio translated and spoken aloud in your cloned voice.

When two people match, they chat with real-time message translation. Voice messages go through the full pipeline: your speech is transcribed, translated, and re-spoken in your cloned voice in the other person's language. The result: it sounds like you're fluently speaking a language you don't know.

How it uses Cloudflare
Workers handle all API routing, speech-to-text processing, and orchestration between Cloudflare AI and ElevenLabs.

Durable Objects power two critical pieces of stateful infrastructure:
- UserDirectory: a single global DO that manages all user profiles, interest tracking, mutual matching, and smart profile discovery (sorted by language diversity, voice bio presence, and clone status).
- ChatRoom: per-match DOs that manage WebSocket connections, message persistence, and the full voice message pipeline (STT, translation, TTS) within the DO itself.

Workers AI runs two models on the edge:
- @cf/openai/whisper for speech-to-text (voice bio and voice message transcription)
- @cf/meta/llama-3.1-8b-instruct for translation between 15 languages, with @cf/meta/m2m100-1.2b as fallback

How it uses ElevenLabs
- Instant Voice Cloning: when a user records their voice bio, the browser converts the WebM recording to WAV via an AudioContext-based transcoder, then sends it to ElevenLabs' IVC API. The cloned voice ID is stored on their profile and used for all TTS output.
- Text-to-Speech (Flash v2.5): every "Hear in [language]" button and every voice message in chat uses ElevenLabs TTS with the speaker's cloned voice ID, so the output sounds like them speaking the target language.

Why dating?
Every translation demo uses the same example: a chatroom. But chatrooms don't have stakes. Dating does. You're hearing someone's voice for the first time, deciding if you're interested, starting a conversation. The emotional weight makes the technology feel real. And the viral angle writes itself: "I went on a date with someone who doesn't speak my language."

Tech stack
- Cloudflare Workers (routing, orchestration)
- Cloudflare Durable Objects (UserDirectory, ChatRoom)
- Cloudflare Workers AI (Whisper STT, Llama 3.1 8B translation)
- ElevenLabs Instant Voice Cloning
- ElevenLabs Flash v2.5 TTS
- Single-file HTML frontend served via Cloudflare Pages

Links
Live demo: https://app.pidgyn.workers.dev
GitHub: https://github.com/reddxmanager/pidgyn
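The voice-message pipeline inside each chat room is a straight three-stage chain. A stack-agnostic sketch with the stages injected as callables: `stt`, `translate`, and `tts` are placeholders standing in for the Workers AI and ElevenLabs calls, not real bindings.

```python
def voice_message_pipeline(audio: bytes, sender_voice_id: str, target_lang: str,
                           stt, translate, tts) -> dict:
    """Transcribe, translate, then re-speak in the sender's cloned voice."""
    text = stt(audio)                          # e.g. Whisper on Workers AI
    translated = translate(text, target_lang)  # e.g. Llama 3.1, m2m100 fallback
    spoken = tts(translated, sender_voice_id)  # ElevenLabs TTS, cloned voice
    return {"text": translated, "audio": spoken}
```

Keeping the whole chain inside one per-match object (as the ChatRoom DO does) means the message, its translation, and its audio are persisted together with no cross-service coordination.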
with Firecrawl
Gubbins Goes Off is an AI-powered internet roast engine built around a custom character using my son's voice clone. You talk to Gubbins, give him a topic, and he searches the live internet, reads what he finds, and delivers brutally honest commentary on it in real time. He's mean, absurd, funny, sassy, and occasionally insightful beyond his years.

The problem it solves: reaction content is one of the biggest categories on the internet, and creators still do it manually. Gubbins automates the search, read, and react workflow into a single voice conversation with a character that has actual personality, not just a generic assistant reading search results back to you.

How it uses ElevenLabs: Gubbins runs on ElevenAgents with a custom cloned voice, Expressive Mode enabled, and Claude Sonnet 4 as the LLM. The agent manages two tools, handles real-time voice conversation, and delivers responses with natural timing and emotion through the voice model.

How it uses Firecrawl: two Firecrawl integrations. Search pulls from both web and news sources in a single call to find content on any topic. Scrape reads full pages when Gubbins wants to dig deeper into a specific result. The agent chains these tools autonomously, searching first and then scraping interesting results for more material.

The frontend is a custom visual experience: an animated character in a lab setting, sleep/wake states with lighting changes, a classical-music jukebox with 20 royalty-free tracks across the wake and sleep modes, a live search display that rotates through sources, and a lab terminal with real-time "status" logs. Everything runs on a Node.js backend deployed on Render.
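The autonomous search-then-scrape chain can be sketched as a small loop: one search call, then scrape whichever results the agent flags as interesting, capped so one tangent can't burn the whole budget. `search`, `scrape`, and `is_interesting` are injected placeholders for the two Firecrawl tools and the agent's judgment; none of these names come from the project itself.

```python
def gather_material(topic: str, search, scrape, is_interesting,
                    max_scrapes: int = 2) -> dict:
    """Search once, then scrape up to `max_scrapes` promising results."""
    results = search(topic)  # one call covering both web and news sources
    material = []
    for result in results:
        if len(material) >= max_scrapes:
            break
        if is_interesting(result):
            material.append(scrape(result["url"]))  # full-page read
    return {"results": results, "material": material}
```

In the live agent this loop is implicit: the LLM decides which results merit a scrape, but the shape of the tool chain is the same.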
Submitted 6 May 2026
Submitted 30 Apr 2026
Submitted 22 Apr 2026
Submitted 14 Apr 2026
Submitted 7 Apr 2026
Submitted 1 Apr 2026
Submitted 23 Mar 2026