550 points · 3 submissions
with Zed
Earwitness is a detective game where the gameplay is purely auditory. There is no combat, no exploration, no inventory: just four AI-generated suspect voices and the suspicion that one of them is lying.

The premise. Each case opens with a forensic case file: victim, location, time of death, and four suspects. Players listen to each suspect's recorded testimony, watch the transcript light up word by word as they speak, and flag the lines that sound suspicious. Once all four have been heard, the player accuses the killer and identifies the line that gave them away. A correct accusation triggers a full ElevenLabs-generated confession; a wrong one ends with an innocent person executed.

The core mechanic. Lying lines are generated with lower voice stability and higher style settings in ElevenLabs Multilingual v2, producing a subtle vocal tell that human listeners can detect. The player isn't guessing; they're catching micro-cadence shifts the same way a detective in an interrogation room would.

The textual layer. Every claim-dense line carries an evidence tag (ALIBI, TIMESTAMP, WITNESS, MOTIVE, ADMISSION), color-coded inline in the transcript. When a player flags a line, the notebook surfaces any cross-suspect contradictions: another suspect's testimony, displayed in italic serif, with a short forensic note and a one-click jump to that exact moment in the other recording. The cross-reference is earned through detective work, not handed out.

What's there. Three full cases: a 1960s-coded estate murder (Vance), a 1.2M-follower influencer drowned at her own brand-launch party (Wipeout), and a Hinge first date ending in an elevator (Double Tap). 112 individually generated MP3 clips, 16 distinct ElevenLabs voices, 8 cross-suspect contradictions, and full per-line stability tuning on the killers' lies.

The build. Built in Zed in 24 hours. Static HTML + React UMD + Babel, no build pipeline. Deployed on Vercel as a single static site. The audio generation pipeline is a 90-line Node script that reads the case data, hits the ElevenLabs API per line, and writes per-suspect manifests the audio engine consumes at load time.

Try it: earwitness.vercel.app
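The post describes the generation script but doesn't include it; here is a minimal sketch of what that per-line pass might look like. The file name cases.json, the HONEST_SETTINGS/LIAR_SETTINGS values, and the manifest shape are assumptions; the ElevenLabs text-to-speech endpoint and voice_settings fields are the public API.

```ts
// generate-audio.ts — hypothetical sketch of the per-line ElevenLabs pipeline.
// Assumes cases.json shaped as { suspects: [{ id, voiceId, lines: [{ id, text, isLie }] }] }.
import { readFile, writeFile, mkdir } from "node:fs/promises";

const API_KEY = process.env.ELEVENLABS_API_KEY!;

// Lying lines get lower stability and higher style; honest lines stay steady.
const HONEST_SETTINGS = { stability: 0.75, similarity_boost: 0.8, style: 0.1 };
const LIAR_SETTINGS   = { stability: 0.35, similarity_boost: 0.8, style: 0.6 };

async function synthesize(voiceId: string, text: string, isLie: boolean): Promise<ArrayBuffer> {
  const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
    method: "POST",
    headers: { "xi-api-key": API_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({
      text,
      model_id: "eleven_multilingual_v2",
      voice_settings: isLie ? LIAR_SETTINGS : HONEST_SETTINGS,
    }),
  });
  if (!res.ok) throw new Error(`TTS failed (${res.status}) for: ${text.slice(0, 40)}`);
  return res.arrayBuffer();
}

async function main() {
  const { suspects } = JSON.parse(await readFile("cases.json", "utf8"));
  for (const suspect of suspects) {
    const manifest: { lineId: string; file: string }[] = [];
    await mkdir(`public/audio/${suspect.id}`, { recursive: true });
    for (const line of suspect.lines) {
      const audio = await synthesize(suspect.voiceId, line.text, line.isLie);
      const file = `public/audio/${suspect.id}/${line.id}.mp3`;
      await writeFile(file, Buffer.from(audio));
      manifest.push({ lineId: line.id, file });
    }
    // One manifest per suspect; the in-browser audio engine loads these at startup.
    await writeFile(`public/audio/${suspect.id}/manifest.json`, JSON.stringify(manifest, null, 2));
  }
}

main();
```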
Submitted 30 Apr 2026
with Cloudflare
Have you ever been jolted awake at 1 a.m. by a callout: something's red, a deploy went sideways, and you're squinting at logs and dashboards before you've even had coffee? Or watched a release fail and felt that slow dread of hunting through the console while everyone waits on Slack? I built JARVIS.cloud for that moment: you stay in one lane (you talk, it listens and talks back) while it still does the real work on your Cloudflare Workers stack. ElevenLabs powers the voice conversation and routes intent through server tools, so the agent isn't just chatty; it can act. Cloudflare hosts the glue: a Worker that secures the session and a Durable Object that holds context and drives the Cloudflare API (deploy, tail logs, analytics, rollback, health checks), so a 1 a.m. callout or a bad deploy becomes “tell JARVIS what’s wrong” instead of another lonely date with the dashboard.
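The post sketches the architecture in prose; a minimal sketch of that Worker-plus-Durable-Object wiring might look like the following. The binding name JARVIS_SESSION, the /tools/list-workers route, and the env variable names are assumptions; the actual agent exposes richer tools (deploy, tail, rollback) against other Cloudflare API endpoints.

```ts
// Hypothetical sketch: a Worker fronts the ElevenLabs server-tool calls, and a
// Durable Object holds per-session context while talking to the Cloudflare API.
export interface Env {
  JARVIS_SESSION: DurableObjectNamespace;
  CF_API_TOKEN: string;
  CF_ACCOUNT_ID: string;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Route every tool call from the voice agent to one Durable Object per session,
    // so context (current project, last deploy, open incident) survives across turns.
    const sessionId = request.headers.get("x-session-id") ?? "default";
    const stub = env.JARVIS_SESSION.get(env.JARVIS_SESSION.idFromName(sessionId));
    return stub.fetch(request);
  },
};

export class JarvisSession {
  constructor(private state: DurableObjectState, private env: Env) {}

  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);
    if (url.pathname === "/tools/list-workers") {
      // Example tool: list Worker scripts on the account via the Cloudflare REST API.
      const res = await fetch(
        `https://api.cloudflare.com/client/v4/accounts/${this.env.CF_ACCOUNT_ID}/workers/scripts`,
        { headers: { Authorization: `Bearer ${this.env.CF_API_TOKEN}` } }
      );
      // Remember what the user was looking at so follow-up questions have context.
      await this.state.storage.put("lastTool", "list-workers");
      return new Response(await res.text(), { headers: { "Content-Type": "application/json" } });
    }
    return new Response("unknown tool", { status: 404 });
  }
}
```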
with Firecrawl
Is AI coming for your job? Not sure how to stay ahead of it? Looking for career advice in the AI wave? Meet Donald.

An AI that:
- tells you how exposed your job is to AI
- gives you real career advice: what to learn, where to apply, how to stand out
- hands you a clear path to stay ahead
- helps you fix your CV

It doesn't sugarcoat anything. It roasts you with the truth, then gives you a game plan.

Built this using:
- ElevenLabs Agents (voice + personality)
- Firecrawl (real-time web intelligence)
- Claude SDK (reasoning + analysis)
- Multi-agent orchestration under the hood

Since launch:
- 30K+ views in 48 hours on Instagram (40K so far)
- 200+ users in under 48 hours (300+ so far)
- 4.1M+ on "Tuition in Shambles"

Try it yourself: www.heyitsdonald.com
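The post lists the building blocks but not the wiring; a minimal sketch of how one sub-agent might combine Firecrawl's scraping with Claude's analysis is below. The function name assessRole, the prompt, the source URL parameter, and the model id are assumptions, not the production setup.

```ts
// Hypothetical sketch of one Donald sub-agent: scrape a live job-market page with
// Firecrawl, then have Claude score how exposed the role is to AI and suggest next steps.
import FirecrawlApp from "@mendable/firecrawl-js";
import Anthropic from "@anthropic-ai/sdk";

const firecrawl = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY });
const claude = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

export async function assessRole(jobTitle: string, sourceUrl: string): Promise<string> {
  // 1. Pull fresh context from the web as clean markdown.
  const page = await firecrawl.scrapeUrl(sourceUrl, { formats: ["markdown"] });
  if (!page.success) throw new Error("Scrape failed");

  // 2. Ask Claude to reason over it and return a blunt, structured verdict.
  const response = await claude.messages.create({
    model: "claude-sonnet-4-5",
    max_tokens: 1024,
    messages: [
      {
        role: "user",
        content:
          `Role: ${jobTitle}\n\nSource material:\n${page.markdown}\n\n` +
          `Rate this role's exposure to AI from 0-10, then give three concrete steps ` +
          `(what to learn, where to apply, how to stand out). Be blunt.`,
      },
    ],
  });

  const verdict = response.content.find((block) => block.type === "text");
  return verdict && verdict.type === "text" ? verdict.text : "";
}
```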
Submitted 1 Apr 2026
Submitted 24 Mar 2026