1,000 points · 6 submissions
with Cursor
NewsTalk AI: Voice-Powered News Intelligence

What we built: NewsTalk AI is a fully voice-controlled, AI-powered news platform that reads the latest headlines aloud and lets you have natural conversations about any story, completely hands-free. Think of it as your personal AI news anchor that you can talk to. Just say "next news," "switch to sports," or "change language to Hindi," or ask "what does this mean?" and the AI responds instantly. It supports 10+ languages, 8 news categories, and 12 countries.

The problem it solves: Reading news takes time and attention. You can't scroll through articles while cooking, driving, commuting, or working out. Existing news apps are screen-dependent: they demand your eyes and hands. NewsTalk AI eliminates that entirely. You press one button and the AI starts reading the news. You can interrupt it with your voice at any time to skip stories, switch topics, ask follow-up questions, or change languages, all without touching the screen. It turns passive news consumption into an interactive, voice-first experience that works even when your hands and eyes are busy.

How we built it with Cursor IDE: The entire project was built from scratch using Cursor IDE as our primary development environment. Cursor's AI-powered coding assistant helped us rapidly prototype and iterate on the architecture, from designing the modular service layer (news fetching, voice recognition, TTS, AI chat) to debugging complex async audio lifecycle issues like overlapping speech streams and echo cancellation. Features that would normally take days, like continuous voice recognition with keep-alive, multi-language support, and real-time news aggregation from RSS feeds, were shipped in hours thanks to Cursor's intelligent code generation and context-aware suggestions. Every line of code, every bug fix, and every deployment happened inside Cursor.

How we used ElevenLabs: The ElevenLabs API powers the entire voice experience. We use the eleven_multilingual_v2 model to generate natural, human-like speech in 10+ languages: English, Hindi, Spanish, French, German, Japanese, Portuguese, Arabic, Chinese, and Korean. When the AI reads a news article or answers a question, the text is sent to ElevenLabs' text-to-speech API, which returns high-quality audio that plays seamlessly in the browser. We built a custom audio lifecycle manager on top of it (sketched below), with abort controllers to cancel in-flight API requests when the user interrupts, generation counters to prevent stale audio from playing, and automatic cleanup to ensure only one voice stream is ever active. The result is a buttery-smooth, interruptible voice experience that feels like talking to a real person.
Submitted 14 May 2026
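To make the interruption handling concrete, here is a minimal TypeScript sketch of the lifecycle pattern the write-up describes: an AbortController cancels in-flight TTS requests and a generation counter discards stale audio. All names here (VoiceStreamManager, the /api/tts route) are hypothetical illustrations, not NewsTalk AI's actual code.

```typescript
class VoiceStreamManager {
  private controller: AbortController | null = null;
  private generation = 0;
  private audio: HTMLAudioElement | null = null;

  async speak(text: string): Promise<void> {
    const myGeneration = ++this.generation; // stamp this request
    this.stop(); // cancel any previous request and stop playing audio

    this.controller = new AbortController();
    let res: Response;
    try {
      res = await fetch("/api/tts", { // hypothetical server-side TTS proxy
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text }),
        signal: this.controller.signal, // lets stop() cancel it mid-flight
      });
    } catch {
      return; // fetch was aborted by a newer request
    }

    // A newer request started while we waited: drop this stale audio.
    if (myGeneration !== this.generation) return;

    const blob = await res.blob();
    if (myGeneration !== this.generation) return;

    this.audio = new Audio(URL.createObjectURL(blob));
    await this.audio.play();
  }

  stop(): void {
    this.controller?.abort();
    this.controller = null;
    if (this.audio) {
      this.audio.pause();
      this.audio = null;
    }
  }
}
```

Calling speak() twice in quick succession aborts the first request and invalidates its generation stamp, so at most one voice stream is ever active.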
with v0
Wikipedia: Neubrutalist Edition

What I Built
A live Wikipedia client with a bold Neubrutalist UI and AI-powered text-to-speech. Every article, image, and news feed is fetched in real time from en.wikipedia.org. Users can search, browse trending articles, explore categories, and, most importantly, highlight any text to hear it read aloud by an AI voice.

What Problem It Solves
Wikipedia's interface is stuck in the 2000s and has no native audio feature. This project gives it a modern, visually striking redesign while adding voice accessibility, so anyone can listen to articles instead of reading them.

How ElevenLabs Is Used
ElevenLabs powers the listen feature. When a user selects text, it is sent to a secure Next.js API route (sketched below) that proxies the request to ElevenLabs' eleven_turbo_v2_5 model. The generated MP3 streams back to the browser for instant playback with play/pause controls and a download option. The API key stays server-side for security, and input is capped at 2,500 characters to manage costs.

How v0 Is Used
v0 by Vercel was used to design and build the entire frontend: the Neubrutalist design system (thick borders, hard shadows, zero radius, vibrant colors), all page layouts (homepage, articles, search, categories), the component library, responsive design, and dark mode. It turned what would have been days of CSS and layout work into rapid, iterative prototyping.

Submitted 7 May 2026
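A minimal sketch of that server-side proxy pattern, assuming a Next.js App Router route handler; the voice ID is a placeholder, while the endpoint, header, and model ID follow ElevenLabs' public REST API.

```typescript
// app/api/tts/route.ts — illustrative sketch, not the project's actual code.

export async function POST(req: Request): Promise<Response> {
  const { text } = await req.json();

  // Cap input length to control cost, as the write-up describes.
  if (typeof text !== "string" || text.length === 0 || text.length > 2500) {
    return new Response("Invalid or too-long text", { status: 400 });
  }

  const voiceId = "YOUR_VOICE_ID"; // placeholder
  const upstream = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
    {
      method: "POST",
      headers: {
        "xi-api-key": process.env.ELEVENLABS_API_KEY!, // stays server-side
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text, model_id: "eleven_turbo_v2_5" }),
    }
  );

  if (!upstream.ok) {
    return new Response("TTS request failed", { status: 502 });
  }

  // Stream the MP3 straight back to the client for instant playback.
  return new Response(upstream.body, {
    headers: { "Content-Type": "audio/mpeg" },
  });
}
```

Keeping the key in process.env means it never ships to the browser, and the character cap rejects oversized requests before any ElevenLabs spend.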
with Zed
Zombow: Survive the Horde

Zombow is a fast-paced, third-person survival archery game built entirely in the browser using raw JavaScript, Three.js, and WebGPU. Set in a lush, procedurally generated tropical environment, players must survive an endless, dynamically scaling horde of zombies. The game features high-performance 3D spatial audio, realistic projectile physics with continuous collision detection (sketched below), and an over-the-shoulder camera for precision aiming. Zombow pushes the boundaries of what is possible for immersive, lag-free 3D gaming directly in a web browser.

Submitted 30 Apr 2026
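Continuous collision detection matters here because a fast arrow can tunnel straight through a zombie between two frames if only its end position is tested. Below is a minimal Three.js sketch of one common swept-segment approach; names like zombieMeshes are hypothetical, and this is not necessarily Zombow's implementation.

```typescript
import * as THREE from "three";

// Raycast along the segment the projectile travelled this frame, so a hit
// anywhere between the old and new positions is detected.
function sweepProjectile(
  prevPos: THREE.Vector3,
  nextPos: THREE.Vector3,
  zombieMeshes: THREE.Object3D[]
): THREE.Intersection | null {
  const travel = new THREE.Vector3().subVectors(nextPos, prevPos);
  const distance = travel.length();
  if (distance === 0) return null;

  const ray = new THREE.Raycaster(
    prevPos,
    travel.normalize(), // direction must be unit length
    0,
    distance            // only consider hits within this frame's travel
  );
  const hits = ray.intersectObjects(zombieMeshes, true);
  return hits.length > 0 ? hits[0] : null; // closest hit first
}
```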
with Replit
I built CodeMapAI, a tool that turns a GitHub repository or uploaded ZIP file into an interactive visual code map with AI-powered explanations, learning paths, flow understanding, and voice-assisted navigation. It solves a common problem developers face when opening a new or large codebase: it's hard to know where to start, how files connect, and what each part actually does. Instead of reading everything manually, CodeMapAI makes the structure and purpose of the codebase much easier to understand (one possible mapping step is sketched below). ElevenLabs provides the voice features, specifically text-to-speech, so the app can speak responses and explanations back to the user as part of the voice-assisted experience. Replit served as the development environment and building workflow, helping to prototype, develop, and iterate on the project faster.

Submitted 9 Apr 2026
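As one plausible first step for building such a map, the sketch below fetches a repository's full file tree in a single call via GitHub's git trees API and groups files by top-level directory. The branch name main and the grouping strategy are assumptions for illustration, not CodeMapAI's actual implementation.

```typescript
interface TreeEntry {
  path: string;
  type: "blob" | "tree";
}

// Fetch the repo's recursive file tree, then bucket files by top-level
// directory as a seed for a visual code map.
async function fetchRepoMap(
  owner: string,
  repo: string
): Promise<Map<string, string[]>> {
  const res = await fetch(
    `https://api.github.com/repos/${owner}/${repo}/git/trees/main?recursive=1`
  );
  if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
  const { tree } = (await res.json()) as { tree: TreeEntry[] };

  const map = new Map<string, string[]>();
  for (const entry of tree) {
    if (entry.type !== "blob") continue; // skip directories, keep files
    const top = entry.path.includes("/") ? entry.path.split("/")[0] : "(root)";
    if (!map.has(top)) map.set(top, []);
    map.get(top)!.push(entry.path);
  }
  return map;
}
```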
with Cloudflare
ArgueTV is an AI-powered live debate platform that turns any topic into a structured two-sided debate between two virtual hosts, Avery Quinn and Jordan Blake. It combines OpenRouter for debate generation, Firecrawl for live research and trending topics, and ElevenLabs for realistic voice synthesis, then adds post-debate fact-checking and verdict analysis. It helps users explore both sides of controversial or timely topics quickly, in a more engaging format than reading articles or static summaries. Instead of searching across multiple sources and forming arguments manually, users can enter a topic and instantly get a researched, spoken, multi-round debate with supporting context.

What it does:
- Makes complex topics easier to understand by presenting both sides
- Saves time on research by pulling context automatically
- Turns static AI text into a more human, audio-first experience
- Helps creators, students, and curious users evaluate arguments and claims faster
- Adds fact-checking and verdict summaries so debates are not just entertaining, but informative

How ElevenLabs is used (see the sketch after this entry):
- ElevenLabs converts each debate response into natural-sounding speech
- Each host has a distinct assigned voice
- Audio is generated round-by-round during the debate
- It also powers segment replay and the broadcast-style experience

Submitted 2 Apr 2026
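A minimal TypeScript sketch of the round-by-round pattern: alternate between the two hosts, generate each turn's text, then synthesize it with that host's assigned voice. Here generateTurn stands in for the OpenRouter call and the voice IDs are placeholders; none of this is ArgueTV's actual code.

```typescript
interface Host {
  name: string;
  voiceId: string; // placeholder ElevenLabs voice IDs below
}

const HOSTS: Host[] = [
  { name: "Avery Quinn", voiceId: "VOICE_ID_A" },
  { name: "Jordan Blake", voiceId: "VOICE_ID_B" },
];

// `generateTurn` abstracts the LLM call (OpenRouter in ArgueTV's case).
async function runDebate(
  topic: string,
  rounds: number,
  generateTurn: (topic: string, host: Host, transcript: string[]) => Promise<string>
): Promise<{ transcript: string[]; audio: ArrayBuffer[] }> {
  const transcript: string[] = [];
  const audio: ArrayBuffer[] = [];

  for (let round = 0; round < rounds; round++) {
    for (const host of HOSTS) {
      const line = await generateTurn(topic, host, transcript);
      transcript.push(`${host.name}: ${line}`);

      // Synthesize this turn with the host's distinct assigned voice.
      const res = await fetch(
        `https://api.elevenlabs.io/v1/text-to-speech/${host.voiceId}`,
        {
          method: "POST",
          headers: {
            "xi-api-key": process.env.ELEVENLABS_API_KEY!,
            "Content-Type": "application/json",
          },
          body: JSON.stringify({ text: line }), // model choice omitted
        }
      );
      audio.push(await res.arrayBuffer()); // queued for per-round playback
    }
  }
  return { transcript, audio };
}
```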
with Firecrawl
CodeTune is a voice-powered AI repo assistant that helps users understand, navigate, and debug any GitHub repository, then turns that same codebase into music. It solves a real developer problem: unfamiliar repositories are hard to onboard into quickly, especially when you need to understand structure, setup, and open issues fast. CodeTune lets users paste a repo URL, ask questions about the project, inspect issue context, hear spoken answers, and then generate a soundtrack that reflects the repo's engineering style and purpose.

How it uses the tech: Firecrawl scrapes the repository page and README into clean markdown context (sketched below), which becomes the foundation for repo understanding. That context is combined with the repository structure and sampled source files so the assistant can answer practical questions about architecture, setup, debugging, and issues. ElevenLabs powers the voice layer for spoken repo guidance and the music layer for instrumental or lyrical generation, turning the repository from something you can read into something you can both hear and experience.

Submitted 26 Mar 2026
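A minimal sketch of the Firecrawl step, assuming their v1 scrape endpoint with markdown output; treat the exact response shape as an assumption to verify against current Firecrawl docs, and note this is illustrative rather than CodeTune's actual code.

```typescript
// Scrape a repo page (e.g. its README view) into clean markdown for use
// as LLM context.
async function scrapeRepoContext(repoUrl: string): Promise<string> {
  const res = await fetch("https://api.firecrawl.dev/v1/scrape", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      url: repoUrl,
      formats: ["markdown"], // request LLM-ready markdown only
    }),
  });
  if (!res.ok) throw new Error(`Firecrawl error: ${res.status}`);

  const { data } = await res.json();
  return data.markdown; // becomes the foundation of the repo context
}
```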