400 points · 2 submissions
with Cloudflare
Haven Architect is an autonomous generative ambient audio engine built for deep work and flow states. Users describe their current task, pick a sound world and an energy level, and Haven instantly generates a completely original soundscape tuned to that exact context. Every 60 seconds the soundscape evolves on its own. A built-in Pomodoro timer runs 25/5 focus cycles, with the sound intensity softening automatically at break time. Mid-session, users can type natural language to the Architect ("more rain," "pump the energy") and the system rebuilds the audio within seconds. Nothing loops. The same sound never plays twice.

The problem Haven solves is simple: static music fails deep work. The human brain habituates to repetitive audio in roughly 20 minutes, after which the sound becomes invisible noise, and so does the work. Lo-fi playlists and rain-sounds videos on YouTube all share this flaw because they were built for entertainment, not cognition. Haven fixes this by classifying each task against 9 neuro-acoustic profiles grounded in attention restoration theory and environmental psychology. Coding gets tonal drones that support beta brainwaves. Writing gets natural soundscapes that reduce cortisol. Creative work gets abstract spatial textures that induce a theta state. The audio environment is built for the person, the task, and the moment.

ElevenLabs is the core synthesis engine behind every soundscape. Haven calls the ElevenLabs sound generation API to produce 22-second MP3 chunks on demand, with each prompt generated fresh by Llama 3.3 70B running on Cloudflare AI. These prompts are 15-to-30-word physical sound descriptions built from the user's task context, for example "heavy rain on forest canopy, deep resonant drone beneath ancient trees, distant thunder rolling through undergrowth."
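The 25/5 Pomodoro cycle with automatic softening can be sketched as a pure function of elapsed session time. This is a hypothetical illustration only: the function name, intensity values, and softening factor are assumptions, not Haven's actual code.

```typescript
// Hypothetical sketch of a 25/5 Pomodoro cycle with intensity
// softening at break time. Names and values are illustrative assumptions.
type Phase = { kind: "focus" | "break"; intensity: number };

const FOCUS_MIN = 25;
const BREAK_MIN = 5;
const CYCLE_MIN = FOCUS_MIN + BREAK_MIN;

// Map elapsed minutes since session start to the current phase and a
// master gain multiplier: full intensity during focus, softened at break.
function phaseAt(elapsedMinutes: number, baseIntensity = 1.0): Phase {
  const t = ((elapsedMinutes % CYCLE_MIN) + CYCLE_MIN) % CYCLE_MIN;
  if (t < FOCUS_MIN) {
    return { kind: "focus", intensity: baseIntensity };
  }
  // Soften the soundscape during the 5-minute break.
  return { kind: "break", intensity: baseIntensity * 0.4 };
}
```

In a real engine, the returned intensity would drive the master gain of the playback graph rather than being consumed directly.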
A custom Web Audio engine trims the silence from both ends of every MP3 chunk so chunks loop and crossfade with zero click artifacts; chunks are streamed in real time over WebSocket and blended with a 3-second linear gain crossfade. Two chunks are pre-generated in parallel at session start so playback begins instantly, and three local 108 Hz oscillators synthesize audio at boot while the first real chunk loads, so there is zero silence from the moment the user hits start.

Cloudflare powers the entire backend across four services. Cloudflare Workers handles all routing and audio serving at the edge. Cloudflare Durable Objects is the architectural core: each user session gets its own isolated DO instance that holds complete session state, including chat history, prompt evolution, energy parameters, and usage limits, surviving WebSocket drops and reconnections without losing context. Cloudflare AI runs Llama 3.3 70B directly at the edge for task classification and prompt engineering. And Cloudflare R2 stores every generated audio chunk for fast global delivery. The result is a fully stateful, real-time, generative audio system running entirely at the edge with no traditional server anywhere in the stack.
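The 3-second linear gain blend between consecutive chunks reduces to simple math. The sketch below is a hypothetical illustration (the function name and shape are assumptions); in the actual engine these values would drive Web Audio GainNode ramps.

```typescript
// Hypothetical sketch of a 3-second linear crossfade between the
// outgoing and incoming audio chunks. In a real engine these values
// would be scheduled on Web Audio GainNodes; here it is plain math.
const CROSSFADE_SECONDS = 3;

// t: seconds since the crossfade started. Returns [outGain, inGain].
function crossfadeGains(t: number): [number, number] {
  const x = Math.min(Math.max(t / CROSSFADE_SECONDS, 0), 1);
  return [1 - x, x]; // linear blend: the two gains always sum to 1
}
```

Because the two gains sum to 1 at every instant, the outgoing chunk hands off to the incoming one without a discontinuity in total level, which is what makes the seams click-free.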
Submitted 2 Apr 2026
with Firecrawl
Sentinel is a web intelligence tool that monitors topics on the internet and calls you when something important changes. Not an email, not a Slack ping: an actual phone call with a spoken briefing explaining what happened, why it matters, and which sources back it up.

The problem is simple. Important things happen online every day: competitor launches, pricing changes, regulatory shifts, security advisories, job openings. By the time they reach your inbox or feed, everyone already knows. Existing monitoring tools send notifications that get buried. Sentinel makes the alert impossible to ignore by calling your phone and speaking the update in plain language.

It works in two modes. Personal mode calls one person. Team mode calls everyone at once, but each person hears a different briefing based on their role: the CEO hears strategic implications, the engineer hears technical impact, and the CFO hears the financial angle. Same event, different perspectives, delivered simultaneously. Every alert stores the exact source URLs, a full call transcript, and a written summary, so you can always trace back why Sentinel interrupted your day and what evidence it found.

Firecrawl is used at two stages. Firecrawl Search runs every 15 to 60 minutes to scan the web for a monitored topic and pull back structured content. The backend compares each new snapshot against the previous one to detect meaningful changes. When a change scores high enough, Firecrawl Agent runs autonomous deep research across multiple sources to verify the signal and gather context before any call goes out.

ElevenLabs powers the voice layer. When a change crosses the threshold, the backend generates a briefing with Claude and uses the ElevenLabs Conversational AI outbound-call API to place a real phone call. The agent speaks the briefing naturally and can handle follow-up questions on the call.
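The snapshot-comparison step that gates deep research can be sketched as a scoring function with a threshold. Sentinel's actual scoring is not public, so everything below, including the token-overlap metric, the function names, and the threshold value, is an illustrative assumption.

```typescript
// Hypothetical sketch of change detection between two crawled snapshots:
// score how much the new snapshot differs from the previous one and gate
// deeper investigation on a threshold. Uses a naive Jaccard distance
// over word tokens; the real backend's scoring is an unknown.
function tokens(text: string): Set<string> {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

// 0 = identical token sets, 1 = completely disjoint.
function changeScore(prev: string, next: string): number {
  const a = tokens(prev);
  const b = tokens(next);
  if (a.size === 0 && b.size === 0) return 0;
  let shared = 0;
  for (const t of a) if (b.has(t)) shared++;
  const union = a.size + b.size - shared;
  return 1 - shared / union;
}

const ALERT_THRESHOLD = 0.3; // assumed value; would be tuned per watch

function shouldInvestigate(prev: string, next: string): boolean {
  return changeScore(prev, next) >= ALERT_THRESHOLD;
}
```

The important design point is the two-stage gate: cheap scoring runs on every poll, and the expensive autonomous research step only fires when the score clears the threshold.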
For team watches, multiple calls fire at the same time, each with a different spoken message tailored to that person's role. After each call ends, the transcript syncs back through the ElevenLabs webhook and is stored for review on the dashboard. Learn more at sentinel-ai.up.railway.app.
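The team-mode fan-out, one event yielding a differently framed briefing per role, can be sketched as follows. This is a hypothetical shape only: in the real system Claude writes each briefing, and the role names and framing strings here are assumptions.

```typescript
// Hypothetical sketch of team-mode fan-out: the same event summary
// produces a differently framed briefing per role before each outbound
// call. Role names and framing text are illustrative assumptions; the
// real system has an LLM write each briefing.
type Role = "ceo" | "engineer" | "cfo";

const ROLE_ANGLE: Record<Role, string> = {
  ceo: "strategic implications",
  engineer: "technical impact",
  cfo: "financial angle",
};

// One shared event in, one tailored briefing out per role.
function briefingsFor(event: string, roles: Role[]): Map<Role, string> {
  const out = new Map<Role, string>();
  for (const role of roles) {
    out.set(role, `Focusing on ${ROLE_ANGLE[role]}: ${event}`);
  }
  return out;
}
```

Each entry in the resulting map would then seed one simultaneous outbound call, which is why every teammate hears the same event from a different angle.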
Submitted 26 Mar 2026