Sidefire: A Million Requests in Fifteen Minutes

Updated April 5, 2026

Two Versions, Two Stories

Sidefire has two lives. Version 1 is a group chat app that works the way group chat should - no app store, no email signup, just your phone number. Real-time messaging on Cloudflare Workers and Durable Objects. It still runs.

Version 2 is where everything went wrong in exactly the right ways.

Version 1: Proof of Concept

The original Sidefire proved a complete real-time messaging platform could run entirely on Cloudflare’s edge with zero traditional server infrastructure. WebSocket rooms, image sharing, push notifications, phone number auth - the full feature set, no fleet of servers required.

It was also the project where the question first surfaced: if an AI agent can write good code in one session, what happens when you run a whole team of them in parallel?

Version 2: The Parallel Agent Experiment

With v1 stable, the question became concrete: could an entire rewrite be built by AI agents working simultaneously? Not one agent coding sequentially, but a team - Claude Opus as architect, multiple Sonnet agents working in parallel in separate VS Code instances, each owning a slice of the codebase.

Over 72 hours, the team produced 56 API endpoints, 458 tests, 31 React components, 13 database tables, and 101 commits. The parallelism model was elegant: file ownership as the concurrency primitive. No two agents touched the same file. Branches merged cleanly. Tests passed. Everything looked perfect.

Then we pushed to staging.

Staging immediately began hammering the API. The root cause: calling useXxxStore() with no selector returns the entire store object - a new reference on every state mutation. That object went into useCallback dependency arrays, which invalidated the callbacks on every mutation, which caused effects to refire, which set state, which triggered more mutations. Five hooks shared the pattern. Eight open browser tabs meant 8x amplification. The 401 auto-retry in api.ts multiplied every failed request. And CORS preflight errors hit Cloudflare’s edge layer directly, bypassing the KV rate limiter entirely.
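The mechanics can be shown without React at all. The store shape below is hypothetical, but the reference semantics are Zustand's: set() merges into a fresh object, so the whole-store reference changes on every mutation even when the field a callback reads does not.

```typescript
// Plain-TypeScript sketch of the failure mode. The ChatState shape is
// an assumption, not Sidefire's actual store.
type ChatState = { unread: number; draft: string };

let state: ChatState = { unread: 0, draft: "hello" };

function set(partial: Partial<ChatState>): void {
  // Zustand-style immutable update: every call produces a new object.
  state = { ...state, ...partial };
}

// Buggy pattern: useChatStore() with no selector - the subscription is
// to the whole object, a new reference after any mutation.
const wholeBefore = state;
set({ unread: 1 }); // a field the callback never reads
const wholeAfter = state;
const wholeStable = wholeBefore === wholeAfter; // false: deps invalidated

// Fixed pattern: useChatStore(s => s.draft) - the selected primitive is
// unchanged, so useCallback dependency arrays stay stable.
const draftStable = wholeBefore.draft === wholeAfter.draft; // true

console.log({ wholeStable, draftStable });
```

Once that unstable reference lands in a useCallback dependency array, every mutation anywhere in the store invalidates the callback, and any effect depending on it refires.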

One million requests. Fifteen minutes. Cloudflare free tier exhausted for the day.

What the Failure Produced

The problem was not the Zustand bug. The bug was a symptom.

File ownership is not feature ownership. A WebSocket handler and a state management hook don’t share files, but they share a contract. That contract was implicit. Each agent’s code was correct in isolation. Together, they created feedback loops no single agent had anticipated, because no single agent had the full picture.
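One way to make that contract explicit - sketched here with hypothetical event names, not Sidefire's actual interfaces - is a shared types module that both the WebSocket handler and the state hook compile against, with the reconnect-redelivery requirement stated in the type's contract:

```typescript
// Hypothetical shared contract module. The specific fields are
// illustrative; the point is that both agents' code imports one
// explicit definition instead of agreeing implicitly.

// Events the server pushes over the WebSocket.
export type ServerEvent =
  | { kind: "message"; roomId: string; body: string; sentAt: number }
  | { kind: "presence"; roomId: string; online: string[] };

// Store-side contract: apply() must be idempotent, because the socket
// may redeliver events after a reconnect.
export interface EventSink {
  apply(event: ServerEvent): void;
}

// A sink honoring the idempotency clause via deduplication.
export class DedupSink implements EventSink {
  private seen = new Set<string>();
  bodies: string[] = [];

  apply(event: ServerEvent): void {
    if (event.kind !== "message") return;
    const key = `${event.roomId}:${event.sentAt}`;
    if (this.seen.has(key)) return; // redelivered after reconnect: ignore
    this.seen.add(key);
    this.bodies.push(event.body);
  }
}

const sink = new DedupSink();
const evt: ServerEvent = { kind: "message", roomId: "r1", body: "hi", sentAt: 1 };
sink.apply(evt);
sink.apply(evt); // simulated redelivery
console.log(sink.bodies.length); // 1
```

A shared module like this would not have prevented the feedback loop by itself, but it turns an implicit runtime agreement into something the type checker and a reviewer can see.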

Integration testing is the real bottleneck. Each agent wrote thorough unit tests for its own code. The integration surface - how components interact at runtime - was barely tested. Agents are excellent at testing what they built. They are poor at anticipating how their output interacts with someone else’s.

The human has to be the architect. Claude Opus as architect worked for high-level decisions. It did not work for unglamorous detail work: “what happens when the WebSocket reconnects while a selector is mid-update?” That question requires holding the whole system in view. No parallel agent had that view.

Every deployable unit needs its own CI. The production API Worker had only ever been deployed manually. There was no deploy-api.yml. The web workflow ran green while the API ran stale code. A separate workflow for each deployable is not overhead - it’s the only way to know what’s actually live.
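A minimal sketch of the missing deploy-api.yml - the paths, directory layout, and secret name here are assumptions, not Sidefire's actual configuration:

```yaml
# Hypothetical deploy-api.yml: one workflow per deployable unit.
name: deploy-api
on:
  push:
    branches: [main]
    paths: ["api/**"]   # only redeploy when the API actually changes
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
        working-directory: api
      - run: npx wrangler deploy
        working-directory: api
        env:
          CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
```

The path filter matters: with separate workflows per deployable, a green run on one cannot masquerade as a deploy of the other.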

Wrangler’s default is preview, not production. Every CI run “succeeded.” The production slot kept showing old content. wrangler pages deploy without --branch main creates a throwaway preview URL. The production slot only gets claimed when you’re explicit about it. This is also why staging looked completely broken for weeks while all CI was green.
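The fix is a single flag. The output directory name below is an assumption; the flag behavior is Wrangler's documented default for Pages:

```shell
# Default: publishes a throwaway preview deployment on its own URL
npx wrangler pages deploy ./dist

# Explicit branch: claims the production slot for a project whose
# production branch is main
npx wrangler pages deploy ./dist --branch main
```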

Why It Mattered

The failure was the education. Every methodology in use now - the spec-driven approach, single-agent-per-session, the emphasis on CI/CD feedback loops, the linearized prompt chain - was born from three days of Sidefire v2 chaos.

Boopadoop was built in one afternoon using those lessons. One spec, one agent, one session. No parallel state to reconcile. No implicit contracts between isolated contributors. The human held the full picture. PDARR was built the same way. So was Fork the Vote.

The instinct to document what went wrong - specifically, with root causes, not just symptoms - turned a $200 Cloudflare bill into the foundation of a methodology.

The Throughline

Sidefire v2 is not a cautionary tale. It is the origin point.

The question it was trying to answer - can a team of AI agents build production software in parallel? - got a clear answer: not like this. The follow-up question - so how does it work? - produced everything that came after.

Current Status

v1 still runs at old.sidefire.net. The memorial lives at sidefire.net - a landing page for a project declared unsalvageable. Not because the code couldn’t be fixed, but because the development model had produced a codebase no one could reason about. Sometimes the right call is to learn from the rubble and not rebuild on it.