We were building an automated test runner for DataShield's MCP (Model Context Protocol) platform — 56+ API endpoints that return responses via SSE (Server-Sent Events). We needed per-test timeouts. Simple, right?

It took 7 iterations and over 4 hours to discover that there is no single reliable way to time out SSE stream consumption in Node.js. Not AbortSignal. Not Promise.race. Not reader.read() loops with deadline checks. Not even http.request with an external setTimeout. The solution requires combining three approaches — each covers a failure mode the others miss.

This affects anyone building MCP clients, consuming streaming AI responses, polling webhook delivery endpoints, or running automated tests against SSE APIs. Here's what we tried, why each approach failed, and the combined pattern that actually works.

The Problem

When an SSE endpoint returns HTTP 200 with Content-Type: text/event-stream, the server holds the connection open to send future events. If the server sends keepalive or heartbeat data but never sends the actual result you're waiting for, your Node.js application hangs forever. No built-in timeout mechanism will interrupt it.

This isn't a bug — it's a design gap at the intersection of three systems: the Fetch API (designed for request/response, not streams), the Streams API (no timeout concept), and Node.js's event loop scheduling (I/O callbacks can starve timer callbacks). If you've tried to abort an SSE stream in Node and hit a wall, that intersection is why.

Seven Approaches — The Journey

Attempt 1: AbortSignal.timeout() on fetch() FAILS

The obvious first try. AbortSignal.timeout() was designed for exactly this, right?

attempt-1.ts
const resp = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
  signal: AbortSignal.timeout(30_000),
});
const body = await resp.text();  // hangs forever
Why it fails: AbortSignal.timeout() only aborts the initial connection phase. Once the server returns HTTP 200 headers, the signal is considered satisfied. The subsequent resp.text() call ignores it entirely and waits for the stream to close — which an SSE stream never does.

Attempt 2: await resp.text() on SSE stream FAILS

Maybe just reading the full response body will work?

attempt-2.ts
const resp = await fetch(url, {
  method: "POST",
  body: JSON.stringify(payload),
});
const body = await resp.text();  // waits for stream close... forever
Why it fails: resp.text() buffers the entire response body and resolves only when the stream closes. SSE streams don't close — that's the point. Your process hangs indefinitely. This is the most fundamental misunderstanding of SSE timeout behavior in Node.js: you're waiting for the end of a stream that is designed never to end.

Attempt 3: Date.now() check in ReadableStream reader loop FAILS

Alright, let's get the reader and check the clock on each iteration.

attempt-3.ts
const reader = resp.body!.getReader();
const deadline = Date.now() + 30_000;

while (true) {
  if (Date.now() > deadline) break;          // never reached
  const { done, value } = await reader.read();  // blocks here
  if (done) break;
  processChunk(value);
}
Why it fails: The Date.now() check runs before the await reader.read(). When the server sends no data, reader.read() blocks indefinitely — control never returns to the top of the while loop, so the deadline check never executes. When the server sends rapid keepalives, you get a different problem: each read() resolves immediately with keepalive data, so the loop spins at full CPU, and the deadline check only runs between reads — a server that floods keepalives and then goes silent leaves you blocked inside read() all over again.

Attempt 4: Promise.race(reader.read(), setTimeout) FAILS

This is where it gets insidious. Promise.race should let the timer win, right?

attempt-4.ts
const reader = resp.body!.getReader();
const deadline = Date.now() + 30_000;

while (Date.now() < deadline) {
  const result = await Promise.race([
    reader.read(),
    new Promise(r => setTimeout(r, 5_000)),
  ]);
  if (!result || result.done) break;
  processChunk(result.value);
}

This works when the server sends no data — the timer wins the race after 5 seconds (though each iteration leaks an uncleared setTimeout, and a timeout resolves the race with undefined, so the bare break treats it like a normal end-of-stream). But it catastrophically fails when the server sends rapid keepalives. Here's exactly why:

  1. reader.read() resolves immediately with each keepalive chunk.
  2. The loop re-enters, creating a new Promise.race, but the timer's setTimeout callback is a macrotask.
  3. I/O callbacks execute in the Poll phase of the Node.js event loop. Timer callbacks execute in the Timers phase, which the loop only reaches when it starts a fresh iteration.
  4. Each reader.read() resolution triggers microtasks (Promise resolution), which run between phases. New I/O arrives, re-entering the Poll phase before the loop ever gets back to the Timers phase.
  5. The timer is perpetually starved. CPU goes to 100%. The event loop is alive but the timer never fires.
The result: Your application appears responsive but is stuck in a busy loop. The readable stream timeout never triggers. This is Node.js event loop timer starvation in action.
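The losing side of this race can be reproduced without a network at all. In this sketch of ours, reader.read() is modeled as an already-resolved promise — which is exactly how it behaves when a keepalive chunk is buffered — and the timer branch becomes unreachable:

```typescript
// Models Attempt 4's race when data is already buffered: the fake read()
// resolves in a microtask, while the setTimeout callback is a macrotask.
// The microtask always wins, so the timeout branch never fires.
async function raceWithBufferedKeepalive(): Promise<string> {
  const fakeRead = Promise.resolve("keepalive-chunk"); // read() with data waiting
  const timer = new Promise<string>((resolve) =>
    setTimeout(() => resolve("timeout"), 0)
  );
  return Promise.race([fakeRead, timer]);
}

raceWithBufferedKeepalive().then((winner) => {
  console.log(winner); // → "keepalive-chunk", even against a 0ms timer
});
```

Even a 0 ms timer loses: Promise.race settles with the first settled contender, and a resolved promise is delivered from the microtask queue before the event loop ever reaches the Timers phase.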

Attempt 5: http.request + external setTimeout + req.destroy() PARTIAL

Switching from fetch to node:http gives us direct socket control. Register the timeout before any response handler:

attempt-5.ts
import http from "node:http";

// (Sketch — assumes a surrounding Promise executor supplying
// resolve/reject, as in the full pattern below.)
const req = http.request(url, { method: "POST" });

// External kill timer — registered BEFORE response handler
const hardKill = setTimeout(() => {
  req.destroy(new Error("SSE timeout"));
}, 30_000);

req.on("response", (res) => {
  const chunks: Buffer[] = [];
  res.on("data", (chunk) => chunks.push(chunk));
  res.on("end", () => {
    clearTimeout(hardKill);
    resolve(Buffer.concat(chunks).toString());
  });
});

req.end(body);

This works when the server sends no data. The timer fires after 30 seconds, req.destroy() tears down the TCP socket at the OS level, and the connection terminates. But when the server sends rapid keepalives, we hit the same event loop scheduling problem as Attempt 4. The setTimeout callback lives in the Timers phase of the event loop. The res.on("data") callbacks live in the Poll phase. I/O starves the timer identically.

Partial fix: Handles the "no data" failure mode. Fails on rapid keepalives due to timer starvation.
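The working half can be verified end-to-end against a local toy server. This is our own sketch — the silent server and the 200 ms timeout are demo values, not DataShield endpoints:

```typescript
import http from "node:http";
import type { AddressInfo } from "node:net";

// Toy SSE server: sends headers and one keepalive, then goes silent forever.
const server = http.createServer((_req, res) => {
  res.writeHead(200, { "Content-Type": "text/event-stream" });
  res.write(": keepalive\n\n");
  // ...the actual result never arrives.
});

let observedError = "";

server.listen(0, () => {
  const { port } = server.address() as AddressInfo;
  const req = http.request({ port, path: "/" });

  // External kill timer, registered before any response handling.
  const hardKill = setTimeout(() => {
    req.destroy(new Error("SSE timeout"));
  }, 200);

  req.on("error", (err) => {
    clearTimeout(hardKill);
    observedError = err.message; // "SSE timeout"
    server.close();
  });
  req.end();
});
```

With no I/O arriving, the event loop is idle, the Timers phase runs normally, and req.destroy() tears the socket down 200 ms in.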

Attempt 6: Date.now() inside res.on("data") + res.destroy() PARTIAL

Here's the breakthrough insight: put the deadline check inside the data callback itself. The very mechanism that was starving our timers becomes our clock.

attempt-6.ts
// (As in Attempt 5, assumes a surrounding Promise executor
// supplying resolve/reject.)
const deadline = Date.now() + 30_000;

req.on("response", (res) => {
  const chunks: Buffer[] = [];

  res.on("data", (chunk) => {
    if (Date.now() > deadline) {
      res.destroy();           // synchronous socket kill
      return reject(new Error("SSE timeout"));
    }
    chunks.push(chunk);
  });

  res.on("end", () => resolve(
    Buffer.concat(chunks).toString()
  ));
});

This works for rapid keepalives because the Date.now() check runs in the same I/O callback phase that was starving our timers. Every keepalive packet that arrives triggers res.on("data"), which checks the deadline and calls res.destroy() synchronously. No scheduling dependency. The server's own keepalives become the clock.

But it has the opposite blind spot: when the server sends no data at all, res.on("data") never fires, and the deadline check never runs. The connection hangs forever.

Partial fix: Handles the "rapid keepalives" failure mode. Fails when the server sends no data.

Attempt 7: destroyed flag + res.removeAllListeners("data") THE FIX

We thought we were done with the combined 5+6 pattern. Then we hit one more edge case in production: after calling res.destroy(), kernel-buffered TCP chunks are still delivered to res.on("data").

This is a subtle Node.js behavior. When you call res.destroy(), it initiates socket teardown, but the kernel's TCP receive buffer may already contain data that was in-flight. Node.js will still emit "data" events for these buffered chunks before the "close" event fires. Without a guard, your fail() function gets called multiple times, and you process chunks after you've already decided to timeout.

attempt-7.ts
let destroyed = false;

res.on("data", (chunk: Buffer) => {
  if (destroyed) return;           // Stop processing after destroy
  if (Date.now() > deadline) {
    destroyed = true;
    res.removeAllListeners("data"); // Prevent further callbacks
    res.destroy();                  // Kill the socket
    fail(new Error("SSE timeout"));
    return;
  }
  chunks.push(chunk);
});

Three layers of defense:

  1. destroyed flag — early return prevents processing any chunks after timeout. This is the fast path.
  2. res.removeAllListeners("data") — unregisters the callback entirely so Node.js doesn't even invoke it for remaining buffered chunks.
  3. res.destroy() — tears down the TCP socket to stop any further data arriving from the network.
This completes the production pattern. Combined with Attempt 5's external setTimeout for the "no data" case, and Attempt 6's Date.now() check in the data callback for the "rapid keepalives" case, this destroyed flag handles the post-destroy cleanup edge case. All three together form the complete solution.
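The listener-side mechanics of the guard are easy to exercise in isolation with a plain EventEmitter standing in for the response stream — a toy model of ours; real kernel buffering involves the TCP stack, and layer 3 (res.destroy()) needs a real socket:

```typescript
import { EventEmitter } from "node:events";

const res = new EventEmitter();        // stands in for the real res
const received: string[] = [];
let destroyed = false;

res.on("data", (chunk: string) => {
  if (destroyed) return;               // layer 1: fast-path guard
  if (chunk === "late") {
    destroyed = true;
    res.removeAllListeners("data");    // layer 2: unregister the callback
    return;                            // layer 3 (res.destroy()) omitted here
  }
  received.push(chunk);
});

res.emit("data", "event-1");
res.emit("data", "late");              // deadline "hit": tear down
res.emit("data", "buffered-chunk");    // no listener left — silently dropped

console.log(received); // → ["event-1"]
```

The chunk emitted after teardown never reaches application code: the flag would catch it, and removeAllListeners means it isn't even dispatched.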

The Complete Pattern: Attempts 5 + 6 + 7

Now the full picture is clear. Three mechanisms, three failure modes:

No single approach covers all failure modes. All three together do.

| # | Approach | Rapid Keepalives | No Data | Post-Destroy Chunks | Result |
|---|----------|------------------|---------|---------------------|--------|
| 1 | AbortSignal.timeout() | Hangs | Hangs | N/A | FAIL |
| 2 | resp.text() | Hangs | Hangs | N/A | FAIL |
| 3 | Date.now() in reader loop | Busy loop | Blocks forever | N/A | FAIL |
| 4 | Promise.race + setTimeout | Timer starved | Works | N/A | FAIL |
| 5 | External setTimeout + req.destroy() | Timer starved | Works | Leaks | PARTIAL |
| 6 | Date.now() in res.on("data") | Works | Never fires | Leaks | PARTIAL |
| 7 | destroyed flag + removeAllListeners | N/A alone | N/A alone | Works | PARTIAL |
| 5+6+7 | Complete production pattern | Works | Works | Works | PASS |

Why Event Loop Scheduling Matters

To understand why Attempts 4 and 5 fail under load, you need to understand how Node.js schedules work. The event loop processes callbacks in phases:

  1. Timers — setTimeout, setInterval callbacks
  2. Pending callbacks — deferred I/O callbacks
  3. Poll — retrieve new I/O events; execute I/O-related callbacks (socket data, file reads)
  4. Check — setImmediate callbacks
  5. Close — close event callbacks

Crucially, microtasks (Promise .then, queueMicrotask) execute between every phase transition. When SSE keepalives arrive continuously, here's what happens:

  1. Poll phase fires res.on("data") with a keepalive chunk.
  2. The callback resolves a Promise (reader.read()), queuing a microtask.
  3. Microtask runs between phases, re-entering your loop and calling reader.read() again.
  4. New I/O is already waiting in the poll queue. Back to step 1.
  5. The Timers phase never executes because the loop is trapped between Poll and microtasks.

This is Node.js event loop timer starvation. Your setTimeout callback is queued and ready, but the event loop never reaches the phase where it would fire. The process isn't frozen — it's actively processing I/O at 100% CPU. It just can't get to your timer.
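The starvation itself can be measured without any sockets. In this sketch of ours, a 0 ms timer is queued first, then a chain of awaited microtasks is drained — and the chain completes before the timer gets a single chance to fire:

```typescript
// Microtask checkpoints run to exhaustion before the event loop
// advances to the Timers phase, so even a 0ms timer waits behind
// an arbitrarily long chain of awaited resolved promises.
async function drainMicrotasksFirst(): Promise<boolean> {
  let timerFired = false;
  setTimeout(() => { timerFired = true; }, 0);
  for (let i = 0; i < 10_000; i++) {
    await Promise.resolve();   // yields only to the microtask queue
  }
  return timerFired;           // still false
}

drainMicrotasksFirst().then((fired) => {
  console.log(fired); // → false: 10,000 microtasks beat one 0ms timer
});
```

Swap the awaited resolved promise for a reader.read() that keeps resolving with keepalive chunks and you have Attempt 4's failure in miniature.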

Attempt 6 sidesteps this entirely: the Date.now() check runs inside the Poll phase callback. No phase transition needed. The external setTimeout in Attempt 5 handles the complementary case where no I/O arrives and the event loop does reach the Timers phase normally.

The Copy-Paste Pattern

Here's the complete sseRequestWithTimeout() function. Drop it into any Node.js project targeting SSE endpoints — MCP clients, AI streaming consumers, test harnesses.

sse-timeout.ts
import http from "node:http";
import https from "node:https";

interface SseRequestOptions {
  url: string;
  method?: string;
  headers?: Record<string, string>;
  body?: string;
  timeoutMs: number;
}

export function sseRequestWithTimeout(
  opts: SseRequestOptions
): Promise<string> {
  return new Promise((resolve, reject) => {
    let settled = false;
    const deadline = Date.now() + opts.timeoutMs;

    const fail = (err: Error) => {
      if (settled) return;
      settled = true;
      reject(err);
    };

    const ok = (data: string) => {
      if (settled) return;
      settled = true;
      resolve(data);
    };

    const parsed = new URL(opts.url);
    const transport = parsed.protocol === "https:"
      ? https : http;

    const req = transport.request(opts.url, {
      method: opts.method ?? "POST",
      headers: {
        "Content-Type": "application/json",
        ...(opts.headers ?? {}),
      },
    });

    // ── MECHANISM A (Attempt 5) ──────────────────────
    // External kill timer for the "no data" case.
    // Lives in Timers phase — fires when event loop
    // is idle (no I/O to starve it).
    const hardKill = setTimeout(() => {
      req.destroy();
      fail(new Error(
        `SSE timeout after ${opts.timeoutMs}ms (no data)`
      ));
    }, opts.timeoutMs);

    req.on("error", (err) => fail(err));
    req.on("close", () => clearTimeout(hardKill));

    req.on("response", (res) => {
      const chunks: Buffer[] = [];
      let destroyed = false;

      res.on("data", (chunk: Buffer) => {
        // ── MECHANISM C (Attempt 7) ────────────────
        // Guard against post-destroy buffered chunks.
        if (destroyed) return;

        // ── MECHANISM B (Attempt 6) ────────────────
        // Deadline check inside I/O callback for the
        // "rapid keepalives" case. Runs in Poll phase
        // — can't be starved by its own I/O.
        if (Date.now() > deadline) {
          destroyed = true;
          res.removeAllListeners("data");
          res.destroy();
          return fail(new Error(
            `SSE timeout after ${opts.timeoutMs}ms (keepalive flood)`
          ));
        }
        chunks.push(chunk);
      });

      res.on("end", () => {
        clearTimeout(hardKill);
        const raw = Buffer.concat(chunks).toString();
        // Extract last SSE data line
        const lines = raw.split("\n");
        const dataLines = lines
          .filter((l) => l.startsWith("data: "))
          .map((l) => l.slice(6));
        const last = dataLines[dataLines.length - 1] ?? raw;
        ok(last);
      });

      res.on("error", (err) => fail(err));
    });

    if (opts.body) req.write(opts.body);
    req.end();
  });
}

Key implementation details:

All three mechanisms are essential. The setTimeout lives in the Timers phase and fires when the event loop is idle. The Date.now() check lives in the Poll phase and fires when the event loop is saturated with I/O. The destroyed flag stops post-destroy kernel-buffered chunks from being processed. Together they cover the entire spectrum of SSE server behavior.
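The SSE parsing at the tail of the pattern is worth a standalone look. This helper is our own extraction of the end-handler logic (the function name is illustrative): it returns the last data: payload, falling back to the raw body when no data lines are present:

```typescript
// Mirrors the end handler: collect "data: " lines, return the last one.
export function lastSseData(raw: string): string {
  const dataLines = raw
    .split("\n")
    .filter((l) => l.startsWith("data: "))
    .map((l) => l.slice("data: ".length));
  return dataLines[dataLines.length - 1] ?? raw;
}

const raw = [
  ": keepalive",
  'data: {"partial":true}',
  ": keepalive",
  'data: {"result":"ok"}',
  "",
].join("\n");

console.log(lastSseData(raw)); // → {"result":"ok"}
```

Keepalive comment lines (those starting with ":") are filtered out for free, since they never match the "data: " prefix.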

When You Need This

Any client where the server may hold a stream open indefinitely: MCP clients calling streaming tool endpoints, consumers of streaming AI responses, automated test harnesses running against SSE APIs, pollers of webhook delivery endpoints.

When You Don't Need This

Servers that reliably close the stream after the final event — there, a plain await resp.text() resolves normally. Browser code using EventSource, which manages connection lifecycle and reconnection itself.

Conclusion

This finding emerged from building DataShield's automated platform test matrix — a harness that validates 56+ MCP tool endpoints across our data governance platform. The MCP ecosystem is growing rapidly as AI agents adopt the protocol for tool use. Any MCP client library in Node.js that needs request-level timeouts will hit this exact problem.

The key insight: no single approach works for all failure modes (no data, rapid keepalives, post-destroy buffered chunks). You need three mechanisms — one in the timer phase (external setTimeout + req.destroy()), one in the I/O callback phase (Date.now() inside res.on("data")), and a destroyed flag with res.removeAllListeners("data") to handle kernel-buffered TCP chunks that arrive after socket teardown. Together they cover the full spectrum of SSE server behavior.

We've open-sourced the pattern. Use it.