# Detach — Fire-and-forget child flowcharts
The `footprintjs/detach` subpath gives you a primitive for scheduling child flowcharts off the parent stage’s hot path — telemetry exports, parallel evaluations, audit log shipping, cache warm-up, anything that should ride alongside the main pipeline rather than inside it.
Two semantics × two surfaces × six drivers.
## Semantics

| Method | Returns | Use when |
|---|---|---|
| `detachAndJoinLater` | `DetachHandle` | You want the result later (await, status check, fan-out) |
| `detachAndForget` | `void` | Pure fire-and-forget (telemetry, audit log, etc.) |
Both are sync at the call site — the parent stage returns immediately. The child runs on whichever driver you pick.
## Surfaces

| Caller | refId prefix |
|---|---|
| `scope.$detachAndJoinLater` | `<runtimeStageId>:detach:<n>` |
| `executor.detachAndJoinLater` | `__executor__:detach:<n>` |
The synthetic `__executor__` prefix is honest about provenance — there is no source stage to point back to.
## Drivers

```ts
import {
  microtaskBatchDriver,
  immediateDriver,
  setImmediateDriver,
  setTimeoutDriver,
  createSendBeaconDriver,
  createWorkerThreadDriver,
} from 'footprintjs/detach';
```

| Driver | When | Capabilities |
|---|---|---|
| `microtaskBatchDriver` | Default. Coalesces N detaches into one microtask flush. | browser + node + edge |
| `immediateDriver` | Sync execution inside `schedule()` — for tests. | browser + node + edge |
| `setImmediateDriver` | Node-only. Yields to I/O before running. | node |
| `setTimeoutDriver` | Cross-runtime. Configurable delay. | browser + node + edge |
| `sendBeaconDriver` | Browser-only. Survives page unload via `navigator.sendBeacon`. | browser, survivesUnload |
| `workerThreadDriver` | CPU-isolated execution in a Node Worker / Web Worker. | node, cpuIsolated |
The driver is a required first argument — there is no library default. Pass it explicitly so the choice of scheduling algorithm is visible at every call site.
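To make “coalesces N detaches into one microtask flush” concrete, here is a minimal stand-alone sketch of the batching idea in plain TypeScript — not footprintjs source, just the scheduling shape the default driver’s description implies: every `schedule()` call in the same synchronous burst lands in one queue, and a single microtask drains it.

```typescript
type Task = () => void;

const queue: Task[] = [];
let flushScheduled = false;
let flushes = 0;

function schedule(task: Task): void {
  queue.push(task);
  if (!flushScheduled) {
    // Only the first schedule() in a burst books the flush.
    flushScheduled = true;
    queueMicrotask(() => {
      flushScheduled = false;
      flushes += 1;
      // Drain everything queued during this synchronous burst in one pass.
      for (const t of queue.splice(0)) t();
    });
  }
}

let ran = 0;
schedule(() => { ran += 1; });
schedule(() => { ran += 1; });
schedule(() => { ran += 1; });

// Runs after the flush microtask (the flush was queued first).
queueMicrotask(() => console.log(`${flushes} flush, ${ran} tasks ran`));
// → "1 flush, 3 tasks ran"
```

The real driver runs child charts rather than bare callbacks, but the coalescing mechanics are the same: three detaches in one stage become one flush.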
## 1 · Fire-and-forget telemetry

Parent does its work, fires telemetry, returns. The handle is discarded.
```ts
/**
 * Detach — Fire-and-Forget Telemetry
 *
 * The parent stage processes an order, then fires a telemetry chart via
 * `microtaskBatchDriver`. The handle is discarded — caller never waits.
 *
 * Pipeline:
 *   ProcessOrder → (commits + returns)
 *        │
 *        └─► driver flushes ─► TelemetryChart
 *
 * Run: npx tsx examples/runtime-features/detach/01-fire-and-forget.ts
 */
import { flowChart, FlowChartExecutor } from 'footprintjs';
import { microtaskBatchDriver } from 'footprintjs/detach';

// ── Side-effect chart: a single stage that records the event ──────────
const telemetryEvents: unknown[] = [];

const telemetryChart = flowChart('ShipTelemetry', async (scope) => {
  // In real life this would POST to a telemetry endpoint. For the example
  // we just push to an array so the test can verify it ran.
  telemetryEvents.push(scope.$getArgs());
}, 'ship-telemetry').build();

// ── Main chart: process an order, fire telemetry, return ──────────────
interface OrderState {
  orderId: string;
  parentReturnedAt: number;
}

const main = flowChart<OrderState>('ProcessOrder', async (scope) => {
  scope.orderId = 'order-42';
  // Fire-and-forget — driver schedules the work, we don't wait.
  scope.$detachAndForget(microtaskBatchDriver, telemetryChart, {
    event: 'order.processed',
    orderId: scope.orderId,
  });
  scope.parentReturnedAt = performance.now();
}, 'process-order').build();

// ── Run + inspect ─────────────────────────────────────────────────────
(async () => {
  const exec = new FlowChartExecutor(main);
  const t0 = performance.now();
  await exec.run();
  const parentRunWall = performance.now() - t0;

  // At this point: parent has returned, but the telemetry microtask may
  // not have flushed yet. Yield twice to give it a chance.
  await Promise.resolve();
  await Promise.resolve();

  console.log(`Parent run wall: ${parentRunWall.toFixed(2)}ms`);
  console.log(`Telemetry events shipped: ${telemetryEvents.length}`);
  console.log(`First event: ${JSON.stringify(telemetryEvents[0])}`);

  // ── Regression guards ──
  if (telemetryEvents.length !== 1) {
    console.error(`REGRESSION: expected 1 telemetry event, got ${telemetryEvents.length}.`);
    process.exit(1);
  }
  const evt = telemetryEvents[0] as { event: string; orderId: string };
  if (evt.event !== 'order.processed' || evt.orderId !== 'order-42') {
    console.error('REGRESSION: telemetry payload wrong.', evt);
    process.exit(1);
  }
  // Parent should have returned fast — definitely under 50ms.
  if (parentRunWall > 50) {
    console.error(`REGRESSION: parent run wall too high (${parentRunWall}ms) — detach should not block.`);
    process.exit(1);
  }

  console.log('OK — fire-and-forget telemetry flushed cleanly.');
})().catch((e) => {
  console.error(e);
  process.exit(1);
});
```

## 2 · Join-later fan-out
Fire N children in parallel, gather their results in a downstream stage. Combine many handles via `Promise.all`.
```ts
/**
 * Detach — Join-Later Fan-Out
 *
 * Fan out 5 parallel sub-evaluations using `$detachAndJoinLater`,
 * then await all of them in a downstream stage via `Promise.all`.
 *
 * Pipeline:
 *   Fanout (queue 5 detaches) → Join (await all handles)
 *
 * Run: npx tsx examples/runtime-features/detach/02-join-later-fanout.ts
 */
import { flowChart, FlowChartExecutor } from 'footprintjs';
import { microtaskBatchDriver } from 'footprintjs/detach';
import type { DetachHandle } from 'footprintjs/detach';

// ── Sub-evaluation: pretend to score a prompt variant ─────────────────
const variantChart = flowChart('ScoreVariant', async (scope) => {
  const args = scope.$getArgs<{ variant: string }>();
  // Simulate variable work time per variant.
  await new Promise((r) => setTimeout(r, 5));
  // RETURN the score so it surfaces as the chart's run() result and
  // shows up on `handle.wait()`'s resolved `{ result }`.
  return args.variant.length;
}, 'score-variant').build();

// ── Main chart ────────────────────────────────────────────────────────
interface FanoutState {
  variants: string[];
  bestScore: number;
}

// Closure-local — handles must NOT live in scope state (see README gotcha).
const handles: DetachHandle[] = [];

const main = flowChart<FanoutState>('Init', async (scope) => {
  scope.variants = ['short', 'medium-len', 'a-much-longer-variant', 'tiny', 'middle'];
}, 'init')
  .addFunction('Fanout', async (scope) => {
    for (const variant of scope.variants) {
      handles.push(scope.$detachAndJoinLater(microtaskBatchDriver, variantChart, { variant }));
    }
    // Parent returns immediately — children are queued for microtask flush.
  }, 'fanout')
  .addFunction('Join', async (scope) => {
    // Await every handle in parallel.
    const settled = await Promise.allSettled(handles.map((h) => h.wait()));
    const scores = settled
      .map((r) => (r.status === 'fulfilled' ? (r.value.result as number) : 0));
    scope.bestScore = Math.max(...scores);
  }, 'join')
  .build();

// ── Run + inspect ─────────────────────────────────────────────────────
(async () => {
  const exec = new FlowChartExecutor(main);
  await exec.run();

  const snap = exec.getSnapshot();
  const bestScore = snap.sharedState.bestScore as number;
  console.log(`Variants scored: ${handles.length}`);
  console.log(`Statuses: ${handles.map((h) => h.status).join(', ')}`);
  console.log(`Best score: ${bestScore}`);

  // ── Regression guards ──
  if (handles.length !== 5) {
    console.error(`REGRESSION: expected 5 handles, got ${handles.length}.`);
    process.exit(1);
  }
  if (!handles.every((h) => h.status === 'done')) {
    console.error('REGRESSION: not all handles reached "done".', handles.map((h) => h.status));
    process.exit(1);
  }
  // 'a-much-longer-variant' = 21 chars — that's the best score.
  if (bestScore !== 21) {
    console.error(`REGRESSION: expected best score 21, got ${bestScore}.`);
    process.exit(1);
  }

  console.log('OK — fan-out + Promise.all pattern works end-to-end.');
})().catch((e) => {
  console.error(e);
  process.exit(1);
});
```

## 3 · Bare executor (outside any chart)
When you have a `FlowChartExecutor` and want to fire side-effects alongside (not inside) the main chart — analytics pings, audit writes, health checks.
```ts
/**
 * Detach — From Outside Any Chart (bare executor entry)
 *
 * The host process holds a FlowChartExecutor and wants to fire several
 * side-effect charts (analytics, audit, health check) AROUND the main
 * chart's run. No parent stage available — uses the executor's bare
 * `detachAndJoinLater` / `detachAndForget` methods.
 *
 * Run: npx tsx examples/runtime-features/detach/03-bare-executor.ts
 */
import { flowChart, FlowChartExecutor } from 'footprintjs';
import { microtaskBatchDriver } from 'footprintjs/detach';

// ── Two side-effect charts: analytics + audit log ─────────────────────
const collected: string[] = [];

const analyticsChart = flowChart('ShipAnalytics', async (scope) => {
  const tag = scope.$getArgs<{ tag: string }>().tag;
  collected.push(`analytics:${tag}`);
}, 'ship-analytics').build();

const auditChart = flowChart('WriteAudit', async (scope) => {
  const tag = scope.$getArgs<{ tag: string }>().tag;
  collected.push(`audit:${tag}`);
  return tag;
}, 'write-audit').build();

// ── Trivial main chart (the executor is the unit under test here) ─────
const mainChart = flowChart('Main', async (scope) => {
  scope.$setValue('mainRan', true);
}, 'main').build();

(async () => {
  const exec = new FlowChartExecutor(mainChart);

  // Side-effect BEFORE run (forget) — discard handle.
  exec.detachAndForget(microtaskBatchDriver, analyticsChart, { tag: 'before' });

  // Side-effect WITH a handle (joinLater) — we want to await its result.
  const auditHandle = exec.detachAndJoinLater(microtaskBatchDriver, auditChart, { tag: 'mid' });

  // Now run the main chart.
  await exec.run();

  // Side-effect AFTER run (forget).
  exec.detachAndForget(microtaskBatchDriver, analyticsChart, { tag: 'after' });

  // Await the joinable side-effect.
  const auditResult = await auditHandle.wait();

  // Yield twice to let the forget detaches flush.
  await Promise.resolve();
  await Promise.resolve();

  console.log(`Collected: ${collected.sort().join(', ')}`);
  console.log(`Audit handle: status=${auditHandle.status}, result=${JSON.stringify(auditResult)}`);
  console.log(`Audit refId: ${auditHandle.id}`);

  // ── Regression guards ──
  const sorted = collected.sort();
  if (sorted.length !== 3) {
    console.error(`REGRESSION: expected 3 collected events, got ${sorted.length}.`);
    process.exit(1);
  }
  if (
    sorted[0] !== 'analytics:after' ||
    sorted[1] !== 'analytics:before' ||
    sorted[2] !== 'audit:mid'
  ) {
    console.error('REGRESSION: collected events wrong.', sorted);
    process.exit(1);
  }
  if (auditResult.result !== 'mid') {
    console.error('REGRESSION: audit result wrong.', auditResult);
    process.exit(1);
  }
  if (!auditHandle.id.startsWith('__executor__:detach:')) {
    console.error(`REGRESSION: audit refId should start with __executor__:detach:, got ${auditHandle.id}`);
    process.exit(1);
  }

  console.log('OK — bare-executor detach paths all behaved correctly.');
})().catch((e) => {
  console.error(e);
  process.exit(1);
});
```

## 4 · Immediate driver (deterministic for tests)
`immediateDriver` advances the handle to `'running'` synchronously. Useful when you want to assert handle state without managing microtask draining.
```ts
/**
 * Detach — Immediate Driver for Tests
 *
 * Demonstrates the contrast with microtaskBatchDriver: the immediate
 * driver advances the handle to `running` SYNCHRONOUSLY inside
 * `schedule()`. Useful in tests where you want to assert handle state
 * before the next async tick.
 *
 * Run: npx tsx examples/runtime-features/detach/04-immediate-for-tests.ts
 */
import { flowChart, FlowChartExecutor } from 'footprintjs';
import { immediateDriver, microtaskBatchDriver } from 'footprintjs/detach';
import type { DetachHandle } from 'footprintjs/detach';

// ── A trivial child chart ─────────────────────────────────────────────
const tinyChart = flowChart('Tiny', async (scope) => {
  scope.$setValue('done', true);
  return true;
}, 'tiny').build();

// ── Main: snap two handles, compare initial status ────────────────────
let immediateHandle: DetachHandle | undefined;
let microtaskHandle: DetachHandle | undefined;
const initialStatusImmediate: string[] = [];
const initialStatusMicrotask: string[] = [];

const main = flowChart('Capture', async (scope) => {
  immediateHandle = scope.$detachAndJoinLater(immediateDriver, tinyChart, undefined);
  initialStatusImmediate.push(immediateHandle.status); // expect 'running'
  microtaskHandle = scope.$detachAndJoinLater(microtaskBatchDriver, tinyChart, undefined);
  initialStatusMicrotask.push(microtaskHandle.status); // expect 'queued'
}, 'capture').build();

(async () => {
  const exec = new FlowChartExecutor(main);
  await exec.run();

  await immediateHandle?.wait();
  await microtaskHandle?.wait();

  console.log(`Immediate driver initial status: ${initialStatusImmediate[0]}`);
  console.log(`Microtask driver initial status: ${initialStatusMicrotask[0]}`);
  console.log(`Both terminal? immediate=${immediateHandle?.status}, microtask=${microtaskHandle?.status}`);

  // ── Regression guards ──
  if (initialStatusImmediate[0] !== 'running') {
    console.error(`REGRESSION: immediate driver should snap to 'running' synchronously, got ${initialStatusImmediate[0]}.`);
    process.exit(1);
  }
  if (initialStatusMicrotask[0] !== 'queued') {
    console.error(`REGRESSION: microtask driver should remain 'queued' synchronously, got ${initialStatusMicrotask[0]}.`);
    process.exit(1);
  }
  if (immediateHandle?.status !== 'done' || microtaskHandle?.status !== 'done') {
    console.error('REGRESSION: at least one handle did not reach done.');
    process.exit(1);
  }

  console.log('OK — immediate vs microtask driver telescoping verified.');
})().catch((e) => {
  console.error(e);
  process.exit(1);
});
```

## 5 · Error handling
A child that throws does not propagate to the parent. The driver catches it and routes it to `handle.error`. Sibling detaches in the same batch are not poisoned.
```ts
/**
 * Detach — Error Handling
 *
 * A child throws. We show:
 *   1) `wait()` rejects with the original Error
 *   2) `handle.status === 'failed'` and `handle.error` is set
 *   3) Sibling detaches in the same batch are NOT poisoned
 *
 * Run: npx tsx examples/runtime-features/detach/05-error-handling.ts
 */
import { flowChart, FlowChartExecutor } from 'footprintjs';
import { createMicrotaskBatchDriver } from 'footprintjs/detach';
import type { DetachHandle } from 'footprintjs/detach';

// ── A child runner that fails for one input value ─────────────────────
const failingDriver = createMicrotaskBatchDriver(async (_chart, input) => {
  if (input === 'bad') throw new Error('vendor 503: temporarily unavailable');
  return `ok:${input}`;
});

// Stand-in chart — driver doesn't actually execute it (we replaced runChild).
const dummyChart = flowChart('dummy', async () => {}, 'dummy').build();

// ── Main: fire 3 detaches; the middle one will fail ───────────────────
let okHandleA: DetachHandle | undefined;
let badHandle: DetachHandle | undefined;
let okHandleC: DetachHandle | undefined;

const main = flowChart('Trigger', async (scope) => {
  okHandleA = scope.$detachAndJoinLater(failingDriver, dummyChart, 'first');
  badHandle = scope.$detachAndJoinLater(failingDriver, dummyChart, 'bad');
  okHandleC = scope.$detachAndJoinLater(failingDriver, dummyChart, 'third');
}, 'trigger').build();

(async () => {
  const exec = new FlowChartExecutor(main);
  await exec.run();

  // Await each handle independently so one failure doesn't short-circuit.
  let captured: Error | undefined;
  try {
    await badHandle?.wait();
  } catch (e) {
    captured = e as Error;
  }

  const a = await okHandleA?.wait();
  const c = await okHandleC?.wait();

  console.log(`Sibling A: status=${okHandleA?.status}, result=${JSON.stringify(a)}`);
  console.log(`Failing: status=${badHandle?.status}, error=${badHandle?.error?.message}`);
  console.log(`Sibling C: status=${okHandleC?.status}, result=${JSON.stringify(c)}`);
  console.log(`Captured via catch: ${captured?.message}`);

  // ── Regression guards ──
  if (okHandleA?.status !== 'done' || (a?.result as string) !== 'ok:first') {
    console.error('REGRESSION: sibling A did not complete cleanly.');
    process.exit(1);
  }
  if (badHandle?.status !== 'failed' || badHandle.error?.message !== 'vendor 503: temporarily unavailable') {
    console.error('REGRESSION: failing handle should have status=failed with the original Error.');
    process.exit(1);
  }
  if (okHandleC?.status !== 'done' || (c?.result as string) !== 'ok:third') {
    console.error('REGRESSION: sibling C did not complete (sibling failure poisoned the batch?).');
    process.exit(1);
  }
  if (!captured || captured.message !== 'vendor 503: temporarily unavailable') {
    console.error('REGRESSION: wait() did not reject with the original Error.');
    process.exit(1);
  }

  console.log('OK — error containment + sibling-isolation invariants hold.');
})().catch((e) => {
  console.error(e);
  process.exit(1);
});
```

## 6 · Status polling without await
The handle is intentionally not Promise-shaped — no `.then()`. Reading `.status` is a plain property access, useful for backpressure checks, status banners, and “still in flight?” gates that shouldn’t depend on async.
```ts
/**
 * Detach — Status Polling (Synchronous Property Reads)
 *
 * The handle is NOT Promise-shaped. Reading `handle.status` is a plain
 * property access — useful for backpressure checks, status banners, and
 * "still in flight?" gates that shouldn't depend on async.
 *
 * This example fires 10 detaches with random work durations, then polls
 * `.status` until they're all terminal — without ever calling `wait()`.
 *
 * Run: npx tsx examples/runtime-features/detach/06-status-polling.ts
 */
import { flowChart, FlowChartExecutor } from 'footprintjs';
import { createMicrotaskBatchDriver } from 'footprintjs/detach';
import type { DetachHandle } from 'footprintjs/detach';

// ── A child runner with variable work duration ────────────────────────
const driver = createMicrotaskBatchDriver(async (_chart, input) => {
  // Pretend each unit takes 5–25ms.
  const ms = 5 + ((input as number) % 5) * 5;
  await new Promise((r) => setTimeout(r, ms));
  return input;
});

const dummyChart = flowChart('dummy', async () => {}, 'dummy').build();

// ── Main: fire 10 detaches, then poll ─────────────────────────────────
const handles: DetachHandle[] = [];

const main = flowChart('Fire', async (scope) => {
  for (let i = 0; i < 10; i++) {
    handles.push(scope.$detachAndJoinLater(driver, dummyChart, i));
  }
}, 'fire').build();

function inFlightCount(): number {
  return handles.filter((h) => h.status === 'queued' || h.status === 'running').length;
}

(async () => {
  const exec = new FlowChartExecutor(main);
  await exec.run();

  // Snap initial status (right after schedule but before microtask flush).
  const initialInFlight = inFlightCount();
  console.log(`Initial in-flight: ${initialInFlight}`);

  // Poll loop — no await on any handle, just status property.
  let pollCount = 0;
  while (inFlightCount() > 0) {
    pollCount += 1;
    await new Promise((r) => setTimeout(r, 5));
    if (pollCount > 200) {
      console.error('REGRESSION: handles never terminated within 1s.');
      process.exit(1);
    }
  }

  const doneCount = handles.filter((h) => h.status === 'done').length;
  const failedCount = handles.filter((h) => h.status === 'failed').length;

  console.log(`Poll cycles: ${pollCount}`);
  console.log(`Final: done=${doneCount}, failed=${failedCount}`);
  console.log(`Sample results: ${handles.slice(0, 3).map((h) => String(h.result)).join(', ')}`);

  // ── Regression guards ──
  if (initialInFlight !== 10) {
    console.error(`REGRESSION: expected 10 initial in-flight handles, got ${initialInFlight}.`);
    process.exit(1);
  }
  if (doneCount !== 10) {
    console.error(`REGRESSION: expected 10 done, got ${doneCount}.`);
    process.exit(1);
  }
  if (failedCount !== 0) {
    console.error(`REGRESSION: expected 0 failed, got ${failedCount}.`);
    process.exit(1);
  }

  console.log('OK — sync status polling pattern works without any wait() calls.');
})().catch((e) => {
  console.error(e);
  process.exit(1);
});
```

## 7 · Graceful shutdown — flushAllDetached
Drain all in-flight detached children before the process exits. Returns `{ done, failed, pending }`; `pending === 0` means the drain ran to completion. Useful in SIGTERM handlers and test cleanup.
```ts
/**
 * Detach — Graceful Shutdown via `flushAllDetached`
 *
 * Simulates a server that scheduled 20 telemetry events via
 * `detachAndForget` and now needs to drain them all before
 * "process.exit". Without `flushAllDetached`, exiting immediately
 * would lose any not-yet-flushed events.
 *
 * Run: npx tsx examples/runtime-features/detach/07-graceful-shutdown.ts
 */
import { flowChart, FlowChartExecutor } from 'footprintjs';
import { flushAllDetached, microtaskBatchDriver } from 'footprintjs/detach';

// ── Side-effect chart — slow enough that the drain matters ────────────
const drained: number[] = [];

const telemetryChart = flowChart('Ship', async (scope) => {
  const seq = scope.$getArgs<{ seq: number }>().seq;
  // Pretend each event takes 5ms to "ship" (network round-trip).
  await new Promise((r) => setTimeout(r, 5));
  drained.push(seq);
}, 'ship').build();

// ── Main — schedule a burst of 20 detaches, then drain ────────────────
const main = flowChart('Burst', async (scope) => {
  for (let seq = 0; seq < 20; seq++) {
    scope.$detachAndForget(microtaskBatchDriver, telemetryChart, { seq });
  }
}, 'burst').build();

(async () => {
  const exec = new FlowChartExecutor(main);
  await exec.run();

  // At this point, 20 detaches are in flight. Without flushAllDetached,
  // exiting now would lose most of them.
  console.log(`Detaches in flight after main run: ${20 - drained.length}`);

  const stats = await flushAllDetached({ timeoutMs: 5000 });
  console.log(`After flush: drained=${drained.length}, stats=${JSON.stringify(stats)}`);

  // ── Regression guards ──
  if (drained.length !== 20) {
    console.error(`REGRESSION: expected 20 telemetry events drained, got ${drained.length}.`);
    process.exit(1);
  }
  if (stats.pending !== 0) {
    console.error(`REGRESSION: expected pending=0 after successful drain, got ${stats.pending}.`);
    process.exit(1);
  }
  // The drain ran to completion, no leftover work.
  console.log('OK — graceful shutdown drained every in-flight detach.');
})().catch((e) => {
  console.error(e);
  process.exit(1);
});
```

## 8 · Builder-native composition
Make detach a labeled chart stage so it shows up in narrative + visualizations + Mermaid diagrams. Pure sugar over `addFunction` — zero engine changes.
For `addDetachAndJoinLater`, the handle goes to a consumer-supplied `onHandle` callback (closure pattern) — handles can’t survive shared-state storage because of the `structuredClone` step.
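The `structuredClone` limitation is easy to see outside the library: the structured clone algorithm rejects functions, and anything handle-shaped carries a `wait()` method. A quick plain-Node reproduction (the object here is a stand-in, not a real handle):

```typescript
// A handle-shaped object: plain data fields plus a method.
const fakeHandle = {
  status: 'queued',
  wait: () => Promise.resolve({ result: undefined }),
};

let cloneError: string | undefined;
try {
  // Functions are not structured-cloneable, so this throws.
  structuredClone(fakeHandle);
} catch (e) {
  cloneError = (e as Error).name; // typically 'DataCloneError'
}
console.log(cloneError !== undefined); // → true
```

Hence the closure pattern: the handle stays in module scope, and only clonable data goes through shared state.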
```ts
/**
 * Detach — Builder-Native Composition
 *
 * Demonstrates `addDetachAndForget` (fire-and-forget as a chart stage)
 * and `addDetachAndJoinLater` with `onHandle` callback pattern.
 *
 * Pipeline:
 *   Seed → [DetachAndForget: telemetry]
 *        → [DetachAndJoinLater: eval-a] (handle pushed to closure)
 *        → [DetachAndJoinLater: eval-b] (handle pushed to closure)
 *        → Join (await Promise.all)
 *
 * Run: npx tsx examples/runtime-features/detach/08-builder-native.ts
 */
import { flowChart, FlowChartExecutor } from 'footprintjs';
import { createMicrotaskBatchDriver } from 'footprintjs/detach';
import type { DetachHandle } from 'footprintjs/detach';

// ── Side-effect chart: telemetry ──────────────────────────────────────
const telemetryShipped: unknown[] = [];

const telemetryChart = flowChart('ShipTelemetry', async (scope) => {
  telemetryShipped.push(scope.$getArgs());
}, 'ship-telemetry').build();

// ── Eval chart: returns the input × 2 (just for demonstration) ────────
const evalChart = flowChart('ScoreVariant', async (scope) => {
  const input = scope.$getArgs<{ value: number }>().value;
  await new Promise((r) => setTimeout(r, 5));
  return input * 2;
}, 'score-variant').build();

// ── Closure-local handle bag (see "Concurrency note" in the .md) ──────
const evalHandles: DetachHandle[] = [];

// ── Driver: build a fresh one so the example is hermetic ──────────────
const driver = createMicrotaskBatchDriver();

// ── Main chart with builder-native detach stages ──────────────────────
interface MainState {
  orderId: string;
  configA: number;
  configB: number;
  evalSum?: number;
}

const main = flowChart<MainState>('Seed', async (scope) => {
  scope.orderId = 'order-99';
  scope.configA = 7;
  scope.configB = 13;
}, 'seed')
  .addDetachAndForget('telemetry', telemetryChart, {
    driver,
    inputMapper: (scope) => ({ event: 'order.created', orderId: scope.orderId }),
  })
  .addDetachAndJoinLater('eval-a', evalChart, {
    driver,
    inputMapper: (scope) => ({ value: scope.configA }),
    onHandle: (h) => evalHandles.push(h),
  })
  .addDetachAndJoinLater('eval-b', evalChart, {
    driver,
    inputMapper: (scope) => ({ value: scope.configB }),
    onHandle: (h) => evalHandles.push(h),
  })
  .addFunction('Join', async (scope) => {
    const settled = await Promise.all(evalHandles.map((h) => h.wait()));
    scope.evalSum = settled.reduce((acc, r) => acc + (r.result as number), 0);
  }, 'join')
  .build();

(async () => {
  const exec = new FlowChartExecutor(main);
  await exec.run();

  // Yield so the forget-detach has a chance to flush.
  await Promise.resolve();
  await Promise.resolve();

  const snap = exec.getSnapshot();
  const evalSum = snap.sharedState.evalSum as number;

  console.log(`Telemetry shipped: ${telemetryShipped.length}, payload: ${JSON.stringify(telemetryShipped[0])}`);
  console.log(`Eval handles created: ${evalHandles.length}`);
  console.log(`Eval handle statuses: ${evalHandles.map((h) => h.status).join(', ')}`);
  console.log(`Eval sum: ${evalSum} (expected: ${(7 + 13) * 2})`);

  // ── Regression guards ──
  if (telemetryShipped.length !== 1) {
    console.error(`REGRESSION: expected 1 telemetry event, got ${telemetryShipped.length}.`);
    process.exit(1);
  }
  const evt = telemetryShipped[0] as { event: string; orderId: string };
  if (evt.event !== 'order.created' || evt.orderId !== 'order-99') {
    console.error('REGRESSION: telemetry payload wrong.', evt);
    process.exit(1);
  }
  if (evalHandles.length !== 2) {
    console.error(`REGRESSION: expected 2 eval handles, got ${evalHandles.length}.`);
    process.exit(1);
  }
  if (!evalHandles.every((h) => h.status === 'done')) {
    console.error('REGRESSION: not every eval handle reached done.', evalHandles.map((h) => h.status));
    process.exit(1);
  }
  if (evalSum !== 40) {
    console.error(`REGRESSION: expected eval sum 40, got ${evalSum}.`);
    process.exit(1);
  }

  console.log('OK — builder-native detach stages compose cleanly with downstream join.');
})().catch((e) => {
  console.error(e);
  process.exit(1);
});
```

## The handle
| Property | Type |
|---|---|
| `id` | `string` — refId minted from the source stage |
| `status` | `'queued' \| 'running' \| 'done' \| 'failed'` |
| `result` | `unknown` — set when `status === 'done'` |
| `error` | `Error` — set when `status === 'failed'` |
| `wait()` | `Promise<DetachWaitResult>` — cached |
`wait()` returns the same Promise on every call — no re-running, no duplicated work. Errors land on `handle.error` and reject `wait()` with the same `Error`.
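The caching contract is the usual memoized-deferred pattern. A rough stand-alone sketch of the idea (`MiniHandle` is illustrative, not the footprintjs implementation):

```typescript
type Status = 'queued' | 'running' | 'done' | 'failed';

class MiniHandle<T> {
  status: Status = 'queued';
  private cached?: Promise<T>;
  private resolve!: (v: T) => void;
  private reject!: (e: Error) => void;

  wait(): Promise<T> {
    // First call mints the Promise; every later call returns the same object.
    this.cached ??= new Promise<T>((res, rej) => {
      this.resolve = res;
      this.reject = rej;
    });
    return this.cached;
  }

  markDone(v: T): void { this.status = 'done'; this.wait(); this.resolve(v); }
  markFailed(e: Error): void { this.status = 'failed'; this.wait(); this.reject(e); }
}

const h = new MiniHandle<number>();
console.log(h.wait() === h.wait()); // → true — same Promise object
h.markDone(42);
h.wait().then((v) => console.log(v)); // → 42
```

Because the Promise is minted once and settled once, repeated `wait()` calls can never re-trigger the child.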
The refId format makes log correlation easy:

```
sf-tools/exec-tool#42:detach:7
└─ runtimeStageId ──┘:detach:<counter>
```

Grep for the refId in your logs to find every event tied to a specific detached child.
## Custom drivers

Drivers are plain objects that satisfy the `DetachDriver` interface. The `createXxxDriver(runChild)` factories take a custom `ChildRunner` if you just want to wrap the executor (e.g., for tracing context).
```ts
import type { DetachDriver } from 'footprintjs/detach';
import { createHandle, asImpl } from 'footprintjs/detach';

// Wherever your batching lives — e.g. a buffer flushed by a host extension.
const sharedBuffer: Array<{ refId: string; child: unknown; input: unknown; handle: unknown }> = [];

const myDriver: DetachDriver = {
  name: 'lambda-extension',
  capabilities: { nodeSafe: true, survivesUnload: true },
  schedule(child, input, refId) {
    const handle = createHandle(refId);
    sharedBuffer.push({ refId, child, input, handle });
    return handle;
  },
};
```

When your buffer flushes, call `asImpl(handle)._markRunning()` / `_markDone(result)` / `_markFailed(error)` to advance the lifecycle.
## See also

- All 8 examples live at `examples/runtime-features/detach/` — each has a `.md` companion + regression guards. They run automatically as integration tests, so the snippets on this page never go stale.