
Streaming stages

Streaming stages emit tokens incrementally via StreamHandlers callbacks. Use them for LLM responses, file processing, or any incremental output.

.addStreamingFunction(name, fn, id?, streamId?) adds a stage whose function receives a streamCallback as its third parameter:

import { flowChart, FlowChartExecutor, type StreamHandlers } from 'footprintjs';

interface SummaryState {
  patientName: string;
  temperature: number;
  summary?: string;
}

const chart = flowChart<SummaryState>('PrepareContext', async (scope) => {
  scope.patientName = 'Jane Doe';
  scope.temperature = 102.6;
}, 'prepare-context')
  .addStreamingFunction(
    'GenerateSummary',
    async (scope, _breakFn, streamCallback) => {
      const tokens = ['The ', 'patient ', 'has ', 'a ', 'fever.'];
      for (const token of tokens) {
        await new Promise((r) => setTimeout(r, 30));
        streamCallback?.(token);
      }
      scope.summary = tokens.join('');
    },
    'generate-summary',
    'llm-summary', // streamId for the handlers
  )
  .addFunction('SaveReport', async (scope) => {
    console.log('Summary:', scope.summary);
  }, 'save-report')
  .build();
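
Note that the stage guards the call as streamCallback?.(token); if the executor is constructed without stream handlers, the callback is presumably undefined and the stage still completes normally.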

Pass StreamHandlers when constructing the executor. All three hooks are optional:

const streamHandlers: StreamHandlers = {
  onStart: (streamId) => {
    console.log(`[${streamId}] started`);
  },
  onToken: (streamId, token) => {
    process.stdout.write(token);
  },
  onEnd: (streamId, fullText) => {
    console.log(`\n[${streamId}] done: ${fullText?.length} chars`);
  },
};

const executor = new FlowChartExecutor(chart, { streamHandlers });
await executor.run();
Hook                      | Fires when
onStart(streamId)         | First streamCallback() call
onToken(streamId, token)  | Each streamCallback(token) call
onEnd(streamId, fullText) | Stage completes; fullText is the concatenated tokens
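
When a chart contains more than one streaming stage, the streamId argument lets the handlers keep the streams apart. The sketch below is illustrative (the buffering logic is not part of footprintjs): it routes tokens into a per-stream buffer keyed by streamId.

import { type StreamHandlers } from 'footprintjs';

// Illustrative only: accumulate tokens per streamId so that multiple
// streaming stages do not interleave into a single buffer.
const buffers = new Map<string, string[]>();

const demuxHandlers: StreamHandlers = {
  onStart: (streamId) => {
    buffers.set(streamId, []); // fresh buffer when a stream starts
  },
  onToken: (streamId, token) => {
    buffers.get(streamId)?.push(token); // route each token to its stream
  },
  onEnd: (streamId, fullText) => {
    // fullText is the concatenated tokens; the buffer should match it
    console.log(`[${streamId}]`, fullText ?? buffers.get(streamId)?.join(''));
  },
};

Pass demuxHandlers in place of streamHandlers above to see each stage's output logged under its own streamId.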
See also: the Streaming example, which simulates LLM token streaming.