
# FlowChart / Swarm / Parallel

## FlowChart

Chain agent runners into a sequential pipeline.

`FlowChart.create(): FlowChart`

| Method | Description |
| --- | --- |
| `.agent(id, name, runner, options?)` | Add an agent stage |
| `.recorder(rec)` | Attach an `AgentRecorder` |
| `.build()` | Returns a `FlowChartRunner` |
The optional `options` argument takes mappers that shape what a stage receives and what it writes back:

```typescript
pipeline.agent('research', 'Research phase', researcher, {
  inputMapper: (parentState) => ({ query: parentState.topic }),
  outputMapper: (result) => ({ research: result.content }),
});
```
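As a sketch of how the two mappers compose, the snippet below spells them out as plain functions. The state shapes (`{ topic }`, `{ content }`) are illustrative assumptions inferred from the example above, not documented types:

```typescript
// Hypothetical state shapes, assumed for illustration only.
type ParentState = { topic: string };
type StageResult = { content: string };

// inputMapper: selects what the stage sees from the parent state.
const inputMapper = (parentState: ParentState) => ({ query: parentState.topic });
// outputMapper: selects what the stage's result writes back.
const outputMapper = (result: StageResult) => ({ research: result.content });

const input = inputMapper({ topic: 'quantum computing' });
const patch = outputMapper({ content: 'Qubits exploit superposition.' });
console.log(input.query);     // 'quantum computing'
console.log(patch.research);  // 'Qubits exploit superposition.'
```

If no mappers are given, the examples below suggest each stage simply receives the previous stage's output.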
```typescript
import { FlowChart, Agent, anthropic } from 'agentfootprint';

const provider = anthropic('claude-sonnet-4-20250514');

const researcher = Agent.create({ provider, name: 'researcher' })
  .system('Research the topic thoroughly.')
  .build();

const writer = Agent.create({ provider, name: 'writer' })
  .system('Write a clear summary.')
  .build();

const pipeline = FlowChart.create()
  .agent('research', 'Research phase', researcher)
  .agent('write', 'Writing phase', writer)
  .build();

const result = await pipeline.run('Explain quantum computing');
console.log(result.content);
```
| Method | Returns |
| --- | --- |
| `run(message, options?)` | `TraversalResult` |
| `getNarrative()` | `string[]` |
| `getNarrativeEntries()` | `object[]` |
| `getSnapshot()` | `unknown` |
| `getSpec()` | `unknown` |

## Swarm

LLM-driven routing to specialist agents.

`Swarm.create(options: { provider: LLMProvider; name?: string }): Swarm`

| Method | Description |
| --- | --- |
| `.system(prompt)` | Set the orchestrator prompt |
| `.specialist(id, description, runner)` | Add a specialist agent |
| `.tool(tool)` | Add an extra orchestrator tool |
| `.maxIterations(n)` | Maximum routing iterations |
| `.streaming(bool)` | Enable streaming |
| `.recorder(rec)` | Attach an `AgentRecorder` |
| `.build()` | Returns a `SwarmRunner` |
```typescript
import { Swarm, Agent, anthropic } from 'agentfootprint';

const provider = anthropic('claude-sonnet-4-20250514');

const coder = Agent.create({ provider, name: 'coder' })
  .system('You are a coding specialist.')
  .maxIterations(3)
  .build();

const writer = Agent.create({ provider, name: 'writer' })
  .system('You are a writing specialist.')
  .maxIterations(3)
  .build();

const swarm = Swarm.create({ provider, name: 'orchestrator' })
  .system('Route coding tasks to coder, writing tasks to writer.')
  .specialist('coder', 'Handle programming tasks', coder)
  .specialist('writer', 'Handle creative writing', writer)
  .streaming(true)
  .build();

const result = await swarm.run('Write a Python fibonacci function');
console.log(result.content);
```
| Method | Returns |
| --- | --- |
| `run(message, options?)` | `TraversalResult` |
| `getMessages()` | `Message[]` |
| `resetConversation()` | `void` |
| `getNarrative()` | `string[]` |
| `getNarrativeEntries()` | `object[]` |
| `getSnapshot()` | `unknown` |
| `getSpec()` | `unknown` |
| `toFlowChart()` | `FlowChart` (internal) |
```typescript
interface TraversalResult {
  content: string;
  agents: AgentResultEntry[];
  totalLatencyMs: number;
}
```
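A small sketch of consuming a `TraversalResult`, e.g. to report where time was spent. Only `TraversalResult` itself is documented above, so the `AgentResultEntry` fields used here (`id`, `latencyMs`) are assumptions for illustration:

```typescript
// Hypothetical AgentResultEntry shape -- assumed fields, check the
// library's exported types for the real definition.
interface AgentResultEntry {
  id: string;
  latencyMs: number;
}

interface TraversalResult {
  content: string;
  agents: AgentResultEntry[];
  totalLatencyMs: number;
}

// Summarize each agent's latency as a share of the total run time.
function latencyReport(result: TraversalResult): string[] {
  return result.agents.map(
    (a) =>
      `${a.id}: ${a.latencyMs}ms (${Math.round((a.latencyMs / result.totalLatencyMs) * 100)}%)`,
  );
}

const mock: TraversalResult = {
  content: 'done',
  agents: [
    { id: 'coder', latencyMs: 750 },
    { id: 'writer', latencyMs: 250 },
  ],
  totalLatencyMs: 1000,
};
console.log(latencyReport(mock)); // ['coder: 750ms (75%)', 'writer: 250ms (25%)']
```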

## Parallel

Run multiple agents simultaneously and merge their results.

`Parallel.create(options: { provider: LLMProvider; name?: string }): Parallel`

| Method | Description |
| --- | --- |
| `.agent(id, runner, description)` | Add a parallel branch |
| `.mergeWithLLM(prompt)` | Merge with an LLM (uses `provider`) |
| `.merge(fn)` | Custom merge function |
| `.streaming(bool)` | Enable streaming |
| `.recorder(rec)` | Attach an `AgentRecorder` |
| `.build()` | Returns a `ParallelRunner` |
```typescript
import { Parallel, Agent, anthropic } from 'agentfootprint';

const provider = anthropic('claude-sonnet-4-20250514');

const analyst = Agent.create({ provider, name: 'analyst' })
  .system('Analyze the data and provide insights.')
  .build();

const critic = Agent.create({ provider, name: 'critic' })
  .system('Identify risks and potential issues.')
  .build();

const balanced = Parallel.create({ provider, name: 'analysis' })
  .agent('analyst', analyst, 'Data analysis')
  .agent('critic', critic, 'Risk assessment')
  .mergeWithLLM('Synthesize both perspectives into a balanced recommendation.')
  .build();

const result = await balanced.run('Should we expand into the EU market?');
console.log(result.content);
```
```typescript
interface ParallelResult {
  content: string;
  branches: readonly BranchResult[];
  messages: Message[];
}
```
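`.merge(fn)` accepts a custom merge function as a deterministic alternative to `.mergeWithLLM(...)`. Its exact signature is not documented here; the sketch below assumes it receives the branch results and returns the merged content, and the `BranchResult` fields (`id`, `content`) are likewise assumptions:

```typescript
// Hypothetical BranchResult shape -- assumed fields for illustration.
interface BranchResult {
  id: string;
  content: string;
}

// A deterministic merge: label each branch's output with its id and
// concatenate, instead of asking an LLM to synthesize.
const mergeBranches = (branches: readonly BranchResult[]): string =>
  branches.map((b) => `## ${b.id}\n${b.content}`).join('\n\n');

const merged = mergeBranches([
  { id: 'analyst', content: 'Growth looks strong.' },
  { id: 'critic', content: 'Regulatory risk is high.' },
]);
console.log(merged);
```

If the signature matches, a function like this could be passed as `.merge(mergeBranches)` in place of `.mergeWithLLM(...)` when reproducible output matters more than synthesis quality.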