Connected evidence
Every tool call, every LLM decision, every response is captured in a structured trace — not disconnected logs. See exactly what happened, verify the LLM wasn’t hallucinating, and debug in minutes.
$0 test runs
Write tests with mock(). Deploy with anthropic(). Same agent, same tools,
same flowchart. Swap one line.
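A standalone sketch of the pattern behind this card (illustrative types, not agentfootprint's actual `mock()`/`anthropic()` signatures): the agent depends only on a provider interface, so tests inject a canned mock and spend nothing.

```typescript
// Hypothetical Provider interface for illustration — the real library's
// provider contract may differ.
interface Provider {
  complete(prompt: string): Promise<string>;
}

// A mock provider returns a scripted reply: no API key, no token spend.
const mockProvider = (reply: string): Provider => ({
  complete: async () => reply,
});

// The agent logic is identical whichever provider is injected —
// swapping mock for production is the one-line change.
async function runAgent(provider: Provider, prompt: string): Promise<string> {
  return provider.complete(prompt);
}
```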
Conditional instructions
defineInstruction() injects into system prompt, tools, AND tool-result
recency — driven by accumulated state. Progressive tool authorization,
context-aware prompts, all declarative.
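A standalone sketch of state-driven instruction injection (the types here are assumptions for illustration, not `defineInstruction()`'s real signature): each instruction pairs a predicate over accumulated state with a prompt fragment, and only active instructions are joined into the system prompt.

```typescript
// Hypothetical state and instruction shapes for illustration.
type AgentState = { toolCallCount: number; userVerified: boolean };

type Instruction = {
  when: (state: AgentState) => boolean; // predicate over accumulated state
  text: string;                         // fragment injected when active
};

const instructions: Instruction[] = [
  { when: (s) => !s.userVerified, text: "Ask the user to verify identity before acting." },
  { when: (s) => s.toolCallCount > 3, text: "Summarize progress before calling more tools." },
];

// Declarative assembly: the prompt is derived from state, not mutated by hand.
function buildSystemPrompt(base: string, state: AgentState): string {
  const active = instructions.filter((i) => i.when(state)).map((i) => i.text);
  return [base, ...active].join("\n");
}
```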
6 concepts, one interface
LLMCall, Agent, RAG, FlowChart, Swarm, Parallel. Each adds one capability.
All share .build() → .run() → .getNarrative().
When an agent runs, agentfootprint captures every stage, every tool call, every LLM decision as structured narrative entries. Not logs. Not spans. Connected evidence with provenance.
```
User: "Check order ORD-1003 and help me with a refund"

[Seed]                   Initialized agent state    Messages: 1 (1 user)
[Subflow: SystemPrompt]  Preparing system prompt
[Subflow: Messages]      Preparing conversation history
[Subflow: Tools]         Resolving available tools  Tools: [lookup_order, check_inventory, ask_human]
[CallLLM]                Called LLM                 LLM: claude-sonnet-4-20250514 (243in / 45out) → tool_calls: [lookup_order]
[ParseResponse]          Parsed LLM response        Parsed: tool_calls → [lookup_order({orderId: "ORD-1003"})]
[ExecuteToolCalls]       Executed tool calls        Tool results: {"orderId":"ORD-1003","status":"cancelled","amount":299}
[CallLLM]                Called LLM                 LLM: claude-sonnet-4-20250514 (512in / 89out)  Reasoning: "The order is cancelled. I should offer a refund..."
[Finalize]               Extract final answer       Result: "Your order ORD-1003 was cancelled. I can process a refund of $299..."
```

Every entry has type, key, stageId, subflowId — use them for grounding analysis (hallucination detection), audit trails, or feeding to another LLM.
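A standalone sketch of consuming such a narrative (the field names type, key, stageId, subflowId come from above; the rest of the entry shape is an assumption): filter entries by type to extract an audit trail of tool executions.

```typescript
// Assumed entry shape for illustration — only the four named fields
// are documented; `summary` is hypothetical.
type NarrativeEntry = {
  type: string;       // e.g. "CallLLM", "ExecuteToolCalls"
  key: string;
  stageId: string;
  subflowId?: string;
  summary: string;
};

// Build an audit trail from just the tool-execution entries.
function auditToolCalls(narrative: NarrativeEntry[]): string[] {
  return narrative
    .filter((e) => e.type === "ExecuteToolCalls")
    .map((e) => `${e.stageId}: ${e.summary}`);
}
```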
```typescript
import { Agent, defineTool, anthropic } from 'agentfootprint';

const agent = Agent.create({ provider: anthropic('claude-sonnet-4-20250514') })
  .system('You are a research assistant.')
  .tool(searchTool)
  .build();

const result = await agent.run('Find AI trends');
console.log(result.content);
console.log(agent.getNarrative()); // connected execution trace
```

| Concept | What it adds | Use case |
|---|---|---|
| LLMCall | Single LLM invocation | Summarization, classification |
| Agent | + Tool use loop (ReAct) | Research, code generation |
| RAG | + Retrieval | Q&A over documents |
| FlowChart | + Sequential pipeline | Approval flows, ETL |
| Swarm | + LLM-driven routing | Customer support, triage |
| Parallel | + Concurrent execution | Multi-perspective analysis |
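The multi-perspective pattern in the Parallel row can be sketched with plain concurrency primitives (this uses `Promise.all`, not agentfootprint's Parallel API, whose interface is not shown here):

```typescript
// Each analysis is an independent async unit of work.
type Analysis = (input: string) => Promise<string>;

// Run all analyses concurrently; results return in input order.
async function multiPerspective(input: string, analyses: Analysis[]): Promise<string[]> {
  return Promise.all(analyses.map((a) => a(input)));
}
```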
Each capability is a subpath — import only what you use:
```typescript
import { Agent, defineTool } from 'agentfootprint';
import { anthropic } from 'agentfootprint/providers';
import { defineInstruction } from 'agentfootprint/instructions';
import { agentObservability } from 'agentfootprint/observe';
import { withRetry } from 'agentfootprint/resilience';
import { gatedTools } from 'agentfootprint/security';
import { ExplainRecorder } from 'agentfootprint/explain';
import { SSEFormatter } from 'agentfootprint/stream';
```