# Observability

The all-in-one `agentObservability()` recorder captures tokens, tool usage, and cost in one place:

```ts
import { Agent, mock } from 'agentfootprint';
import { agentObservability } from 'agentfootprint/observe';

const provider = mock([{ content: 'Hello!' }]);
const obs = agentObservability();

const agent = Agent.create({ provider })
  .recorder(obs)
  .build();

await agent.run('hello');

obs.tokens(); // { totalCalls: 2, totalInputTokens: 150, calls: [...] }
obs.tools();  // { totalCalls: 1, byTool: { search: { calls: 1, errors: 0 } } }
obs.cost();   // 0.0042
```
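The per-call records in `calls` make it easy to derive your own aggregates. A minimal sketch — the `inputTokens`/`outputTokens` field names here are assumptions for illustration, not confirmed by the docs:

```ts
// Hypothetical per-call usage record; field names are illustrative.
type CallUsage = { model: string; inputTokens: number; outputTokens: number };

// Sum input and output tokens across all recorded LLM calls.
function totalTokens(calls: CallUsage[]): { input: number; output: number } {
  return calls.reduce(
    (acc, c) => ({
      input: acc.input + c.inputTokens,
      output: acc.output + c.outputTokens,
    }),
    { input: 0, output: 0 },
  );
}

const calls: CallUsage[] = [
  { model: 'claude-sonnet-4', inputTokens: 127, outputTokens: 45 },
  { model: 'claude-sonnet-4', inputTokens: 312, outputTokens: 89 },
];
totalTokens(calls); // { input: 439, output: 134 }
```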

For fine-grained control, attach individual recorders:

```ts
import {
  TokenRecorder, CostRecorder, ToolUsageRecorder,
  QualityRecorder, GuardrailRecorder, PermissionRecorder,
} from 'agentfootprint/observe';

const tokens = new TokenRecorder();
const cost = new CostRecorder({
  pricingTable: { 'claude-sonnet-4-20250514': { input: 3, output: 15 } },
});

const agent = Agent.create({ provider })
  .recorder(tokens)
  .recorder(cost)
  .build();
```
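To make the `pricingTable` concrete, here is the arithmetic a `CostRecorder`-style recorder performs for one call, assuming the `input`/`output` rates are USD per million tokens (an assumption; the docs don't state the unit):

```ts
// Sketch of per-call cost arithmetic.
// Assumption: rates are USD per 1M tokens (unit not stated in the docs).
type Rate = { input: number; output: number };

function callCost(inputTokens: number, outputTokens: number, rate: Rate): number {
  return (inputTokens * rate.input + outputTokens * rate.output) / 1_000_000;
}

const sonnet: Rate = { input: 3, output: 15 };
callCost(127, 45, sonnet); // (127*3 + 45*15) / 1e6 = 0.001056 USD
```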
| Recorder | What it tracks | Key method |
| --- | --- | --- |
| `TokenRecorder` | Input/output tokens per LLM call | `getStats()` |
| `CostRecorder` | USD cost per model | `getTotalCost()` |
| `ToolUsageRecorder` | Calls, latency, errors per tool | `getStats()` |
| `QualityRecorder` | Custom quality score per response | `getScores()` |
| `GuardrailRecorder` | Policy violations per response | `getViolations()` |
| `PermissionRecorder` | Blocked/allowed tool events | `getEvents()` |
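To illustrate the kind of bookkeeping `ToolUsageRecorder` does, here is a standalone sketch that aggregates hypothetical tool events into the per-tool shape shown earlier. The event fields (`tool`, `latencyMs`, `error`) are our assumptions, not the library's types:

```ts
// Hypothetical tool-call event; field names are illustrative.
type ToolEvent = { tool: string; latencyMs: number; error: boolean };

// Aggregate events into calls/errors/average latency per tool.
function aggregateTools(events: ToolEvent[]) {
  const byTool: Record<string, { calls: number; errors: number; avgLatencyMs: number }> = {};
  for (const e of events) {
    const t = (byTool[e.tool] ??= { calls: 0, errors: 0, avgLatencyMs: 0 });
    // Running average: fold the new latency into the mean before bumping calls.
    t.avgLatencyMs = (t.avgLatencyMs * t.calls + e.latencyMs) / (t.calls + 1);
    t.calls += 1;
    if (e.error) t.errors += 1;
  }
  return { totalCalls: events.length, byTool };
}

const stats = aggregateTools([
  { tool: 'search', latencyMs: 120, error: false },
  { tool: 'search', latencyMs: 80, error: true },
]);
// stats.byTool.search → { calls: 2, errors: 1, avgLatencyMs: 100 }
```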

Every agent produces a connected narrative, built from structured entries rather than raw log lines:

```ts
agent.getNarrative();
// [
//   "[Seed] Initialized agent state",
//   "[CallLLM] claude-sonnet-4 (127in / 45out)",
//   "[ExecuteToolCalls] lookup_order({orderId: 'ORD-1003'})",
//   "[CallLLM] claude-sonnet-4 (312in / 89out)",
//   "[Finalize] Your order was denied..."
// ]

// Structured entries for programmatic access.
agent.getNarrativeEntries();
// Each entry: { type, text, key, stageId, subflowId }
```
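Because each entry carries a `type`, the narrative can be sliced programmatically. A sketch using only the documented entry fields (the sample values below are illustrative):

```ts
// Entry shape as documented: { type, text, key, stageId, subflowId }.
type NarrativeEntry = {
  type: string;
  text: string;
  key: string;
  stageId: string;
  subflowId: string;
};

// Pull out just the lines of one type, e.g. to audit LLM calls.
function entriesOfType(entries: NarrativeEntry[], type: string): string[] {
  return entries.filter((e) => e.type === type).map((e) => e.text);
}

const entries: NarrativeEntry[] = [
  { type: 'Seed', text: '[Seed] Initialized agent state', key: 'seed', stageId: 's0', subflowId: 'root' },
  { type: 'CallLLM', text: '[CallLLM] claude-sonnet-4 (127in / 45out)', key: 'llm-0', stageId: 's1', subflowId: 'root' },
];
entriesOfType(entries, 'CallLLM'); // ['[CallLLM] claude-sonnet-4 (127in / 45out)']
```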

Export agent events as OpenTelemetry spans. The core package has no @opentelemetry dependency; bring your own tracer:

```ts
import { trace } from '@opentelemetry/api';
import { OTelRecorder } from 'agentfootprint/observe';

const recorder = new OTelRecorder(trace.getTracer('agentfootprint'));

const agent = Agent.create({ provider })
  .recorder(recorder)
  .build();

await agent.run('Hello');
// Spans: agent.turn → gen_ai.chat (per LLM call) → tool.* (per tool call)
// Attributes follow the OpenTelemetry GenAI semantic conventions
```

Works with Datadog, New Relic, Grafana, Jaeger, or any OTel-compatible backend.
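For reference, a `gen_ai.chat` span under the GenAI semantic conventions carries attributes along these lines. The attribute names come from the OpenTelemetry semconv; the values are example data, not output from agentfootprint:

```ts
// Illustrative attribute set for one LLM-call span.
// Names follow the OTel GenAI semantic conventions; values are examples.
const chatSpanAttributes: Record<string, string | number> = {
  'gen_ai.operation.name': 'chat',
  'gen_ai.request.model': 'claude-sonnet-4-20250514',
  'gen_ai.usage.input_tokens': 127,
  'gen_ai.usage.output_tokens': 45,
};
```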

`ExplainRecorder` separates what the tools returned from what the model said, so the two can be checked against each other:

```ts
import { ExplainRecorder } from 'agentfootprint/explain';

const explain = new ExplainRecorder();
// Attach via .recorder(explain) before .build().

await agent.run(...);

const report = explain.explain();
report.sources; // tool results (ground truth)
report.claims;  // LLM output (to verify)
```
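One way to use the report is a containment check of each claim against the recorded sources. A hypothetical sketch — the substring-matching heuristic and sample data are ours, not the library's:

```ts
// Naive verification: a claim is "supported" if some source mentions it.
// Real fact-checking needs more than substring matching; this is a sketch.
function unsupportedClaims(claims: string[], sources: string[]): string[] {
  return claims.filter(
    (claim) => !sources.some((src) => src.toLowerCase().includes(claim.toLowerCase())),
  );
}

const sources = ['Order ORD-1003: status=denied, reason=payment failure'];
const claims = ['ORD-1003', 'free replacement offered'];
unsupportedClaims(claims, sources); // ['free replacement offered']
```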