# Anthropic (Claude)
## Install

```sh
npm install agentfootprint @anthropic-ai/sdk
```

The `@anthropic-ai/sdk` package is a peer dependency; agentfootprint's `AnthropicAdapter` wraps it. The default `max_tokens` is 4096; override it with `maxTokens` in the provider options.
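For example, raising the output cap might look like the following; a sketch assuming `maxTokens` is accepted alongside the other provider options described on this page:

```ts
const provider = anthropic('claude-sonnet-4-20250514', {
  maxTokens: 8192, // overrides the 4096 default
});
```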
## Server-side (Node.js)

```ts
import { Agent, anthropic, defineTool } from 'agentfootprint';

const provider = anthropic('claude-sonnet-4-20250514', {
  apiKey: process.env.ANTHROPIC_API_KEY, // or set the ANTHROPIC_API_KEY env var
});

const agent = Agent.create({ provider })
  .system('You are a helpful assistant.')
  .tool(myTool) // a tool created with defineTool
  .streaming(true)
  .build();
```
```ts
const result = await agent.run('Hello');
```

## Browser / Edge
No SDK dependency; the browser adapter calls the Anthropic REST API directly:
```ts
import { Agent, BrowserAnthropicAdapter } from 'agentfootprint';

const provider = new BrowserAnthropicAdapter({
  apiKey: userApiKey, // from your settings UI
  model: 'claude-sonnet-4-20250514',
});

const agent = Agent.create({ provider })
  .system('You are a helpful assistant.')
  .streaming(true)
  .build();
```

The browser adapter sends the `anthropic-dangerous-direct-browser-access` header. This is for prototyping and playground use only. For production, proxy API calls through your backend to protect your API key.
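Under the hood, the request shape matches Anthropic's Messages REST API. The sketch below builds such a request as plain data; the header names and values follow Anthropic's public REST documentation, while the helper function itself is hypothetical and not part of agentfootprint:

```ts
// Hypothetical helper: builds the raw Messages API request a direct
// browser call would issue. Headers per Anthropic's REST API docs.
function buildMessagesRequest(apiKey: string, prompt: string) {
  return {
    url: 'https://api.anthropic.com/v1/messages',
    method: 'POST' as const,
    headers: {
      'x-api-key': apiKey, // never embed a real key in shipped browser code
      'anthropic-version': '2023-06-01',
      'anthropic-dangerous-direct-browser-access': 'true',
      'content-type': 'application/json',
    },
    body: JSON.stringify({
      model: 'claude-sonnet-4-20250514',
      max_tokens: 1024,
      messages: [{ role: 'user', content: prompt }],
    }),
  };
}
```

Proxying through your backend means this request (and your API key) originates server-side instead, which is why it is the recommended production setup.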
## Available models

| Model | ID | Best for |
|---|---|---|
| Claude Sonnet 4 | `claude-sonnet-4-20250514` | Best balance of speed and quality |
| Claude Opus 4 | `claude-opus-4-20250514` | Complex reasoning, coding |
| Claude Haiku 3.5 | `claude-haiku-3-5-20241022` | Fast, low cost |
## Extended thinking

Claude models support extended reasoning. When enabled via the Anthropic API, the model's thinking is included in the streamed response as tokens, and the full reasoning appears in the narrative trace:

```ts
const agent = Agent.create({ provider })
  .system('You are a math tutor. Show your reasoning step by step.')
  .streaming(true)
  .build();

await agent.run('If x² + 3x - 10 = 0, find x', {
  onEvent: (event) => {
    if (event.type === 'token') {
      process.stdout.write(event.content);
    }
  },
});
```
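As an aside, the prompt's equation factors as (x + 5)(x − 2) = 0, so the answers to expect in the streamed reasoning are x = 2 and x = −5; a quick numeric check:

```ts
// Verify the two roots of x² + 3x − 10 = 0
const f = (x: number) => x * x + 3 * x - 10;
console.log(f(2), f(-5)); // 0 0
```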
```ts
// The narrative captures the full LLM response, including reasoning
console.log(agent.getNarrative());
```

Extended thinking content appears as part of the LLM response text in the narrative. The `thinking` event type in `AgentStreamEvent` is reserved for future dedicated support.
## Observability

Track token usage and cost with Anthropic pricing:

```ts
import { agentObservability } from 'agentfootprint/observe';

const obs = agentObservability();

const agent = Agent.create({ provider })
  .recorder(obs)
  .build();

await agent.run('Hello');

console.log(obs.tokens());
// { totalCalls: 1, totalInputTokens: 24, totalOutputTokens: 15, calls: [...] }
```
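As a sanity check, the reported cost can be recomputed from those token counts; a sketch assuming Claude Sonnet 4 list pricing of $3 per million input tokens and $15 per million output tokens:

```ts
// Recompute the cost of the call above from its token counts
// (assumed Sonnet 4 pricing: $3/MTok input, $15/MTok output)
const inputTokens = 24;
const outputTokens = 15;
const cost = inputTokens * (3 / 1_000_000) + outputTokens * (15 / 1_000_000);
console.log(cost.toFixed(6)); // "0.000297"
```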
```ts
// Cost is auto-calculated from model pricing
console.log(obs.cost()); // 0.000297 (USD)
```

## Switching from the Anthropic SDK
If you're currently using `@anthropic-ai/sdk` directly:
```ts
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic();
const response = await client.messages.create({
  model: 'claude-sonnet-4-20250514',
  max_tokens: 1024,
  messages: [{ role: 'user', content: 'Hello' }],
});
console.log(response.content[0].text);
```

The equivalent with agentfootprint:

```ts
import { Agent, anthropic } from 'agentfootprint';

const agent = Agent.create({
  provider: anthropic('claude-sonnet-4-20250514'),
})
  .system('You are helpful.')
  .build();

const result = await agent.run('Hello');
console.log(result.content);
console.log(agent.getNarrative()); // bonus: full execution trace
```

What you gain: the tool-use loop, narrative trace, grounding analysis, `mock()` testing, streaming events, memory, and instructions, all with the same Claude models.