# Ollama (Local)
## Install

```sh
npm install agentfootprint openai
```

The openai package is required: agentfootprint's Ollama adapter uses the OpenAI-compatible API under the hood.

Ollama must be running locally. Install Ollama and pull a model:

```sh
ollama pull llama3
```

```ts
import { Agent, ollama, defineTool } from 'agentfootprint';

const provider = ollama('llama3');

const agent = Agent.create({ provider })
  .system('You are a helpful assistant.')
  .build();

const result = await agent.run('Hello');
```
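To make "OpenAI-compatible API under the hood" concrete, here is a rough sketch of the kind of request the adapter ends up sending, assuming Ollama's default local port. `buildChatRequest` is a hypothetical helper for illustration only; it is not exported by agentfootprint.

```ts
// Sketch of an OpenAI-style chat completion request against Ollama's
// /v1 endpoint, assuming the default port. buildChatRequest is a
// hypothetical helper, not part of agentfootprint's API.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function buildChatRequest(model: string, messages: ChatMessage[]) {
  return {
    // Ollama exposes OpenAI-compatible chat completions under /v1
    url: 'http://localhost:11434/v1/chat/completions',
    body: { model, messages },
  };
}

// Sending it yourself (requires a running Ollama instance):
// const { url, body } = buildChatRequest('llama3', [{ role: 'user', content: 'Hello' }]);
// const res = await fetch(url, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(body),
// });
```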
## Switching from raw Ollama API

Before:

```ts
const response = await fetch('http://localhost:11434/api/chat', {
  method: 'POST',
  body: JSON.stringify({
    model: 'llama3.3',
    messages: [{ role: 'user', content: 'Hello' }],
    stream: false, // /api/chat streams by default; disable it so response.json() works
  }),
});
const data = await response.json();
console.log(data.message.content);
```

After:

```ts
import { Agent, ollama } from 'agentfootprint';

const agent = Agent.create({ provider: ollama('llama3.3') })
  .system('You are helpful.')
  .build();

const result = await agent.run('Hello');
console.log(result.content);
console.log(agent.getNarrative()); // full trace, $0 cost
```
## Custom endpoint

```ts
const provider = ollama('llama3', {
  baseUrl: 'http://192.168.1.100:11434', // remote Ollama instance
});
```
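When the endpoint differs between machines, the base URL can be read from the environment with a local fallback. Both the `OLLAMA_BASE_URL` variable name and the `resolveOllamaBaseUrl` helper below are assumptions for illustration; agentfootprint does not read this variable itself.

```ts
// Hypothetical helper: pick the Ollama base URL from the environment,
// defaulting to the local instance. OLLAMA_BASE_URL is an assumed
// variable name, not one agentfootprint reads automatically.
function resolveOllamaBaseUrl(env: Record<string, string | undefined>): string {
  return env.OLLAMA_BASE_URL ?? 'http://localhost:11434';
}

// const provider = ollama('llama3', { baseUrl: resolveOllamaBaseUrl(process.env) });
```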
## Development workflow

Use Ollama for local development and switch to Claude for production:

```ts
import { Agent, anthropic, ollama } from 'agentfootprint';

const provider = process.env.NODE_ENV === 'production'
  ? anthropic('claude-sonnet-4-20250514')
  : ollama('llama3');

// Same agent, same tools, just a different provider
const agent = Agent.create({ provider })
  .system('You are a support agent.')
  .tool(lookupOrder)
  .build();
```
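The environment switch is easy to unit-test if the model choice is factored into a small function. `pickModelId` is a hypothetical name; the model IDs mirror the example above.

```ts
// Sketch: the environment ternary factored out so it can be tested
// without touching either provider. pickModelId is a hypothetical helper.
function pickModelId(nodeEnv: string | undefined): string {
  return nodeEnv === 'production'
    ? 'claude-sonnet-4-20250514' // paid Anthropic API in production
    : 'llama3';                  // free local Ollama everywhere else
}
```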
## Supported models

Any model available in your local Ollama installation. Common choices:
| Model | Best for |
|---|---|
| `llama3.3` | General purpose |
| `llama4` | Latest, multimodal |
| `qwen3` | Reasoning, multilingual |
| `deepseek-r1` | Deep reasoning |
| `gemma3` | Fast, lightweight |
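If you select a model programmatically, the table can be mirrored as a small lookup. The task keys and the `recommendModel` helper are illustrative, not an agentfootprint API:

```ts
// Illustrative lookup mirroring the table above; the task names and the
// recommendModel helper are assumptions, not part of agentfootprint.
const MODEL_BY_TASK: Record<string, string> = {
  general: 'llama3.3',
  multimodal: 'llama4',
  multilingual: 'qwen3',
  reasoning: 'deepseek-r1',
  lightweight: 'gemma3',
};

function recommendModel(task: string): string {
  return MODEL_BY_TASK[task] ?? 'llama3.3'; // fall back to general purpose
}
```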
Ollama uses the OpenAI-compatible API. For tool use, choose models with reliable function calling support (`llama3.1`+, `qwen2.5`, `mistral-nemo`). Smaller models may produce unreliable tool calls.