Ollama (Local)

```sh
npm install agentfootprint openai
```

The openai package is required — agentfootprint’s Ollama adapter uses the OpenAI-compatible API under the hood.

Ollama must be running locally. Install Ollama and pull a model:

```sh
ollama pull llama3
```
Then point an agent at the local model:

```ts
import { Agent, ollama } from 'agentfootprint';

const provider = ollama('llama3');

const agent = Agent.create({ provider })
  .system('You are a helpful assistant.')
  .build();

const result = await agent.run('Hello');
```
You can also call Ollama's chat endpoint directly:

```ts
const response = await fetch('http://localhost:11434/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'llama3',
    messages: [{ role: 'user', content: 'Hello' }],
    stream: false, // /api/chat streams NDJSON by default
  }),
});

const data = await response.json();
console.log(data.message.content);
```
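When streaming is left on, `/api/chat` returns newline-delimited JSON, one chunk per line, each carrying a fragment of the reply. A minimal sketch of stitching those chunks back together (the sample chunks below are illustrative, not captured output):

```ts
// Parse Ollama's streaming NDJSON chat response into the full assistant message.
// Each line is a JSON object like {"message":{"content":"..."},"done":false}.
function joinStreamedChunks(ndjson: string): string {
  return ndjson
    .split('\n')
    .filter((line) => line.trim() !== '')
    .map((line) => JSON.parse(line).message?.content ?? '')
    .join('');
}

// Two chunks in the shape Ollama emits them:
const sample =
  '{"message":{"content":"Hel"},"done":false}\n' +
  '{"message":{"content":"lo"},"done":true}\n';

console.log(joinStreamedChunks(sample)); // "Hello"
```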
To target an Ollama instance on another machine, override the base URL:

```ts
const provider = ollama('llama3', {
  baseUrl: 'http://192.168.1.100:11434', // remote Ollama instance
});
```
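In practice you may want that URL configurable per environment. A small sketch; `OLLAMA_BASE_URL` is an assumed convention here, not an agentfootprint setting:

```ts
// Read the Ollama endpoint from the environment, defaulting to the local instance.
// OLLAMA_BASE_URL is an assumed convention, not an agentfootprint setting.
const baseUrl = process.env.OLLAMA_BASE_URL ?? 'http://localhost:11434';

// Then pass it through, e.g.: ollama('llama3', { baseUrl })
console.log(baseUrl);
```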

Use Ollama for local development, switch to Claude for production:

```ts
import { Agent, ollama, anthropic } from 'agentfootprint';

const provider = process.env.NODE_ENV === 'production'
  ? anthropic('claude-sonnet-4-20250514')
  : ollama('llama3');

// Same agent, same tools — just different provider
const agent = Agent.create({ provider })
  .system('You are a support agent.')
  .tool(lookupOrder)
  .build();
```

Any model available in your local Ollama installation. Common choices:

| Model | Best for |
| --- | --- |
| `llama3.3` | General purpose |
| `llama4` | Latest, multimodal |
| `qwen3` | Reasoning, multilingual |
| `deepseek-r1` | Deep reasoning |
| `gemma3` | Fast, lightweight |

Ollama uses the OpenAI-compatible API. For tool use, choose models with reliable function calling support (llama3.1+, qwen2.5, mistral-nemo). Smaller models may produce unreliable tool calls.
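For reference, on the wire a tool is described to the model as a JSON schema inside the OpenAI-compatible request body. A sketch of what a `lookupOrder`-style tool looks like at that layer; the tool name and parameter shape are illustrative, not part of agentfootprint's API:

```ts
// Sketch of the OpenAI-compatible tool schema sent in a chat completion request.
// The lookup_order name and its parameters are illustrative assumptions.
const lookupOrderSchema = {
  type: 'function',
  function: {
    name: 'lookup_order',
    description: 'Look up an order by its ID',
    parameters: {
      type: 'object',
      properties: {
        orderId: { type: 'string', description: 'The order identifier' },
      },
      required: ['orderId'],
    },
  },
};

// The schema rides alongside the messages in the request body.
const requestBody = {
  model: 'llama3.1',
  messages: [{ role: 'user', content: 'Where is my order?' }],
  tools: [lookupOrderSchema],
};
```

If the model decides to call the tool, the response contains a `tool_calls` entry naming the function and its JSON-encoded arguments, which the adapter dispatches to your handler.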