# Pausable — Human-in-the-Loop
## The pattern

Some decisions need a human: a refund approval, a deployment confirmation, a content review. The agent pauses, asks the human, and resumes with their answer.
```text
Agent runs → calls ask_human tool → PAUSES → checkpoint saved (JSON)
        ↓
... hours later, different server ...
        ↓
Human responds → agent.resume(answer) → continues
```
## askHuman tool

Add `askHuman()` to your agent — it’s a special tool that pauses execution:
```typescript
import { Agent, mock, askHuman } from 'agentfootprint';

const agent = Agent.create({
  provider: mock([
    // LLM decides to ask the human
    {
      content: '',
      toolCalls: [{
        id: 'tc1',
        name: 'ask_human',
        arguments: { question: 'Approve refund of $299 for ORD-1003?' },
      }],
    },
    // After resume: LLM responds with the human's answer
    { content: 'The refund has been approved and processed.' },
  ]),
})
  .system('You are a support agent. Use ask_human for manager approval before refunds.')
  .tool(askHuman())
  .build();
```
```typescript
const result = await agent.run('I want a refund for order ORD-1003');

if (result.paused) {
  console.log(result.pauseData?.question);
  // "Approve refund of $299 for ORD-1003?"

  // Later — could be hours, different server:
  const final = await agent.resume('Yes, approved');
  console.log(final.content);
  // "The refund has been approved and processed."
}
```
## How it works

1. The LLM calls the `ask_human` tool with a `question`.
2. The agent pauses — execution stops and state is serialized.
3. `result.paused === true` and `result.pauseData` contains the question.
4. You store the checkpoint (Redis, database, anywhere).
5. When the human responds, call `agent.resume(answer)`.
6. The agent continues from where it paused — the answer becomes the tool result.
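The serialized state in step 2 is plain JSON. A minimal sketch of what such a checkpoint might contain; the field names here are assumptions for illustration, not agentfootprint's actual serialization format:

```typescript
// Hypothetical checkpoint shape: illustrative field names, not the
// library's real format.
interface Checkpoint {
  sessionId: string;
  messages: { role: string; content: string }[]; // conversation so far
  pendingToolCallId: string; // the ask_human call awaiting an answer
  question: string;
}

const checkpoint: Checkpoint = {
  sessionId: 'session-123',
  messages: [{ role: 'user', content: 'I want a refund for order ORD-1003' }],
  pendingToolCallId: 'tc1',
  question: 'Approve refund of $299 for ORD-1003?',
};

// "JSON-serializable" means the state survives a store/load round-trip
// unchanged, which is what makes cross-server resume possible.
const restored: Checkpoint = JSON.parse(JSON.stringify(checkpoint));
```

Anything that can hold this JSON blob (Redis, Postgres, a file) can serve as the checkpoint store.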
## Conditional pause

The LLM decides whether to pause. If it doesn’t call `ask_human`, the agent completes normally:
```typescript
const agent = Agent.create({ provider })
  .system(`You are a support agent.
For refunds under $50, process automatically.
For refunds over $50, use ask_human for manager approval.`)
  .tool(lookupOrder)
  .tool(processRefund)
  .tool(askHuman())
  .build();

// Small refund — no pause
await agent.run('Refund order ORD-100 ($25)');
// result.paused === false

// Large refund — pauses
await agent.run('Refund order ORD-200 ($500)');
// result.paused === true
```
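Because the pause decision lives in the prompt, it is probabilistic. If you want a deterministic backstop, the same policy can also be expressed as a plain guard checked inside the refund tool; `needsApproval` below is a hypothetical helper, not part of the library:

```typescript
// Hypothetical guard mirroring the prompt's policy: refunds over $50
// require human approval, smaller ones run automatically.
const APPROVAL_THRESHOLD_USD = 50;

function needsApproval(amountUsd: number): boolean {
  return amountUsd > APPROVAL_THRESHOLD_USD;
}

console.log(needsApproval(25));  // false — small refund, no pause
console.log(needsApproval(500)); // true — large refund, pauses
```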
## Storing checkpoints

The pause data is JSON-serializable — store it anywhere:
```typescript
const result = await agent.run(message);

if (result.paused) {
  // Store in your database
  await db.save({
    sessionId: 'session-123',
    question: result.pauseData?.question,
    agentState: 'paused',
  });

  // Return to the UI
  return { paused: true, question: result.pauseData?.question };
}
```
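The `db` above could be any key-value store. As a minimal sketch, an in-memory stand-in with the same `save`/`load` shape; the record type is an assumption, and in production you would swap the `Map` for Redis or a database:

```typescript
// Minimal in-memory stand-in for the checkpoint store. The record
// shape is illustrative: persist whatever your resume path needs.
interface PausedRecord {
  sessionId: string;
  question: string | undefined;
  agentState: 'paused';
  humanResponse?: string;
}

class CheckpointStore {
  private records = new Map<string, PausedRecord>();

  async save(record: PausedRecord): Promise<void> {
    this.records.set(record.sessionId, record);
  }

  async load(sessionId: string): Promise<PausedRecord | undefined> {
    return this.records.get(sessionId);
  }
}

const db = new CheckpointStore();
await db.save({
  sessionId: 'session-123',
  question: 'Approve refund of $299 for ORD-1003?',
  agentState: 'paused',
});
const stored = await db.load('session-123');
```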
## Resuming on a different server

The resume doesn’t need to happen on the same server. As long as you rebuild the agent with the same configuration, you can resume anywhere:
```typescript
// Server B — hours later
const agent = Agent.create({ provider })
  .system('Same system prompt')
  .tool(lookupOrder)
  .tool(askHuman())
  .build();

// Load the stored state
const stored = await db.load('session-123');

// Resume with the human's response
const final = await agent.resume(stored.humanResponse);
```
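Conceptually, `resume(answer)` feeds the human's answer back as the result of the pending `ask_human` call and then continues the LLM loop. A sketch of that bookkeeping, with message shapes that are assumptions rather than the library's internals:

```typescript
// Sketch: resuming appends a tool-result message carrying the human's
// answer, tied to the pending ask_human call. Shapes are illustrative.
interface Message {
  role: 'user' | 'assistant' | 'tool';
  content: string;
  toolCallId?: string;
}

function applyHumanAnswer(
  messages: Message[],
  pendingToolCallId: string,
  answer: string,
): Message[] {
  return [
    ...messages,
    { role: 'tool', toolCallId: pendingToolCallId, content: answer },
  ];
}

const pausedConversation: Message[] = [
  { role: 'user', content: 'I want a refund for order ORD-1003' },
  { role: 'assistant', content: '' }, // the turn that called ask_human
];

const resumed = applyHumanAnswer(pausedConversation, 'tc1', 'Yes, approved');
```

Because the checkpoint carries the whole conversation plus the pending tool-call ID, any server that can rebuild the agent can perform this step.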
## With streaming

Pause works with streaming — tool events still fire:
```typescript
await agent.run('Process refund', {
  onEvent: (event) => {
    if (event.type === 'tool_start' && event.toolName === 'ask_human') {
      console.log('Agent is asking for human input...');
    }
  },
});
```
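The check inside `onEvent` can be factored into a predicate and unit-tested without a live run. The event type below mirrors the fields used in the snippet above but is otherwise an assumption:

```typescript
// Predicate matching the onEvent check: is this event the start of an
// ask_human tool call?
interface ToolEvent {
  type: string;
  toolName?: string;
}

function isHumanInputRequest(event: ToolEvent): boolean {
  return event.type === 'tool_start' && event.toolName === 'ask_human';
}

console.log(isHumanInputRequest({ type: 'tool_start', toolName: 'ask_human' }));    // true
console.log(isHumanInputRequest({ type: 'tool_start', toolName: 'lookup_order' })); // false
```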
## Narrative

The narrative captures the full lifecycle — before the pause, the question, and after the resume:
```typescript
agent.getNarrative();
// [
//   "[CallLLM] Called LLM",
//   "[ParseResponse] Parsed: tool_calls → [ask_human({question: "Approve?"})]",
//   "[Pause] Agent paused — waiting for human input",
//   "--- resume ---",
//   "[CallLLM] Called LLM",
//   "[Finalize] Refund approved and processed."
// ]
```
## Next steps

- Agent guide — memory, streaming, Dynamic ReAct
- Testing guide — test pause/resume with `mock()`