
# Agent / LLMCall

`Agent.create(options: { provider: LLMProvider; name?: string }): AgentBuilder`

Creates an agent builder. Chain configuration methods, then call `.build()` to get a runner.

| Method | Description |
| --- | --- |
| `.system(prompt)` | Set the system prompt |
| `.tool(tool)` | Add a single tool |
| `.tools(tools)` | Add multiple tools |
| `.toolProvider(provider)` | Dynamic tool provider |
| `.promptProvider(provider)` | Dynamic prompt provider |
| `.instruction(inst)` | Add a conditional instruction |
| `.decision(initial)` | Set the initial Decision Scope |
| `.pattern(pattern)` | `AgentPattern.Regular` or `AgentPattern.Dynamic` |
| `.maxIterations(n)` | Max tool-loop iterations (default: 10) |
| `.streaming(bool)` | Enable token streaming |
| `.memory(config)` | Conversation persistence |
| `.recorder(rec)` | Attach an `AgentRecorder` |
| `.verbose(bool)` | Verbose narrative output |
| `.build()` | Returns `AgentRunner` |
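A minimal sketch of the builder chain, using only methods documented above. The `openaiProvider` and `weatherTool` names are placeholders for your own `LLMProvider` and tool instances:

```ts
// Sketch only: `openaiProvider` and `weatherTool` are hypothetical instances.
const agent = Agent.create({ provider: openaiProvider, name: "weather-bot" })
  .system("You are a helpful weather assistant.")
  .tool(weatherTool)
  .maxIterations(5)   // cap the tool loop below the default of 10
  .build();           // returns an AgentRunner

const result = await agent.run("What's the weather in Oslo?");
console.log(result.content);
```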
| Method | Returns | Description |
| --- | --- | --- |
| `run(message, options?)` | `AgentResult` | Execute the agent |
| `resume(response)` | `AgentResult` | Resume after a pause |
| `getNarrative()` | `string[]` | Human-readable trace |
| `getNarrativeEntries()` | `CombinedNarrativeEntry[]` | Structured entries |
| `getSnapshot()` | `RuntimeSnapshot` | Full execution state |
| `getSpec()` | `object` | Flowchart specification |
| `getMessages()` | `Message[]` | Conversation history |
| `resetConversation()` | `void` | Clear history |
```ts
interface AgentResult {
  content: string;       // LLM's final response
  messages: Message[];   // Full conversation
  iterations: number;    // Tool-loop iterations
  paused?: boolean;      // true if the agent paused (ask_human)
  pauseData?: { question: string; toolCallId: string };
}
```
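When `paused` is set, the documented pattern is to answer the pending question and call `resume()`. A sketch, assuming `agent` is an `AgentRunner` and `promptUser` is a placeholder for your own UI:

```ts
// Sketch only: `promptUser` is hypothetical; `run`/`resume` are documented above.
let result = await agent.run("Book me a flight to Tokyo.");

while (result.paused && result.pauseData) {
  const answer = await promptUser(result.pauseData.question);
  result = await agent.resume(answer); // continues the tool loop
}

console.log(result.content, `(${result.iterations} iterations)`);
```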

`LLMCall.create(options: { provider: LLMProvider }): LLMCallBuilder`

Simpler than `Agent`: a single LLM invocation with no tool loop.

| Method | Description |
| --- | --- |
| `.system(prompt)` | Set the system prompt |
| `.streaming(bool)` | Enable token streaming |
| `.recorder(rec)` | Attach an `AgentRecorder` |
| `.build()` | Returns `LLMCallRunner` |
| Method | Returns | Description |
| --- | --- | --- |
| `run(message, options?)` | `{ content, messages }` | Execute the call |
| `getNarrative()` | `string[]` | Execution trace |
| `getSnapshot()` | `unknown` | Full state |
| `getSpec()` | `unknown` | Flowchart spec |
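A one-shot call sketch using the documented `LLMCall` surface. As above, `openaiProvider` is a placeholder for your `LLMProvider` instance:

```ts
// Sketch only: single LLM invocation, no tool loop.
const summarize = LLMCall.create({ provider: openaiProvider })
  .system("Summarize the user's text in one sentence.")
  .build(); // returns an LLMCallRunner

const { content } = await summarize.run("…long article text…");
console.log(content);
```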