The agent space has many credible primary abstractions. We didn’t have to choose between them: agentfootprint is built on footprintjs — the flowchart pattern for backend code — and footprintjs gives us every one of those abstractions out of the box:
| Capability | What footprintjs hands us |
| --- | --- |
| Composition | Sequence · Parallel · Conditional · Loop |
| State machines | The ReAct loop is a flowchart |
| Multi-agent crews | Compose Agents through control flow — no special class needed |
| Durable workflows | `pauseHere()` plus JSON-portable `resume()` |
| Typed observation | 57+ events for free, because the framework owns the loop |
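The composition row above can be sketched in a few lines. This is an illustrative stand-in, not the footprintjs API: `Step`, `sequence`, and `parallel` below are local definitions chosen to show the shape of control-flow composition over async steps.

```typescript
// Illustrative stand-in for flowchart-style composition (not the real footprintjs API).
type Step<S> = (state: S) => Promise<S>;

// sequence: run steps one after another, threading state through.
const sequence = <S>(...steps: Step<S>[]): Step<S> =>
  async (state) => {
    let s = state;
    for (const step of steps) s = await step(s);
    return s;
  };

// parallel: run steps concurrently on the same input, then merge the results.
const parallel = <S>(merge: (results: S[]) => S, ...steps: Step<S>[]): Step<S> =>
  async (state) => merge(await Promise.all(steps.map((step) => step(state))));

// A "crew" is just composition; no Crew class needed.
type Draft = { text: string };
const researcher: Step<Draft> = async (d) => ({ text: d.text + " +facts" });
const writer: Step<Draft> = async (d) => ({ text: d.text + " +prose" });
const editor: Step<Draft> = async (d) => ({ text: d.text + " +polish" });

const pipeline = sequence(researcher, writer, editor);
const result = await pipeline({ text: "topic" });
// result.text: "topic +facts +prose +polish"

const fanned = await parallel(
  (rs) => ({ text: rs.map((r) => r.text).join(" | ") }),
  researcher,
  writer,
)({ text: "t" });
// fanned.text: "t +facts | t +prose"
```

The point of the sketch: once steps share one signature, "multi-agent" falls out of ordinary function composition rather than a dedicated Crew abstraction.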
So we used the budget those abstractions would have cost us to invest deeply in something they all leave to the developer: the injection loop.
- **vs LangChain** — agentfootprint has a 2-primitive + 4-composition substrate; LangChain ships a class per paper. If you’re maintaining a “pile of LangChain abstractions” codebase and the abstractions feel like the problem, switch.
- **vs LangGraph** — both treat agents as graphs; agentfootprint records every traversal as a typed event for free (because the framework owns the loop, not you) and adds Injection = slot × trigger × cache as a first-class layer. If you’ve found yourself instrumenting LangGraph manually or building per-node prompt assembly by hand, switch.
- **vs CrewAI / AutoGen** — agentfootprint has no separate Crew / Agent / Task primitives; multi-agent IS composition of single agents through Sequence / Parallel / Conditional / Loop. If role/goal/backstory framing feels like ceremony, switch.
- **vs Mastra / Genkit / Pydantic AI** — those are full-stack bundles (DB + auth + workflows + agents); agentfootprint is the agent layer only. Pick those if you need the bundle; pick agentfootprint if you have a stack and want the best context-engineering primitive.
- **vs DSPy** — DSPy compiles prompts at training time; agentfootprint composes injections at inference time. Different problems. Use DSPy when you want the framework to optimize the prompt; use agentfootprint when you want to control which content lands where, when, and why.
- **vs Inngest AgentKit** — Inngest is durable workflows with agent helpers; agentfootprint is agents with durable workflow primitives (pauseHere, resume). If you already run Inngest, keep it for queue/cron and call agentfootprint inside steps. If you don’t, agentfootprint’s pause/resume covers most of what you’d reach for.
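The "Injection = slot × trigger × cache" layer referenced above can be pictured as plain data. Everything in this sketch is hypothetical: the type names, slot values, and trigger values are chosen to illustrate the idea, not agentfootprint's actual types.

```typescript
// Hypothetical model of "Injection = slot × trigger × cache" (illustrative only).
type Slot = "system" | "before_user" | "after_tools";
type Trigger = "always" | "first_turn" | ((turn: number) => boolean);
type Cache = "none" | "per_turn" | "per_session";

interface Injection {
  slot: Slot;       // where the content lands in the prompt
  trigger: Trigger; // when it fires
  cache: Cache;     // how long the rendered content may be reused
  render: () => string;
}

// Does this injection fire on the given turn?
const fires = (inj: Injection, turn: number): boolean =>
  inj.trigger === "always"
    ? true
    : inj.trigger === "first_turn"
    ? turn === 0
    : inj.trigger(turn);

// Assemble the content for one slot on one turn.
const assemble = (injections: Injection[], slot: Slot, turn: number): string =>
  injections
    .filter((i) => i.slot === slot && fires(i, turn))
    .map((i) => i.render())
    .join("\n");

const steering: Injection = {
  slot: "system", trigger: "always", cache: "per_session",
  render: () => "You are a careful analyst.",
};
const onboarding: Injection = {
  slot: "system", trigger: "first_turn", cache: "none",
  render: () => "Introduce yourself briefly.",
};

const firstTurn = assemble([steering, onboarding], "system", 0);
const laterTurn = assemble([steering, onboarding], "system", 3);
```

The win of treating injections as data is that "which content lands where, when, and why" becomes a filter over records instead of string concatenation scattered through the loop.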
- We are not the only framework with **multi-agent**. CrewAI / LangGraph / AutoGen / Inngest all have it. Ours has a smaller surface; pick on taste.
- We are not the only framework with **memory**. Most have some flavor. Ours is the only one with the Causal type (decision-evidence persistence) — see the Memory guide.
- We are not the only framework with **observability**. Most ship spans / events. Ours is typed and emitted during DFS traversal, so it is structured and complete by construction, not collected after the fact.
- We are not the only framework with **reliability primitives**. LangGraph’s Pregel does retry; LangChain’s with_retry / with_fallbacks exist. Ours adds first-chunk arbitration (streaming-aware) and routes via declarative rules — see the Reliability gate.
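First-chunk arbitration can be sketched as a race on the first chunk of a stream. The semantics below are assumed for illustration, not agentfootprint's implementation: if the primary stream does not yield its first chunk before a deadline, the fallback takes over the whole response.

```typescript
// Sketch of streaming-aware fallback: race the primary stream's FIRST chunk
// against a deadline; on timeout, hand the whole response to the fallback.
// (Assumed semantics for illustration, not agentfootprint's implementation.)
async function* withFirstChunkArbitration(
  primary: AsyncGenerator<string>,
  fallback: () => AsyncGenerator<string>,
  deadlineMs: number,
): AsyncGenerator<string> {
  const deadline = new Promise<"timeout">((resolve) =>
    setTimeout(() => resolve("timeout"), deadlineMs),
  );
  const first = await Promise.race([primary.next(), deadline]);
  if (first === "timeout" || first.done) {
    yield* fallback(); // primary too slow (or empty): switch streams
    return;
  }
  yield first.value; // primary won arbitration: keep streaming it
  yield* primary;
}

// Fake "models" for demonstration:
async function* slowModel() {
  await new Promise((r) => setTimeout(r, 50));
  yield "slow-chunk";
}
async function* fastModel() {
  yield "fallback-chunk";
}

const chunks: string[] = [];
for await (const c of withFirstChunkArbitration(slowModel(), fastModel, 10)) {
  chunks.push(c);
}
// chunks: ["fallback-chunk"] — the 10 ms deadline beat the 50 ms first chunk
```

Arbitrating on the first chunk (rather than on completion) is what makes the fallback usable for streaming: the decision is made before any tokens have been shown to the user.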
- **You need vector search in your memory store today.** agentfootprint ships InMemory + Redis + AgentCore stores; vector-search adapters (pgvector / Pinecone / Qdrant) are roadmap items tracked in GitHub.
- **You need full multi-modal (images, video) in messages.** LLMMessage.content is a string today.
- **You’re already happy with your current framework’s abstractions** and your team is shipping. Don’t switch on hype.
- **You want a managed runtime / dashboard.** We’re a library, not a platform. Bring your own Lambda / container / cron.
- **From LangChain (v0.0.x → v0.3+ era)** — most LangChain agent code maps to `Agent.create({ provider }).tool(...).build()`. Tools are `defineTool({ name, description, inputSchema, execute })`. Memory becomes `defineMemory({ type, strategy, store })`.
- **From LangGraph** — StateGraph nodes → Sequence steps; conditional edges → Conditional; the rest of the wiring evaporates. Compositions are typed; no START / END constants needed. Per-node with_retry decorators map to `.reliability({...})` rules at the agent level.
- **From CrewAI / AutoGen** — a Crew of agents with tasks → `Sequence(researcher, writer, editor)`. Roles + goals → system prompts (or Steering injections). Tools work the same way; no Task class.
- **From DSPy** — your DSPy-compiled prompt becomes a Steering injection in agentfootprint (always trigger, system slot). DSPy still does the optimizing offline; agentfootprint does the runtime composition.
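The tool shape named in the LangChain note can be pictured like this. `defineTool` here is a local stand-in typed to the documented shape (`{ name, description, inputSchema, execute }`); treating `inputSchema` as a validator function is an assumption made so the sketch runs without the library — the real API may take a schema object instead. In real code you would import `defineTool` from agentfootprint.

```typescript
// Local stand-in for the documented tool shape; in real code you'd import
// defineTool from agentfootprint. inputSchema-as-function is an assumption
// made so this sketch runs without the library.
interface ToolDef<I, O> {
  name: string;
  description: string;
  inputSchema: (raw: unknown) => I; // stand-in validator/parser
  execute: (input: I) => Promise<O>;
}
const defineTool = <I, O>(def: ToolDef<I, O>): ToolDef<I, O> => def;

// A LangChain-style search tool restated in this shape:
const search = defineTool({
  name: "search",
  description: "Look up a query and return a result string",
  inputSchema: (raw) => {
    const q = (raw as { query?: unknown })?.query;
    if (typeof q !== "string") throw new Error("query must be a string");
    return { query: q };
  },
  execute: async ({ query }: { query: string }) => `results for ${query}`,
});

const out = await search.execute(search.inputSchema({ query: "agents" }));
// out: "results for agents"
```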