Framework Integrations

Agentspan works with the frameworks you already use. Pass your existing agent directly to runtime.run() — definitions, tools, and routing logic stay exactly as written. You get crash recovery, durable human-in-the-loop, and full execution history without changing a single node or handoff.

from agentspan.agents import AgentRuntime

with AgentRuntime() as runtime:
    result = runtime.run(your_existing_agent, "your prompt")

your_existing_agent can be a LangGraph compiled graph, an OpenAI Agents SDK Agent, a Google ADK pipeline, or a native Agentspan Agent. The API is the same.


Supported frameworks

LangGraph

Pass a compiled StateGraph or any graph produced by create_react_agent:

from langgraph.prebuilt import create_react_agent
from agentspan.agents import AgentRuntime

graph = create_react_agent(model="openai/gpt-4o", tools=[search, calculator])

with AgentRuntime() as runtime:
    result = runtime.run(graph, "Research the history of the Eiffel Tower")
    print(result.execution_id)

Note: Do not pass a checkpointer when wrapping with AgentRuntime — Agentspan manages execution state server-side and the two checkpointing models conflict. LangSmith observability is fully compatible and unaffected.

Full LangGraph example — code review bot


OpenAI Agents SDK

Pass an Agent from the agents package directly:

from agents import Agent as OAIAgent, WebSearchTool
from agentspan.agents import AgentRuntime

oai_agent = OAIAgent(
    name="support_agent",
    instructions="You are a helpful customer support agent.",
    tools=[WebSearchTool()],
)

with AgentRuntime() as runtime:
    result = runtime.run(oai_agent, "How do I reset my password?")
    print(result.output["result"])

Agent definitions, handoffs, and tool registrations stay exactly as written.

Full OpenAI Agents SDK example — support agent


Google ADK

Pass any ADK pipeline (SequentialAgent, ParallelAgent, LoopAgent, or a custom BaseAgent):

from google.adk.agents import SequentialAgent, LlmAgent
from agentspan.agents import AgentRuntime

researcher = LlmAgent(name="researcher", model="gemini-2.0-flash", ...)
writer = LlmAgent(name="writer", model="gemini-2.0-flash", ...)
pipeline = SequentialAgent(name="pipeline", sub_agents=[researcher, writer])

with AgentRuntime() as runtime:
    result = runtime.run(pipeline, "Research and summarize quantum computing trends")
    print(result.output["result"])

Full Google ADK example — research assistant


What Agentspan adds to any framework

| Capability | Without Agentspan | With Agentspan |
| --- | --- | --- |
| Process crash mid-run | Entire run lost | Resumes from last completed step |
| Human approval pause | State held in memory | Paused server-side, survives restarts |
| Execution history | None | Every run stored with inputs, outputs, token usage |
| Long-running agents | Risk of timeout or OOM | Runs detached from your process |
| Observability | Framework-specific | Unified across all frameworks |

Tool locality

Regardless of which framework you use, tools in Agentspan run in one of two places:

| Tool type | Where it runs | What you provide |
| --- | --- | --- |
| @tool (Python function) | Your worker process | The function code |
| http_tool() | Agentspan server | A URL and optional headers |
| api_tool() | Agentspan server | An OpenAPI/Swagger spec URL |
| mcp_tool() | Agentspan server | An MCP server URL |

When you wrap a LangGraph graph or OpenAI SDK agent with AgentRuntime, its tool functions become worker-executed tasks. Server-side tools (http_tool, api_tool, mcp_tool) run on the server regardless of framework.

See Tools for details.


Native Agentspan agents

If you’re not using an existing framework, define agents natively:

from agentspan.agents import Agent, tool, AgentRuntime

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

agent = Agent(
    name="researcher",
    model="openai/gpt-4o",
    tools=[search_web],
    instructions="Research topics thoroughly.",
)

with AgentRuntime() as runtime:
    result = runtime.run(agent, "What is quantum entanglement?")
    result.print_result()

Quickstart · Agents concept · Tools concept