This is a Python FastAPI server example that demonstrates agentic workflows with automatic tool execution using TanStack AI's Python SDK.
- Agentic workflows with automatic tool execution
- Built-in tools: Weather lookup and timezone information
- SSE streaming support for real-time responses
- Agent loop strategies with configurable iteration limits
- Compatible with `@tanstack/ai-client`'s `fetchServerSentEvents` adapter
- Type-safe request/response models using Pydantic
- Python 3.8 or higher
- pip (Python package installer)
- Navigate to the project directory:

  ```bash
  cd examples/python-fastapi
  ```

- Create a virtual environment (recommended). A virtual environment keeps dependencies isolated from your system Python installation.

  ```bash
  python3 -m venv venv
  ```

- Activate the virtual environment:

  On macOS/Linux:

  ```bash
  source venv/bin/activate
  ```

  On Windows:

  ```bash
  venv\Scripts\activate
  ```

  You should see `(venv)` in your terminal prompt when activated.

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

  This installs all required packages (FastAPI, Anthropic SDK, Pydantic, etc.).

- Set up environment variables. Copy `env.example` to `.env` and add your Anthropic API key:

  ```bash
  cp env.example .env
  # Edit .env and add your ANTHROPIC_API_KEY
  ```

- Run the server:

  ```bash
  python anthropic-server.py
  ```

  Or using uvicorn directly:

  ```bash
  uvicorn anthropic-server:app --reload --port 8000
  ```

  The server will start on http://localhost:8000.

When you're done, you can deactivate the virtual environment:

```bash
deactivate
```

Note: The `venv/` directory is already included in `.gitignore`, so it won't be committed to version control.
Streams chat responses in SSE format with automatic tool execution.
Request Body:
```json
{
  "messages": [
    {
      "role": "user",
      "content": "What's the weather in San Francisco?"
    }
  ],
  "data": {
    "model": "claude-3-5-sonnet-20241022"
  }
}
```

Response: Server-Sent Events stream in `StreamChunk` format:

```
data: {"type":"content","id":"...","model":"claude-3-5-sonnet-20241022","timestamp":1234567890,"delta":"Let","content":"Let","role":"assistant"}
data: {"type":"tool_call","id":"...","model":"claude-3-5-sonnet-20241022","timestamp":1234567890,"toolCall":{"id":"call_123","type":"function","function":{"name":"get_weather","arguments":"{\"location\":\"San Francisco\"}"}},"index":0}
data: {"type":"tool_result","id":"...","model":"claude-3-5-sonnet-20241022","timestamp":1234567891,"toolCallId":"call_123","content":"{\"temperature\":62,\"conditions\":\"Foggy\",\"location\":\"San Francisco\"}"}
data: {"type":"content","id":"...","model":"claude-3-5-sonnet-20241022","timestamp":1234567892,"delta":"The","content":"The","role":"assistant"}
data: {"type":"done","id":"...","model":"claude-3-5-sonnet-20241022","timestamp":1234567893,"finishReason":"stop","usage":{"promptTokens":150,"completionTokens":75,"totalTokens":225}}
data: [DONE]
```
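Each `data:` line carries one JSON-encoded chunk, and the stream ends with a `[DONE]` sentinel. A minimal sketch of a line parser a Python client could pair with any streaming HTTP library (the helper name is hypothetical, not part of the SDK):

```python
import json
from typing import Optional


def parse_sse_line(line: str) -> Optional[dict]:
    """Decode one SSE 'data:' line into a StreamChunk dict.

    Returns None for blank lines, non-data lines, and the
    final '[DONE]' sentinel.
    """
    line = line.strip()
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None
    return json.loads(payload)
```

With `httpx`, for example, you could iterate the response with `aiter_lines()` and feed each line through this helper.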
Health check endpoint.
The server includes two built-in tools that are automatically executed:
Get weather information for a city (returns static demo data).
Parameters:
- `location` (required): City name (e.g., "San Francisco", "New York", "London")
- `unit` (optional): Temperature unit, "celsius" or "fahrenheit" (default: "fahrenheit")
Supported Cities:
- San Francisco, New York, London, Tokyo, Paris, Sydney
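Because the tool returns static demo data, its implementation can be a plain dictionary lookup with an optional unit conversion. A hypothetical sketch (only the San Francisco values match the sample stream shown earlier; the other entries are placeholders):

```python
# Hypothetical static demo data. San Francisco matches the sample
# tool_result in the streaming example; other values are made up.
DEMO_WEATHER = {
    "San Francisco": {"temperature": 62, "conditions": "Foggy"},
    "Tokyo": {"temperature": 75, "conditions": "Clear"},      # placeholder
    "London": {"temperature": 55, "conditions": "Overcast"},  # placeholder
}


def get_weather_impl(location: str, unit: str = "fahrenheit") -> dict:
    """Look up demo weather data, converting to celsius on request."""
    data = DEMO_WEATHER.get(location)
    if data is None:
        return {"error": f"No demo data for {location}"}
    temp = data["temperature"]
    if unit == "celsius":
        temp = round((temp - 32) * 5 / 9)
    return {
        "temperature": temp,
        "conditions": data["conditions"],
        "location": location,
        "unit": unit,
    }
```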
Example:
```json
{
  "messages": [
    {
      "role": "user",
      "content": "What's the weather in Tokyo in celsius?"
    }
  ]
}
```

Get current time in a specific timezone (returns static demo data).
Parameters:
- `timezone` (required): Timezone code (e.g., "PST", "EST", "UTC")
Supported Timezones:
- UTC, PST, EST, GMT, JST, AEST
Example:
```json
{
  "messages": [
    {
      "role": "user",
      "content": "What time is it in Tokyo?"
    }
  ]
}
```

This server is compatible with the TypeScript TanStack AI client:
```typescript
import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client'

const client = new ChatClient({
  connection: fetchServerSentEvents('http://localhost:8000/chat'),
})

await client.sendMessage('Hello!')
```

The tanstack-ai package emits the following `StreamChunk` types:
- `content`: Text content updates with delta and accumulated content
- `tool_call`: Tool/function call events with arguments
- `tool_result`: Results from tool execution
- `done`: Stream completion with finish reason and usage stats
- `error`: Error events
- `tool-input-available`: Tool inputs ready for client-side execution
- `approval-requested`: Tool requiring user approval

See `packages/python/tanstack-ai/src/tanstack_ai/types.py` for full type definitions.
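A minimal, hypothetical dispatcher over these chunk types, once each SSE line has been decoded to a dict (the field names follow the sample stream shown above; the `Optional[str]` annotation keeps it compatible with Python 3.8):

```python
from typing import Optional


def handle_chunk(chunk: dict) -> Optional[str]:
    """Map a decoded StreamChunk dict to printable text (None = nothing to print)."""
    kind = chunk.get("type")
    if kind == "content":
        # Incremental text: render the delta, not the accumulated content.
        return chunk.get("delta", "")
    if kind == "tool_call":
        fn = chunk["toolCall"]["function"]
        return f"[calling {fn['name']}]"
    if kind == "tool_result":
        return f"[tool result: {chunk['content']}]"
    if kind == "error":
        return f"[error: {chunk.get('error')}]"
    # done / tool-input-available / approval-requested: handled elsewhere
    return None
```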
The server uses TanStack AI's agentic flow features:
- Tool Registration: Tools are defined with JSON Schema and execute functions
- Automatic Execution: When Claude calls a tool, it's automatically executed
- Result Injection: Tool results are added to the conversation
- Loop Control: Agent can iterate up to 5 times (configurable)
- Streaming: All events (content, tool calls, results) are streamed to the client
```python
# Example tool definition
weather_tool = tool(
    name="get_weather",
    description="Get the current weather for a location",
    input_schema={
        "type": "object",
        "properties": {
            "location": {"type": "string"},
        },
        "required": ["location"],
    },
    execute=get_weather_impl,
)

# Use in chat with automatic execution
async for chunk in chat(
    adapter=adapter,
    model="claude-3-5-sonnet-20241022",
    messages=messages,
    tools=[weather_tool],
    agent_loop_strategy=max_iterations(5),
):
    yield format_sse_chunk(chunk)
```

- Anthropic (Claude models): fully implemented
- claude-3-5-sonnet-20241022 (recommended for tool calling)
- claude-3-5-haiku-20241022
- claude-3-opus-20240229
- And more...
```
python-fastapi/
├── anthropic-server.py   # FastAPI server example
├── requirements.txt      # Python dependencies (includes tanstack-ai package)
├── env.example           # Environment variables template
└── README.md             # This file
```
The server uses the `tanstack-ai` Python SDK located at `packages/python/tanstack-ai/`:

```
FastAPI Server (anthropic-server.py)
        ↓
TanStack AI SDK (tanstack-ai package)
        ↓
AnthropicAdapter
        ↓
Claude API
```
Key Components:
- `anthropic-server.py`: FastAPI endpoints and HTTP streaming
- `AnthropicAdapter`: Converts Anthropic events to StreamChunks
- `ChatEngine`: Orchestrates the agentic loop
- `ToolCallManager`: Manages tool execution
- `chat()` function: Main entry point for agentic chat
The package is installed as an editable dependency, making development easy.
```bash
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "What'\''s the weather in San Francisco and what time is it in Tokyo?"
      }
    ]
  }'
```

```typescript
import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client'

const client = new ChatClient({
  connection: fetchServerSentEvents('http://localhost:8000/chat'),
})

await client.sendMessage("What's the weather in New York?")
```

- The server uses CORS middleware allowing all origins (configure for production)
- Default model is `claude-3-5-sonnet-20241022` (better for tool calling)
- Tools are automatically executed on the server side
- Agent loop allows up to 5 iterations (configurable via `max_iterations`)
- Error handling converts exceptions to error StreamChunks
- All tool executions are logged for debugging
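For production, the allow-all CORS policy noted above can be tightened. A hedged sketch using FastAPI's built-in `CORSMiddleware` (the origin URL is a placeholder, not part of this example):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Restrict CORS to known front-end origins instead of "*".
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://app.example.com"],  # hypothetical production origin
    allow_methods=["GET", "POST"],
    allow_headers=["Content-Type"],
)
```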