This directory contains comprehensive examples demonstrating TanStack AI across multiple languages and frameworks.
Choose an example based on your use case:
- Want a full-stack TypeScript app? → TanStack Chat (ts-react-chat)
- Need a vanilla JS frontend? → Vanilla Chat
- Building a Python backend? → Python FastAPI Server
- Building a PHP backend? → PHP Slim Framework Server
- Building a multi-user TypeScript chat app? → Group Chat (ts-group-chat)
A full-featured chat application built with the TanStack ecosystem.
Tech Stack:
- TanStack Start (full-stack React framework)
- TanStack Router (type-safe routing)
- TanStack Store (state management)
- `@tanstack/ai` (AI backend)
- `@tanstack/ai-react` (React hooks)
- `@tanstack/ai-client` (headless client)
Features:
- ✅ Real-time streaming with OpenAI GPT-4o
- ✅ Automatic tool execution loop
- ✅ Rich markdown rendering
- ✅ Conversation management
- ✅ Modern UI with Tailwind CSS
Getting Started:
```bash
cd examples/ts-react-chat
pnpm install
cp env.example .env
# Edit .env and add your OPENAI_API_KEY
pnpm start
```

📖 Full Documentation
A real-time multi-user chat application with AI integration, demonstrating WebSocket-based communication and TanStack AI.
Tech Stack:
- TanStack Start (full-stack React framework)
- TanStack Router (type-safe routing)
- Cap'n Web RPC (bidirectional WebSocket RPC)
- `@tanstack/ai` (AI backend)
- `@tanstack/ai-anthropic` (Claude adapter)
- `@tanstack/ai-client` (headless client)
- `@tanstack/ai-react` (React hooks)
Features:
- ✅ Real-time multi-user chat with WebSocket
- ✅ Online presence tracking
- ✅ AI assistant (Claude) integration with queuing
- ✅ Message broadcasting to all users
- ✅ Modern chat UI (iMessage-style)
- ✅ Username-based authentication (no registration)
Getting Started:
```bash
cd examples/ts-group-chat
pnpm install
cp .env.example .env
# Edit .env and add your ANTHROPIC_API_KEY
pnpm dev
```

Open http://localhost:4000 in multiple browser tabs to test multi-user functionality.
Key Concepts:
- WebSocket RPC: Uses Cap'n Web RPC for type-safe bidirectional communication
- AI Queuing: Claude requests are queued and processed sequentially
- Real-time Updates: Messages and online users update in real-time
- Message Broadcasting: Server broadcasts messages to all connected clients
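The AI queuing concept above can be sketched as a minimal promise chain. This is illustrative only: the real example queues Claude requests behind Cap'n Web RPC, and the `AIQueue` class and `enqueue` method here are made-up names for the sketch, not part of any TanStack package.

```typescript
// Minimal sketch of sequential AI request queuing (illustrative only;
// class and method names are invented for this sketch).
class AIQueue {
  private tail: Promise<void> = Promise.resolve()

  // Each task starts only after every previously enqueued task settles.
  enqueue<T>(task: () => Promise<T>): Promise<T> {
    const result = this.tail.then(task)
    // Keep the chain alive even if a task rejects.
    this.tail = result.then(
      () => undefined,
      () => undefined,
    )
    return result
  }
}

const queue = new AIQueue()
const order: number[] = []

// Two "AI requests" enqueued concurrently still run one at a time.
await Promise.all([
  queue.enqueue(async () => {
    order.push(1)
  }),
  queue.enqueue(async () => {
    order.push(2)
  }),
])
// order is now [1, 2]: requests were processed strictly in sequence
```

Chaining on a single `tail` promise is enough to serialize work without an explicit lock, which keeps the AI responses from interleaving across users.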
📖 Full Documentation
A framework-free chat application using pure JavaScript and @tanstack/ai-client. Works with both PHP and Python backends.
Tech Stack:
- Vanilla JavaScript (no frameworks!)
- `@tanstack/ai-client` (headless client)
- Vite (dev server)
- Compatible with PHP Slim or Python FastAPI backends
Features:
- ✅ Pure vanilla JavaScript
- ✅ Real-time streaming messages
- ✅ Beautiful, responsive UI
- ✅ No framework dependencies
- ✅ Works with multiple backend languages
Getting Started:
Option 1: With Python Backend

```bash
# Start the Python backend first
cd examples/python-fastapi
python anthropic-server.py

# Then start the frontend
cd examples/vanilla-chat
pnpm install
pnpm start
```

Option 2: With PHP Backend

```bash
# Start the PHP backend and UI together
cd examples/php-slim
pnpm install
composer install
cp env.example .env
# Edit .env and add your ANTHROPIC_API_KEY
pnpm start
```

Open http://localhost:3001 (the UI); it connects to the backend on port 8000.
📖 Full Documentation
A FastAPI server that streams AI responses in Server-Sent Events (SSE) format, compatible with TanStack AI clients.
Features:
- ✅ FastAPI with SSE streaming
- ✅ Converts Anthropic/OpenAI events to `StreamChunk` format
- ✅ Compatible with `@tanstack/ai-client`
- ✅ Tool call support
- ✅ Type-safe with Pydantic
Getting Started:
```bash
cd examples/python-fastapi

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Set up environment
cp env.example .env
# Edit .env and add your ANTHROPIC_API_KEY or OPENAI_API_KEY

# Run the server
python anthropic-server.py  # or openai-server.py
```

API Endpoints:
- `POST /chat` - Stream chat responses in SSE format
- `GET /health` - Health check
Usage with TypeScript Client:
```typescript
import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client'

const client = new ChatClient({
  connection: fetchServerSentEvents('http://localhost:8000/chat'),
})

await client.sendMessage('Hello!')
```

📖 Full Documentation
A PHP Slim Framework server that streams AI responses in SSE format, with support for both Anthropic and OpenAI.
Features:
- ✅ Slim Framework with SSE streaming
- ✅ Converts Anthropic/OpenAI events to `StreamChunk` format
- ✅ Compatible with `@tanstack/ai-client`
- ✅ Tool call support
- ✅ PHP 8.1+ with type safety
Getting Started:
```bash
cd examples/php-slim

# Install dependencies
composer install

# Set up environment
cp env.example .env
# Edit .env and add your ANTHROPIC_API_KEY and/or OPENAI_API_KEY

# Run the server
composer start-anthropic  # Runs on port 8000
# or
composer start-openai  # Runs on port 8001
```

API Endpoints:
- `POST /chat` - Stream chat responses in SSE format
- `GET /health` - Health check
Usage with TypeScript Client:
```typescript
import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client'

const client = new ChatClient({
  connection: fetchServerSentEvents('http://localhost:8000/chat'),
})

await client.sendMessage('Hello!')
```

📖 Full Documentation
Use TanStack AI end-to-end in TypeScript:
```
Frontend (React)
  ↓ (useChat hook)
@tanstack/ai-react
  ↓ (ChatClient)
@tanstack/ai-client
  ↓ (SSE/HTTP)
Backend (TanStack Start API Route)
  ↓ (chat() function)
@tanstack/ai
  ↓ (adapter)
AI Provider (OpenAI/Anthropic/etc.)
```
Example: TanStack Chat (ts-react-chat)
Use Python or PHP for the backend, TypeScript for the frontend:
```
Frontend (Vanilla JS/React/Vue/etc.)
  ↓ (ChatClient)
@tanstack/ai-client
  ↓ (SSE/HTTP)
Backend (Python FastAPI or PHP Slim)
  ↓ (tanstack-ai or tanstack/ai)
Stream Conversion & Message Formatting
  ↓ (provider SDK)
AI Provider (OpenAI/Anthropic/etc.)
```
Examples:
- Python FastAPI + Vanilla Chat
- PHP Slim + Vanilla Chat
- PHP Slim + any frontend with `@tanstack/ai-client`
All examples use SSE for real-time streaming:
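On the wire, each chunk travels as one SSE `data:` event: a `data:` line carrying a JSON payload, terminated by a blank line. A minimal sketch of that framing follows; the chunk fields shown are simplified assumptions, not the exact `StreamChunk` schema.

```typescript
// Hedged sketch of SSE framing. The chunk fields below are assumed
// for illustration, not the exact StreamChunk schema.
interface StreamChunkLike {
  type: string
  content?: string
}

// One SSE event: a "data:" line plus a blank-line terminator.
function formatSSEChunk(chunk: StreamChunkLike): string {
  return `data: ${JSON.stringify(chunk)}\n\n`
}

const wire = formatSSEChunk({ type: 'content', content: 'Hello' })
// wire === 'data: {"type":"content","content":"Hello"}\n\n'
```

Because the framing is this simple, any backend language can produce it, which is what makes the Python and PHP servers below interchangeable behind the same client.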
Backend (TypeScript):
import { chat, toServerSentEventsResponse } from '@tanstack/ai'
import { openaiText } from '@tanstack/ai-openai'
const stream = chat({
adapter: openaiText(),
model: 'gpt-4o',
messages,
})
return toServerSentEventsResponse(stream)Backend (Python):
```python
from tanstack_ai import StreamChunkConverter, format_sse_chunk

async for event in anthropic_stream:
    chunks = await converter.convert_event(event)
    for chunk in chunks:
        yield format_sse_chunk(chunk)
```

Backend (PHP):
```php
use TanStack\AI\StreamChunkConverter;
use TanStack\AI\SSEFormatter;

foreach ($anthropicStream as $event) {
    $chunks = $converter->convertEvent($event);
    foreach ($chunks as $chunk) {
        echo SSEFormatter::formatChunk($chunk);
    }
}
```

Frontend:
```typescript
import { ChatClient, fetchServerSentEvents } from '@tanstack/ai-client'

const client = new ChatClient({
  connection: fetchServerSentEvents('/api/chat'),
})
```

The TypeScript backend (`@tanstack/ai`) automatically handles tool execution:
```typescript
import { chat, toolDefinition } from '@tanstack/ai'
import { z } from 'zod'

// Step 1: Define the tool schema
const weatherToolDef = toolDefinition({
  name: 'getWeather',
  description: 'Get weather for a location',
  inputSchema: z.object({
    location: z.string().describe('The city and state, e.g. San Francisco, CA'),
  }),
  outputSchema: z.object({
    temp: z.number(),
    condition: z.string(),
  }),
})

// Step 2: Create server implementation
const weatherTool = weatherToolDef.server(async ({ location }) => {
  // This is called automatically by the SDK
  return { temp: 72, condition: 'sunny' }
})

const stream = chat({
  adapter: openaiText(),
  model: 'gpt-4o',
  messages,
  tools: [weatherTool], // SDK executes these automatically
})
```

Clients receive:
- `content` chunks - text from the model
- `tool_call` chunks - when the model calls a tool
- `tool_result` chunks - results from tool execution
- `done` chunk - conversation complete
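A client can fold these chunk types into UI state with a small reducer. The sketch below uses simplified, assumed chunk shapes (not the exact `StreamChunk` types from `@tanstack/ai-client`) to show the general pattern:

```typescript
// Hedged sketch: fold a chunk stream into display state.
// Chunk shapes are simplified assumptions, not the real StreamChunk types.
type Chunk =
  | { type: 'content'; content: string }
  | { type: 'tool_call'; name: string }
  | { type: 'tool_result'; name: string; output: unknown }
  | { type: 'done' }

interface ChatState {
  text: string
  toolCalls: string[]
  done: boolean
}

function applyChunk(state: ChatState, chunk: Chunk): ChatState {
  switch (chunk.type) {
    case 'content':
      // Append streamed text as it arrives
      return { ...state, text: state.text + chunk.content }
    case 'tool_call':
      return { ...state, toolCalls: [...state.toolCalls, chunk.name] }
    case 'tool_result':
      return state // a real UI might render the output next to the call
    case 'done':
      return { ...state, done: true }
  }
}

const chunks: Chunk[] = [
  { type: 'tool_call', name: 'getWeather' },
  { type: 'tool_result', name: 'getWeather', output: { temp: 72 } },
  { type: 'content', content: 'It is ' },
  { type: 'content', content: 'sunny.' },
  { type: 'done' },
]

const final = chunks.reduce(applyChunk, { text: '', toolCalls: [], done: false })
// final.text === 'It is sunny.', final.toolCalls === ['getWeather'], final.done === true
```

In the React example this folding is done for you by the `useChat` hook; the reducer above just makes the chunk lifecycle explicit.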
You can run backend and frontend examples together:
```bash
# Option 1: Python backend + Vanilla Chat frontend
# Terminal 1: Start Python backend
cd examples/python-fastapi
python anthropic-server.py

# Terminal 2: Start vanilla frontend
cd examples/vanilla-chat
pnpm start

# Option 2: PHP backend + Vanilla Chat frontend (runs together)
cd examples/php-slim
pnpm start  # Starts both PHP server and vanilla-chat UI

# Option 3: Full-stack TypeScript
cd examples/ts-react-chat
pnpm start
```

Each example has an `env.example` file. Copy it to `.env` and add your API keys:
```bash
# TypeScript examples
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...

# Python examples
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...

# PHP examples
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
```

TypeScript:

```bash
pnpm build
```

Python:

```bash
# Use a production ASGI server
uvicorn anthropic-server:app --host 0.0.0.0 --port 8000
```

PHP:

```bash
# Use a production web server (Apache, Nginx, etc.)
# See php-slim/README.md for deployment details
```

When adding new examples:
- Create a README.md with setup instructions
- Add an env.example file with required environment variables
- Document the tech stack and key features
- Include usage examples with code snippets
- Update this README to list your example
- 📖 Main README - Project overview
- 📖 Documentation - Comprehensive guides
- 📖 TypeScript Packages - Core libraries
- 📖 Python Package - Python utilities
- 📖 PHP Package - PHP utilities
Built with ❤️ by the TanStack community