Increasing demand for Large Language Model (LLM) services imposes substantial deployment and computation costs on providers. LLM routing offers a cost-efficient solution by directing each query to the most suitable LLM based on model and query features. However, existing work primarily focuses on offline scenarios and struggles to adapt to online settings with high query volume and constrained token budgets. In this work, we introduce the first training-free algorithm for online routing scenarios. Our algorithm leverages approximate nearest neighbor search to efficiently estimate query features and performs a one-time optimization over a small set of initial queries to learn a routing strategy that guides future routing. We provide theoretical guarantees demonstrating that our algorithm achieves a competitive ratio of
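The core routing idea described above can be sketched in a few lines. This is an illustrative simplification, not the paper's algorithm: it uses brute-force nearest neighbor search (the paper uses approximate nearest neighbor search) and a greedy budget check (the paper learns a routing strategy via a one-time optimization); all names and shapes here are our own.

```python
import numpy as np

def route(query_emb, hist_embs, hist_perf, hist_cost, remaining_budget, k=5):
    """Route one query to a model index, estimating features from history.

    hist_embs: (E, d) embeddings of historical queries
    hist_perf: (E, M) per-model performance scores in [0, 1]
    hist_cost: (E, M) per-model inference costs
    """
    # Find the k historical queries closest to the new query in embedding space.
    dists = np.linalg.norm(hist_embs - query_emb, axis=1)
    nn = np.argsort(dists)[:k]
    # Estimate each model's performance and cost on this query from its neighbors.
    est_perf = hist_perf[nn].mean(axis=0)
    est_cost = hist_cost[nn].mean(axis=0)
    # Restrict to models whose estimated cost fits the remaining budget.
    feasible = est_cost <= remaining_budget
    if not feasible.any():
        return int(np.argmin(est_cost))  # fall back to the cheapest model
    candidates = np.flatnonzero(feasible)
    return int(candidates[np.argmax(est_perf[candidates])])
```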
- [2025.8.15] 🚀 Our code is released!
Clone this repo and install the dependencies with:

```shell
git clone https://github.com/fzwark/PORT.git
cd PORT
pip install -r requirements.txt
```

To reproduce the main results on RouterBench, a large-scale benchmark of 30k+ prompts and 11 LLMs designed to evaluate LLM routing, run:
```shell
python test.py --ops 1 2 3 4 5 6 7 8 --N=10000 --M=11 --E=26497 --alpha=0.0001 --eps=0.025 --budget=1 --split=weighted --embed=bge-
```

- You can specify the routing methods from our paper using the `ops` option.
- Use `N` to set the size of the test query volume. You can also adjust `E` to control the number of historical data points used, and `M` to control the number of deployed LLMs.
- The parameters of our online routing algorithm can be adjusted via `alpha` (a control parameter) and `eps` (the fraction of initial queries used for one-time optimization).
- The `split` option determines how the budget is allocated across deployed LLMs. We provide six splitting strategies: `"weighted"`, `"extreme"`, `"random"`, `"uniform"`, `"cost"`, and `"perf"`.
- The `embed` option specifies the embedding model to be used.
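To illustrate what a budget split does, here is a hedged sketch of how a total budget might be divided across `M` deployed LLMs. This is our own simplification, not the repo's implementation: the `cost` and `perf` strategies from the list above are folded into a single generic `weighted` case (pass cost- or performance-based weights), and all function and parameter names are hypothetical.

```python
import numpy as np

def split_budget(total, weights, strategy="uniform", rng=None):
    """Return a per-model budget allocation summing to `total`.

    weights: length-M array of per-model scores (e.g., cost- or
    performance-based), used only by the "weighted" strategy.
    """
    w = np.asarray(weights, dtype=float)
    M = len(w)
    if strategy == "uniform":
        alloc = np.full(M, 1.0 / M)        # equal share for every model
    elif strategy == "weighted":
        alloc = w / w.sum()                # proportional to the weights
    elif strategy == "extreme":
        alloc = np.zeros(M)                # entire budget to the top model
        alloc[np.argmax(w)] = 1.0
    elif strategy == "random":
        rng = rng or np.random.default_rng(0)
        alloc = rng.dirichlet(np.ones(M))  # random point on the simplex
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return total * alloc
```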
To add a new benchmark to this framework, the dataset must follow a unified schema. Each sample represents a single query and should include the following fields for every supported LLM `m`:

- `index`: A unique identifier for the query.
- `prompt`: The input text (string).
- `m`: The model's performance score (value in [0, 1]).
- `m|total_cost`: The model's total inference cost (float) for the query.
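A hypothetical record in this schema might look as follows. The field names come from the list above; the model name `"gpt-4"` and all values are made up for illustration, and the `validate` helper is our own, not part of the repo.

```python
# One sample in the unified schema, for a single supported model "gpt-4".
sample = {
    "index": 0,
    "prompt": "What is the capital of France?",
    "gpt-4": 0.92,                # performance score for model m, in [0, 1]
    "gpt-4|total_cost": 0.0031,   # total inference cost for model m (float)
}

def validate(record, models):
    # Minimal check that a record carries all required fields for each model.
    assert isinstance(record["index"], int)
    assert isinstance(record["prompt"], str)
    for m in models:
        assert 0.0 <= record[m] <= 1.0
        assert isinstance(record[m + "|total_cost"], float)

validate(sample, ["gpt-4"])
```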
To train the models used in the model-based baselines, run:

```shell
python train.py --dataset routerbench
```

Extensive experiments on three benchmarks demonstrate that our algorithm consistently outperforms all eight baselines in performance, cost efficiency, and throughput. As shown in the table below, our algorithm outperforms all baselines on average by 3.55× in performance, 1.85× in cost efficiency, and nearly 4.25× in throughput.
We vary the query volume from 4,000 to 12,000 and observe that our algorithm consistently outperforms all baselines, maintaining top performance and robustness as the load increases.
Our method consistently outperforms training-free baselines under diverse LLM deployment configurations (ranging from 2 LLMs to 16 LLMs), highlighting its robustness and adaptability.
If you use this codebase, please consider citing our paper:
```bibtex
@inproceedings{wu2025efficient,
  title={Efficient Training-Free Online Routing for High-Volume Multi-{LLM} Serving},
  author={Fangzhou Wu and Sandeep Silwal},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025},
  url={https://openreview.net/forum?id=d4mZyZB5I9}
}
```


