olmocr · GitHub Topics · GitHub

olmocr

Here is 1 public repository matching this topic...

Higher-performance OpenAI LLM service than vLLM serve: a pure C++ high-performance OpenAI LLM service implemented with GRPS+TensorRT-LLM+Tokenizers.cpp, supporting chat and function calls, AI agents, distributed multi-GPU inference, multimodal capabilities, and a Gradio chat interface.

  • Updated May 14, 2025
  • Python
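Because the service above exposes an OpenAI-compatible API, any standard HTTP client can talk to it. The sketch below shows one way a chat request might look, assuming the service is reachable at http://localhost:8080/v1 and serves a model named "your-model-name"; both values are placeholders, not taken from the repository, so check its documentation for the actual address, port, and model identifier.

# Minimal sketch: querying an OpenAI-compatible /v1/chat/completions endpoint.
# The base URL and model name are assumptions for illustration only.
import requests

BASE_URL = "http://localhost:8080/v1"   # placeholder service address
MODEL = "your-model-name"               # placeholder model identifier

payload = {
    "model": MODEL,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what OCR is in one sentence."},
    ],
    "temperature": 0.2,
}

# Send the chat request and print the assistant's reply from the
# standard OpenAI-style response structure.
resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])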

Improve this page

Add a description, image, and links to the olmocr topic page so that developers can more easily learn about it.

Add this topic to your repo

To associate your repository with the olmocr topic, visit your repo's landing page and select "manage topics."
