NVIDIA Triton Inference Server Organization

NVIDIA Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs.

This top-level GitHub organization hosts repositories for officially supported backends, including TensorRT, TensorFlow, PyTorch, Python, ONNX Runtime, and OpenVINO; a minimal Python-backend sketch appears after the list below. The organization also hosts several popular Triton tools, including:

  • Model Analyzer: A tool to analyze the runtime performance of a model and provide an optimized model configuration for Triton Inference Server.

  • Model Navigator: A tool that automates moving a model from its source format to an optimal format and configuration for deployment on Triton Inference Server.
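
As a concrete illustration of how a backend hosts model logic, below is a minimal sketch of a Python-backend model file (model.py). The tensor names INPUT0/OUTPUT0 and the pass-through computation are placeholder assumptions for illustration; a real model declares its own inputs and outputs in its config.pbtxt.

```python
# Minimal sketch of a Python-backend model (model.py).
# Assumes a config.pbtxt that declares one input "INPUT0" and one output
# "OUTPUT0"; both names and the echo computation are illustrative only.
# triton_python_backend_utils is provided by the Triton runtime itself.
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def execute(self, requests):
        # Triton calls execute() with a batch of requests; return exactly
        # one InferenceResponse per request, in the same order.
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out0 = pb_utils.Tensor("OUTPUT0", in0.as_numpy())
            responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses
```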

Getting Started

To learn about NVIDIA Triton Inference Server, refer to the Triton developer page and read our Quickstart Guide. Official Triton Docker containers are available from NVIDIA NGC.
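
As a complement to the Quickstart Guide, here is a minimal sketch of talking to a locally running Triton server with the Python HTTP client (pip install tritonclient[http]). The model name "my_model" and the tensor names and shapes are assumptions for illustration, not values fixed by Triton.

```python
# Minimal HTTP client sketch; assumes a Triton server on localhost:8000
# serving a hypothetical model "my_model" with one [1, 16] FP32 input
# "INPUT0" and one output "OUTPUT0".
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
assert client.is_server_live(), "Triton server is not live"

# Build the request payload from a numpy array.
data = np.random.rand(1, 16).astype(np.float32)
inputs = [httpclient.InferInput("INPUT0", list(data.shape), "FP32")]
inputs[0].set_data_from_numpy(data)

# Run inference and read the output tensor back as numpy.
result = client.infer(model_name="my_model", inputs=inputs)
print(result.as_numpy("OUTPUT0"))
```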

Product Documentation

User documentation on Triton features, APIs, and architecture is located in the server documents on GitHub. A table of contents for the user documentation is located in the server README file.

Release notes, the support matrix, and license information are available in the NVIDIA Triton Inference Server Documentation.

Examples

End-to-end examples for popular models, such as ResNet, BERT, and DLRM, are located on the NVIDIA Deep Learning Examples page on GitHub. Additional generic examples can be found in the server documents.

FAQ

For technical questions about Triton Inference Server, please consult the Triton FAQ Guide. Information about future support and updates for Triton can be found in the Dynamo FAQ Guide.

Feedback

Share feedback or ask questions about NVIDIA Triton Inference Server by filing a GitHub issue.

Pinned Repositories

  1. server (Public)

    The Triton Inference Server provides an optimized cloud and edge inferencing solution.

    Python · 10.4k stars · 1.7k forks

  2. core (Public)

    The core library and APIs implementing the Triton Inference Server.

    C++ · 170 stars · 131 forks

  3. backend (Public)

    Common source, scripts and utilities for creating Triton backends.

    C++ · 370 stars · 103 forks

  4. client (Public)

    Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala. (A Python gRPC usage sketch follows this list.)

    Python · 686 stars · 252 forks

  5. model_analyzer (Public)

    Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server.

    Python · 507 stars · 85 forks

  6. model_navigator (Public)

    Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs.

    Python · 218 stars · 28 forks
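
As referenced in item 4 above, here is the same request pattern over gRPC using the Python client library from the client repository (pip install tritonclient[grpc]). It assumes the default gRPC port 8001 and the same hypothetical "my_model" as in the earlier HTTP example; the model and tensor names are placeholders, not part of Triton's API.

```python
# Minimal gRPC client sketch; assumes a Triton server on localhost:8001
# serving the same hypothetical "my_model" as in the HTTP example above.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="localhost:8001")
assert client.is_model_ready("my_model"), "model is not ready"

data = np.random.rand(1, 16).astype(np.float32)
inputs = [grpcclient.InferInput("INPUT0", list(data.shape), "FP32")]
inputs[0].set_data_from_numpy(data)

result = client.infer(model_name="my_model", inputs=inputs)
print(result.as_numpy("OUTPUT0"))
```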

Repositories

Showing 10 of 36 repositories
  • tensorrtllm_backend Public

    The Triton TensorRT-LLM Backend

    927 stars · Apache-2.0 license · 136 forks · 319 open issues (1 needs help) · 25 open PRs · Updated Mar 11, 2026
  • server Public

    The Triton Inference Server provides an optimized cloud and edge inferencing solution.

    Python · 10,419 stars · BSD-3-Clause license · 1,729 forks · 780 open issues (3 need help) · 97 open PRs · Updated Mar 11, 2026
  • common Public

    Common source, scripts and utilities shared across all Triton repositories.

    C++ · 80 stars · BSD-3-Clause license · 79 forks · 0 open issues · 9 open PRs · Updated Mar 11, 2026
  • TensorRT-LLM Public Forked from NVIDIA/TensorRT-LLM

    TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT LLM also contains components to create Python and C++ runtimes that orchestrate the inference execution in a performant way.

    Python · 1 star · 2,181 forks · 0 open issues · 0 open PRs · Updated Mar 11, 2026
  • core Public

    The core library and APIs implementing the Triton Inference Server.

    C++ · 170 stars · BSD-3-Clause license · 131 forks · 0 open issues · 26 open PRs · Updated Mar 11, 2026
  • vllm_backend Public
    Python · 334 stars · BSD-3-Clause license · 41 forks · 0 open issues · 9 open PRs · Updated Mar 10, 2026
  • tutorials Public

    This repository contains tutorials and examples for Triton Inference Server.

    Python · 824 stars · BSD-3-Clause license · 143 forks · 8 open issues · 18 open PRs · Updated Mar 10, 2026
  • triton_cli Public

    Triton CLI is an open source command line interface that enables users to create, deploy, and profile models served by the Triton Inference Server.

    Python · 74 stars · 5 forks · 3 open issues · 3 open PRs · Updated Mar 10, 2026
  • third_party Public

    Third-party source packages that are modified for use in Triton.

    C · 6 stars · BSD-3-Clause license · 66 forks · 0 open issues · 4 open PRs · Updated Mar 10, 2026
  • tensorrt_backend Public

    The Triton backend for TensorRT.

    C++ · 87 stars · BSD-3-Clause license · 37 forks · 0 open issues · 1 open PR · Updated Mar 11, 2026