vllm/docs/source/deployment/frameworks/triton.md
2025-01-07 11:20:01 +08:00
(deployment-triton)=

# NVIDIA Triton

The Triton Inference Server hosts a tutorial demonstrating how to quickly deploy a simple `facebook/opt-125m` model using vLLM. Please see *Deploying a vLLM model in Triton* for more details.
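
As a rough orientation before following the tutorial: Triton's vLLM backend loads engine settings from a `model.json` file placed inside a model repository (e.g. `model_repository/vllm_model/1/model.json`, alongside a `config.pbtxt`). The sketch below shows what such a file might look like; the exact field names are vLLM engine arguments and the values here (model name, memory fraction) are illustrative assumptions, not a definitive configuration — consult the linked tutorial for the authoritative version.

```json
{
    "model": "facebook/opt-125m",
    "disable_log_requests": true,
    "gpu_memory_utilization": 0.5
}
```

Keys in this file are passed through to vLLM's engine-argument parsing, so any option accepted by the vLLM engine can in principle appear here.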