(deployment-bentoml)=

# BentoML

BentoML allows you to deploy a large language model (LLM) server with vLLM as the backend, which exposes OpenAI-compatible endpoints. You can serve the model locally or containerize it as an OCI-compliant image and deploy it on Kubernetes.
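As a rough illustration, the sketch below shows one way a BentoML service can wrap vLLM's async engine. The model ID, service options, and the `generate` endpoint here are illustrative assumptions, not the exact setup from the BentoML tutorial, which additionally wires vLLM behind OpenAI-compatible routes.

```python
# Minimal sketch of a BentoML service backed by vLLM (illustrative only).
import uuid

import bentoml
from vllm import AsyncEngineArgs, AsyncLLMEngine, SamplingParams


@bentoml.service(resources={"gpu": 1}, traffic={"timeout": 300})
class VLLMService:
    def __init__(self) -> None:
        # Build the async vLLM engine that serves requests.
        engine_args = AsyncEngineArgs(model="meta-llama/Llama-3.1-8B-Instruct")
        self.engine = AsyncLLMEngine.from_engine_args(engine_args)

    @bentoml.api
    async def generate(self, prompt: str, max_tokens: int = 256) -> str:
        # Stream outputs from vLLM and return the final completion text.
        params = SamplingParams(max_tokens=max_tokens)
        request_id = uuid.uuid4().hex
        final = None
        async for output in self.engine.generate(prompt, params, request_id):
            final = output
        return final.outputs[0].text if final else ""
```

A service like this can be started locally with `bentoml serve` and then queried over HTTP, or containerized and deployed as described above.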

For details, see the tutorial *vLLM inference* in the BentoML documentation.