---
title: BentoML
---
[](){ #deployment-bentoml }

BentoML allows you to deploy a large language model (LLM) server that uses vLLM as the backend and exposes OpenAI-compatible endpoints. You can serve the model locally, or containerize it as an OCI-compliant image and deploy it on Kubernetes.
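Because the served endpoints follow the OpenAI API, any OpenAI-compatible client can call them. As a minimal sketch (the base URL, port `3000`, and model name below are assumptions for illustration, not values prescribed by BentoML or vLLM), a chat completion request can be built with only the standard library:

```python
import json
import urllib.request

# Assumed local endpoint for the BentoML-served vLLM backend.
BASE_URL = "http://localhost:3000/v1"


def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the served endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


# Hypothetical model name; use whatever model your Bento actually serves.
req = build_chat_request("my-model", "Hello!")
print(req.full_url)  # http://localhost:3000/v1/chat/completions
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) requires the server to be running; the official `openai` Python client works the same way if you point its `base_url` at the served endpoint.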

For details, see the tutorial *vLLM inference* in the BentoML documentation.