[Doc] Fix Typo in Doc (#6392)
Co-authored-by: Saliya Ekanayake <esaliya@d-matrix.ai>
@@ -38,7 +38,7 @@ vLLM is flexible and easy to use with:
 * Seamless integration with popular HuggingFace models
 * High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
-* Tensor parallelism and pipieline parallelism support for distributed inference
+* Tensor parallelism and pipeline parallelism support for distributed inference
 * Streaming outputs
 * OpenAI-compatible API server
 * Support NVIDIA GPUs and AMD GPUs