Correct capitalisation: VLLM -> vLLM (#14562)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Author: Harry Mellor
Date: 2025-03-10 17:36:21 +01:00
Committed by: GitHub
Commit: 3b352a2f92 (parent: dea985aef0)

18 changed files with 25 additions and 25 deletions

@@ -37,7 +37,7 @@ you may contact the following individuals:
## Slack Discussion
-You may use the `#security` channel in the [VLLM Slack](https://slack.vllm.ai)
+You may use the `#security` channel in the [vLLM Slack](https://slack.vllm.ai)
to discuss security-related topics. However, please do not disclose any
vulnerabilities in this channel. If you need to report a vulnerability, please
use the GitHub security advisory system or contact a VMT member privately.

@@ -509,7 +509,7 @@ cache to complete other requests), we swap kv cache blocks out to CPU
memory. This is also known as "KV cache offloading" and is configured
with `--swap-space` and `--preemption-mode`.
-In v0, [VLLM has long supported beam
+In v0, [vLLM has long supported beam
search](gh-issue:6226). The
SequenceGroup encapsulated the idea of N Sequences which
all shared the same prompt kv blocks. This enabled KV cache block
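
For readers landing on this hunk, the flags it mentions can also be exercised through vLLM's offline API. A minimal sketch, assuming a v0-era engine where `swap_space` and `preemption_mode` are accepted as engine arguments (the model name is a placeholder):

```python
from vllm import LLM, SamplingParams

# Sketch: KV cache offloading ("swap" preemption) in vLLM v0.
# These keyword arguments mirror the --swap-space and --preemption-mode
# CLI flags mentioned in the hunk above.
llm = LLM(
    model="facebook/opt-125m",  # placeholder model
    swap_space=4,               # GiB of CPU memory for swapped-out KV cache blocks
    preemption_mode="swap",     # swap preempted blocks to CPU rather than recompute
)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```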
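
The same hunk cites vLLM's long-standing beam search support; a minimal usage sketch, assuming the `BeamSearchParams`/`LLM.beam_search` API of recent releases (older v0 versions exposed beam search through `SamplingParams` instead):

```python
from vllm import LLM
from vllm.sampling_params import BeamSearchParams

llm = LLM(model="facebook/opt-125m")  # placeholder model

# All beams share the prompt's KV blocks -- the sharing that
# SequenceGroup encapsulated in the v0 design described above.
params = BeamSearchParams(beam_width=4, max_tokens=32)
outputs = llm.beam_search([{"prompt": "The capital of France is"}], params)
for seq in outputs[0].sequences:
    print(seq.text)
```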