Compare commits
6 Commits
low_latenc...disable-sd

| Author | SHA1 | Date |
|---|---|---|
| | b73fdb927a | |
| | a0304dc504 | |
| | c7941cca18 | |
| | b6dd32aa07 | |
| | f94886946e | |
| | 72dfe4c74f | |

docs/source/deployment/security.md (new file, 58 lines)
@@ -0,0 +1,58 @@
# Security Guide

## Inter-Node Communication

All communications between nodes in a multi-node vLLM deployment are **insecure by default** and must be protected by placing the nodes on an isolated network. This includes:

1. PyTorch Distributed communications
2. KV cache transfer communications
3. Tensor, Pipeline, and Data parallel communications

### Configuration Options for Inter-Node Communications

The following options control inter-node communications in vLLM (see the sketch after this list):

1. **Environment Variables:**
    - `VLLM_HOST_IP`: Sets the IP address for vLLM processes to communicate on

2. **KV Cache Transfer Configuration:**
    - `--kv-ip`: The IP address for KV cache transfer communications (default: 127.0.0.1)
    - `--kv-port`: The port for KV cache transfer communications (default: 14579)

3. **Data Parallel Configuration:**
    - `data_parallel_master_ip`: IP of the data parallel master (default: 127.0.0.1)
    - `data_parallel_master_port`: Port of the data parallel master (default: 29500)
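As a rough illustration of how these options fit together, here is a minimal launch sketch. The private address and the choice of the OpenAI-compatible entrypoint are assumptions for the example, not requirements of this guide:

```python
import os
import subprocess

# Assumption: 10.0.1.2 is this node's address on the isolated network
# segment recommended later in this guide.
private_ip = "10.0.1.2"

# Pin vLLM's inter-node traffic to the private interface instead of the
# 127.0.0.1 defaults listed above.
env = dict(os.environ, VLLM_HOST_IP=private_ip)

subprocess.run(
    [
        "python", "-m", "vllm.entrypoints.openai.api_server",
        "--kv-ip", private_ip,   # KV cache transfer address
        "--kv-port", "14579",    # KV cache transfer port (the default)
    ],
    env=env,
    check=True,
)
```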
### Notes on PyTorch Distributed

vLLM uses PyTorch's distributed features for some inter-node communication. For detailed information about PyTorch Distributed security considerations, please refer to the [PyTorch Security Guide](https://github.com/pytorch/pytorch/security/policy#using-distributed-features).

Key points from the PyTorch security guide:

- PyTorch Distributed features are intended for internal communication only
- They are not built for use in untrusted environments or networks
- No authorization protocol is included for performance reasons
- Messages are sent unencrypted
- Connections are accepted from anywhere without checks

### Security Recommendations

1. **Network Isolation:**
    - Deploy vLLM nodes on a dedicated, isolated network
    - Use network segmentation to prevent unauthorized access
    - Implement appropriate firewall rules

2. **Configuration Best Practices:**
    - Always set `VLLM_HOST_IP` to a specific IP address rather than using defaults
    - Configure firewalls to only allow necessary ports between nodes

3. **Access Control:**
    - Restrict physical and network access to the deployment environment
    - Implement proper authentication and authorization for management interfaces
    - Follow the principle of least privilege for all system components

## Reporting Security Vulnerabilities

If you believe you have found a security vulnerability in vLLM, please report it following the project's security policy. For more information on how to report security issues and the project's security policy, please see the [vLLM Security Policy](https://github.com/vllm-project/vllm/blob/main/SECURITY.md).
@@ -132,6 +132,7 @@ serving/integrations/index
 :caption: Deployment
 :maxdepth: 1
 
+deployment/security
 deployment/docker
 deployment/k8s
 deployment/nginx
@@ -77,6 +77,10 @@ bash run_cluster.sh \
 
 Then you get a ray cluster of **containers**. Note that you need to keep the shells running these commands alive to hold the cluster. Any shell disconnect will terminate the cluster. In addition, please note that the argument `ip_of_head_node` should be the IP address of the head node, which is accessible by all the worker nodes. The IP addresses of each worker node should be specified in the `VLLM_HOST_IP` environment variable, and should be different for each worker node. Please check the network configuration of your cluster to make sure the nodes can communicate with each other through the specified IP addresses.
 
+:::{warning}
+It is considered best practice to set `VLLM_HOST_IP` to an address on a private network segment for the vLLM cluster. The traffic sent here is not encrypted. The endpoints are also exchanging data in a format that could be exploited to execute arbitrary code should a malicious party gain access to the network. Please ensure that this network is not reachable by any untrusted parties.
+:::
+
 :::{warning}
 Since this is a ray cluster of **containers**, all the following commands should be executed in the **containers**, otherwise you are executing the commands on the host machine, which is not connected to the ray cluster. To enter the container, you can use `docker exec -it node /bin/bash`.
 :::
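Building on the warning above, here is a small sanity check one might run on each node before joining it to the cluster. The private-address policy is this sketch's assumption, not something vLLM itself enforces:

```python
import ipaddress
import os

# Fail fast if VLLM_HOST_IP is unset or not on a private network segment.
# The environment variable name comes from the docs above; requiring a
# private address here is an illustrative policy for this sketch.
host_ip = os.environ.get("VLLM_HOST_IP")
if host_ip is None:
    raise RuntimeError("VLLM_HOST_IP must be set on every node")
if not ipaddress.ip_address(host_ip).is_private:
    raise RuntimeError(
        f"{host_ip} is not private; vLLM inter-node traffic is unencrypted")
```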
@@ -20,15 +20,11 @@ def models_list(*, all: bool = True, keywords: Optional[list[str]] = None):
     ("facebook/opt-125m", {}),
     ("nm-testing/tinyllama-oneshot-w8w8-test-static-shape-change", {
         "dtype": torch.float16,
         "quantization": "compressed-tensors"
     }),
     ("neuralmagic/Llama-3.2-1B-Instruct-FP8-dynamic", {
         "dtype": torch.float16,
         "quantization": "compressed-tensors"
     }),
-    ("neuralmagic/Llama-3.2-1B-Instruct-quantized.w8a8", {
-        "quantization": "compressed-tensors"
-    }),
+    ("neuralmagic/Llama-3.2-1B-Instruct-quantized.w8a8", {}),
     ("meta-llama/Llama-3.2-1B-Instruct", {}),
 ]
@@ -1,14 +1,118 @@
 # SPDX-License-Identifier: Apache-2.0
 
+import json
 from argparse import ArgumentError, ArgumentTypeError
+from contextlib import nullcontext
+from dataclasses import dataclass, field
+from typing import Literal, Optional
 
 import pytest
 
-from vllm.config import PoolerConfig
-from vllm.engine.arg_utils import EngineArgs, nullable_kvs
+from vllm.config import PoolerConfig, config
+from vllm.engine.arg_utils import (EngineArgs, contains_type, get_kwargs,
+                                   get_type, is_not_builtin, is_type,
+                                   nullable_kvs, optional_type)
 from vllm.utils import FlexibleArgumentParser
 
 
+@pytest.mark.parametrize(("type", "value", "expected"), [
+    (int, "42", 42),
+    (int, "None", None),
+    (float, "3.14", 3.14),
+    (float, "None", None),
+    (str, "Hello World!", "Hello World!"),
+    (str, "None", None),
+    (json.loads, '{"foo":1,"bar":2}', {
+        "foo": 1,
+        "bar": 2
+    }),
+    (json.loads, "foo=1,bar=2", {
+        "foo": 1,
+        "bar": 2
+    }),
+    (json.loads, "None", None),
+])
+def test_optional_type(type, value, expected):
+    optional_type_func = optional_type(type)
+    context = nullcontext()
+    if value == "foo=1,bar=2":
+        context = pytest.warns(DeprecationWarning)
+    with context:
+        assert optional_type_func(value) == expected
+
+
+@pytest.mark.parametrize(("type_hint", "type", "expected"), [
+    (int, int, True),
+    (int, float, False),
+    (list[int], list, True),
+    (list[int], tuple, False),
+    (Literal[0, 1], Literal, True),
+])
+def test_is_type(type_hint, type, expected):
+    assert is_type(type_hint, type) == expected
+
+
+@pytest.mark.parametrize(("type_hints", "type", "expected"), [
+    ({float, int}, int, True),
+    ({int, tuple[int]}, int, True),
+    ({int, tuple[int]}, float, False),
+    ({str, Literal["x", "y"]}, Literal, True),
+])
+def test_contains_type(type_hints, type, expected):
+    assert contains_type(type_hints, type) == expected
+
+
+@pytest.mark.parametrize(("type_hints", "type", "expected"), [
+    ({int, float}, int, int),
+    ({int, float}, str, None),
+    ({str, Literal["x", "y"]}, Literal, Literal["x", "y"]),
+])
+def test_get_type(type_hints, type, expected):
+    assert get_type(type_hints, type) == expected
+
+
+@config
+@dataclass
+class DummyConfigClass:
+    regular_bool: bool = True
+    """Regular bool with default True"""
+    optional_bool: Optional[bool] = None
+    """Optional bool with default None"""
+    optional_literal: Optional[Literal["x", "y"]] = None
+    """Optional literal with default None"""
+    tuple_n: tuple[int, ...] = field(default_factory=lambda: (1, 2, 3))
+    """Tuple with default (1, 2, 3)"""
+    tuple_2: tuple[int, int] = field(default_factory=lambda: (1, 2))
+    """Tuple with default (1, 2)"""
+    list_n: list[int] = field(default_factory=lambda: [1, 2, 3])
+    """List with default [1, 2, 3]"""
+
+
+@pytest.mark.parametrize(("type_hint", "expected"), [
+    (int, False),
+    (DummyConfigClass, True),
+])
+def test_is_not_builtin(type_hint, expected):
+    assert is_not_builtin(type_hint) == expected
+
+
+def test_get_kwargs():
+    kwargs = get_kwargs(DummyConfigClass)
+    print(kwargs)
+
+    # bools should not have their type set
+    assert kwargs["regular_bool"].get("type") is None
+    assert kwargs["optional_bool"].get("type") is None
+    # optional literals should have None as a choice
+    assert kwargs["optional_literal"]["choices"] == ["x", "y", "None"]
+    # tuples should have the correct nargs
+    assert kwargs["tuple_n"]["nargs"] == "+"
+    assert kwargs["tuple_2"]["nargs"] == 2
+    # lists should work
+    assert kwargs["list_n"]["type"] is int
+    assert kwargs["list_n"]["nargs"] == "+"
+
+
 @pytest.mark.parametrize(("arg", "expected"), [
     (None, dict()),
     ("image=16", {
@@ -24,6 +24,7 @@ def create_scheduler(
     model: str = "facebook/opt-125m",
     max_num_seqs: int = 16,
     max_num_batched_tokens: int = 8192,
+    max_num_spec_tokens: Optional[int] = None,
     enable_prefix_caching: Optional[bool] = None,
     long_prefill_token_threshold: int = 0,
     disable_chunked_mm_input: bool = False,
@@ -51,6 +52,7 @@ def create_scheduler(
     scheduler_config = SchedulerConfig(
         max_num_seqs=max_num_seqs,
         max_num_batched_tokens=max_num_batched_tokens,
+        max_num_spec_tokens=max_num_spec_tokens,
         max_model_len=max_model_len,
         long_prefill_token_threshold=long_prefill_token_threshold,
         disable_chunked_mm_input=disable_chunked_mm_input,
@ -684,6 +686,59 @@ def test_schedule_concurrent_batches(enable_prefix_caching: Optional[bool],
|
||||
scheduler.update_from_output(scheduler_output1, model_runner_output)
|
||||
|
||||
|
||||
def test_spec_token_budget():
|
||||
"""Test scheduling behavior when spec token buget limits the total
|
||||
number of scheduled tokens."""
|
||||
# Create scheduler with spec_token_budget=5
|
||||
scheduler = create_scheduler(
|
||||
max_num_batched_tokens=100,
|
||||
max_num_spec_tokens=14, # Total spec budget for this test
|
||||
)
|
||||
|
||||
requests = create_requests(
|
||||
num_requests=2,
|
||||
num_tokens=10,
|
||||
)
|
||||
|
||||
spec_tokens = [list(range(10)), list(range(5))]
|
||||
req_ids = []
|
||||
req_to_index = {}
|
||||
for i, request in enumerate(requests):
|
||||
scheduler.add_request(request)
|
||||
req_ids.append(request.request_id)
|
||||
req_to_index[request.request_id] = i
|
||||
output = scheduler.schedule()
|
||||
model_runner_output = ModelRunnerOutput(
|
||||
req_ids=req_ids,
|
||||
req_id_to_index=req_to_index,
|
||||
sampled_token_ids=[[0] for _ in range(len(requests))],
|
||||
spec_token_ids=spec_tokens,
|
||||
logprobs=None,
|
||||
prompt_logprobs_dict={},
|
||||
)
|
||||
scheduler.update_from_output(output, model_runner_output)
|
||||
|
||||
output = scheduler.schedule()
|
||||
request1, request2 = requests
|
||||
# --- Verify request1 ---
|
||||
# num_new_tokens = 11
|
||||
# num_scheduled_spec_tokens = 10
|
||||
# Budget starts at 14: 10 <= 14 → no truncation
|
||||
# num_new_tokens = min(10, 14) → 10
|
||||
assert len(request1.spec_token_ids) == 10 # Not truncated
|
||||
assert output.num_scheduled_tokens[request1.request_id] == 11
|
||||
assert len(output.scheduled_spec_decode_tokens[request1.request_id]) == 10
|
||||
|
||||
# --- Verify request2 ---
|
||||
# Remaining budget after request1: 14 - 10 = 4
|
||||
# num_new_tokens = 6
|
||||
# num_scheduled_spec_tokens = 6-1 = 5 > 4 → truncate to 4
|
||||
# num_new_tokens = min(5, 4) → 4
|
||||
assert len(request2.spec_token_ids) == 4 # Truncated from 5
|
||||
assert output.num_scheduled_tokens[request2.request_id] == 5
|
||||
assert len(output.scheduled_spec_decode_tokens[request2.request_id]) == 4
|
||||
|
||||
|
||||
# Note - these test cases mirror some of those in test_rejection_sampler.py
|
||||
@pytest.mark.parametrize(
|
||||
"spec_tokens,output_tokens,expected",
|
||||
|
||||
@@ -28,6 +28,7 @@ import vllm.envs as envs
 from vllm.compilation.inductor_pass import CallableInductorPass, InductorPass
 from vllm.logger import init_logger
 from vllm.model_executor.layers.quantization import (QUANTIZATION_METHODS,
+                                                     QuantizationMethods,
                                                      get_quantization_config)
 from vllm.model_executor.models import ModelRegistry
 from vllm.platforms import CpuArchEnum, current_platform
@@ -752,9 +753,8 @@ class ModelConfig:
         supported_quantization = QUANTIZATION_METHODS
         optimized_quantization_methods = [
             "fp8", "marlin", "modelopt", "gptq_marlin_24", "gptq_marlin",
-            "awq_marlin", "fbgemm_fp8", "compressed_tensors",
-            "compressed-tensors", "experts_int8", "quark", "nvfp4", "bitblas",
-            "gptq_bitblas"
+            "awq_marlin", "fbgemm_fp8", "compressed-tensors", "experts_int8",
+            "quark", "nvfp4", "bitblas", "gptq_bitblas"
         ]
         if self.quantization is not None:
             self.quantization = self.quantization.lower()
@@ -764,13 +764,47 @@
 
         if quant_cfg is not None:
             quant_method = quant_cfg.get("quant_method", "").lower()
             quant_method = quant_method.replace("compressed_tensors",
                                                 "compressed-tensors")
             quant_cfg["quant_method"] = quant_method
 
+            # Quantization methods which are overrides (i.e. they have a
+            # `override_quantization_method` method) must be checked in order
+            # of preference (this is particularly important for GPTQ).
+            overrides = [
+                "marlin",
+                "bitblas",
+                "gptq_marlin_24",
+                "gptq_marlin",
+                "gptq_bitblas",
+                "awq_marlin",
+                "ipex",
+                "moe_wna16",
+            ]
+            quantization_methods = [
+                q for q in supported_quantization if q not in overrides
+            ]
+            # Any custom overrides will be in quantization_methods so we place
+            # them at the start of the list so custom overrides have preference
+            # over the built in ones.
+            quantization_methods = quantization_methods + overrides
+
             # Detect which checkpoint it is
-            for name in QUANTIZATION_METHODS:
+            for name in quantization_methods:
                 method = get_quantization_config(name)
                 quantization_override = method.override_quantization_method(
                     quant_cfg, self.quantization)
-                if quantization_override:
+                if quantization_override is not None:
+                    # Raise error if the override is not custom (custom would
+                    # be in QUANTIZATION_METHODS but not QuantizationMethods)
+                    # and hasn't been added to the overrides list.
+                    if (name in get_args(QuantizationMethods)
+                            and name not in overrides):
+                        raise ValueError(
+                            f"Quantization method {name} is an override but "
+                            "has not been added to the `overrides` list "
+                            "above. This is necessary to ensure that the "
+                            "overrides are checked in order of preference.")
                     quant_method = quantization_override
                     self.quantization = quantization_override
                     break
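To make the ordering this hunk builds concrete, here is a toy sketch (the method names are an illustrative subset, not vLLM's actual lists): non-override methods keep their original order, built-in overrides go last in their stated preference order, and any custom out-of-tree override methods remain in the first part, so they win over the built-in ones.

```python
# Toy reconstruction of the ordering logic above.
supported = ["fp8", "gptq", "marlin", "gptq_marlin"]
overrides = ["marlin", "gptq_marlin"]

# Non-override methods first, then overrides in preference order.
ordered = [q for q in supported if q not in overrides] + overrides
assert ordered == ["fp8", "gptq", "marlin", "gptq_marlin"]
```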
@@ -1807,6 +1841,9 @@ class SchedulerConfig:
     is primarily set in `ModelConfig` and that value should be manually
     duplicated here."""
 
+    max_num_spec_tokens: int = None  # type: ignore
+    """Maximum number of speculative tokens for all requests in the batch."""
+
     max_num_partial_prefills: int = 1
     """For chunked prefill, the maximum number of sequences that can be
     partially prefilled concurrently."""
@@ -241,7 +241,7 @@ class MessageQueue:
                 self.remote_socket.setsockopt(IPV6, 1)
                 remote_addr_ipv6 = True
                 connect_ip = f"[{connect_ip}]"
-            socket_addr = f"tcp://*:{remote_subscribe_port}"
+            socket_addr = f"tcp://{connect_ip}:{remote_subscribe_port}"
             self.remote_socket.bind(socket_addr)
             remote_subscribe_addr = f"tcp://{connect_ip}:{remote_subscribe_port}"
         else:
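The behavioral difference in this hunk, sketched with pyzmq directly: a wildcard bind accepts connections on every interface of the host, while a concrete address restricts the listener to that interface. The socket type and port below are illustrative:

```python
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.XPUB)
# Before: any interface on the host could reach this socket.
# sock.bind("tcp://*:14579")
# After: only peers that can route to this specific address can connect.
sock.bind("tcp://127.0.0.1:14579")
sock.close()
ctx.term()
```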
@@ -11,7 +11,7 @@ from typing import (Any, Callable, Dict, List, Literal, Optional, Type,
                     TypeVar, Union, cast, get_args, get_origin)
 
 import torch
-from typing_extensions import TypeIs
+from typing_extensions import TypeIs, deprecated
 
 import vllm.envs as envs
 from vllm import version
@@ -48,33 +48,29 @@ TypeHint = Union[type[Any], object]
 TypeHintT = Union[type[T], object]
 
 
-def optional_arg(val: str, return_type: Callable[[str], T]) -> Optional[T]:
-    if val == "" or val == "None":
-        return None
-    try:
-        return return_type(val)
-    except ValueError as e:
-        raise argparse.ArgumentTypeError(
-            f"Value {val} cannot be converted to {return_type}.") from e
+def optional_type(
+        return_type: Callable[[str], T]) -> Callable[[str], Optional[T]]:
+
+    def _optional_type(val: str) -> Optional[T]:
+        if val == "" or val == "None":
+            return None
+        try:
+            if return_type is json.loads and not re.match("^{.*}$", val):
+                return cast(T, nullable_kvs(val))
+            return return_type(val)
+        except ValueError as e:
+            raise argparse.ArgumentTypeError(
+                f"Value {val} cannot be converted to {return_type}.") from e
+
+    return _optional_type
 
 
-def optional_str(val: str) -> Optional[str]:
-    return optional_arg(val, str)
-
-
-def optional_int(val: str) -> Optional[int]:
-    return optional_arg(val, int)
-
-
-def optional_float(val: str) -> Optional[float]:
-    return optional_arg(val, float)
-
-
-def nullable_kvs(val: str) -> Optional[dict[str, int]]:
-    """NOTE: This function is deprecated, args should be passed as JSON
-    strings instead.
-
-    Parses a string containing comma separate key [str] to value [int]
+@deprecated(
+    "Passing a JSON argument as a string containing comma separated key=value "
+    "pairs is deprecated. This will be removed in v0.10.0. Please use a JSON "
+    "string instead.")
+def nullable_kvs(val: str) -> dict[str, int]:
+    """Parses a string containing comma separated key [str] to value [int]
     pairs into a dictionary.
 
     Args:
@@ -83,10 +79,7 @@ def nullable_kvs(val: str) -> Optional[dict[str, int]]:
     Returns:
         Dictionary with parsed values.
     """
-    if len(val) == 0:
-        return None
-
-    out_dict: Dict[str, int] = {}
+    out_dict: dict[str, int] = {}
     for item in val.split(","):
         kv_parts = [part.lower().strip() for part in item.split("=")]
         if len(kv_parts) != 2:
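A quick illustration of how the `optional_type` factory above behaves at the argparse boundary, mirroring the tests earlier in this compare; the values are illustrative:

```python
import json

from vllm.engine.arg_utils import optional_type

# "None" and "" map to None; anything else goes through the wrapped parser.
parse_optional_int = optional_type(int)
assert parse_optional_int("None") is None
assert parse_optional_int("42") == 42

# json.loads gets a legacy fallback: bare key=value strings are routed to
# nullable_kvs (emitting a DeprecationWarning) instead of failing outright.
parse_optional_json = optional_type(json.loads)
assert parse_optional_json('{"image": 16}') == {"image": 16}
```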
@@ -108,15 +101,103 @@ def nullable_kvs(val: str) -> Optional[dict[str, int]]:
     return out_dict
 
 
-def optional_dict(val: str) -> Optional[dict[str, int]]:
-    if re.match("^{.*}$", val):
-        return optional_arg(val, json.loads)
-
-    logger.warning(
-        "Failed to parse JSON string. Attempting to parse as "
-        "comma-separated key=value pairs. This will be deprecated in a "
-        "future release.")
-    return nullable_kvs(val)
+def is_type(type_hint: TypeHint, type: TypeHintT) -> TypeIs[TypeHintT]:
+    """Check if the type hint is a specific type."""
+    return type_hint is type or get_origin(type_hint) is type
+
+
+def contains_type(type_hints: set[TypeHint], type: TypeHintT) -> bool:
+    """Check if the type hints contain a specific type."""
+    return any(is_type(type_hint, type) for type_hint in type_hints)
+
+
+def get_type(type_hints: set[TypeHint], type: TypeHintT) -> TypeHintT:
+    """Get the specific type from the type hints."""
+    return next((th for th in type_hints if is_type(th, type)), None)
+
+
+def is_not_builtin(type_hint: TypeHint) -> bool:
+    """Check if the class is not a built-in type."""
+    return type_hint.__module__ != "builtins"
+
+
+def get_kwargs(cls: ConfigType) -> dict[str, Any]:
+    cls_docs = get_attr_docs(cls)
+    kwargs = {}
+    for field in fields(cls):
+        # Get the default value of the field
+        default = field.default
+        if field.default_factory is not MISSING:
+            default = field.default_factory()
+
+        # Get the help text for the field
+        name = field.name
+        help = cls_docs[name]
+        # Escape % for argparse
+        help = help.replace("%", "%%")
+
+        # Initialise the kwargs dictionary for the field
+        kwargs[name] = {"default": default, "help": help}
+
+        # Get the set of possible types for the field
+        type_hints: set[TypeHint] = set()
+        if get_origin(field.type) is Union:
+            type_hints.update(get_args(field.type))
+        else:
+            type_hints.add(field.type)
+
+        # Set other kwargs based on the type hints
+        if contains_type(type_hints, bool):
+            # Creates --no-<name> and --<name> flags
+            kwargs[name]["action"] = argparse.BooleanOptionalAction
+        elif contains_type(type_hints, Literal):
+            # Creates choices from Literal arguments
+            type_hint = get_type(type_hints, Literal)
+            choices = sorted(get_args(type_hint))
+            kwargs[name]["choices"] = choices
+            choice_type = type(choices[0])
+            assert all(type(c) is choice_type for c in choices), (
+                "All choices must be of the same type. "
+                f"Got {choices} with types {[type(c) for c in choices]}")
+            kwargs[name]["type"] = choice_type
+        elif contains_type(type_hints, tuple):
+            type_hint = get_type(type_hints, tuple)
+            types = get_args(type_hint)
+            tuple_type = types[0]
+            assert all(t is tuple_type for t in types if t is not Ellipsis), (
+                "All non-Ellipsis tuple elements must be of the same "
+                f"type. Got {types}.")
+            kwargs[name]["type"] = tuple_type
+            kwargs[name]["nargs"] = "+" if Ellipsis in types else len(types)
+        elif contains_type(type_hints, list):
+            type_hint = get_type(type_hints, list)
+            types = get_args(type_hint)
+            assert len(types) == 1, (
+                "List type must have exactly one type. Got "
+                f"{type_hint} with types {types}")
+            kwargs[name]["type"] = types[0]
+            kwargs[name]["nargs"] = "+"
+        elif contains_type(type_hints, int):
+            kwargs[name]["type"] = int
+        elif contains_type(type_hints, float):
+            kwargs[name]["type"] = float
+        elif contains_type(type_hints, dict):
+            # Dict arguments will always be optional
+            kwargs[name]["type"] = optional_type(json.loads)
+        elif (contains_type(type_hints, str)
+              or any(is_not_builtin(th) for th in type_hints)):
+            kwargs[name]["type"] = str
+        else:
+            raise ValueError(
+                f"Unsupported type {type_hints} for argument {name}.")
+
+        # If None is in type_hints, make the argument optional.
+        # But not if it's a bool, argparse will handle this better.
+        if type(None) in type_hints and not contains_type(type_hints, bool):
+            kwargs[name]["type"] = optional_type(kwargs[name]["type"])
+            if kwargs[name].get("choices"):
+                kwargs[name]["choices"].append("None")
+    return kwargs
 
 
 @dataclass
@@ -279,100 +360,6 @@
     def add_cli_args(parser: FlexibleArgumentParser) -> FlexibleArgumentParser:
         """Shared CLI arguments for vLLM engine."""
 
-        def is_type_in_union(cls: TypeHint, type: TypeHint) -> bool:
-            """Check if the class is a type in a union type."""
-            is_union = get_origin(cls) is Union
-            type_in_union = type in [get_origin(a) or a for a in get_args(cls)]
-            return is_union and type_in_union
-
-        def get_type_from_union(cls: TypeHint, type: TypeHintT) -> TypeHintT:
-            """Get the type in a union type."""
-            for arg in get_args(cls):
-                if (get_origin(arg) or arg) is type:
-                    return arg
-            raise ValueError(f"Type {type} not found in union type {cls}.")
-
-        def is_optional(cls: TypeHint) -> TypeIs[Union[Any, None]]:
-            """Check if the class is an optional type."""
-            return is_type_in_union(cls, type(None))
-
-        def can_be_type(cls: TypeHint, type: TypeHintT) -> TypeIs[TypeHintT]:
-            """Check if the class can be of type."""
-            return cls is type or get_origin(cls) is type or is_type_in_union(
-                cls, type)
-
-        def is_custom_type(cls: TypeHint) -> bool:
-            """Check if the class is a custom type."""
-            return cls.__module__ != "builtins"
-
-        def get_kwargs(cls: ConfigType) -> dict[str, Any]:
-            cls_docs = get_attr_docs(cls)
-            kwargs = {}
-            for field in fields(cls):
-                # Get the default value of the field
-                default = field.default
-                if field.default_factory is not MISSING:
-                    default = field.default_factory()
-
-                # Get the help text for the field
-                name = field.name
-                help = cls_docs[name]
-                # Escape % for argparse
-                help = help.replace("%", "%%")
-
-                # Initialise the kwargs dictionary for the field
-                kwargs[name] = {"default": default, "help": help}
-
-                # Make note of if the field is optional and get the actual
-                # type of the field if it is
-                optional = is_optional(field.type)
-                field_type = get_args(
-                    field.type)[0] if optional else field.type
-
-                # Set type, action and choices for the field depending on the
-                # type of the field
-                if can_be_type(field_type, bool):
-                    # Creates --no-<name> and --<name> flags
-                    kwargs[name]["action"] = argparse.BooleanOptionalAction
-                    kwargs[name]["type"] = bool
-                elif can_be_type(field_type, Literal):
-                    # Creates choices from Literal arguments
-                    if is_type_in_union(field_type, Literal):
-                        field_type = get_type_from_union(field_type, Literal)
-                    choices = get_args(field_type)
-                    kwargs[name]["choices"] = choices
-                    choice_type = type(choices[0])
-                    assert all(type(c) is choice_type for c in choices), (
-                        "All choices must be of the same type. "
-                        f"Got {choices} with types {[type(c) for c in choices]}"
-                    )
-                    kwargs[name]["type"] = choice_type
-                elif can_be_type(field_type, tuple):
-                    if is_type_in_union(field_type, tuple):
-                        field_type = get_type_from_union(field_type, tuple)
-                    dtypes = get_args(field_type)
-                    dtype = dtypes[0]
-                    assert all(
-                        d is dtype for d in dtypes if d is not Ellipsis
-                    ), ("All non-Ellipsis tuple elements must be of the same "
-                        f"type. Got {dtypes}.")
-                    kwargs[name]["type"] = dtype
-                    kwargs[name]["nargs"] = "+"
-                elif can_be_type(field_type, int):
-                    kwargs[name]["type"] = optional_int if optional else int
-                elif can_be_type(field_type, float):
-                    kwargs[name][
-                        "type"] = optional_float if optional else float
-                elif can_be_type(field_type, dict):
-                    kwargs[name]["type"] = optional_dict
-                elif (can_be_type(field_type, str)
-                      or is_custom_type(field_type)):
-                    kwargs[name]["type"] = optional_str if optional else str
-                else:
-                    raise ValueError(
-                        f"Unsupported type {field.type} for argument {name}. ")
-            return kwargs
-
         # Model arguments
         parser.add_argument(
             '--model',
@@ -390,13 +377,13 @@
             'which task to use.')
         parser.add_argument(
             '--tokenizer',
-            type=optional_str,
+            type=optional_type(str),
             default=EngineArgs.tokenizer,
             help='Name or path of the huggingface tokenizer to use. '
             'If unspecified, model name or path will be used.')
         parser.add_argument(
             "--hf-config-path",
-            type=optional_str,
+            type=optional_type(str),
             default=EngineArgs.hf_config_path,
             help='Name or path of the huggingface config to use. '
             'If unspecified, model name or path will be used.')
@@ -408,21 +395,21 @@
             'the input. The generated output will contain token ids.')
         parser.add_argument(
             '--revision',
-            type=optional_str,
+            type=optional_type(str),
             default=None,
             help='The specific model version to use. It can be a branch '
             'name, a tag name, or a commit id. If unspecified, will use '
             'the default version.')
         parser.add_argument(
             '--code-revision',
-            type=optional_str,
+            type=optional_type(str),
             default=None,
             help='The specific revision to use for the model code on '
             'Hugging Face Hub. It can be a branch name, a tag name, or a '
             'commit id. If unspecified, will use the default version.')
         parser.add_argument(
             '--tokenizer-revision',
-            type=optional_str,
+            type=optional_type(str),
             default=None,
             help='Revision of the huggingface tokenizer to use. '
             'It can be a branch name, a tag name, or a commit id. '
@@ -513,7 +500,7 @@
 
         parser.add_argument(
             '--logits-processor-pattern',
-            type=optional_str,
+            type=optional_type(str),
             default=None,
             help='Optional regex pattern specifying valid logits processor '
             'qualified names that can be passed with the `logits_processors` '
@@ -612,7 +599,7 @@
         # Quantization settings.
         parser.add_argument('--quantization',
                             '-q',
-                            type=optional_str,
+                            type=optional_type(str),
                             choices=[*QUANTIZATION_METHODS, None],
                             default=EngineArgs.quantization,
                             help='Method used to quantize the weights. If '
@@ -921,7 +908,7 @@
             'class without changing the existing functions.')
         parser.add_argument(
             "--generation-config",
-            type=optional_str,
+            type=optional_type(str),
             default="auto",
             help="The folder path to the generation config. "
             "Defaults to 'auto', the generation config will be loaded from "
@@ -11,7 +11,7 @@ import ssl
 from collections.abc import Sequence
 from typing import Optional, Union, get_args
 
-from vllm.engine.arg_utils import AsyncEngineArgs, optional_str
+from vllm.engine.arg_utils import AsyncEngineArgs, optional_type
 from vllm.entrypoints.chat_utils import (ChatTemplateContentFormatOption,
                                          validate_chat_template)
 from vllm.entrypoints.openai.serving_models import (LoRAModulePath,
@@ -79,7 +79,7 @@ class PromptAdapterParserAction(argparse.Action):
 
 def make_arg_parser(parser: FlexibleArgumentParser) -> FlexibleArgumentParser:
     parser.add_argument("--host",
-                        type=optional_str,
+                        type=optional_type(str),
                         default=None,
                         help="Host name.")
     parser.add_argument("--port", type=int, default=8000, help="Port number.")
@@ -108,13 +108,13 @@ def make_arg_parser(parser: FlexibleArgumentParser) -> FlexibleArgumentParser:
                         default=["*"],
                         help="Allowed headers.")
     parser.add_argument("--api-key",
-                        type=optional_str,
+                        type=optional_type(str),
                         default=None,
                         help="If provided, the server will require this key "
                         "to be presented in the header.")
     parser.add_argument(
         "--lora-modules",
-        type=optional_str,
+        type=optional_type(str),
         default=None,
         nargs='+',
         action=LoRAParserAction,
@@ -126,14 +126,14 @@ def make_arg_parser(parser: FlexibleArgumentParser) -> FlexibleArgumentParser:
         "\"base_model_name\": \"id\"}``")
     parser.add_argument(
         "--prompt-adapters",
-        type=optional_str,
+        type=optional_type(str),
         default=None,
         nargs='+',
         action=PromptAdapterParserAction,
         help="Prompt adapter configurations in the format name=path. "
         "Multiple adapters can be specified.")
     parser.add_argument("--chat-template",
-                        type=optional_str,
+                        type=optional_type(str),
                         default=None,
                         help="The file path to the chat template, "
                         "or the template in single-line form "
@@ -151,20 +151,20 @@ def make_arg_parser(parser: FlexibleArgumentParser) -> FlexibleArgumentParser:
                         'similar to OpenAI schema. '
                         'Example: ``[{"type": "text", "text": "Hello world!"}]``')
     parser.add_argument("--response-role",
-                        type=optional_str,
+                        type=optional_type(str),
                         default="assistant",
                         help="The role name to return if "
                         "``request.add_generation_prompt=true``.")
     parser.add_argument("--ssl-keyfile",
-                        type=optional_str,
+                        type=optional_type(str),
                         default=None,
                         help="The file path to the SSL key file.")
     parser.add_argument("--ssl-certfile",
-                        type=optional_str,
+                        type=optional_type(str),
                         default=None,
                         help="The file path to the SSL cert file.")
     parser.add_argument("--ssl-ca-certs",
-                        type=optional_str,
+                        type=optional_type(str),
                         default=None,
                         help="The CA certificates file.")
     parser.add_argument(
@@ -180,13 +180,13 @@ def make_arg_parser(parser: FlexibleArgumentParser) -> FlexibleArgumentParser:
     )
     parser.add_argument(
         "--root-path",
-        type=optional_str,
+        type=optional_type(str),
        default=None,
        help="FastAPI root_path when app is behind a path based routing proxy."
    )
     parser.add_argument(
         "--middleware",
-        type=optional_str,
+        type=optional_type(str),
         action="append",
         default=[],
         help="Additional ASGI middleware to apply to the app. "
@@ -12,7 +12,7 @@ import torch
 from prometheus_client import start_http_server
 from tqdm import tqdm
 
-from vllm.engine.arg_utils import AsyncEngineArgs, optional_str
+from vllm.engine.arg_utils import AsyncEngineArgs, optional_type
 from vllm.engine.async_llm_engine import AsyncLLMEngine
 from vllm.entrypoints.logger import RequestLogger, logger
 # yapf: disable
@@ -61,7 +61,7 @@ def parse_args():
         "to the output URL.",
     )
     parser.add_argument("--response-role",
-                        type=optional_str,
+                        type=optional_type(str),
                         default="assistant",
                         help="The role name to return if "
                         "`request.add_generation_prompt=True`.")
@@ -1,11 +1,11 @@
 # SPDX-License-Identifier: Apache-2.0
 
-from typing import Dict, List, Type
+from typing import Literal, Type, get_args
 
 from vllm.model_executor.layers.quantization.base_config import (
     QuantizationConfig)
 
-QUANTIZATION_METHODS: List[str] = [
+QuantizationMethods = Literal[
     "aqlm",
     "awq",
     "deepspeedfp",
@@ -15,8 +15,6 @@ QUANTIZATION_METHODS: List[str] = [
     "fbgemm_fp8",
     "modelopt",
     "nvfp4",
-    # The order of gptq methods is important for config.py iteration over
-    # override_quantization_method(..)
     "marlin",
     "bitblas",
     "gguf",
@@ -36,6 +34,7 @@ QUANTIZATION_METHODS: List[str] = [
     "moe_wna16",
     "torchao",
 ]
+QUANTIZATION_METHODS: list[str] = list(get_args(QuantizationMethods))
 
 # The customized quantization methods which will be added to this dict.
 _CUSTOMIZED_METHOD_TO_QUANT_CONFIG = {}
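The pattern in this hunk is worth noting: declare the canonical names once as a `Literal` type, then derive the runtime list from it with `get_args`, so type checkers and runtime validation cannot drift apart. A toy version of the same idiom:

```python
from typing import Literal, get_args

# Single source of truth for the allowed names.
Color = Literal["red", "green", "blue"]
# Runtime list derived from the type, preserving declaration order.
COLORS: list[str] = list(get_args(Color))
assert COLORS == ["red", "green", "blue"]
```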
@@ -111,7 +110,7 @@ def get_quantization_config(quantization: str) -> Type[QuantizationConfig]:
     from .torchao import TorchAOConfig
     from .tpu_int8 import Int8TpuConfig
 
-    method_to_config: Dict[str, Type[QuantizationConfig]] = {
+    method_to_config: dict[str, Type[QuantizationConfig]] = {
         "aqlm": AQLMConfig,
         "awq": AWQConfig,
         "deepspeedfp": DeepSpeedFPConfig,
@@ -120,8 +119,6 @@ def get_quantization_config(quantization: str) -> Type[QuantizationConfig]:
         "fbgemm_fp8": FBGEMMFp8Config,
         "modelopt": ModelOptFp8Config,
         "nvfp4": ModelOptNvFp4Config,
-        # The order of gptq methods is important for config.py iteration over
-        # override_quantization_method(..)
         "marlin": MarlinConfig,
         "bitblas": BitBLASConfig,
         "gguf": GGUFConfig,
@@ -150,6 +147,7 @@ def get_quantization_config(quantization: str) -> Type[QuantizationConfig]:
 
 __all__ = [
     "QuantizationConfig",
+    "QuantizationMethods",
     "get_quantization_config",
     "QUANTIZATION_METHODS",
 ]
@@ -72,7 +72,7 @@ class CompressedTensorsConfig(QuantizationConfig):
         return 70
 
     def get_name(self) -> str:
-        return "compressed_tensors"
+        return "compressed-tensors"
 
     def get_quant_method(
         self,
@@ -130,8 +130,8 @@ class RocmPlatform(Platform):
     device_control_env_var: str = "CUDA_VISIBLE_DEVICES"
 
     supported_quantization: list[str] = [
-        "awq", "gptq", "fp8", "compressed_tensors", "compressed-tensors",
-        "fbgemm_fp8", "gguf", "quark", "ptpc_fp8"
+        "awq", "gptq", "fp8", "compressed-tensors", "fbgemm_fp8", "gguf",
+        "quark", "ptpc_fp8"
     ]
 
     @classmethod
@@ -30,9 +30,7 @@ class TpuPlatform(Platform):
     ray_device_key: str = "TPU"
     device_control_env_var: str = "TPU_VISIBLE_CHIPS"
 
-    supported_quantization: list[str] = [
-        "tpu_int8", "compressed-tensors", "compressed_tensors"
-    ]
+    supported_quantization: list[str] = ["tpu_int8", "compressed-tensors"]
 
     additional_env_vars: list[str] = [
         "TPU_CHIPS_PER_HOST_BOUNDS", "TPU_HOST_BOUNDS"
@@ -62,6 +62,7 @@ class Scheduler(SchedulerInterface):
         self.max_num_scheduled_tokens = \
             self.scheduler_config.max_num_batched_tokens
         self.max_model_len = self.scheduler_config.max_model_len
+        self.max_num_spec_tokens = self.scheduler_config.max_num_spec_tokens
 
         # Create KVConnector for the Scheduler. Note that each Worker
         # will have a corresponding KVConnector with Role=WORKER.
@@ -162,6 +163,8 @@ class Scheduler(SchedulerInterface):
         req_to_new_block_ids: dict[str, list[int]] = {}
         num_scheduled_tokens: dict[str, int] = {}
         token_budget = self.max_num_scheduled_tokens
+        spec_token_budget = self.max_num_spec_tokens
 
         # Encoder-related.
         scheduled_encoder_inputs: dict[str, list[int]] = {}
         encoder_budget = self.max_num_encoder_input_tokens
@@ -184,6 +187,19 @@
                     self.scheduler_config.long_prefill_token_threshold)
             num_new_tokens = min(num_new_tokens, token_budget)
 
+            num_scheduled_spec_tokens = (num_new_tokens +
+                                         request.num_computed_tokens -
+                                         request.num_tokens)
+            if spec_token_budget:
+                if num_scheduled_spec_tokens > spec_token_budget:
+                    # We don't truncate the spec_token_ids list here because
+                    # it will be trimmed in the end of the while loop.
+                    num_scheduled_spec_tokens = spec_token_budget
+                    # +1 here to include the last generated token.
+                    num_new_tokens = min(num_new_tokens,
+                                         num_scheduled_spec_tokens + 1)
+                spec_token_budget -= num_scheduled_spec_tokens
+
             # Make sure the input position does not exceed the max model len.
             # This is necessary when using spec decoding.
             num_new_tokens = min(