youngkingdom/vllm
vllm/tests at commit 5f08050d8d0bfcdaced0fe706cdfc9e311e0f263
Latest commit: d7afab6d3a [BugFix] Fix GC bug for LLM class (#2882) by Woosuk Kwon, 2024-02-14 22:17:44 -08:00
Name                     Last commit                                                                  Last updated
async_engine             [Experimental] Add multi-LoRA support (#1804)                                2024-01-23 15:26:37 -08:00
distributed              Implement custom all reduce kernels (#2192)                                  2024-01-27 12:46:35 -08:00
engine                   Migrate linter from pylint to ruff (#1665)                                   2023-11-20 11:58:01 -08:00
entrypoints              Support Batch Completion in Server (#2529)                                   2024-01-24 17:11:07 -08:00
kernels                  [Minor] More fix of test_cache.py CI test failure (#2750)                    2024-02-06 11:38:38 -08:00
lora                     Add LoRA support for Mixtral (#2831)                                         2024-02-14 00:55:45 +01:00
models                   Add StableLM3B model (#2372)                                                 2024-01-16 20:32:40 -08:00
prefix_caching           [Experimental] Prefix Caching Support (#1669)                                2024-01-17 16:32:10 -08:00
prompts                  [BugFix] Fix input positions for long context with sliding window (#2088)    2023-12-13 12:28:13 -08:00
samplers                 Remove hardcoded device="cuda" to support more devices (#2503)               2024-02-01 15:46:39 -08:00
worker                   Remove hardcoded device="cuda" to support more devices (#2503)               2024-02-01 15:46:39 -08:00
__init__.py              [Small] Formatter only checks lints in changed files (#1528)                 2023-10-31 15:39:38 -07:00
conftest.py              [BUGFIX] Fix the path of test prompts (#2273)                                2023-12-26 10:37:21 -08:00
test_regression.py       [BugFix] Fix GC bug for LLM class (#2882)                                    2024-02-14 22:17:44 -08:00
test_sampling_params.py  [Bugfix] fix crash if max_tokens=None (#2570)                                2024-01-23 22:38:55 -08:00