youngkingdom/vllm
vllm/tests/v1 at commit 07064cb1d49d2b04ec58d8876bee2cd8281eedf5
Latest commit: Woosuk Kwon, 73001445fb, [V1] Implement Cascade Attention (#11635)
Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Date: 2025-01-01 21:56:46 +09:00
Entry          Last updated                Last commit
core/          2024-12-31 08:56:01 +00:00  [V1] Simpify vision block hash for prefix caching by removing offset from hash (#11646)
e2e/           2025-01-01 21:56:46 +09:00  [V1] Implement Cascade Attention (#11635)
engine/        2024-12-28 20:51:57 +00:00  [V1] [5/N] API Server: unify Detokenizer and EngineCore input (#11545)
sample/        2024-12-27 09:32:38 +09:00  [V1] Use FlashInfer Sampling Kernel for Top-P & Top-K Sampling (#11394)
worker/        2024-12-26 19:02:58 +09:00  [V1] Adding min tokens/repetition/presence/frequence penalties to V1 sampler (#10681)
__init__.py    2024-11-11 23:05:38 +00:00  [V1] AsyncLLM Implementation (#9826)
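These directories hold the V1 test suites at this commit. As a minimal usage sketch (an assumption, not something shown on this page: it presumes a local vLLM checkout with pytest and the project's test dependencies installed), a single suite from the listing can be invoked through pytest's standard programmatic API:

```python
# Minimal sketch: run one V1 test suite from the listing above via pytest.
# Assumes a vLLM checkout with test dependencies installed; the path
# "tests/v1/sample" is taken directly from the directory listing.
import sys

import pytest

if __name__ == "__main__":
    # pytest.main() accepts the same arguments as the CLI; "-q" keeps output terse.
    sys.exit(pytest.main(["tests/v1/sample", "-q"]))
```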