youngkingdom/vllm
vllm/cacheflow at commit a96d63c21d18ad6610adfcabd3aae02c6357334e

Latest commit: a96d63c21d by Woosuk Kwon, "Add support for GPT-NeoX (Pythia) (#50)", 2023-04-28 00:32:10 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| http_frontend | Add an option to use dummy model weights (#33) | 2023-04-08 23:36:12 -07:00 |
| master | Support various block sizes & Change default block size to 16 (#38) | 2023-04-15 09:03:24 -07:00 |
| models | Add support for GPT-NeoX (Pythia) (#50) | 2023-04-28 00:32:10 -07:00 |
| parallel_utils | Add CUDA graph-based all reduce launcher (#26) | 2023-04-05 11:16:57 -07:00 |
| worker | Add an option to use dummy model weights (#33) | 2023-04-08 23:36:12 -07:00 |
| block.py | Support beam search & parallel generation (#7) | 2023-03-10 09:58:21 -08:00 |
| sampling_params.py | FastAPI-based working frontend (#10) | 2023-03-29 14:48:56 +08:00 |
| sequence.py | Collect system stats in scheduler & Add scripts for experiments (#30) | 2023-04-12 15:03:49 -07:00 |
| utils.py | FastAPI-based working frontend (#10) | 2023-03-29 14:48:56 +08:00 |