youngkingdom/vllm
306 commits · 157 branches · 93 tags

Commit Graph at ce741ba3e4fea00bacd2e1c609ca587ec35eb161

14 commits
| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Ricardo Lu | 8c4b2592fb | fix: enable trust-remote-code in api server & benchmark. (#509) | 2023-07-19 17:06:15 -07:00 |
| WRH | cf21a9bd5c | support trust_remote_code in benchmark (#518) | 2023-07-19 17:02:40 -07:00 |
| Woosuk Kwon | 4338cc4750 | [Tokenizer] Add an option to specify tokenizer (#284) | 2023-06-28 09:46:58 -07:00 |
| Zhuohan Li | 43710e8d09 | [Fix] Fix default port number in benchmark scripts (#265) | 2023-06-26 13:15:35 -07:00 |
| Zhuohan Li | 0370afa2e5 | Remove benchmark_async_llm_server.py (#155) | 2023-06-19 11:12:37 +08:00 |
| Woosuk Kwon | 3f92038b99 | Add comments on swap space (#154) | 2023-06-18 11:39:35 -07:00 |
| Woosuk Kwon | 0b98ba15c7 | Change the name to vLLM (#150) | 2023-06-17 03:07:40 -07:00 |
| Zhuohan Li | e5464ee484 | Rename servers to engines (#152) | 2023-06-17 17:25:21 +08:00 |
| Woosuk Kwon | bab8f3dd0d | [Minor] Fix benchmark_throughput.py (#151) | 2023-06-16 21:00:52 -07:00 |
| Zhuohan Li | eedb46bf03 | Rename servers and change port numbers to reduce confusion (#149) | 2023-06-17 00:13:02 +08:00 |
| Woosuk Kwon | 311490a720 | Add script for benchmarking serving throughput (#145) | 2023-06-14 19:55:38 -07:00 |
| Zhuohan Li | 1a956e136b | Fix various issues of async servers (#135) | 2023-06-05 23:44:50 +08:00 |
| Woosuk Kwon | 8274ca23ac | Add docstrings for LLM (#137) | 2023-06-04 12:52:41 -07:00 |
| Woosuk Kwon | 211318d44a | Add throughput benchmarking script (#133) | 2023-05-28 03:20:05 -07:00 |
Powered by Gitea Version: 1.24.2