youngkingdom/vllm
6,768 Commits · 157 Branches · 93 Tags

Commit Graph at 6881107948c00a8564bc2fa85308f6fc2f065d64 (165 commits)

Author | SHA1 | Message | Date
Woosuk Kwon | dcda03b4cb | Write README and front page of doc (#147) | 2023-06-18 03:19:38 -07:00
Woosuk Kwon | 0b98ba15c7 | Change the name to vLLM (#150) | 2023-06-17 03:07:40 -07:00
Woosuk Kwon | e38074b1e6 | Support FP32 (#141) | 2023-06-07 00:40:21 -07:00
Woosuk Kwon | 376725ce74 | [PyPI] Packaging for PyPI distribution (#140) | 2023-06-05 20:03:14 -07:00
Woosuk Kwon | d721168449 | Improve setup script & Add a guard for bfloat16 kernels (#130) | 2023-05-27 00:59:32 -07:00
Woosuk Kwon | 7addca5935 | Specify python package dependencies in requirements.txt (#78) | 2023-05-07 16:30:43 -07:00
Woosuk Kwon | e070829ae8 | Support bfloat16 data type (#54) | 2023-05-03 14:09:44 -07:00
Woosuk Kwon | 436e523bf1 | Refactor attention kernels (#53) | 2023-05-03 13:40:13 -07:00
Woosuk Kwon | 897cb2ae28 | Optimize data movement (#20) | 2023-04-02 00:30:17 -07:00
Woosuk Kwon | 09e9245478 | Add custom kernel for RMS normalization (#16) | 2023-04-01 00:51:22 +08:00
Woosuk Kwon | 88c0268a18 | Implement custom kernel for LLaMA rotary embedding (#14) | 2023-03-30 11:04:21 -07:00
Woosuk Kwon | 0deacbce6e | Implement single_query_cached_kv_attention kernel (#3) | 2023-03-01 15:02:19 -08:00
Woosuk Kwon | ffad4e1e03 | cache_kernel -> cache_kernels | 2023-02-16 20:05:45 +00:00
Woosuk Kwon | 6f058c7ba8 | Implement cache ops | 2023-02-16 07:47:03 +00:00
Woosuk Kwon | 3be29a1104 | Add blank setup file | 2023-02-09 11:37:06 +00:00