Commit Graph

103 Commits

SHA1 Message Date
add055e151 Enhance model loader (#83) 2023-05-09 15:46:42 -07:00
7c041ab578 Refactor system architecture (#82) 2023-05-09 15:30:12 -07:00
8917782af6 Add a system logger (#85) 2023-05-08 23:03:35 -07:00
c84e924287 [Minor] Fix a dtype bug (#79) 2023-05-06 02:12:12 -07:00
c9d5b6d4a8 Replace FlashAttention with xformers (#70) 2023-05-05 02:01:08 -07:00
189ae23133 Use dtype from model config & Add Dolly V2 (#63) 2023-05-04 03:05:37 -07:00
e548c1488a Add support for GPT-2 (#60) 2023-05-04 02:59:56 -07:00
e070829ae8 Support bfloat16 data type (#54) 2023-05-03 14:09:44 -07:00
27f1410d06 New weight loader without np copy (#52) 2023-05-03 15:32:04 +08:00
4858f3bb45 Add an option to launch cacheflow without ray (#51) 2023-04-30 15:42:17 +08:00
a96d63c21d Add support for GPT-NeoX (Pythia) (#50) 2023-04-28 00:32:10 -07:00
0f4b32199e Support various block sizes & Change default block size to 16 (#38) 2023-04-15 09:03:24 -07:00
84eee24e20 Collect system stats in scheduler & Add scripts for experiments (#30) 2023-04-12 15:03:49 -07:00
b9926f7f66 Support block size 32 (#35) 2023-04-09 23:07:18 -07:00
ee88a7e5f3 Add an option to use dummy model weights (#33) 2023-04-08 23:36:12 -07:00
0f40557af6 Implement block copy kernel to optimize beam search (#32) 2023-04-07 17:45:07 -07:00
a490aafa36 Fix potential bugs in FastAPI frontend and add comments (#28) 2023-04-06 13:44:24 +08:00
12659a0bd7 Add CUDA graph-based all reduce launcher (#26) 2023-04-05 11:16:57 -07:00
897cb2ae28 Optimize data movement (#20) 2023-04-02 00:30:17 -07:00
1f01a18d39 Merge QKV into one linear layer (#15) 2023-04-02 00:23:29 -07:00
a90c97d727 Use FP32 for log probabilities (#19) 2023-03-31 23:33:43 -07:00
09e9245478 Add custom kernel for RMS normalization (#16) 2023-04-01 00:51:22 +08:00
c45f3c3ab6 Optimize tensor parallel execution speed (#17) 2023-04-01 00:51:08 +08:00
7a7929abe8 Implement preemption via recomputation & Refactor scheduling logic (#12) 2023-03-30 14:51:46 -07:00
88c0268a18 Implement custom kernel for LLaMA rotary embedding (#14) 2023-03-30 11:04:21 -07:00
80a2f812f1 Implement LLaMA (#9) (Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>) 2023-03-30 12:25:32 +08:00
64e0e38314 Add cache watermark to avoid frequent cache eviction (#11) 2023-03-29 16:38:48 -07:00
721fa3df15 FastAPI-based working frontend (#10) 2023-03-29 14:48:56 +08:00
d359cda5fa Minor 2023-03-26 08:00:39 +00:00
2f49f15585 Support tensor parallel (#2) 2023-03-21 13:45:42 -07:00
cfae35b861 Add miscellaneous updates (#8) 2023-03-13 13:48:38 -07:00
e9d3f2ff77 Add memory analyzer & automatically configure KV cache size (#6) 2023-03-11 23:23:14 -08:00
1a7eb7da61 Support beam search & parallel generation (#7) 2023-03-10 09:58:21 -08:00
04e5acc08e Fix a bug in 1D input shape (#5) 2023-03-06 10:05:27 -08:00
3e9f991d6a Use FlashAttention for multi_query_kv_attention (#4) 2023-03-01 21:13:08 -08:00
0deacbce6e Implement single_query_cached_kv_attention kernel (#3) 2023-03-01 15:02:19 -08:00
cbf8779afa Fix a bug in tying OPT embeddings (#1) 2023-02-24 16:29:36 -08:00
6aef2278f4 [Minor] Fix printing format 2023-02-24 11:56:06 +00:00
1132fae0ca Add Frontend 2023-02-24 11:46:43 +00:00
46ce1356f7 Add max_num_steps to SamplingParams 2023-02-24 11:44:40 +00:00
b39f149a08 Add is_finished 2023-02-24 11:44:21 +00:00
ef6098ec51 Merge pre_step and step 2023-02-24 10:36:08 +00:00
53f70e7334 Reduce the number of states in scheduler 2023-02-24 10:22:39 +00:00
762fd1c3fa Refactor and annotate types for attention 2023-02-24 08:58:46 +00:00
7f22f90e8c Remove xformers 2023-02-24 08:36:16 +00:00
afdbe5d373 [WIP] Add server script 2023-02-24 01:33:37 +00:00
932844f1cd Fix attention 2023-02-23 23:02:25 +00:00
ba84b8728a Fix attention 2023-02-23 22:29:46 +00:00
87e0bcd426 Fix attention 2023-02-23 21:32:02 +00:00
1ce1333573 Set default dtype to half 2023-02-23 21:31:39 +00:00