youngkingdom/vllm
Commit: dd66fd2b01e1195b7ccc8ffcd4b5d49ff1946a56
Directory: vllm/tests/weight_loading
Latest commit: bbe5f9de7d [Model] Support for fairseq2 Llama (#11442) by Martin Gleize, 2025-01-19 10:40:40 -08:00
Signed-off-by: Martin Gleize <mgleize@meta.com>
Co-authored-by: mgleize user <mgleize@a100-st-p4de24xlarge-4.fair-a100.hpcaas>
File                               Last commit                                                             Date
models-large.txt                   [Bugfix] Fix Weight Loading Multiple GPU Test - Large Models (#9213)    2024-10-10 14:15:40 +08:00
models.txt                         [Model] Support for fairseq2 Llama (#11442)                              2025-01-19 10:40:40 -08:00
run_model_weight_loading_test.sh   [Kernel]: Cutlass 2:4 Sparsity + FP8/Int8 Quant Support (#10995)        2024-12-18 09:57:16 -05:00
test_weight_loading.py             [Model] Support for fairseq2 Llama (#11442)                              2025-01-19 10:40:40 -08:00
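
The two .txt files list the model configurations under test, and test_weight_loading.py contains the test itself, driven by the wrapper script. A minimal invocation sketch, assuming the script accepts the model list via a -c flag and is run from the tests directory (both the flag and the working directory are assumptions, not confirmed by this listing):

    # assumption: -c selects the models config consumed by test_weight_loading.py
    cd tests/weight_loading
    bash run_model_weight_loading_test.sh -c models.txt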