youngkingdom/vllm
vllm/tests/lora at commit 72d3a30c6327e70de3595d00f04e2d577fcbbb68
Latest commit: Kunshang Ji 96b6f475dd Remove hardcoded device="cuda" to support more devices (#2503)
Co-authored-by: Jiang Li <jiang1.li@intel.com>
Co-authored-by: Kunshang Ji <kunshang.ji@intel.com>
2024-02-01 15:46:39 -08:00
File                  Last commit                                                       Date
__init__.py           [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
conftest.py           Remove hardcoded device="cuda" to support more devices (#2503)   2024-02-01 15:46:39 -08:00
test_layers.py        Remove hardcoded device="cuda" to support more devices (#2503)   2024-02-01 15:46:39 -08:00
test_llama.py         [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
test_lora_manager.py  [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
test_lora.py          [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
test_punica.py        [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
test_tokenizer.py     [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
test_utils.py         [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00
test_worker.py        Remove hardcoded device="cuda" to support more devices (#2503)   2024-02-01 15:46:39 -08:00
utils.py              [Experimental] Add multi-LoRA support (#1804)                     2024-01-23 15:26:37 -08:00