youngkingdom/vllm
Commit: 468d16654ab1eb3883ed79c78042d9edc6461baa
Path: vllm/tests/kernels/moe

Latest commit: 468d16654a "cleanup quantization" by Bill Nell
Signed-off-by: Bill Nell <bnell@redhat.com>
2025-05-28 23:40:53 +00:00
File                           Last commit                                                               Date
test_batched_moe.py            cleanup quantization                                                      2025-05-28 23:40:53 +00:00
test_cutlass_moe.py            Modularize fused experts and integrate PPLX kernels (#15956)             2025-05-14 13:11:54 -07:00
test_moe_permute_unpermute.py  [Build/CI] Fix CUDA 11.8 build (#17679)                                  2025-05-22 12:13:54 -07:00
test_moe.py                    [Bugfix] Reduce moe_sum test size to avoid OOM (#18484)                  2025-05-21 06:46:39 -07:00
test_nvfp4_moe.py              [Hardware/NVIDIA/Kernel] Enable nvidia/DeepSeek-R1-FP4 Model (#16362)    2025-05-09 16:24:41 -07:00
test_pplx_moe.py               Modularize fused experts and integrate PPLX kernels (#15956)             2025-05-14 13:11:54 -07:00
test_rocm_aiter_topk.py        [FEAT] [ROCm] [V1]: Add AITER biased group topk for DeepSeekV3 (#17955)  2025-05-13 22:03:47 -07:00
test_triton_moe_ptpc_fp8.py    Modularize fused experts and integrate PPLX kernels (#15956)             2025-05-14 13:11:54 -07:00