youngkingdom/vllm
vllm/docker at commit 00a4e56d8dd470615f0dde2e4c996ed5564da35f
Latest commit: 78336a0c3e Upgrade FlashInfer to v0.3.0 (#24086) by Po-Han Huang (NVIDIA), 2025-09-04 09:49:20 -07:00
Signed-off-by: Po-Han Huang <pohanh@nvidia.com>
Co-authored-by: Simon Mo <simon.mo@hey.com>
File                       Last change                  Last commit
Dockerfile                 2025-09-04 09:49:20 -07:00   Upgrade FlashInfer to v0.3.0 (#24086)
Dockerfile.cpu             2025-07-28 11:02:39 +00:00   [Bugfix] Fix environment variable setting in CPU Dockerfile (#21730)
Dockerfile.neuron          2025-06-03 00:16:17 -07:00   [Bugfix] Use cmake 3.26.1 instead of 3.26 to avoid build failure (#19019)
Dockerfile.nightly_torch   2025-07-31 15:00:08 +08:00   [CI/Build] Get rid of unused VLLM_FA_CMAKE_GPU_ARCHES (#21599)
Dockerfile.ppc64le         2025-06-06 11:54:26 -07:00   Fixed ppc build when it runs on non-RHEL based Linux distros (#18422)
Dockerfile.rocm            2025-08-22 10:56:57 +08:00   [Deprecation] Remove prompt_token_ids arg fallback in LLM.generate and LLM.embed (#18800)
Dockerfile.rocm_base       2025-07-13 15:19:32 +00:00   [V1] [ROCm] [AITER] Upgrade AITER to commit 916bf3c and bugfix APIs (#20880)
Dockerfile.s390x           2025-08-19 04:40:37 +00:00   [Hardware][IBM Z] Enable v1 for s390x and s390x Dockerfile fixes (#22725)
Dockerfile.tpu             2025-08-21 18:01:03 -04:00   Always use cache mounts when installing vllm to avoid populating pip cache in the image; also remove apt cache (#23270)
Dockerfile.xpu             2025-08-08 17:03:45 -07:00   [XPU] Upgrade torch 2.8 for XPU (#22300)