youngkingdom/vllm
Commit: ab66536dbfedff4ffcbb6dc9f9a21d0a9ac0ec91
Path: vllm/docs/source
Latest commit: Kunshang Ji 728c4c8a06 [Hardware][Intel GPU] Add Intel GPU(XPU) inference backend (#3814)
Co-authored-by: Jiang Li <jiang1.li@intel.com>
Co-authored-by: Abhilash Majumder <abhilash.majumder@intel.com>
Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
2024-06-17 11:01:25 -07:00
Name                      | Last commit                                                                  | Date
assets                    | [Doc] add visualization for multi-stage dockerfile (#4456)                  | 2024-04-30 17:41:59 +00:00
automatic_prefix_caching  | [Doc] Add an automatic prefix caching section in vllm documentation (#5324) | 2024-06-11 10:24:59 -07:00
community                 | [Docs] Add ZhenFund as a Sponsor (#5548)                                     | 2024-06-14 11:17:21 -07:00
dev                       | [Core] Support image processor (#4197)                                       | 2024-06-02 22:56:41 -07:00
getting_started           | [Hardware][Intel GPU] Add Intel GPU(XPU) inference backend (#3814)           | 2024-06-17 11:01:25 -07:00
models                    | [Doc] Update LLaVA docs (#5437)                                              | 2024-06-13 11:22:07 -07:00
quantization              | [CI] docfix (#5410)                                                          | 2024-06-11 01:28:50 -07:00
serving                   | [Doc] Update documentation on Tensorizer (#5471)                             | 2024-06-14 11:27:57 -07:00
conf.py                   | [Doc][Typo] Fixing Missing Comma (#5403)                                     | 2024-06-11 00:20:28 -07:00
generate_examples.py      | Add example scripts to documentation (#4225)                                 | 2024-04-22 16:36:54 +00:00
index.rst                 | [Hardware][Intel GPU] Add Intel GPU(XPU) inference backend (#3814)           | 2024-06-17 11:01:25 -07:00