youngkingdom / vllm
vllm / examples @ commit f8a1e39fae05ca610be8d5a78be9d40f5274e5fc

Latest commit: 9d9072a069 "Implement prompt logprobs & Batched topk for computing logprobs (#1328)" by Zhuohan Li, Co-authored-by: Yunmo Chen <16273544+wanmok@users.noreply.github.com>, 2023-10-16 10:56:50 -07:00
File                              Last commit                                                         Date
api_client.py                     [Quality] Add code formatter and linter (#326)                      2023-07-03 11:31:55 -07:00
gradio_webserver.py               API server support ipv4 / ipv6 dualstack (#1288)                    2023-10-07 15:15:54 -07:00
llm_engine_example.py             Implement prompt logprobs & Batched topk for computing logprobs (#1328)  2023-10-16 10:56:50 -07:00
offline_inference.py              [Quality] Add code formatter and linter (#326)                      2023-07-03 11:31:55 -07:00
openai_chatcompletion_client.py   [Fix] Add chat completion Example and simplify dependencies (#576)  2023-07-25 23:45:48 -07:00
openai_completion_client.py       [Fix] Add chat completion Example and simplify dependencies (#576)  2023-07-25 23:45:48 -07:00