[Docs] Have a try to improve frameworks/streamlit.md (#24841)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
Michael Yao
2025-09-15 12:50:36 +08:00
committed by GitHub
parent 8e5cdcda4e
commit 78818dd1b0


@@ -6,35 +6,33 @@ It can be quickly integrated with vLLM as a backend API server, enabling powerful
## Prerequisites
Set up the vLLM environment by installing all required packages:
```bash
pip install vllm streamlit openai
```
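As an optional sanity check (not part of the original guide), the following minimal sketch confirms all three packages import cleanly in the active environment:

```python
# Optional sanity check: all three required packages should import cleanly.
import openai
import streamlit
import vllm

print("vllm:", vllm.__version__)
print("streamlit:", streamlit.__version__)
print("openai:", openai.__version__)
```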
## Deploy
1. Start the vLLM server with a supported chat completion model (a connectivity check is sketched after this list), e.g.
    ```bash
    vllm serve Qwen/Qwen1.5-0.5B-Chat
    ```
1. Use the script: <gh-file:examples/online_serving/streamlit_openai_chatbot_webserver.py> (a condensed sketch of its core loop follows this list)
1. Start the Streamlit web UI and start chatting:
    ```bash
    streamlit run streamlit_openai_chatbot_webserver.py

    # or specify the VLLM_API_BASE or VLLM_API_KEY
    VLLM_API_BASE="http://vllm-server-host:vllm-server-port/v1" \
    streamlit run streamlit_openai_chatbot_webserver.py

    # start with debug mode to view more details
    streamlit run streamlit_openai_chatbot_webserver.py --logger.level=debug
    ```
    ![Chat with vLLM assistant in Streamlit](../../assets/deployment/streamlit-chat.png)
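
Before launching the UI, you can verify that the server from step 1 is reachable. The sketch below is illustrative rather than part of the guide; it assumes the server is listening on vLLM's default address `http://localhost:8000/v1` and that a placeholder API key is accepted:

```python
import os

from openai import OpenAI

# Reuse the same env vars the example script honors; the fallback values
# (localhost:8000, dummy "EMPTY" key) are assumptions, not from the guide.
client = OpenAI(
    base_url=os.getenv("VLLM_API_BASE", "http://localhost:8000/v1"),
    api_key=os.getenv("VLLM_API_KEY", "EMPTY"),
)

# The vLLM server exposes an OpenAI-compatible /v1/models endpoint.
for model in client.models.list().data:
    print(model.id)  # expect Qwen/Qwen1.5-0.5B-Chat from step 1
```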
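For orientation, here is a condensed sketch of the pattern the example script follows: keep the chat history in `st.session_state` and stream tokens from the vLLM server into the page. This is a simplified approximation, not the script itself; the model name and connection defaults mirror the steps above, and `st.write_stream` assumes a recent Streamlit release:

```python
import os

import streamlit as st
from openai import OpenAI

client = OpenAI(
    base_url=os.getenv("VLLM_API_BASE", "http://localhost:8000/v1"),
    api_key=os.getenv("VLLM_API_KEY", "EMPTY"),
)

st.title("vLLM chatbot")

# Persist the conversation across Streamlit reruns.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the history so the page shows the full conversation.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    with st.chat_message("assistant"):
        # Stream tokens from the vLLM server as they arrive.
        stream = client.chat.completions.create(
            model="Qwen/Qwen1.5-0.5B-Chat",
            messages=st.session_state.messages,
            stream=True,
        )
        reply = st.write_stream(
            chunk.choices[0].delta.content or "" for chunk in stream
        )
    st.session_state.messages.append({"role": "assistant", "content": reply})
```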