revert white-space changes in docs (#12557)

### What problem does this PR solve?

Trailing white-spaces in commit 6814ace1aa
were automatically trimmed by the code editor, which may break
the documentation's typesetting.

Mostly trailing double spaces used for soft line breaks.

### Type of change

- [x] Documentation Update
This commit is contained in:
Jimmy Ben Klieve
2026-01-13 09:41:02 +08:00
committed by GitHub
parent fd0a1fde6b
commit 867ec94258
71 changed files with 660 additions and 731 deletions

View File

@ -86,22 +86,22 @@ They are highly consistent at the technical base (e.g., vector retrieval, keywor
RAG has demonstrated clear value in several typical scenarios:
1. Enterprise Knowledge Q&A and Internal Search
By vectorizing corporate private data and combining it with an LLM, RAG can directly return natural language answers based on authoritative sources, rather than document lists. While meeting intelligent Q&A needs, it inherently aligns with corporate requirements for data security, access control, and compliance.
2. Complex Document Understanding and Professional Q&A
For structurally complex documents like contracts and regulations, the value of RAG lies in its ability to generate accurate, verifiable answers while maintaining context integrity. Its system accuracy largely depends on text chunking and semantic understanding strategies.
3. Dynamic Knowledge Fusion and Decision Support
In business scenarios requiring the synthesis of information from multiple sources, RAG evolves into a knowledge orchestration and reasoning support system for business decisions. Through a multi-path recall mechanism, it fuses knowledge from different systems and formats, maintaining factual consistency and logical controllability during the generation phase.
## The future of RAG
The evolution of RAG is unfolding along several clear paths:
1. RAG as the data foundation for Agents
RAG and agents have an architecture vs. scenario relationship. For agents to achieve autonomous and reliable decision-making and execution, they must rely on accurate and timely knowledge. RAG provides them with a standardized capability to access private domain knowledge and is an inevitable choice for building knowledge-aware agents.
2. Advanced RAG: Using LLMs to optimize retrieval itself
The core feature of next-generation RAG is fully utilizing the reasoning capabilities of LLMs to optimize the retrieval process, such as rewriting queries, summarizing or fusing results, or implementing intelligent routing. Empowering every aspect of retrieval with LLMs is key to breaking through current performance bottlenecks.
3. Towards context engineering 2.0
Current RAG can be viewed as Context Engineering 1.0, whose core is assembling static knowledge context for single Q&A tasks. The forthcoming Context Engineering 2.0 will extend with RAG technology at its core, becoming a system that automatically and dynamically assembles comprehensive context for agents. The context fused by this system will come not only from documents but also include interaction memory, available tools/skills, and real-time environmental information. This marks the transition of agent development from a "handicraft workshop" model to the industrial starting point of automated context engineering.
The essence of RAG is to build a dedicated, efficient, and trustworthy external data interface for large language models; its core is Retrieval, not Generation. Starting from the practical need to solve private data access, its technical depth is reflected in the optimization of retrieval for complex unstructured data. With its deep integration into agent architectures and its development towards automated context engineering, RAG is evolving from a technology that improves Q&A quality into the core infrastructure for building the next generation of trustworthy, controllable, and scalable intelligent applications.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
sidebarIcon: LucideCog
}
---
# Configuration
Configurations for deploying RAGFlow via Docker.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideBookA
}
---
# Contribution guidelines
General guidelines for RAGFlow's community contributors.
@ -35,7 +34,7 @@ The list below mentions some contributions you can make, but it is not a complet
1. Fork our GitHub repository.
2. Clone your fork to your local machine:
`git clone git@github.com:<yourname>/ragflow.git`
3. Create a local branch:
`git checkout -b my-branch`
4. Provide sufficient information in your commit message
`git commit -m 'Provide sufficient info in your commit message'`

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideKey
}
---
# Acquire RAGFlow API key
An API key is required for the RAGFlow server to authenticate your HTTP/Python or MCP requests. This document provides instructions on obtaining a RAGFlow API key.
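For example, once you have a key, HTTP requests typically authenticate with a Bearer token. A minimal sketch, in which the endpoint path and port are illustrative:

```bash
# List datasets on a local RAGFlow server, authenticating with an API key.
# Replace the address, port, and <YOUR_API_KEY> with your own values.
curl --request GET \
     --url http://127.0.0.1:9380/api/v1/datasets \
     --header 'Authorization: Bearer <YOUR_API_KEY>'
```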

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucidePackage
}
---
# Build RAGFlow Docker image
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideMonitorPlay
}
---
# Launch service from source
A guide explaining how to set up a RAGFlow service from its source code. By following this guide, you'll be able to debug using the source code.
@ -39,7 +38,7 @@ cd ragflow/
### Install Python dependencies
1. Install uv:
```bash
pipx install uv
```
@ -91,13 +90,13 @@ docker compose -f docker/docker-compose-base.yml up -d
```
3. **Optional:** If you cannot access HuggingFace, set the HF_ENDPOINT environment variable to use a mirror site:
```bash
export HF_ENDPOINT=https://hf-mirror.com
```
4. Check the configuration in **conf/service_conf.yaml**, ensuring all hosts and ports are correctly set.
5. Run the **entrypoint.sh** script to launch the backend service:
```shell
@ -126,10 +125,10 @@ docker compose -f docker/docker-compose-base.yml up -d
3. Start up the RAGFlow frontend service:
```bash
npm run dev
```
*The following message appears, showing the IP address and port number of your frontend service:*
![](https://github.com/user-attachments/assets/0daf462c-a24d-4496-a66f-92533534e187)

View File

@ -5,20 +5,19 @@ sidebar_custom_props: {
categoryIcon: LucideTvMinimalPlay
}
---
# Launch RAGFlow MCP server
Launch an MCP server from source or via Docker.
---
A RAGFlow Model Context Protocol (MCP) server is designed as an independent component to complement the RAGFlow server. Note that an MCP server must operate alongside a properly functioning RAGFlow server.
An MCP server can start up in either self-host mode (default) or host mode:
- **Self-host mode**:
When launching an MCP server in self-host mode, you must provide an API key to authenticate the MCP server with the RAGFlow server. In this mode, the MCP server can access *only* the datasets of a specified tenant on the RAGFlow server.
- **Host mode**:
In host mode, each MCP client can access their own datasets on the RAGFlow server. However, each client request must include a valid API key to authenticate the client with the RAGFlow server.
Once a connection is established, an MCP server communicates with its client in MCP HTTP+SSE (Server-Sent Events) mode, unidirectionally pushing responses from the RAGFlow server to its client in real time.
@ -32,9 +31,9 @@ Once a connection is established, an MCP server communicates with its client in
If you wish to try out our MCP server without upgrading RAGFlow, community contributor [yiminghub2024](https://github.com/yiminghub2024) 👏 shares their recommended steps [here](#launch-an-mcp-server-without-upgrading-ragflow).
:::
## Launch an MCP server
You can start an MCP server either from source code or via Docker.
### Launch from source code
@ -51,7 +50,7 @@ uv run mcp/server/server.py --host=127.0.0.1 --port=9382 --base-url=http://127.0
# uv run mcp/server/server.py --host=127.0.0.1 --port=9382 --base-url=http://127.0.0.1:9380 --mode=host
```
Where:
- `host`: The MCP server's host address.
- `port`: The MCP server's listening port.
@ -97,7 +96,7 @@ The MCP server is designed as an optional component that complements the RAGFlow
# - --no-json-response # Disables JSON responses for the streamable-HTTP transport
```
Where:
- `mcp-host`: The MCP server's host address.
- `mcp-port`: The MCP server's listening port.
@ -122,13 +121,13 @@ Run `docker compose -f docker-compose.yml up` to launch the RAGFlow server toget
docker-ragflow-cpu-1 | Starting MCP Server on 0.0.0.0:9382 with base URL http://127.0.0.1:9380...
docker-ragflow-cpu-1 | Starting 1 task executor(s) on host 'dd0b5e07e76f'...
docker-ragflow-cpu-1 | 2025-04-18 15:41:18,816 INFO 27 ragflow_server log path: /ragflow/logs/ragflow_server.log, log levels: {'peewee': 'WARNING', 'pdfminer': 'WARNING', 'root': 'INFO'}
docker-ragflow-cpu-1 |
docker-ragflow-cpu-1 | __ __ ____ ____ ____ _____ ______ _______ ____
docker-ragflow-cpu-1 | | \/ |/ ___| _ \ / ___|| ____| _ \ \ / / ____| _ \
docker-ragflow-cpu-1 | | |\/| | | | |_) | \___ \| _| | |_) \ \ / /| _| | |_) |
docker-ragflow-cpu-1 | | | | | |___| __/ ___) | |___| _ < \ V / | |___| _ <
docker-ragflow-cpu-1 | |_| |_|\____|_| |____/|_____|_| \_\ \_/ |_____|_| \_\
docker-ragflow-cpu-1 |
docker-ragflow-cpu-1 | MCP launch mode: self-host
docker-ragflow-cpu-1 | MCP host: 0.0.0.0
docker-ragflow-cpu-1 | MCP port: 9382
@ -141,13 +140,13 @@ Run `docker compose -f docker-compose.yml up` to launch the RAGFlow server toget
docker-ragflow-cpu-1 | 2025-04-18 15:41:23,263 INFO 27 init database on cluster mode successfully
docker-ragflow-cpu-1 | 2025-04-18 15:41:25,318 INFO 27 load_model /ragflow/rag/res/deepdoc/det.onnx uses CPU
docker-ragflow-cpu-1 | 2025-04-18 15:41:25,367 INFO 27 load_model /ragflow/rag/res/deepdoc/rec.onnx uses CPU
docker-ragflow-cpu-1 | ____ ___ ______ ______ __
docker-ragflow-cpu-1 | / __ \ / | / ____// ____// /____ _ __
docker-ragflow-cpu-1 | / /_/ // /| | / / __ / /_ / // __ \| | /| / /
docker-ragflow-cpu-1 | / _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
docker-ragflow-cpu-1 | /_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
docker-ragflow-cpu-1 |
docker-ragflow-cpu-1 | 2025-04-18 15:41:29,088 INFO 27 RAGFlow version: v0.18.0-285-gb2c299fa full
docker-ragflow-cpu-1 | 2025-04-18 15:41:29,088 INFO 27 project base: /ragflow
docker-ragflow-cpu-1 | 2025-04-18 15:41:29,088 INFO 27 Current configs, from /ragflow/conf/service_conf.yaml:
@ -156,12 +155,12 @@ Run `docker compose -f docker-compose.yml up` to launch the RAGFlow server toget
docker-ragflow-cpu-1 | * Running on all addresses (0.0.0.0)
docker-ragflow-cpu-1 | * Running on http://127.0.0.1:9380
docker-ragflow-cpu-1 | * Running on http://172.19.0.6:9380
docker-ragflow-cpu-1 | ______ __ ______ __
docker-ragflow-cpu-1 | /_ __/___ ______/ /__ / ____/ _____ _______ __/ /_____ _____
docker-ragflow-cpu-1 | / / / __ `/ ___/ //_/ / __/ | |/_/ _ \/ ___/ / / / __/ __ \/ ___/
docker-ragflow-cpu-1 | / / / /_/ (__ ) ,< / /____> </ __/ /__/ /_/ / /_/ /_/ / /
docker-ragflow-cpu-1 | /_/ \__,_/____/_/|_| /_____/_/|_|\___/\___/\__,_/\__/\____/_/
docker-ragflow-cpu-1 |
docker-ragflow-cpu-1 | 2025-04-18 15:41:34,501 INFO 32 TaskExecutor: RAGFlow version: v0.18.0-285-gb2c299fa full
docker-ragflow-cpu-1 | 2025-04-18 15:41:34,501 INFO 32 Use Elasticsearch http://es01:9200 as the doc engine.
...
@ -173,11 +172,11 @@ Run `docker compose -f docker-compose.yml up` to launch the RAGFlow server toget
This section is contributed by our community contributor [yiminghub2024](https://github.com/yiminghub2024). 👏
:::
1. Prepare all MCP-specific files and directories.
i. Copy the [mcp/](https://github.com/infiniflow/ragflow/tree/main/mcp) directory to your local working directory.
ii. Copy [docker/docker-compose.yml](https://github.com/infiniflow/ragflow/blob/main/docker/docker-compose.yml) locally.
iii. Copy [docker/entrypoint.sh](https://github.com/infiniflow/ragflow/blob/main/docker/entrypoint.sh) locally.
iv. Install the required dependencies using `uv`:
- Run `uv add mcp` or
- Copy [pyproject.toml](https://github.com/infiniflow/ragflow/blob/main/pyproject.toml) locally and run `uv sync --python 3.12`.
2. Edit **docker-compose.yml** to enable MCP (disabled by default).
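Shell-wise, the file preparation in step 1 above might look like the following sketch, where `/path/to/ragflow` stands for your local copy of the repository:

```bash
# Copy the MCP-specific files into the current working directory
cp -r /path/to/ragflow/mcp ./mcp
cp /path/to/ragflow/docker/docker-compose.yml .
cp /path/to/ragflow/docker/entrypoint.sh .

# Install the MCP dependency with uv, either directly...
uv add mcp
# ...or by copying pyproject.toml locally and syncing against Python 3.12:
# uv sync --python 3.12
```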
@ -197,7 +196,7 @@ docker logs docker-ragflow-cpu-1
## Security considerations
As MCP technology is still at an early stage and no official best practices for authentication or authorization have been established, RAGFlow currently uses an [API key](./acquire_ragflow_api_key.md) to validate identity for the operations described earlier. However, in public environments, this makeshift solution could expose your MCP server to potential network attacks. Therefore, when running a local SSE server, it is recommended to bind only to localhost (`127.0.0.1`) rather than to all interfaces (`0.0.0.0`).
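For example, the source-launch command shown earlier already binds to localhost only:

```bash
# Binding to 127.0.0.1 keeps the MCP server unreachable from other hosts
uv run mcp/server/server.py --host=127.0.0.1 --port=9382 --base-url=http://127.0.0.1:9380
```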
For further guidance, see the [official MCP documentation](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations).
@ -205,11 +204,11 @@ For further guidance, see the [official MCP documentation](https://modelcontextp
### When to use an API key for authentication?
The use of an API key depends on the operating mode of your MCP server.
- **Self-host mode** (default):
When starting the MCP server in self-host mode, you should provide an API key when launching it to authenticate it with the RAGFlow server:
- If launching from source, include the API key in the command.
- If launching from Docker, update the API key in **docker/docker-compose.yml**.
- **Host mode**:
If your RAGFlow MCP server is working in host mode, include the API key in the `headers` of your client requests to authenticate your client with the RAGFlow server. An example is available [here](https://github.com/infiniflow/ragflow/blob/main/mcp/client/client.py).
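As a rough sketch, a raw host-mode request could carry the key in a request header like this; the header name (`api_key`) and endpoint path are assumptions, so treat the linked client example as authoritative:

```bash
# Hypothetical host-mode request: the client authenticates with the
# RAGFlow server by sending its own API key in a request header.
curl --request POST \
     --url http://127.0.0.1:9382/messages/ \
     --header 'api_key: <YOUR_API_KEY>' \
     --header 'Content-Type: application/json' \
     --data '{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'
```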

View File

@ -6,7 +6,6 @@ sidebar_custom_props: {
}
---
# RAGFlow MCP client examples
Python and curl MCP client examples.
@ -39,11 +38,11 @@ When interacting with the MCP server via HTTP requests, follow this initializati
1. **The client sends an `initialize` request** with protocol version and capabilities.
2. **The server replies with an `initialize` response**, including the supported protocol and capabilities.
3. **The client confirms readiness with an `initialized` notification**.
_The connection is established between the client and the server, and further operations (such as tool listing) may proceed._
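To make step 1 concrete, a minimal `initialize` request body looks roughly like this; the endpoint path, protocol version, and client name are illustrative:

```bash
# The client opens the handshake by announcing its protocol version,
# capabilities, and identity in an `initialize` JSON-RPC request.
curl --request POST \
     --url http://127.0.0.1:9382/messages/ \
     --header 'Content-Type: application/json' \
     --data '{
       "jsonrpc": "2.0",
       "id": 1,
       "method": "initialize",
       "params": {
         "protocolVersion": "2024-11-05",
         "capabilities": {},
         "clientInfo": {"name": "example-client", "version": "0.1.0"}
       }
     }'
```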
:::tip NOTE
For more information about this initialization process, see [here](https://modelcontextprotocol.io/docs/concepts/architecture#1-initialization).
:::
In the following sections, we will walk you through a complete tool calling process.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideToolCase
}
---
# RAGFlow MCP tools
The MCP server currently offers a specialized tool that helps users search for relevant information, powered by RAGFlow's DeepDoc technology:

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideShuffle
}
---
# Switch document engine
Switch your doc engine from Elasticsearch to Infinity.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
sidebarIcon: LucideCircleQuestionMark
}
---
# FAQs
Answers to questions about general features, troubleshooting, usage, and more.
@ -44,11 +43,11 @@ You can find the RAGFlow version number on the **System** page of the UI:
If you build RAGFlow from source, the version number is also in the system log:
```
 ____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
2025-02-18 10:10:43,835 INFO 1445658 RAGFlow version: v0.15.0-50-g6daae7f2
```
@ -178,7 +177,7 @@ To fix this issue, use https://hf-mirror.com instead:
3. Start up the server:
```bash
docker compose up -d
```
---
@ -211,11 +210,11 @@ You will not log in to RAGFlow unless the server is fully initialized. Run `dock
*The server is successfully initialized, if your system displays the following:*
```
 ____ ___ ______ ______ __
/ __ \ / | / ____// ____// /____ _ __
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:9380
@ -318,7 +317,7 @@ The status of a Docker container does not necessarily reflect the status
$ docker ps
```
*The status of a healthy Elasticsearch component should look as follows:*
```
91220e3285dd docker.elastic.co/elasticsearch/elasticsearch:8.11.3 "/bin/tini -- /usr/l…" 11 hours ago Up 11 hours (healthy) 9300/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp ragflow-es-01
@ -371,7 +370,7 @@ Yes, we do. See the Python files under the **rag/app** folder.
$ docker ps
```
*The status of a healthy MinIO component should look as follows:*
```bash
cd29bcb254bc quay.io/minio/minio:RELEASE.2023-12-20T01-00-02Z "/usr/bin/docker-ent…" 2 weeks ago Up 11 hours 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp ragflow-minio
@ -454,7 +453,7 @@ See [Upgrade RAGFlow](./guides/upgrade_ragflow.mdx) for more information.
To switch your document engine from Elasticsearch to [Infinity](https://github.com/infiniflow/infinity):
1. Stop all running containers:
1. Stop all running containers:
$ docker compose -f docker/docker-compose.yml down -v
@ -464,7 +463,7 @@ To switch your document engine from Elasticsearch to [Infinity](https://github.c
:::
2. In **docker/.env**, set `DOC_ENGINE=${DOC_ENGINE:-infinity}`
3. Restart your Docker containers:
```bash
$ docker compose -f docker-compose.yml up -d
@ -509,12 +508,12 @@ From v0.22.0 onwards, RAGFlow includes MinerU (&ge; 2.6.3) as an optional PDF pa
- `"vlm-mlx-engine"`
- `"vlm-vllm-async-engine"`
- `"vlm-lmdeploy-engine"`.
- `MINERU_SERVER_URL`: (optional) The downstream vLLM HTTP server (e.g., `http://vllm-host:30000`). Applicable when `MINERU_BACKEND` is set to `"vlm-http-client"`.
- `MINERU_OUTPUT_DIR`: (optional) The local directory for holding the outputs of the MinerU API service (zip/JSON) before ingestion.
- `MINERU_DELETE_OUTPUT`: Whether to delete temporary output when a temporary directory is used:
- `1`: Delete.
- `0`: Retain.
3. In the web UI, navigate to your dataset's **Configuration** page and find the **Ingestion pipeline** section:
- If you decide to use a chunking method from the **Built-in** dropdown, ensure it supports PDF parsing, then select **MinerU** from the **PDF parser** dropdown.
- If you use a custom ingestion pipeline instead, select **MinerU** in the **PDF parser** section of the **Parser** component.
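Putting the variables from step 2 together, a **docker/.env** fragment might look like this sketch, in which all values are illustrative:

```bash
# Hypothetical MinerU settings in docker/.env
MINERU_BACKEND=vlm-http-client            # delegate parsing to a vLLM HTTP server
MINERU_SERVER_URL=http://vllm-host:30000  # only read when MINERU_BACKEND=vlm-http-client
MINERU_OUTPUT_DIR=/tmp/mineru_output      # where MinerU API outputs land before ingestion
MINERU_DELETE_OUTPUT=1                    # 1: delete temporary output; 0: retain it
```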

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideSquareTerminal
}
---
# Admin CLI
The RAGFlow Admin CLI is a command-line-based system administration tool that offers administrators an efficient and flexible method for system interaction and control. Operating on a client-server architecture, it communicates in real-time with the Admin Service, receiving administrator commands and dynamically returning execution results.
@ -30,9 +29,9 @@ The RAGFlow Admin CLI is a command-line-based system administration tool that of
The default password is `admin`.
**Parameters:**
- `-h`: RAGFlow admin server host address
- `-p`: RAGFlow admin server port
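For illustration only, connecting to a locally running Admin Service might look like the sketch below; the client script path and port are assumptions, not the documented invocation:

```bash
# Hypothetical invocation: point the CLI at the Admin Service host and port,
# then log in with the default password (admin).
python admin/client/admin_client.py -h 127.0.0.1 -p 9381
```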
## Default administrative account

View File

@ -5,8 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideActivity
}
---
# Admin Service
The Admin Service is the core backend management service of the RAGFlow system, providing comprehensive system administration capabilities through centralized API interfaces for managing and controlling the entire platform. Adopting a client-server architecture, it supports access and operations via both a Web UI and an Admin CLI, ensuring flexible and efficient execution of administrative tasks.
@ -27,7 +25,7 @@ With its unified interface design, the Admin Service combines the convenience of
python admin/server/admin_server.py
```
The service will start and listen for incoming connections from the CLI on the configured port.
### Using docker image

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucidePalette
}
---
# Admin UI
The RAGFlow Admin UI is a web-based interface that provides comprehensive system status monitoring and user management capabilities.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: RagAiAgent
}
---
# Agent component
A component equipped with reasoning, tool usage, and multi-agent collaboration capabilities.
@ -19,7 +18,7 @@ An **Agent** component fine-tunes the LLM and sets its prompt. From v0.20.5 onwa
## Scenarios
An **Agent** component is essential when you need the LLM to assist with summarizing, translating, or controlling various tasks.
## Prerequisites
@ -31,13 +30,13 @@ An **Agent** component is essential when you need the LLM to assist with summari
## Quickstart
### 1. Click on an **Agent** component to show its configuration panel
The corresponding configuration panel appears to the right of the canvas. Use this panel to define and fine-tune the **Agent** component's behavior.
### 2. Select your model
Click **Model**, and select a chat model from the dropdown menu.
:::tip NOTE
If no model appears, check whether you have added a chat model on the **Model providers** page.
@ -58,7 +57,7 @@ In this quickstart, we assume your **Agent** component is used standalone (witho
### 5. Skip Tools and Agent
The **+ Add tools** and **+ Add agent** sections are used *only* when you need to configure your **Agent** component as a planner (with tools or sub-Agents beneath). In this quickstart, we assume your **Agent** component is used standalone (without tools or sub-Agents beneath).
### 6. Choose the next component
@ -74,7 +73,7 @@ In this section, we assume your **Agent** will be configured as a planner, with
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/mcp_page.jpg)
### 2. Configure your Tavily MCP server
Update your MCP server's name, URL (including the API key), server type, and other necessary settings. When configured correctly, the available tools will be displayed.
@ -113,7 +112,7 @@ On the canvas, click the newly-populated Tavily server to view and select its av
Click the dropdown menu of **Model** to show the model configuration window.
- **Model**: The chat model to use.
- Ensure you set the chat model correctly on the **Model providers** page.
- You can use different models for different components to increase flexibility or improve overall performance.
- **Creativity**: A shortcut to **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty** settings, indicating the freedom level of the model. From **Improvise**, **Precise**, to **Balance**, each preset configuration corresponds to a unique combination of **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty**.
@ -121,21 +120,21 @@ Click the dropdown menu of **Model** to show the model configuration window.
- **Improvise**: Produces more creative responses.
- **Precise**: (Default) Produces more conservative responses.
- **Balance**: A middle ground between **Improvise** and **Precise**.
- **Temperature**: The randomness level of the model's output.
Defaults to 0.1.
- Lower values lead to more deterministic and predictable outputs.
- Higher values lead to more creative and varied outputs.
- A temperature of zero results in the same output for the same prompt.
- **Top P**: Nucleus sampling.
- Reduces the likelihood of generating repetitive or unnatural text by setting a threshold *P* and restricting the sampling to tokens with a cumulative probability exceeding *P*.
- Defaults to 0.3.
- **Presence penalty**: Encourages the model to include a more diverse range of tokens in the response.
- A higher **presence penalty** value results in the model being more likely to generate tokens not yet included in the generated text.
- Defaults to 0.4.
- **Frequency penalty**: Discourages the model from repeating the same words or phrases too frequently in the generated text.
- A higher **frequency penalty** value results in the model being more conservative in its use of repeated tokens.
- Defaults to 0.7.
- **Max tokens**:
This sets the maximum length of the model's output, measured in the number of tokens (words or pieces of words). It is disabled by default, allowing the model to determine the number of tokens in its responses.
:::tip NOTE
@ -145,7 +144,7 @@ Click the dropdown menu of **Model** to show the model configuration window.
### System prompt
Typically, you use the system prompt to describe the task for the LLM, specify how it should respond, and outline other miscellaneous requirements. We do not plan to elaborate on this topic, as it can be as extensive as prompt engineering. However, please be aware that the system prompt is often used in conjunction with keys (variables), which serve as various data inputs for the LLM.
An **Agent** component relies on keys (variables) to specify its data inputs. Its immediate upstream component is *not* necessarily its data input, and the arrows in the workflow indicate *only* the processing sequence. Keys in an **Agent** component are used in conjunction with the system prompt to specify data inputs for the LLM. Use a forward slash `/` or the **(x)** button to show the keys to use.
@ -193,11 +192,11 @@ From v0.20.5 onwards, four framework-level prompt blocks are available in the **
The user-defined prompt. Defaults to `sys.query`, the user query. As a general rule, when using the **Agent** component as a standalone module (not as a planner), you usually need to specify the corresponding **Retrieval** component's output variable (`formalized_content`) here as part of the input to the LLM.
### Tools
You can use an **Agent** component as a collaborator that reasons and reflects with the aid of other tools; for instance, **Retrieval** can serve as one such tool for an **Agent**.
### Agent
You use an **Agent** component as a collaborator that reasons and reflects with the aid of subagents or other tools, forming a multi-agent system.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideMessageSquareDot
}
---
# Await response component
A component that halts the workflow and awaits user input.
@ -26,7 +25,7 @@ Whether to show the message defined in the **Message** field.
### Message
The static message to send out.
Click **+ Add message** to add message options. When multiple messages are supplied, the **Message** component randomly selects one to send.
@ -34,9 +33,9 @@ Click **+ Add message** to add message options. When multiple messages are suppl
You can define global variables within the **Await response** component, which can be either mandatory or optional. Once set, users will need to provide values for these variables when engaging with the agent. Click **+** to add a global variable, each with the following attributes:
- **Name**: _Required_
A descriptive name providing additional details about the variable.
- **Type**: _Required_
The type of the variable:
- **Single-line text**: Accepts a single line of text without line breaks.
- **Paragraph text**: Accepts multiple lines of text, including line breaks.
@ -44,7 +43,7 @@ You can define global variables within the **Await response** component, which c
- **file upload**: Requires the user to upload one or multiple files.
- **Number**: Accepts a number as input.
- **Boolean**: Requires the user to toggle between on and off.
- **Key**: _Required_
The unique variable name.
- **Optional**: A toggle indicating whether the variable is optional.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideHome
}
---
# Begin component
The starting component in a workflow.
@ -39,9 +38,9 @@ An agent in conversational mode begins with an opening greeting. It is the agent
You can define global variables within the **Begin** component, which can be either mandatory or optional. Once set, users will need to provide values for these variables when engaging with the agent. Click **+ Add variable** to add a global variable, each with the following attributes:
- **Name**: _Required_
A descriptive name providing additional details about the variable.
- **Type**: _Required_
The type of the variable:
- **Single-line text**: Accepts a single line of text without line breaks.
- **Paragraph text**: Accepts multiple lines of text, including line breaks.
@ -49,7 +48,7 @@ You can define global variables within the **Begin** component, which can be eit
- **file upload**: Requires the user to upload one or multiple files.
- **Number**: Accepts a number as input.
- **Boolean**: Requires the user to toggle between on and off.
- **Key**: _Required_
The unique variable name.
- **Optional**: A toggle indicating whether the variable is optional.

View File

@ -5,10 +5,9 @@ sidebar_custom_props: {
categoryIcon: LucideSwatchBook
}
---
# Categorize component
A component that classifies user inputs and applies strategies accordingly.
---
@ -26,7 +25,7 @@ A **Categorize** component is essential when you need the LLM to help you identi
Select the source for categorization.
The **Categorize** component relies on query variables to specify its data inputs (queries). All global variables defined before the **Categorize** component are available in the dropdown list.
### Input
@ -34,7 +33,7 @@ The **Categorize** component relies on query variables to specify its data input
The **Categorize** component relies on input variables to specify its data inputs (queries). Click **+ Add variable** in the **Input** section to add the desired input variables. There are two types of input variables: **Reference** and **Text**.
- **Reference**: Uses a component's output or a user input as the data source. You are required to select from the dropdown menu:
- A component ID under **Component Output**, or
- A global variable under **Begin input**, which is defined in the **Begin** component.
- **Text**: Uses fixed text as the query. You are required to enter static text.
@ -42,29 +41,29 @@ The **Categorize** component relies on input variables to specify its data input
Click the dropdown menu of **Model** to show the model configuration window.
- **Model**: The chat model to use.
- Ensure you set the chat model correctly on the **Model providers** page.
- You can use different models for different components to increase flexibility or improve overall performance.
- **Creativity**: A shortcut to **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty** settings, indicating the freedom level of the model. From **Improvise**, **Precise**, to **Balance**, each preset configuration corresponds to a unique combination of **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty**.
This parameter has three options:
- **Improvise**: Produces more creative responses.
- **Precise**: (Default) Produces more conservative responses.
- **Balance**: A middle ground between **Improvise** and **Precise**.
- **Temperature**: The randomness level of the model's output.
Defaults to 0.1.
- Lower values lead to more deterministic and predictable outputs.
- Higher values lead to more creative and varied outputs.
- A temperature of zero results in the same output for the same prompt.
- **Top P**: Nucleus sampling.
- Reduces the likelihood of generating repetitive or unnatural text by setting a threshold *P* and restricting the sampling to tokens with a cumulative probability exceeding *P*.
- Defaults to 0.3.
- **Presence penalty**: Encourages the model to include a more diverse range of tokens in the response.
- A higher **presence penalty** value results in the model being more likely to generate tokens not yet included in the generated text.
- Defaults to 0.4.
- **Frequency penalty**: Discourages the model from repeating the same words or phrases too frequently in the generated text.
- A higher **frequency penalty** value results in the model being more conservative in its use of repeated tokens.
- Defaults to 0.7.
- **Max tokens**:
This sets the maximum length of the model's output, measured in the number of tokens (words or pieces of words). It is disabled by default, allowing the model to determine the number of tokens in its responses.
:::tip NOTE
@ -84,7 +83,7 @@ This feature is used for multi-turn dialogue *only*. If your **Categorize** comp
### Category name
A **Categorize** component must have at least two categories. This field sets the name of the category. Click **+ Add Item** to include the intended categories.
:::tip NOTE
You will notice that the category name is auto-populated. No worries. Each category is assigned a random name upon creation. Feel free to change it to a name that is understandable to the LLM.
@ -92,7 +91,7 @@ You will notice that the category name is auto-populated. No worries. Each categ
#### Description
Description of this category.
You can input criteria, situations, or information that may help the LLM determine which inputs belong in this category.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideBlocks
}
---
# Title chunker component
A component that splits texts into chunks by heading level.
@ -26,7 +25,7 @@ Placing a **Title chunker** after a **Token chunker** is invalid and will cause
### Hierarchy
Specifies the heading level to define chunk boundaries:
- H1
- H2

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideBlocks
}
---
# Token chunker component
A component that splits texts into chunks, respecting a maximum token limit and using delimiters to find optimal breakpoints.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideCodeXml
}
---
# Code component
A component that enables users to integrate Python or JavaScript code into their Agent for dynamic data processing.
@ -36,7 +35,7 @@ If your RAGFlow Sandbox is not working, please be sure to consult the [Troublesh
### 3. (Optional) Install necessary dependencies
If you need to import your own Python or JavaScript packages into Sandbox, please follow the commands provided in the [How to import my own Python or JavaScript packages into Sandbox?](#how-to-import-my-own-python-or-javascript-packages-into-sandbox) section to install the additional dependencies.
### 4. Enable Sandbox-specific settings in RAGFlow
@ -46,11 +45,11 @@ Ensure all Sandbox-specific settings are enabled in **ragflow/docker/.env**.
Any changes to the configuration or environment *require* a full service restart to take effect.
## Configurations
### Input
You can specify multiple input sources for the **Code** component. Click **+ Add variable** in the **Input variables** section to include the desired input variables.
### Code
@ -62,7 +61,7 @@ If your code implementation includes defined variables, whether input or output
#### A Python code example
```Python
def main(arg1: str, arg2: str) -> dict:
return {
"result": arg1 + arg2,
@ -105,7 +104,7 @@ The defined output variable(s) will be auto-populated here.
### `HTTPConnectionPool(host='sandbox-executor-manager', port=9385): Read timed out.`
**Root cause**
- You did not properly install gVisor and `runsc` was not recognized as a valid Docker runtime.
- You did not pull the required base images for the runners and no runner was started.
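A quick, generic way to check the first cause, namely whether `runsc` is registered as a Docker runtime:

```bash
# List the runtimes Docker knows about; runsc should appear among them
docker info --format '{{json .Runtimes}}'
```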
@ -147,11 +146,11 @@ docker build -t sandbox-executor-manager:latest ./sandbox/executor_manager
### `HTTPConnectionPool(host='none', port=9385): Max retries exceeded.`
**Root cause**
`sandbox-executor-manager` is not mapped in `/etc/hosts`.
**Solution**
Add a new entry to `/etc/hosts`:
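Assuming the executor manager runs on the same machine, the entry maps the hostname to localhost:

```bash
# Append the mapping to /etc/hosts (requires root)
echo "127.0.0.1 sandbox-executor-manager" | sudo tee -a /etc/hosts
```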
@ -159,11 +158,11 @@ Add a new entry to `/etc/hosts`:
### `Container pool is busy`
**Root cause**
All runners are currently in use, executing tasks.
**Solution**
Please try again shortly or increase the pool size in the configuration to improve availability and reduce waiting times.
@ -208,7 +207,7 @@ To import your JavaScript packages, navigate to `sandbox_base_image/nodejs` and
(ragflow) ➜ ragflow/sandbox main ✓ cd sandbox_base_image/nodejs
(ragflow) ➜ ragflow/sandbox/sandbox_base_image/nodejs main ✓ npm install lodash
(ragflow) ➜ ragflow/sandbox/sandbox_base_image/nodejs main ✓ cd ../.. # go back to sandbox root directory

View File

@ -5,14 +5,13 @@ sidebar_custom_props: {
categoryIcon: RagSql
}
---
# Execute SQL tool
A tool that executes SQL queries on a specified relational database.
---
The **Execute SQL** tool enables you to connect to a relational database and run SQL queries, whether entered directly or generated by the system's Text2SQL capability via an **Agent** component.
## Prerequisites

View File

@ -5,10 +5,9 @@ sidebar_custom_props: {
categoryIcon: RagHTTP
}
---
# HTTP request component
A component that calls remote services.
---

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideListPlus
}
---
# Indexer component
A component that defines how chunks are indexed.

View File

@ -5,19 +5,18 @@ sidebar_custom_props: {
categoryIcon: LucideRepeat2
}
---
# Iteration component
A component that splits text input into text segments and iterates a predefined workflow for each one.
---
An **Iteration** component can divide text input into text segments and apply its built-in component workflow to each segment.
## Scenario
An **Iteration** component is essential when a workflow loop is required and the loop count is *not* fixed but depends on the number of segments created from the output of specific agent components.
- If, for instance, you plan to feed several paragraphs into an LLM for content generation, each with its own focus, and feeding them to the LLM all at once could create confusion or contradictions, then you can use an **Iteration** component, which encapsulates a **Generate** component, to repeat the content generation process for each paragraph.
- Another example: If you wish to use the LLM to translate a lengthy paper into a target language without exceeding its token limit, consider using an **Iteration** component, which encapsulates a **Generate** component, to break the paper into smaller pieces and repeat the translation process for each one.
@ -32,12 +31,12 @@ Each **Iteration** component includes an internal **IterationItem** component. T
The **IterationItem** component is visible *only* to the components encapsulated by the current **Iteration** components.
:::
### Build an internal workflow
You are allowed to pull other components into the **Iteration** component to build an internal workflow, and these "added internal components" are no longer visible to components outside of the current **Iteration** component.
:::danger IMPORTANT
To reference the created text segments from an added internal component, simply add a **Reference** variable that equals **IterationItem** within the **Input** section of that internal component. There is no need to reference the corresponding external component, as the **IterationItem** component manages the loop of the workflow for all created text segments.
:::
:::tip NOTE
@ -51,7 +50,7 @@ An added internal component can reference an external component when necessary.
The **Iteration** component uses input variables to specify its data inputs, namely the texts to be segmented. You are allowed to specify multiple input sources for the **Iteration** component. Click **+ Add variable** in the **Input** section to include the desired input variables. There are two types of input variables: **Reference** and **Text**.
- **Reference**: Uses a component's output or a user input as the data source. You are required to select from the dropdown menu:
- A component ID under **Component Output**, or
- A global variable under **Begin input**, which is defined in the **Begin** component.
- **Text**: Uses fixed text as the query. You are required to enter static text.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideMessageSquareReply
}
---
# Message component
A component that sends out a static or dynamic message.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideFilePlay
}
---
# Parser component
A component that sets the parsing rules for your dataset.
@ -57,12 +56,12 @@ Starting from v0.22.0, RAGFlow includes MinerU (&ge; 2.6.3) as an optional PDF p
- `"vlm-mlx-engine"`
- `"vlm-vllm-async-engine"`
- `"vlm-lmdeploy-engine"`.
- `MINERU_SERVER_URL`: (optional) The downstream vLLM HTTP server (e.g., `http://vllm-host:30000`). Applicable when `MINERU_BACKEND` is set to `"vlm-http-client"`.
- `MINERU_OUTPUT_DIR`: (optional) The local directory for holding the outputs of the MinerU API service (zip/JSON) before ingestion.
- `MINERU_DELETE_OUTPUT`: Whether to delete temporary output when a temporary directory is used:
- `1`: Delete.
- `0`: Retain.
3. In the web UI, navigate to your dataset's **Configuration** page and find the **Ingestion pipeline** section:
- If you decide to use a chunking method from the **Built-in** dropdown, ensure it supports PDF parsing, then select **MinerU** from the **PDF parser** dropdown.
- If you use a custom ingestion pipeline instead, select **MinerU** in the **PDF parser** section of the **Parser** component.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideFolderSearch
}
---
# Retrieval component
A component that retrieves information from specified datasets.
@ -24,13 +23,13 @@ Ensure you [have properly configured your target dataset(s)](../../dataset/confi
## Quickstart
### 1. Click on a **Retrieval** component to show its configuration panel
The corresponding configuration panel appears to the right of the canvas. Use this panel to define and fine-tune the **Retrieval** component's search behavior.
### 2. Input query variable(s)
The **Retrieval** component depends on query variables to specify its queries.
:::caution IMPORTANT
- If you use the **Retrieval** component as a standalone workflow module, input query variables in the **Input Variables** text box.
@ -77,7 +76,7 @@ Select the query source for retrieval. Defaults to `sys.query`, which is the def
The **Retrieval** component relies on query variables to specify its queries. All global variables defined before the **Retrieval** component can also be used as queries. Use the `(x)` button or type `/` to show all the available query variables.
### Knowledge bases
Select the dataset(s) to retrieve data from.
@ -113,7 +112,7 @@ Using a rerank model will *significantly* increase the system's response time.
### Empty response
- Set this as a response if no results are retrieved from the dataset(s) for your query, or
- Leave this field blank to allow the chat model to improvise when nothing is found.
:::caution WARNING

View File

@ -5,10 +5,9 @@ sidebar_custom_props: {
categoryIcon: LucideSplit
}
---
# Switch component
A component that evaluates whether specified conditions are met and directs the flow of execution accordingly.
---
@ -16,7 +15,7 @@ A **Switch** component evaluates conditions based on the output of specific comp
## Scenarios
A **Switch** component is essential for condition-based direction of execution flow. While it shares similarities with the [Categorize](./categorize.mdx) component, which is also used in multi-pronged strategies, the key distinction lies in their approach: the evaluation of the **Switch** component is rule-based, whereas the **Categorize** component involves AI and uses an LLM for decision-making.
## Configurations
@ -42,12 +41,12 @@ When you have added multiple conditions for a specific case, a **Logical operato
- Greater equal
- Less than
- Less equal
- Contains
- Not contains
- Starts with
- Ends with
- Is empty
- Not empty
- **Value**: A single value, which can be an integer, float, or string.
- Delimiters, multiple values, or expressions are *not* supported.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideType
}
---
# Text processing component
A component that merges or splits texts.
@ -27,7 +26,7 @@ Appears only when you select **Split** as method.
The variable to be split. Type `/` to quickly insert variables.
### Script
Template for the merge. Appears only when you select **Merge** as method. Type `/` to quickly insert variables.

View File

@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideFileStack
}
---
# Transformer component
A component that uses an LLM to extract insights from the chunks.
@ -16,7 +15,7 @@ A **Transformer** component indexes chunks and configures their storage formats
## Scenario
A **Transformer** component is essential when you need the LLM to extract new information, such as keywords, questions, metadata, and summaries, from the original chunks.
## Configurations
@ -24,29 +23,29 @@ A **Transformer** component is essential when you need the LLM to extract new in
Click the dropdown menu of **Model** to show the model configuration window.
- **Model**: The chat model to use.
- Ensure you set the chat model correctly on the **Model providers** page.
- You can use different models for different components to increase flexibility or improve overall performance.
- **Creativity**: A shortcut to **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty** settings, indicating the freedom level of the model. From **Improvise**, **Precise**, to **Balance**, each preset configuration corresponds to a unique combination of **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty**.
This parameter has three options:
- **Improvise**: Produces more creative responses.
- **Precise**: (Default) Produces more conservative responses.
- **Balance**: A middle ground between **Improvise** and **Precise**.
- **Temperature**: The randomness level of the model's output.
Defaults to 0.1.
- Lower values lead to more deterministic and predictable outputs.
- Higher values lead to more creative and varied outputs.
- A temperature of zero results in the same output for the same prompt.
- **Top P**: Nucleus sampling.
- Reduces the likelihood of generating repetitive or unnatural text by setting a threshold *P* and restricting the sampling to tokens with a cumulative probability exceeding *P*.
- Defaults to 0.3.
- **Presence penalty**: Encourages the model to include a more diverse range of tokens in the response.
- A higher **presence penalty** value results in the model being more likely to generate tokens not yet included in the generated text.
- Defaults to 0.4.
- **Frequency penalty**: Discourages the model from repeating the same words or phrases too frequently in the generated text.
- A higher **frequency penalty** value results in the model being more conservative in its use of repeated tokens.
- Defaults to 0.7.
- **Max tokens**:
This sets the maximum length of the model's output, measured in the number of tokens (words or pieces of words). It is disabled by default, allowing the model to determine the number of tokens in its responses.
:::tip NOTE
@ -65,7 +64,7 @@ Select the type of output to be generated by the LLM:
### System prompt
Typically, you use the system prompt to describe the task for the LLM, specify how it should respond, and outline other miscellaneous requirements. We do not plan to elaborate on this topic, as it can be as extensive as prompt engineering.
:::tip NOTE
The system prompt here automatically updates to match your selected **Result destination**.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideBookOpenText
}
---
# Introduction to agents
Key concepts, basic operations, a quick view of the agent editor.
@ -27,7 +26,7 @@ Agents and RAG are complementary techniques, each enhancing the others capabi
:::tip NOTE
Before proceeding, ensure that:
1. You have properly set the LLM to use. See the guides on [Configure your API key](../models/llm_api_key_setup.md) or [Deploy a local LLM](../models/deploy_local_llm.mdx) for more information.
2. You have a dataset configured and the corresponding files properly parsed. See the guide on [Configure a dataset](../dataset/configure_knowledge_base.md) for more information.
@ -44,7 +43,7 @@ We also provide templates catered to different business scenarios. You can eithe
![agent_template](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/agent_template_list.jpg)
2. To create an agent from scratch, click **Create Agent**. Alternatively, to create an agent from one of our templates, click the desired card, such as **Deep Research**, name your agent in the pop-up dialogue, and click **OK** to confirm.
*You are now taken to the **no-code workflow editor** page.*
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideMonitorDot
}
---
# Embed agent into webpage
You can use an iframe to embed an agent into a third-party webpage.
View File
@ -5,12 +5,11 @@ sidebar_custom_props: {
categoryIcon: LucideCodesandbox
}
---
# Sandbox quickstart
A secure, pluggable code execution backend designed for RAGFlow and other applications requiring isolated code execution environments.
## Features:
- Seamless RAGFlow Integration — Works out-of-the-box with the code component of RAGFlow.
- High Security — Uses gVisor for syscall-level sandboxing to isolate execution.
@ -58,7 +57,7 @@ Next, build the executor manager image:
docker build -t sandbox-executor-manager:latest ./executor_manager
```
## Running with RAGFlow
1. Verify that gVisor is properly installed and operational.
View File
@ -5,14 +5,13 @@ sidebar_custom_props: {
categoryIcon: LucideSearch
}
---
# Search
Conduct an AI search.
---
An AI search is a single-turn AI conversation using a predefined retrieval strategy (a hybrid search of weighted keyword similarity and weighted vector similarity) and the system's default chat model. It does not involve advanced RAG strategies like knowledge graph, auto-keyword, or auto-question. The related chunks are listed below the chat model's response in descending order based on their similarity scores.
![Create search app](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/create_search_app.jpg)
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideScanSearch
}
---
# Implement deep research
Implements deep research for agentic reasoning.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideVariable
}
---
# Set variables
Set variables to be used together with the system prompt for your LLM.
@ -94,7 +93,7 @@ from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
session = assistant.create_session()
print("\n==================== Miss R =====================\n")
print("Hello. What can I do for you?")
@ -102,9 +101,9 @@ print("Hello. What can I do for you?")
while True:
question = input("\n==================== User =====================\n> ")
style = input("Please enter your preferred style (e.g., formal, informal, hilarious): ")
print("\n==================== Miss R =====================\n")
cont = ""
for ans in session.ask(question, stream=True, style=style):
print(ans.content[len(cont):], end='', flush=True)
cont = ans.content  # remember what has been printed so only the new delta is shown
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideBot
}
---
# Start AI chat
Initiate an AI-powered chat with a configured chat assistant.
@ -45,8 +44,8 @@ You start an AI conversation by creating an assistant.
- **Rerank model** sets the reranker model to use. It is left empty by default.
- If **Rerank model** is left empty, the hybrid score system uses keyword similarity and vector similarity, and the default weight assigned to the vector similarity component is 1-0.7=0.3.
- If **Rerank model** is selected, the hybrid score system uses keyword similarity and reranker score, and the default weight assigned to the reranker score is 1-0.7=0.3.
- [Cross-language search](../../references/glossary.mdx#cross-language-search): Optional
Select one or more target languages from the dropdown menu. The system's default chat model will then translate your query into the selected target language(s). This translation ensures accurate semantic matching across languages, allowing you to retrieve relevant results regardless of language differences.
- When selecting target languages, please ensure that these languages are present in the dataset to guarantee an effective search.
- If no target language is selected, the system will search only in the language of your query, which may cause relevant information in other languages to be missed.
- **Variable** refers to the variables (keys) to be used in the system prompt. `{knowledge}` is a reserved variable. Click **Add** to add more variables for the system prompt.
@ -58,23 +57,23 @@ You start an AI conversation by creating an assistant.
4. Update Model-specific Settings:
- In **Model**: you select the chat model. Though you have selected the default chat model in **System Model Settings**, RAGFlow allows you to choose an alternative chat model for your dialogue.
- **Creativity**: A shortcut to **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty** settings, indicating the freedom level of the model. From **Improvise**, **Precise**, to **Balance**, each preset configuration corresponds to a unique combination of **Temperature**, **Top P**, **Presence penalty**, and **Frequency penalty**.
This parameter has three options:
- **Improvise**: Produces more creative responses.
- **Precise**: (Default) Produces more conservative responses.
- **Balance**: A middle ground between **Improvise** and **Precise**.
- **Temperature**: The randomness level of the model's output.
Defaults to 0.1.
- Lower values lead to more deterministic and predictable outputs.
- Higher values lead to more creative and varied outputs.
- A temperature of zero results in the same output for the same prompt.
- **Top P**: Nucleus sampling.
- Reduces the likelihood of generating repetitive or unnatural text by setting a threshold *P* and restricting the sampling to tokens with a cumulative probability exceeding *P*.
- Defaults to 0.3.
- **Presence penalty**: Encourages the model to include a more diverse range of tokens in the response.
- A higher **presence penalty** value results in the model being more likely to generate tokens that have not yet been included in the generated text.
- Defaults to 0.4.
- **Frequency penalty**: Discourages the model from repeating the same words or phrases too frequently in the generated text.
- A higher **frequency penalty** value results in the model being more conservative in its use of repeated tokens.
- Defaults to 0.7.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: SiGoogledrive
}
---
# Add Google Drive
## 1. Create a Google Cloud Project
@ -13,9 +12,9 @@ sidebar_custom_props: {
You can either create a dedicated project for RAGFlow or use an existing
Google Cloud external project.
**Steps:**
1. Open the project creation page\
`https://console.cloud.google.com/projectcreate`
![placeholder-image](https://github.com/infiniflow/ragflow-docs/blob/040e4acd4c1eac6dc73dc44e934a6518de78d097/images/google_drive/image1.jpeg?raw=true)
2. Select **External** as the Audience
![placeholder-image](https://github.com/infiniflow/ragflow-docs/blob/040e4acd4c1eac6dc73dc44e934a6518de78d097/images/google_drive/image2.png?raw=true)
@ -99,11 +98,11 @@ Navigate to the Google API Library:\
Enable the following APIs:
- Google Drive API
- Admin SDK API
- Google Sheets API
- Google Docs API
![placeholder-image](https://github.com/infiniflow/ragflow-docs/blob/040e4acd4c1eac6dc73dc44e934a6518de78d097/images/google_drive/image15.png?raw=true)
@ -129,7 +128,7 @@ Enable the following APIs:
![placeholder-image](https://github.com/infiniflow/ragflow-docs/blob/040e4acd4c1eac6dc73dc44e934a6518de78d097/images/google_drive/image23.png?raw=true)
5. Click **Authorize with Google**
A browser window will appear.
![placeholder-image](https://github.com/infiniflow/ragflow-docs/blob/040e4acd4c1eac6dc73dc44e934a6518de78d097/images/google_drive/image25.jpeg?raw=true)
Click:
- **Continue**
- **Select All → Continue**
- Authorization should succeed
- Select **OK** to add the data source
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideFileCodeCorner
}
---
# Auto-extract metadata
Automatically extract metadata from uploaded files.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideSlidersHorizontal
}
---
# Auto-keyword Auto-question
import APITable from '@site/src/components/APITable';
@ -23,14 +22,14 @@ Enabling this feature increases document indexing time and uses extra tokens, as
Auto-keyword refers to the auto-keyword generation feature of RAGFlow. It uses a chat model to generate a set of keywords or synonyms from each chunk to correct errors and enhance retrieval accuracy. This feature is implemented as a slider under **Page rank** on the **Configuration** page of your dataset.
**Values**:
- 0: (Default) Disabled.
- Between 3 and 5 (inclusive): Recommended if you have chunks of approximately 1,000 characters.
- 30 (maximum)
:::tip NOTE
- If your chunk size increases, you can increase the value accordingly. Please note, as the value increases, the marginal benefit decreases.
- An Auto-keyword value must be an integer. If you set it to a non-integer, say 1.7, it will be rounded down to the nearest integer, which in this case is 1.
:::
@ -40,12 +39,12 @@ Auto-question is a feature of RAGFlow that automatically generates questions fro
**Values**:
- 0: (Default) Disabled.
- 1 or 2: Recommended if you have chunks of approximately 1,000 characters.
- 10 (maximum)
:::tip NOTE
- If your chunk size increases, you can increase the value accordingly. Please note, as the value increases, the marginal benefit decreases.
- An Auto-question value must be an integer. If you set it to a non-integer, say 1.7, it will be rounded down to the nearest integer, which in this case is 1.
:::
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideGroup
}
---
# Configure child chunking strategy
Set parent-child chunking strategy to improve retrieval.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideCog
}
---
# Configure dataset
Most of RAGFlow's chat assistants and Agents are based on datasets. Each of RAGFlow's datasets serves as a knowledge source, *parsing* files uploaded from your local machine and file references generated in RAGFlow's File system into the real 'knowledge' for future AI chats. This guide demonstrates some basic usages of the dataset feature, covering the following topics:
@ -25,7 +24,7 @@ _Each time a dataset is created, a folder with the same name is generated in the
## Configure dataset
The following screenshot shows the configuration page of a dataset. A proper configuration of your dataset is crucial for future AI chats. For example, choosing the wrong embedding model or chunking method would cause unexpected semantic loss or mismatched answers in chats.
![dataset configuration](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/configure_knowledge_base.jpg)
@ -63,14 +62,14 @@ You can also change a file's chunking method on the **Files** page.
<details>
<summary>From v0.21.0 onward, RAGFlow supports ingestion pipeline for customized data ingestion and cleansing workflows.</summary>
To use a customized data pipeline:
1. On the **Agent** page, click **+ Create agent** > **Create from blank**.
2. Select **Ingestion pipeline** and name your data pipeline in the popup, then click **Save** to show the data pipeline canvas.
3. After updating your data pipeline, click **Save** on the top right of the canvas.
4. Navigate to the **Configuration** page of your dataset, select **Choose pipeline** in **Ingestion pipeline**.
*Your saved data pipeline will appear in the dropdown menu below.*
</details>
@ -86,9 +85,9 @@ Some embedding models are optimized for specific languages, so performance may b
### Upload file
- RAGFlow's File system allows you to link a file to multiple datasets, in which case each target dataset holds a reference to the file.
- In **Knowledge Base**, you are also given the option of uploading a single file or a folder of files (bulk upload) from your local machine to a dataset, in which case the dataset holds file copies.
While uploading files directly to a dataset seems more convenient, we *highly* recommend uploading files to RAGFlow's File system and then linking them to the target datasets. This way, you can avoid permanently deleting files uploaded to the dataset.
### Parse file
@ -96,14 +95,14 @@ File parsing is a crucial topic in dataset configuration. The meaning of file pa
![parse file](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/parse_file.jpg)
- As shown above, RAGFlow allows you to use a different chunking method for a particular file, offering flexibility beyond the default method.
- As shown above, RAGFlow allows you to enable or disable individual files, offering finer control over dataset-based AI chats.
### Intervene with file parsing results
RAGFlow features visibility and explainability, allowing you to view the chunking results and intervene where necessary. To do so:
1. Click a file that has completed parsing to view the chunking results:
_You are taken to the **Chunk** page:_
@ -116,7 +115,7 @@ RAGFlow features visibility and explainability, allowing you to view the chunkin
![update chunk](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/add_keyword_question.jpg)
:::caution NOTE
You can add keywords to a file chunk to increase its ranking for queries containing those keywords. This action increases its keyword weight and can improve its position in the search list.
:::
4. In Retrieval testing, ask a quick question in **Test text** to double-check if your configurations work:
@ -144,7 +143,7 @@ As of RAGFlow v0.23.1, the search feature is still in a rudimentary form, suppor
You are allowed to delete a dataset. Hover your mouse over the three-dot icon of the intended dataset card and the **Delete** option appears. Once you delete a dataset, the associated folder under the **root/.knowledge** directory is AUTOMATICALLY REMOVED. The consequence is:
- The files uploaded directly to the dataset are gone;
- The file references, which you created from within RAGFlow's File system, are gone, but the associated files still exist.
![delete dataset](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/delete_datasets.jpg)
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideWandSparkles
}
---
# Construct knowledge graph
Generate a knowledge graph for your dataset.
@ -66,7 +65,7 @@ In a knowledge graph, a community is a cluster of entities linked by relationshi
## Quickstart
1. Navigate to the **Configuration** page of your dataset and update:
- Entity types: *Required* - Specifies the entity types in the knowledge graph to generate. You don't have to stick with the default, but you need to customize them for your documents.
- Method: *Optional*
- Entity resolution: *Optional*
@ -77,12 +76,12 @@ In a knowledge graph, a community is a cluster of entities linked by relationshi
*You can click the pause button in the dropdown to halt the build process when necessary.*
3. Go back to the **Configuration** page:
*Once a knowledge graph is generated, the **Knowledge graph** field changes from `Not generated` to `Generated at a specific timestamp`. You can delete it by clicking the recycle bin button to the right of the field.*
4. To use the created knowledge graph, do either of the following:
- In the **Chat setting** panel of your chat app, switch on the **Use knowledge graph** toggle.
- If you are using an agent, click the **Retrieval** agent component to specify the dataset(s) and switch on the **Use knowledge graph** toggle.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideToggleRight
}
---
# Enable Excel2HTML
Convert complex Excel spreadsheets into HTML tables.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideNetwork
}
---
# Enable RAPTOR
A recursive abstractive method used in long-context knowledge retrieval and summarization, balancing broad semantic understanding with fine details.
@ -79,7 +78,7 @@ A random seed. Click **+** to change the seed value.
## Quickstart
1. Navigate to the **Configuration** page of your dataset and update:
- Prompt: *Optional* - We recommend that you keep it as-is until you understand the mechanism behind it.
- Max token: *Optional*
- Threshold: *Optional*
@ -89,8 +88,8 @@ A random seed. Click **+** to change the seed value.
*You can click the pause button in the dropdown to halt the build process when necessary.*
3. Go back to the **Configuration** page:
*The **RAPTOR** field changes from `Not generated` to `Generated at a specific timestamp` when a RAPTOR hierarchical tree structure is generated. You can delete it by clicking the recycle bin button to the right of the field.*
4. Once a RAPTOR hierarchical tree structure is generated, your chat assistant and **Retrieval** agent component will use it for retrieval as a default.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideTableOfContents
}
---
# Extract table of contents
Extract a table of contents (TOC) from documents to support long-context RAG and improve retrieval.
@ -31,7 +30,7 @@ The system's default chat model is used to summarize clustered content. Before p
2. Enable **TOC Enhance**.
3. To use this technique during retrieval, do either of the following:
- In the **Chat setting** panel of your chat app, switch on the **TOC Enhance** toggle.
- If you are using an agent, click the **Retrieval** agent component to specify the dataset(s) and switch on the **TOC Enhance** toggle.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideCode
}
---
# Manage metadata
Manage metadata for your dataset and for your individual documents.
@ -22,7 +21,7 @@ From v0.23.0 onwards, RAGFlow allows you to manage metadata both at the dataset
![](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/click_metadata.png)
2. On the **Manage Metadata** page, you can do either of the following:
- Edit Values: You can modify existing values. If you rename two values to be identical, they will be automatically merged.
- Delete: You can delete specific values or entire fields. These changes will apply to all associated files.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideTextSearch
}
---
# Run retrieval test
Conduct a retrieval test on your dataset to check whether the intended chunks can be retrieved.
@ -56,7 +55,7 @@ The switch is disabled by default. When enabled, RAGFlow performs the following
3. Find similar entities and their N-hop relationships from the graph using the embeddings of the extracted query entities.
4. Retrieve similar relationships from the graph using the query embedding.
5. Rank these retrieved entities and relationships by multiplying each one's PageRank value with its similarity score to the query, returning the top n as the final retrieval.
6. Retrieve the report for the community involving the most entities in the final retrieval.
*The retrieved entity descriptions, relationship descriptions, and the top 1 community report are sent to the LLM for content generation.*
:::danger IMPORTANT
@ -81,10 +80,10 @@ This field is where you put in your testing query.
1. Navigate to the **Retrieval testing** page of your dataset, enter your query in **Test text**, and click **Testing** to run the test.
2. If the results are unsatisfactory, tune the options listed in the Configuration section and rerun the test.
*The following is a screenshot of a retrieval test conducted without using a knowledge graph. It demonstrates a hybrid search combining weighted keyword similarity and weighted vector cosine similarity. The overall hybrid similarity score is 28.56, calculated as 25.17 (term similarity score) x 0.7 + 36.49 (vector similarity score) x 0.3:*
![Image](https://github.com/user-attachments/assets/541554d4-3f3e-44e1-954b-0ae77d7372c6)
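To make the arithmetic explicit, the score above can be reproduced in a few lines (a sketch assuming the 0.7 keyword weight configured for this test; the variable names are illustrative):

```python
# Hybrid similarity = keyword_weight * term_score + (1 - keyword_weight) * vector_score
keyword_weight = 0.7
term_score = 25.17    # keyword (term) similarity from the screenshot
vector_score = 36.49  # vector cosine similarity from the screenshot

hybrid_score = keyword_weight * term_score + (1 - keyword_weight) * vector_score
print(f"{hybrid_score:.3f}")  # 28.566, displayed as 28.56 in the UI
```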
*The following is a screenshot of a retrieval test conducted using a knowledge graph. It shows that only vector similarity is used for knowledge graph-generated chunks:*
![Image](https://github.com/user-attachments/assets/30a03091-0f7b-4058-901a-f4dc5ca5aa6b)
:::caution WARNING
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideFileText
}
---
# Select PDF parser
Select a visual model for parsing your PDFs.
@ -57,12 +56,12 @@ Starting from v0.22.0, RAGFlow includes MinerU (&ge; 2.6.3) as an optional PDF p
- `"vlm-mlx-engine"`
- `"vlm-vllm-async-engine"`
- `"vlm-lmdeploy-engine"`.
- `MINERU_SERVER_URL`: (optional) The downstream vLLM HTTP server (e.g., `http://vllm-host:30000`). Applicable when `MINERU_BACKEND` is set to `"vlm-http-client"`.
- `MINERU_OUTPUT_DIR`: (optional) The local directory for holding the outputs of the MinerU API service (zip/JSON) before ingestion.
- `MINERU_DELETE_OUTPUT`: Whether to delete temporary output when a temporary directory is used:
- `1`: Delete.
- `0`: Retain.
3. In the web UI, navigate to your dataset's **Configuration** page and find the **Ingestion pipeline** section:
- If you decide to use a chunking method from the **Built-in** dropdown, ensure it supports PDF parsing, then select **MinerU** from the **PDF parser** dropdown.
- If you use a custom ingestion pipeline instead, select **MinerU** in the **PDF parser** section of the **Parser** component.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideListChevronsUpDown
}
---
# Set context window size
Set context window size for images and tables to improve long-context RAG performance.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideCode
}
---
# Set metadata
Manually add metadata to an uploaded file.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideStickyNote
}
---
# Set page rank
Create a step-retrieval strategy using page rank.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideTags
}
---
# Use tag set
Use a tag set to auto-tag chunks in your datasets.
@ -46,10 +45,10 @@ A tag set is *not* involved in document indexing or retrieval. Do not specify a
1. Click **+ Create dataset** to create a dataset.
2. Navigate to the **Configuration** page of the created dataset, select **Built-in** in **Ingestion pipeline**, then choose **Tag** as the default chunking method from the **Built-in** drop-down menu.
3. Go back to the **Files** page and upload and parse your table file in XLSX, CSV, or TXT formats.
_A tag cloud appears under the **Tag view** section, indicating the tag set is created:_
![Image](https://github.com/user-attachments/assets/abefbcbf-c130-4abe-95e1-267b0d2a0505)
4. Click the **Table** tab to view the tag frequency table:
![Image](https://github.com/user-attachments/assets/af91d10c-5ea5-491f-ab21-3803d5ebf59f)
## 2. Tag chunks
@ -63,12 +62,12 @@ Once a tag set is created, you can apply it to your dataset:
If the tag set is missing from the dropdown, check that it has been created or configured correctly.
:::
3. Re-parse your documents to start the auto-tagging process.
_In an AI chat scenario using auto-tagged datasets, each query will be tagged using the corresponding tag set(s) and chunks with these tags will have a higher chance to be retrieved._
## 3. Update tag set
Creating a tag set is *not* a one-off task. Oftentimes, you may find it necessary to update or delete existing tags or add new entries.
- You can update the existing tag set in the tag frequency table.
- To add new entries, you can add and parse new table files in XLSX, CSV, or TXT formats.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideFolderDot
}
---
# Files
RAGFlow's file management allows you to upload files individually or in bulk. You can then link an uploaded file to multiple target datasets. This guide showcases some basic usages of the file management feature.
@ -16,7 +15,7 @@ Compared to uploading files directly to various datasets, uploading them to RAGF
## Create folder
RAGFlow's file management allows you to establish your file system with nested folder structures. To create a folder in the root directory of RAGFlow:
![create new folder](https://github.com/infiniflow/ragflow/assets/93570324/3a37a5f4-43a6-426d-a62a-e5cd2ff7a533)
@ -26,7 +25,7 @@ Each dataset in RAGFlow has a corresponding folder under the **root/.knowledgeba
## Upload file
RAGFlow's file management supports file uploads from your local machine, allowing both individual and bulk uploads:
![upload file](https://github.com/infiniflow/ragflow/assets/93570324/5d7ded14-ce2b-4703-8567-9356a978f45c)
@ -48,7 +47,7 @@ RAGFlow's file management allows you to *link* an uploaded file to multiple data
![link knowledgebase](https://github.com/infiniflow/ragflow/assets/93570324/6c6b8db4-3269-4e35-9434-6089887e3e3f)
You can link your file to one dataset or multiple datasets at one time:
![link multiple kb](https://github.com/infiniflow/ragflow/assets/93570324/6c508803-fb1f-435d-b688-683066fd7fff)
@ -71,9 +70,9 @@ RAGFlow's file management allows you to rename a file or folder:
## Delete files or folders
RAGFlow's file management allows you to delete files or folders individually or in bulk.
To delete a file or folder:
![delete file](https://github.com/infiniflow/ragflow/assets/93570324/85872728-125d-45e9-a0ee-21e9d4cedb8b)
@ -81,7 +80,7 @@ To bulk delete files or folders:
![bulk delete](https://github.com/infiniflow/ragflow/assets/93570324/519b99ab-ec7f-4c8a-8cea-e0b6dcb3cb46)
> - You are not allowed to delete the **root/.knowledgebase** folder.
> - Deleting files that have been linked to datasets will **AUTOMATICALLY REMOVE** all associated file references across the datasets.
## Download uploaded file
@ -90,4 +89,4 @@ RAGFlow's file management allows you to download an uploaded file:
![download_file](https://github.com/infiniflow/ragflow/assets/93570324/cf3b297f-7d9b-4522-bf5f-4f45743e4ed5)
> As of RAGFlow v0.23.1, bulk download is not supported, nor can you download an entire folder.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideMonitorCog
}
---
# Deploy local models
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
@ -56,9 +55,9 @@ $ sudo docker exec ollama ollama pull llama3.2
```
```bash
$ sudo docker exec ollama ollama pull bge-m3
> pulling daec91ffb5dd... 100% ▕████████████████▏ 1.2 GB
> success
```
### 2. Find Ollama URL and ensure it is accessible
@ -108,7 +107,7 @@ Max retries exceeded with url: /api/chat (Caused by NewConnectionError('<urllib3
### 5. Update System Model Settings
Click on your logo **>** **Model providers** **>** **System Model Settings** to update your model:
- *You should now be able to find **llama3.2** from the dropdown list under **Chat model**, and **bge-m3** from the dropdown list under **Embedding model**.*
### 6. Update Chat Configuration
@ -128,7 +127,7 @@ To deploy a local model, e.g., **Mistral**, using Xinference:
### 1. Check firewall settings
Ensure that your host machine's firewall allows inbound connections on port 9997.
### 2. Start an Xinference instance
@ -151,13 +150,13 @@ In RAGFlow, click on your logo on the top right of the page **>** **Model provid
### 5. Complete basic Xinference settings
Enter an accessible base URL, such as `http://<your-xinference-endpoint-domain>:9997/v1`.
> For the rerank model, please use `http://<your-xinference-endpoint-domain>:9997/v1/rerank` as the base URL.
### 6. Update System Model Settings
Click on your logo **>** **Model providers** **>** **System Model Settings** to update your model.
*You should now be able to find **mistral** from the dropdown list under **Chat model**.*
### 7. Update Chat Configuration
@ -173,7 +172,7 @@ To deploy a local model, e.g., **Qwen2**, using IPEX-LLM-accelerated Ollama:
### 1. Check firewall settings
Ensure that your host machine's firewall allows inbound connections on port 11434. For example:
```bash
sudo ufw allow 11434/tcp
```
@ -182,7 +181,7 @@ sudo ufw allow 11434/tcp
#### 2.1 Install IPEX-LLM for Ollama
:::tip NOTE
IPEX-LLM supports Ollama on Linux and Windows systems.
:::
@ -194,7 +193,7 @@ For detailed information about installing IPEX-LLM for Ollama, see [Run llama.cp
#### 2.2 Initialize Ollama
1. Activate the `llm-cpp` Conda environment and initialize Ollama:
<Tabs
defaultValue="linux"
@ -203,7 +202,7 @@ For detailed information about installing IPEX-LLM for Ollama, see [Run llama.cp
{label: 'Windows', value: 'windows'},
]}>
<TabItem value="linux">
```bash
conda activate llm-cpp
init-ollama
@ -221,7 +220,7 @@ For detailed information about installing IPEX-LLM for Ollama, see [Run llama.cp
</Tabs>
2. If the installed `ipex-llm[cpp]` requires an upgrade to the Ollama binary files, remove the old binary files and reinitialize Ollama using `init-ollama` (Linux) or `init-ollama.bat` (Windows).
*A symbolic link to Ollama appears in your current directory, and you can use this executable file following standard Ollama commands.*
#### 2.3 Launch Ollama service
@ -229,7 +228,7 @@ For detailed information about installing IPEX-LLM for Ollama, see [Run llama.cp
1. Set the environment variable `OLLAMA_NUM_GPU` to `999` to ensure that all layers of your model run on the Intel GPU; otherwise, some layers may default to CPU.
2. For optimal performance on Intel Arc™ A-Series Graphics with Linux OS (Kernel 6.2), set the following environment variable before launching the Ollama service:
```bash
export SYCL_PI_LEVEL_ZERO_USE_IMMEDIATE_COMMANDLISTS=1
```
3. Launch the Ollama service:
@ -317,12 +316,12 @@ To enable IPEX-LLM accelerated Ollama in RAGFlow, you must also complete the con
3. [Update System Model Settings](#6-update-system-model-settings)
4. [Update Chat Configuration](#7-update-chat-configuration)
### 5. Deploy vLLM
Ubuntu 22.04/24.04
```bash
pip install vllm
```
### 5.1 Run vLLM with best practices
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideKey
}
---
# Configure model API key
An API key is required for RAGFlow to interact with an online AI model. This guide provides information about setting your model API key in RAGFlow.
@ -33,7 +32,7 @@ You have two options for configuring your model API key:
- Update `api_key` with yours.
- Update `base_url` if you use a proxy to connect to the remote service.
3. Reboot your system for your changes to take effect.
4. Log into RAGFlow.
_After logging into RAGFlow, you will find your chosen model appears under **Added models** on the **Model providers** page._
### Configure model API key after logging into RAGFlow
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideLogOut
}
---
# Join or leave a team
Accept an invitation to join a team, decline an invitation, or leave a team.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideUserCog
}
---
# Manage team members
Invite or remove team members.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideShare2
}
---
# Share Agent
Share an Agent with your team members.
@ -14,7 +13,7 @@ Share an Agent with your team members.
When ready, you may share your Agents with your team members so that they can use them. Please note that your Agents are not shared automatically; you must manually enable sharing by selecting the corresponding **Permissions** radio button:
1. Click the intended Agent to open its editing canvas.
2. Click **Management** > **Settings** to show the **Agent settings** dialogue.
3. Change **Permissions** from **Only me** to **Team**.
4. Click **Save** to apply your changes.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideShare2
}
---
# Share chat assistant
Sharing chat assistants is currently exclusive to RAGFlow Enterprise, but will be made available in due course.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideShare2
}
---
# Share dataset
Share a dataset with team members.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideShare2
}
---
# Share models
Sharing models is currently exclusive to RAGFlow Enterprise.
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideLocateFixed
}
---
# Tracing
Observability & Tracing with Langfuse.
@ -18,10 +17,10 @@ This document is contributed by our community contributor [jannikmaierhoefer](ht
RAGFlow ships with a built-in [Langfuse](https://langfuse.com) integration so that you can **inspect and debug every retrieval and generation step** of your RAG pipelines in near real-time.
Langfuse stores traces, spans and prompt payloads in a purpose-built observability backend and offers filtering and visualisations on top.
:::info NOTE
• RAGFlow **≥ 0.18.0** (contains the Langfuse connector)
• A Langfuse workspace (cloud or self-hosted) with a _Project Public Key_ and _Secret Key_
:::
@ -29,9 +28,9 @@ Langfuse stores traces, spans and prompt payloads in a purpose-built observabili
## 1. Collect your Langfuse credentials
1. Sign in to your Langfuse dashboard.
2. Open **Settings ▸ Projects** and either create a new project or select an existing one.
3. Copy the **Public Key** and **Secret Key**.
4. Note the Langfuse **host** (e.g. `https://cloud.langfuse.com`). Use the base URL of your own installation if you self-host.
> The keys are _project-scoped_: one pair of keys is enough for all environments that should write into the same project.
@ -42,10 +41,10 @@ Langfuse stores traces, spans and prompt payloads in a purpose-built observabili
RAGFlow stores the credentials _per tenant_. You can configure them either via the web UI or the HTTP API.
1. Log in to RAGFlow and click your avatar in the top-right corner.
2. Select **API ▸ Scroll down to the bottom ▸ Langfuse Configuration**.
3. Fill in your Langfuse **Host**, **Public Key** and **Secret Key**.
4. Click **Save**.
![Example RAGFlow trace in Langfuse](https://langfuse.com/images/docs/ragflow/ragflow-configuration.gif)
@ -55,14 +54,14 @@ Once saved, RAGFlow starts emitting traces automatically no code change requ
## 3. Run a pipeline and watch the traces
1. Execute any chat or retrieval pipeline in RAGFlow (e.g. the Quickstart demo).
2. Open your Langfuse project ▸ **Traces**.
3. Filter by **name ~ `ragflow-*`** (RAGFlow prefixes each trace with `ragflow-`).
For every user request you will see:
• a **trace** representing the overall request
• **spans** for retrieval, ranking and generation steps
• the complete **prompts**, **retrieved documents** and **LLM responses** as metadata
![Example RAGFlow trace in Langfuse](https://langfuse.com/images/docs/ragflow/ragflow-trace-frame.png)
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideArrowBigUpDash
}
---
# Upgrading
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
sidebarIcon: LucideRocket
}
---
# Get started
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
@ -15,9 +14,9 @@ RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on d
This quick start guide describes a general process from:
- Starting up a local RAGFlow server,
- Creating a dataset,
- Intervening with file parsing, to
- Establishing an AI chat based on your datasets.
:::danger IMPORTANT
@ -74,7 +73,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
:::caution WARNING
This change will be reset after a system reboot. If you forget to update the value the next time you start up the server, you may get a `Can't connect to ES cluster` exception.
:::
1.3. To ensure your change remains permanent, add or update the `vm.max_map_count` value in **/etc/sysctl.conf** accordingly:
```bash
@ -148,7 +147,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
```
#### If you are on Windows with Docker Desktop WSL 2 backend, then use docker-desktop to set `vm.max_map_count`:
1.1. Run the following in WSL:
```bash
$ wsl -d docker-desktop -u root
$ sysctl -w vm.max_map_count=262144
@ -175,7 +174,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
```
```bash
# Append a line, which reads:
vm.max_map_count = 262144
```
:::
@ -230,13 +229,13 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
/ /_/ // /| | / / __ / /_ / // __ \| | /| / /
/ _, _// ___ |/ /_/ // __/ / // /_/ /| |/ |/ /
/_/ |_|/_/ |_|\____//_/ /_/ \____/ |__/|__/
* Running on all addresses (0.0.0.0)
```
:::danger IMPORTANT
If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a `network anomaly` error because, at that moment, your RAGFlow may not be fully initialized.
:::
5. In your web browser, enter the IP address of your server and log in to RAGFlow.
@ -248,24 +247,24 @@ This section provides instructions on setting up the RAGFlow server on Linux. If
RAGFlow is a RAG engine and needs to work with an LLM to offer grounded, hallucination-free question-answering capabilities. RAGFlow supports most mainstream LLMs. For a complete list of supported models, please refer to [Supported Models](./references/supported_models.mdx).
:::note
RAGFlow also supports deploying LLMs locally using Ollama, Xinference, or LocalAI, but this part is not covered in this quick start guide.
:::
To add and configure an LLM:
1. Click on your logo on the top right of the page **>** **Model providers**.
2. Click on the desired LLM and update the API key accordingly.
3. Click **System Model Settings** to select the default models:
- Chat model,
- Embedding model,
- Image-to-text model,
- and more.
> Some models, such as the image-to-text model **qwen-vl-max**, are subsidiary to a specific LLM. And you may need to update your API key to access these models.
## Create your first dataset
@ -281,21 +280,21 @@ To create your first dataset:
![dataset configuration](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/configure_knowledge_base.jpg)
3. RAGFlow offers multiple chunk templates that cater to different document layouts and file formats. Select the embedding model and chunking method (template) for your dataset.
:::danger IMPORTANT
Once you have selected an embedding model and used it to parse a file, you are no longer allowed to change it. The obvious reason is that we must ensure that all files in a specific dataset are parsed using the *same* embedding model (ensure that they are being compared in the same embedding space).
:::
_You are taken to the **Dataset** page of your dataset._
4. Click **+ Add file** **>** **Local files** to start uploading a particular file to the dataset.
5. In the uploaded file entry, click the play button to start file parsing:
![parse file](https://raw.githubusercontent.com/infiniflow/ragflow-docs/main/images/parse_file.jpg)
:::caution NOTE
- If your file parsing gets stuck at below 1%, see [this FAQ](./faq.mdx#why-does-my-document-parsing-stall-at-under-one-percent).
- If your file parsing gets stuck at near completion, see [this FAQ](./faq.mdx#why-does-my-pdf-parsing-stall-near-completion-while-the-log-does-not-show-any-error)
:::
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideCaseUpper
}
---
# Glossary
Definitions of key terms and basic concepts related to RAGFlow.
File diff suppressed because it is too large
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: SiPython
}
---
# Python API
A complete reference for RAGFlow's Python APIs. Before proceeding, please ensure you [have your RAGFlow API key ready for authentication](https://ragflow.io/docs/dev/acquire_ragflow_api_key).
@ -111,7 +110,7 @@ RAGFlow.create_dataset(
avatar: Optional[str] = None,
description: Optional[str] = None,
embedding_model: Optional[str] = "BAAI/bge-large-zh-v1.5@BAAI",
permission: str = "me",
permission: str = "me",
chunk_method: str = "naive",
parser_config: DataSet.ParserConfig = None
) -> DataSet
@ -139,7 +138,7 @@ A brief description of the dataset to create. Defaults to `None`.
##### permission
Specifies who can access the dataset to create. Available options:
- `"me"`: (Default) Only you can manage the dataset.
- `"team"`: All team members can manage the dataset.
@ -164,29 +163,29 @@ The chunking method of the dataset to create. Available options:
The parser configuration of the dataset. A `ParserConfig` object's attributes vary based on the selected `chunk_method`:
- `chunk_method`=`"naive"`:
- `chunk_method`=`"naive"`:
`{"chunk_token_num":512,"delimiter":"\\n","html4excel":False,"layout_recognize":True,"raptor":{"use_raptor":False}}`.
- `chunk_method`=`"qa"`:
- `chunk_method`=`"qa"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"manuel"`:
- `chunk_method`=`"manuel"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"table"`:
- `chunk_method`=`"table"`:
`None`
- `chunk_method`=`"paper"`:
- `chunk_method`=`"paper"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"book"`:
- `chunk_method`=`"book"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"laws"`:
- `chunk_method`=`"laws"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"picture"`:
- `chunk_method`=`"picture"`:
`None`
- `chunk_method`=`"presentation"`:
- `chunk_method`=`"presentation"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"one"`:
- `chunk_method`=`"one"`:
`None`
- `chunk_method`=`"knowledge-graph"`:
- `chunk_method`=`"knowledge-graph"`:
`{"chunk_token_num":128,"delimiter":"\\n","entity_types":["organization","person","location","event","time"]}`
- `chunk_method`=`"email"`:
- `chunk_method`=`"email"`:
`None`
#### Returns
@ -239,9 +238,9 @@ rag_object.delete_datasets(ids=["d94a8dc02c9711f0930f7fbc369eab6d","e94a8dc02c97
```python
RAGFlow.list_datasets(
page: int = 1,
page_size: int = 30,
orderby: str = "create_time",
desc: bool = True,
id: str = None,
name: str = None
@ -320,25 +319,25 @@ A dictionary representing the attributes to update, with the following keys:
- Basic Multilingual Plane (BMP) only
- Maximum 128 characters
- Case-insensitive
- `"avatar"`: (*Body parameter*), `string`
- `"avatar"`: (*Body parameter*), `string`
The updated base64 encoding of the avatar.
- Maximum 65535 characters
- `"embedding_model"`: (*Body parameter*), `string`
The updated embedding model name.
- `"embedding_model"`: (*Body parameter*), `string`
The updated embedding model name.
- Ensure that `"chunk_count"` is `0` before updating `"embedding_model"`.
- Maximum 255 characters
- Must follow `model_name@model_factory` format
- `"permission"`: (*Body parameter*), `string`
The updated dataset permission. Available options:
- `"permission"`: (*Body parameter*), `string`
The updated dataset permission. Available options:
- `"me"`: (Default) Only you can manage the dataset.
- `"team"`: All team members can manage the dataset.
- `"pagerank"`: (*Body parameter*), `int`
- `"pagerank"`: (*Body parameter*), `int`
refer to [Set page rank](https://ragflow.io/docs/dev/set_page_rank)
- Default: `0`
- Minimum: `0`
- Maximum: `100`
- `"chunk_method"`: (*Body parameter*), `enum<string>`
The chunking method for the dataset. Available options:
- `"chunk_method"`: (*Body parameter*), `enum<string>`
The chunking method for the dataset. Available options:
- `"naive"`: General (default)
- `"book"`: Book
- `"email"`: Email
@ -388,7 +387,7 @@ Uploads documents to the current dataset.
A list of dictionaries representing the documents to upload, each containing the following keys:
- `"display_name"`: (Optional) The file name to display in the dataset.
- `"display_name"`: (Optional) The file name to display in the dataset.
- `"blob"`: (Optional) The binary content of the file to upload.
#### Returns
@ -434,29 +433,29 @@ A dictionary representing the attributes to update, with the following keys:
- `"one"`: One
- `"email"`: Email
- `"parser_config"`: `dict[str, Any]` The parsing configuration for the document. Its attributes vary based on the selected `"chunk_method"`:
- `"chunk_method"`=`"naive"`:
- `"chunk_method"`=`"naive"`:
`{"chunk_token_num":128,"delimiter":"\\n","html4excel":False,"layout_recognize":True,"raptor":{"use_raptor":False}}`.
- `chunk_method`=`"qa"`:
- `chunk_method`=`"qa"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"manuel"`:
- `chunk_method`=`"manuel"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"table"`:
- `chunk_method`=`"table"`:
`None`
- `chunk_method`=`"paper"`:
- `chunk_method`=`"paper"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"book"`:
- `chunk_method`=`"book"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"laws"`:
- `chunk_method`=`"laws"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"presentation"`:
- `chunk_method`=`"presentation"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"picture"`:
- `chunk_method`=`"picture"`:
`None`
- `chunk_method`=`"one"`:
- `chunk_method`=`"one"`:
`None`
- `chunk_method`=`"knowledge-graph"`:
- `chunk_method`=`"knowledge-graph"`:
`{"chunk_token_num":128,"delimiter":"\\n","entity_types":["organization","person","location","event","time"]}`
- `chunk_method`=`"email"`:
- `chunk_method`=`"email"`:
`None`
#### Returns
@ -589,27 +588,27 @@ A `Document` object contains the following attributes:
- `"FAIL"`
- `status`: `str` Reserved for future use.
- `parser_config`: `ParserConfig` Configuration object for the parser. Its attributes vary based on the selected `chunk_method`:
- `chunk_method`=`"naive"`:
- `chunk_method`=`"naive"`:
`{"chunk_token_num":128,"delimiter":"\\n","html4excel":False,"layout_recognize":True,"raptor":{"use_raptor":False}}`.
- `chunk_method`=`"qa"`:
- `chunk_method`=`"qa"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"manuel"`:
- `chunk_method`=`"manuel"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"table"`:
- `chunk_method`=`"table"`:
`None`
- `chunk_method`=`"paper"`:
- `chunk_method`=`"paper"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"book"`:
- `chunk_method`=`"book"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"laws"`:
- `chunk_method`=`"laws"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"presentation"`:
- `chunk_method`=`"presentation"`:
`{"raptor": {"use_raptor": False}}`
- `chunk_method`=`"picure"`:
- `chunk_method`=`"picure"`:
`None`
- `chunk_method`=`"one"`:
- `chunk_method`=`"one"`:
`None`
- `chunk_method`=`"email"`:
- `chunk_method`=`"email"`:
`None`
#### Examples
@ -727,9 +726,9 @@ A list of tuples with detailed parsing results:
...
]
```
- `status`: The final parsing state (e.g., `success`, `failed`, `cancelled`).
- `chunk_count`: The number of content chunks created from the document.
- `token_count`: The total number of tokens processed.
---
@ -989,11 +988,11 @@ The user query or query keywords. Defaults to `""`.
##### dataset_ids: `list[str]`, *Required*
The IDs of the datasets to search. Defaults to `None`.
##### document_ids: `list[str]`
The IDs of the documents to search. Defaults to `None`. You must ensure all selected documents use the same embedding model. Otherwise, an error will occur.
##### page: `int`
@ -1026,7 +1025,7 @@ Indicates whether to enable keyword-based matching:
- `True`: Enable keyword-based matching.
- `False`: Disable keyword-based matching (default).
##### cross_languages: `list[string]`
The target languages into which the query should be translated, enabling keyword retrieval across languages.
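A minimal retrieval sketch follows; the dataset name, query, and the language names passed to `cross_languages` are illustrative assumptions.

```python
from ragflow_sdk import RAGFlow

# A minimal sketch: keyword matching plus cross-language retrieval.
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(name="kb_1")[0]

for chunk in rag_object.retrieve(
    question="How do I deploy RAGFlow?",
    dataset_ids=[dataset.id],
    keyword=True,
    cross_languages=["Spanish", "French"],
):
    print(chunk.content)
```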
@ -1067,10 +1066,10 @@ for c in rag_object.retrieve(dataset_ids=[dataset.id],document_ids=[doc.id]):
```python
RAGFlow.create_chat(
    name: str,
    avatar: str = "",
    dataset_ids: list[str] = [],
    llm: Chat.LLM = None,
    prompt: Chat.Prompt = None
) -> Chat
```
@ -1095,15 +1094,15 @@ The IDs of the associated datasets. Defaults to `[""]`.
The LLM settings for the chat assistant to create. Defaults to `None`. When the value is `None`, a dictionary with the following values will be generated as the default. An `LLM` object contains the following attributes:
- `model_name`: `str`
  The chat model name. If it is `None`, the user's default chat model will be used.
- `temperature`: `float`
  Controls the randomness of the model's predictions. A lower temperature results in more conservative responses, while a higher temperature yields more creative and diverse responses. Defaults to `0.1`.
- `top_p`: `float`
  Also known as “nucleus sampling”, this parameter sets a threshold to select a smaller set of words to sample from. It focuses on the most likely words, cutting off the less probable ones. Defaults to `0.3`.
- `presence_penalty`: `float`
  This discourages the model from repeating the same information by penalizing words that have already appeared in the conversation. Defaults to `0.2`.
- `frequency_penalty`: `float`
  Similar to the presence penalty, this reduces the model's tendency to repeat the same words frequently. Defaults to `0.7`.
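As a quick illustration, the sketch below creates an assistant that falls back to the default `LLM` settings described above by leaving `llm` unset; the dataset and assistant names are placeholders.

```python
from ragflow_sdk import RAGFlow

# A minimal sketch: llm=None (the default) generates the default
# LLM settings documented above.
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
dataset = rag_object.list_datasets(name="kb_1")[0]

assistant = rag_object.create_chat(name="Miss R", dataset_ids=[dataset.id])
```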
##### prompt: `Chat.Prompt`
@ -1163,8 +1162,8 @@ A dictionary representing the attributes to update, with the following keys:
- `"dataset_ids"`: `list[str]` The datasets to update.
- `"llm"`: `dict` The LLM settings:
- `"model_name"`, `str` The chat model name.
- `"temperature"`, `float` Controls the randomness of the model's predictions. A lower temperature results in more conservative responses, while a higher temperature yields more creative and diverse responses.
- `"top_p"`, `float` Also known as “nucleus sampling”, this parameter sets a threshold to select a smaller set of words to sample from.
- `"temperature"`, `float` Controls the randomness of the model's predictions. A lower temperature results in more conservative responses, while a higher temperature yields more creative and diverse responses.
- `"top_p"`, `float` Also known as “nucleus sampling”, this parameter sets a threshold to select a smaller set of words to sample from.
- `"presence_penalty"`, `float` This discourages the model from repeating the same information by penalizing words that have appeared in the conversation.
- `"frequency penalty"`, `float` Similar to presence penalty, this reduces the models tendency to repeat the same words.
- `"prompt"` : Instructions for the LLM to follow.
@ -1234,9 +1233,9 @@ rag_object.delete_chats(ids=["id_1","id_2"])
```python
RAGFlow.list_chats(
    page: int = 1,
    page_size: int = 30,
    orderby: str = "create_time",
    desc: bool = True,
    id: str = None,
    name: str = None
@ -1266,11 +1265,11 @@ The attribute by which the results are sorted. Available options:
Indicates whether the retrieved chat assistants should be sorted in descending order. Defaults to `True`.
##### id: `str`
The ID of the chat assistant to retrieve. Defaults to `None`.
##### name: `str`
The name of the chat assistant to retrieve. Defaults to `None`.
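A minimal listing sketch, assuming at least one chat assistant exists:

```python
from ragflow_sdk import RAGFlow

# A minimal sketch: page through assistants, newest first.
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
for assistant in rag_object.list_chats(page=1, page_size=10, orderby="create_time", desc=True):
    print(assistant.name)
```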
@ -1370,9 +1369,9 @@ session.update({"name": "updated_name"})
```python
Chat.list_sessions(
    page: int = 1,
    page_size: int = 30,
    orderby: str = "create_time",
    desc: bool = True,
    id: str = None,
    name: str = None
@ -1509,25 +1508,25 @@ The content of the message. Defaults to `"Hi! I am your assistant, can I help yo
A list of `Chunk` objects representing references to the message, each containing the following attributes:
- `id` `str`
  The chunk ID.
- `content` `str`
  The content of the chunk.
- `img_id` `str`
  The ID of the snapshot of the chunk. Applicable only when the source of the chunk is an image, PPT, PPTX, or PDF file.
- `document_id` `str`
  The ID of the referenced document.
- `document_name` `str`
  The name of the referenced document.
- `position` `list[str]`
  The location information of the chunk within the referenced document.
- `dataset_id` `str`
  The ID of the dataset to which the referenced document belongs.
- `similarity` `float`
  A composite similarity score of the chunk ranging from `0` to `1`, with a higher value indicating greater similarity. It is the weighted sum of `vector_similarity` and `term_similarity`.
- `vector_similarity` `float`
  A vector similarity score of the chunk ranging from `0` to `1`, with a higher value indicating greater similarity between vector embeddings.
- `term_similarity` `float`
  A keyword similarity score of the chunk ranging from `0` to `1`, with a higher value indicating greater similarity between keywords.
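Where these attributes are consumed in code, a sketch such as the following may help; it assumes `session` was created as in the example below and that the streamed message exposes its citations via a `reference` attribute (an assumption not shown in this excerpt).

```python
# A minimal sketch: keep the latest (cumulative) streamed message, then
# inspect the cited chunks attached to it.
answer = None
for ans in session.ask("What is RAGFlow?", stream=True):
    answer = ans

for chunk in answer.reference:
    print(chunk.document_name, chunk.similarity)
```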
#### Examples
@ -1538,7 +1537,7 @@ from ragflow_sdk import RAGFlow
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
assistant = rag_object.list_chats(name="Miss R")
assistant = assistant[0]
session = assistant.create_session()
print("\n==================== Miss R =====================\n")
print("Hello. What can I do for you?")
@ -1546,7 +1545,7 @@ print("Hello. What can I do for you?")
while True:
    question = input("\n==================== User =====================\n> ")
    print("\n==================== Miss R =====================\n")
    cont = ""
    for ans in session.ask(question, stream=True):
        print(ans.content[len(cont):], end='', flush=True)
@ -1634,25 +1633,25 @@ The content of the message. Defaults to `"Hi! I am your assistant, can I help yo
A list of `Chunk` objects representing references to the message, each containing the following attributes:
- `id` `str`
  The chunk ID.
- `content` `str`
  The content of the chunk.
- `image_id` `str`
  The ID of the snapshot of the chunk. Applicable only when the source of the chunk is an image, PPT, PPTX, or PDF file.
- `document_id` `str`
  The ID of the referenced document.
- `document_name` `str`
  The name of the referenced document.
- `position` `list[str]`
  The location information of the chunk within the referenced document.
- `dataset_id` `str`
  The ID of the dataset to which the referenced document belongs.
- `similarity` `float`
  A composite similarity score of the chunk ranging from `0` to `1`, with a higher value indicating greater similarity. It is the weighted sum of `vector_similarity` and `term_similarity`.
- `vector_similarity` `float`
  A vector similarity score of the chunk ranging from `0` to `1`, with a higher value indicating greater similarity between vector embeddings.
- `term_similarity` `float`
  A keyword similarity score of the chunk ranging from `0` to `1`, with a higher value indicating greater similarity between keywords.
#### Examples
@ -1663,7 +1662,7 @@ from ragflow_sdk import RAGFlow, Agent
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
AGENT_id = "AGENT_ID"
agent = rag_object.list_agents(id=AGENT_id)[0]
session = agent.create_session()
print("\n===== Miss R ====\n")
print("Hello. What can I do for you?")
@ -1671,7 +1670,7 @@ print("Hello. What can I do for you?")
while True:
    question = input("\n===== User ====\n> ")
    print("\n==== Miss R ====\n")
    cont = ""
    for ans in session.ask(question, stream=True):
        print(ans.content[len(cont):], end='', flush=True)
@ -1684,9 +1683,9 @@ while True:
```python
Agent.list_sessions(
    page: int = 1,
    page_size: int = 30,
    orderby: str = "update_time",
    desc: bool = True,
    id: str = None
) -> List[Session]
@ -1777,9 +1776,9 @@ agent.delete_sessions(ids=["id_1","id_2"])
```python
RAGFlow.list_agents(
    page: int = 1,
    page_size: int = 30,
    orderby: str = "create_time",
    desc: bool = True,
    id: str = None,
    title: str = None
@ -1809,11 +1808,11 @@ The attribute by which the results are sorted. Available options:
Indicates whether the retrieved agents should be sorted in descending order. Defaults to `True`.
##### id: `str`
The ID of the agent to retrieve. Defaults to `None`.
##### title: `str`
The title of the agent to retrieve. Defaults to `None`.
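A minimal listing sketch; the pagination values are illustrative.

```python
from ragflow_sdk import RAGFlow

# A minimal sketch: list agents, newest first, and print their IDs.
rag_object = RAGFlow(api_key="<YOUR_API_KEY>", base_url="http://<YOUR_BASE_URL>:9380")
for agent in rag_object.list_agents(page=1, page_size=10, desc=True):
    print(agent.id)
```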
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
categoryIcon: LucideBox
}
---
# Supported models
import APITable from '@site/src/components/APITable';
View File
@ -5,7 +5,6 @@ sidebar_custom_props: {
sidebarIcon: LucideClipboardPenLine
}
---
# Releases
Key features, improvements and bug fixes in the latest releases.
@ -23,7 +22,7 @@ Released on December 31, 2025.
### Fixed issues
- Memory:
  - The RAGFlow server failed to start if an empty memory object existed.
  - Unable to delete a newly created empty Memory.
- RAG: MDX file parsing was not supported.
@ -259,7 +258,7 @@ Ecommerce Customer Service Workflow: A template designed to handle enquiries abo
### Fixed issues
- Dataset:
  - Unable to share resources with the team.
  - Inappropriate restrictions on the number and size of uploaded files.
- Chat:
@ -275,13 +274,13 @@ Released on August 20, 2025.
### Improvements
- Revamps the user interface for the **Datasets**, **Chat**, and **Search** pages.
- Search and Chat: Introduces document-level metadata filtering, allowing automatic or manual filtering during chats or searches.
- Search: Supports creating search apps tailored to various business scenarios.
- Chat: Supports comparing answer performance of up to three chat model settings on a single **Chat** page.
- Agent:
  - Implements a toggle in the **Agent** component to enable or disable citation.
  - Introduces a drag-and-drop method for creating components.
- Documentation: Corrects inaccuracies in the API reference.
### New Agent templates
@ -291,8 +290,8 @@ Released on August 20, 2025.
### Fixed issues
- The timeout mechanism introduced in v0.20.0 caused tasks like GraphRAG to halt.
- Predefined opening greeting in the **Agent** component was missing during conversations.
- An automatic line break issue in the prompt editor.
- A memory leak issue caused by PyPDF. [#9469](https://github.com/infiniflow/ragflow/pull/9469)
### API changes
@ -376,7 +375,7 @@ Released on June 23, 2025.
### Newly supported models
- Qwen 3 Embedding. [#8184](https://github.com/infiniflow/ragflow/pull/8184)
- Voyage Multimodal 3. [#7987](https://github.com/infiniflow/ragflow/pull/7987)
## v0.19.0