mirror of
https://github.com/infiniflow/ragflow.git
synced 2026-05-04 09:17:48 +08:00
Docs: Refactored documentation (#13340)
### What problem does this PR solve?

Refactored documentation.

### Type of change

- [x] Documentation Update
```diff
@@ -1,11 +1,11 @@
 ---
-sidebar_position: 0
+sidebar_position: 2
 slug: /
 sidebar_custom_props: {
   sidebarIcon: LucideRocket
 }
 ---
-# Get started
+# Quickstart
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 import APITable from '@site/src/components/APITable';
```
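For context, `sidebar_position`, `slug`, and `sidebar_custom_props` are Docusaurus doc front-matter keys: `sidebar_position` orders the page within its sidebar category, and `slug: /` maps the page to the docs root URL. A minimal sketch of such a header (values are illustrative, not taken from the repository):

```markdown
---
# Order this page second in its sidebar category
sidebar_position: 2
# Serve this page at the docs root path
slug: /
sidebar_custom_props: {
  sidebarIcon: LucideRocket
}
---
```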
```diff
@@ -249,7 +249,7 @@ This section provides instructions on setting up the RAGFlow server on Linux. If

 ## Configure LLMs

-RAGFlow is a RAG engine and needs to work with an LLM to offer grounded, hallucination-free question-answering capabilities. RAGFlow supports most mainstream LLMs. For a complete list of supported models, please refer to [Supported Models](./references/supported_models.mdx).
+RAGFlow is a RAG engine and needs to work with an LLM to offer grounded, hallucination-free question-answering capabilities. RAGFlow supports most mainstream LLMs. For a complete list of supported models, please refer to [Supported Models](./guides/models/supported_models.mdx).

 :::note
 RAGFlow also supports deploying LLMs locally using Ollama, Xinference, or LocalAI, but this part is not covered in this quick start guide.
```