Mirror of https://github.com/langgenius/dify.git
Synced 2026-01-25 22:35:57 +08:00

Compare commits: 41 commits
| SHA1 |
|---|
| 6fa0e4072d |
| e15d18aa1c |
| 164ef26a60 |
| 0dada847ef |
| 36b7dbb8d0 |
| 02e483c99b |
| afe30e15a0 |
| 9a1ea9ac03 |
| 693647a141 |
| cea107b165 |
| 509c640a80 |
| 617e7cee81 |
| d87d4b9b56 |
| c889717d24 |
| 1f302990c6 |
| 37024afe9c |
| 18b855140d |
| 7c520b52c1 |
| b98e363a5c |
| 0a7ea9d206 |
| 3d473b9763 |
| e0df7505f6 |
| 43bb0b0b93 |
| 6164604462 |
| 826c422ac4 |
| bf63a43bda |
| 55fc46c707 |
| 5102430a68 |
| 0f897bc1f9 |
| d948b0b49b |
| b6de97ad53 |
| 8cefa6b82e |
| 81e1b3fc61 |
| 4c1cfd9278 |
| 14bb0b02ac |
| 240c793e7a |
| 89a853212b |
| 97d1e0bbbb |
| cfb5ccc7d3 |
| 835e547195 |
| af9ccb7072 |
README.md (263 changed lines)

@@ -1,96 +1,176 @@
[](https://dify.ai)


<p align="center">
  <a href="./README.md">English</a> |
  <a href="./README_CN.md">简体中文</a> |
  <a href="./README_JA.md">日本語</a> |
  <a href="./README_ES.md">Español</a> |
  <a href="./README_KL.md">Klingon</a> |
  <a href="./README_FR.md">Français</a>
  <a href="https://cloud.dify.ai">Dify Cloud</a> ·
  <a href="https://docs.dify.ai/getting-started/install-self-hosted">Self-hosting</a> ·
  <a href="https://docs.dify.ai">Documentation</a> ·
  <a href="https://cal.com/guchenhe/30min">Commercial inquiry</a>
</p>

<p align="center">
  <a href="https://dify.ai" target="_blank">
    <img alt="Static Badge" src="https://img.shields.io/badge/AI-Dify?logo=AI&logoColor=%20%23f5f5f5&label=Dify&labelColor=%20%23155EEF&color=%23EAECF0"></a>
  <img alt="Static Badge" src="https://img.shields.io/badge/Product-F04438"></a>
  <a href="https://dify.ai/pricing" target="_blank">
    <img alt="Static Badge" src="https://img.shields.io/badge/free-pricing?logo=free&color=%20%23155EEF&label=pricing&labelColor=%20%23528bff"></a>
  <a href="https://discord.gg/FngNHpbcY7" target="_blank">
    <img src="https://img.shields.io/discord/1082486657678311454?logo=discord"
    <img src="https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb"
        alt="chat on Discord"></a>
  <a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
    <img src="https://img.shields.io/twitter/follow/dify_ai?style=social&logo=X"
    <img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
        alt="follow on Twitter"></a>
  <a href="https://hub.docker.com/u/langgenius" target="_blank">
    <img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web"></a>
    <img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
  <a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
    <img alt="Commits last month" src="https://img.shields.io/github/commit-activity/m/langgenius/dify?labelColor=%20%2332b583&color=%20%2312b76a"></a>
  <a href="https://github.com/langgenius/dify/" target="_blank">
    <img alt="Issues closed" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=issues%20closed&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
  <a href="https://github.com/langgenius/dify/discussions/" target="_blank">
    <img alt="Discussion posts" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
</p>

<p align="center">
  <a href="https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6" target="_blank">
    📌 Check out Dify Premium on AWS and deploy it to your own AWS VPC in one click.
  </a>
  <a href="./README.md"><img alt="English" src="https://img.shields.io/badge/English-d9d9d9"></a>
  <a href="./README_CN.md"><img alt="简体中文" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
  <a href="./README_JA.md"><img alt="日本語" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
  <a href="./README_ES.md"><img alt="Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
  <a href="./README_KL.md"><img alt="Klingon" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
  <a href="./README_FR.md"><img alt="Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
</p>

**Dify** is an open-source LLM app development platform. Dify's intuitive interface combines a RAG pipeline, AI workflow orchestration, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.

#

https://github.com/langgenius/dify/assets/13230914/979e7a68-f067-4bbc-b38e-2deb2cc2bbb5

<p align="center">
  <a href="https://trendshift.io/repositories/2152" target="_blank"><img src="https://trendshift.io/api/badge/repositories/2152" alt="langgenius%2Fdify | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</p>

Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production. Here's a list of the core features:

</br> </br>

**1. Workflow**:
Build and test powerful AI workflows on a visual canvas, leveraging all the following features and beyond.

## Using Dify Cloud

You can try out [Dify Cloud](https://dify.ai) now. It provides all the capabilities of the self-deployed version, and includes 200 free GPT-4 calls.

## Dify for Enterprise / Organizations

[Schedule a meeting with us](#Direct-Meetings) or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs.

For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC in one click. It's an affordable AMI offering with the option to create apps with a custom logo and branding.

## Features



**1. Workflow**: Create and test complex AI workflows on a visual canvas, with pre-built nodes that draw on all the following features and beyond.

**2. Extensive LLM support**: Seamless integration with hundreds of proprietary / open-source LLMs and dozens of inference providers, including GPT, Mistral, Llama2, and OpenAI API-compatible models. A full list of supported model providers is kept [here](https://docs.dify.ai/getting-started/readme/model-providers).

**3. Prompt IDE**: Visual orchestration of applications and services based on any LLM. Easily share with your team.

**4. RAG Engine**: Includes various RAG capabilities based on full-text indexing or vector database embeddings, allowing direct upload of PDFs, TXTs, and other text formats.

**5. AI Agent**: Based on Function Calling or ReAct, the agent inference framework lets users customize tools with a what-you-see-is-what-you-get experience. Dify provides more than a dozen built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha.

**6. LLMOps**: Monitor and analyze application logs and performance, continuously improving prompts, datasets, or models based on production data.

https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa

## Dify vs. LangChain vs. Assistants API

| Feature | Dify.AI | Assistants API | LangChain |
|---------|---------|----------------|-----------|
| **Programming Approach** | API-oriented | API-oriented | Python Code-oriented |
| **Ecosystem Strategy** | Open Source | Closed Source | Open Source |
| **RAG Engine** | Supported | Supported | Not Supported |
| **Prompt IDE** | Included | Included | None |
| **Supported LLMs** | Rich Variety | OpenAI-only | Rich Variety |
| **Local Deployment** | Supported | Not Supported | Not Applicable |

**2. Comprehensive model support**:
Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama2, and any OpenAI API-compatible models. A full list of supported model providers can be found [here](https://docs.dify.ai/getting-started/readme/model-providers).



## Before You Start

**3. Prompt IDE**:
Intuitive interface for crafting prompts, comparing model performance, and adding additional features such as text-to-speech to a chat-based app.

**Star us on GitHub, and be instantly notified for new releases!**



- [Website](https://dify.ai)
- [Docs](https://docs.dify.ai)
- [Deployment Docs](https://docs.dify.ai/getting-started/install-self-hosted)
- [FAQ](https://docs.dify.ai/getting-started/faq)

**4. RAG Pipeline**:
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.

**5. Agent capabilities**:
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.

**6. LLMOps**:
Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.

**7. Backend-as-a-Service**:
All of Dify's offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic.
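
To make the Backend-as-a-Service point above concrete, here is a minimal sketch of calling a chat app's service API from Python. It is a sketch, not an authoritative client: the endpoint and payload fields follow Dify's service API documentation, while the base URL and API key below are placeholders you would replace with your own.

```python
import requests  # third-party: pip install requests

API_BASE = "https://api.dify.ai/v1"  # or http://localhost/v1 for self-hosted
API_KEY = "app-your-api-key"         # per-app key from the Dify console (placeholder)


def ask(query: str, user: str = "demo-user") -> str:
    """Send one blocking chat query to a Dify chat app and return the answer."""
    resp = requests.post(
        f"{API_BASE}/chat-messages",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"inputs": {}, "query": query, "response_mode": "blocking", "user": user},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["answer"]
```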

## Install the Community Edition

## Feature Comparison

<table style="width: 100%;">
  <tr>
    <th align="center">Feature</th>
    <th align="center">Dify.AI</th>
    <th align="center">LangChain</th>
    <th align="center">Flowise</th>
    <th align="center">OpenAI Assistants API</th>
  </tr>
  <tr>
    <td align="center">Programming Approach</td>
    <td align="center">API + App-oriented</td>
    <td align="center">Python Code</td>
    <td align="center">App-oriented</td>
    <td align="center">API-oriented</td>
  </tr>
  <tr>
    <td align="center">Supported LLMs</td>
    <td align="center">Rich Variety</td>
    <td align="center">Rich Variety</td>
    <td align="center">Rich Variety</td>
    <td align="center">OpenAI-only</td>
  </tr>
  <tr>
    <td align="center">RAG Engine</td>
    <td align="center">✅</td>
    <td align="center">✅</td>
    <td align="center">✅</td>
    <td align="center">✅</td>
  </tr>
  <tr>
    <td align="center">Agent</td>
    <td align="center">✅</td>
    <td align="center">✅</td>
    <td align="center">✅</td>
    <td align="center">✅</td>
  </tr>
  <tr>
    <td align="center">Workflow</td>
    <td align="center">✅</td>
    <td align="center">❌</td>
    <td align="center">✅</td>
    <td align="center">❌</td>
  </tr>
  <tr>
    <td align="center">Observability</td>
    <td align="center">✅</td>
    <td align="center">✅</td>
    <td align="center">❌</td>
    <td align="center">❌</td>
  </tr>
  <tr>
    <td align="center">Enterprise Feature (SSO/Access control)</td>
    <td align="center">✅</td>
    <td align="center">❌</td>
    <td align="center">❌</td>
    <td align="center">❌</td>
  </tr>
  <tr>
    <td align="center">Local Deployment</td>
    <td align="center">✅</td>
    <td align="center">✅</td>
    <td align="center">✅</td>
    <td align="center">❌</td>
  </tr>
</table>

### System Requirements

## Using Dify

Before installing Dify, make sure your machine meets the following minimum system requirements:
- **Cloud </br>**
We host a [Dify Cloud](https://dify.ai) service for anyone to try with zero setup. It provides all the capabilities of the self-deployed version, and includes 200 free GPT-4 calls in the sandbox plan.

- CPU >= 2 Core
- RAM >= 4GB
- **Self-hosting Dify Community Edition</br>**
Quickly get Dify running in your environment with this [starter guide](#quick-start).
Use our [documentation](https://docs.dify.ai) for further references and more in-depth instructions.

### Quick Start
- **Dify for Enterprise / Organizations</br>**
We provide additional enterprise-centric features. [Schedule a meeting with us](https://cal.com/guchenhe/30min) or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs. </br>
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC in one click. It's an affordable AMI offering with the option to create apps with a custom logo and branding.

## Staying ahead

Star Dify on GitHub and be instantly notified of new releases.



## Quick Start
> Before installing Dify, make sure your machine meets the following minimum system requirements:
>
> - CPU >= 2 Core
> - RAM >= 4GB

</br>

The easiest way to start the Dify server is to run our [docker-compose.yml](docker/docker-compose.yaml) file. Before running the installation command, make sure that [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your machine:

@@ -99,58 +179,63 @@ cd docker
docker compose up -d
```

After running, you can access the Dify dashboard in your browser at [http://localhost/install](http://localhost/install) and start the initialization installation process.
After running, you can access the Dify dashboard in your browser at [http://localhost/install](http://localhost/install) and start the initialization process.
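
If you script the installation, a small readiness check can wait for the stack before you open the dashboard. This is a sketch that assumes the default port mapping on localhost; adjust `base_url` if you changed the compose file.

```python
import time

import requests  # third-party: pip install requests


def wait_for_dify(base_url: str = "http://localhost", timeout: int = 120) -> bool:
    """Poll the web entrypoint until the freshly started stack responds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # any non-5xx answer means the web container is up
            if requests.get(f"{base_url}/install", timeout=5).status_code < 500:
                return True
        except requests.RequestException:
            pass
        time.sleep(3)
    return False


if __name__ == "__main__":
    print("Dify is up!" if wait_for_dify() else "Timed out waiting for Dify.")
```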

#### Deploy with Helm Chart
> If you'd like to contribute to Dify or do additional development, refer to our [guide to deploying from source code](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code)

A [Helm Chart](https://helm.sh/) version allows Dify to be deployed on Kubernetes.
## Next steps

If you need to customize the configuration, please refer to the comments in our [docker-compose.yml](docker/docker-compose.yaml) file and manually set the environment configuration. After making the changes, please run `docker-compose up -d` again. You can see the full list of environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).

If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) which allow Dify to be deployed on Kubernetes.

- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)

### Configuration

If you need to customize the configuration, please refer to the comments in our [docker-compose.yml](docker/docker-compose.yaml) file and manually set the environment configuration. After making the changes, please run `docker-compose up -d` again. You can see the full list of environment variables in our [docs](https://docs.dify.ai/getting-started/install-self-hosted/environments).

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)

## Contributing

For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).

At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.

### Projects made by the community

- [Chatbot Chrome Extension by @charli117](https://github.com/langgenius/chatbot-chrome-extension)
> We are looking for contributors to help with translating Dify to languages other than Mandarin or English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/8Tpq4AcN9c).

### Contributors
**Contributors**

<a href="https://github.com/langgenius/dify/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=langgenius/dify" />
</a>

### Translations

We are looking for contributors to help with translating Dify to languages other than Mandarin or English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/8Tpq4AcN9c).

## Community & Support
## Community & Contact

* [GitHub Discussions](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Email Support](mailto:hello@dify.ai?subject=[GitHub]Questions%20About%20Dify). Best for: questions you have about using Dify.AI.
* [Email](mailto:support@dify.ai?subject=[GitHub]Questions%20About%20Dify). Best for: questions you have about using Dify.AI.
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [Twitter](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.

### Direct Meetings
Or, schedule a meeting directly with a team member:

<table>
  <tr>
    <th>Point of Contact</th>
    <th>Purpose</th>
  </tr>
  <tr>
    <td><a href='https://cal.com/guchenhe/15min' target='_blank'><img class="schedule-button" src='https://github.com/langgenius/dify/assets/13230914/9ebcd111-1205-4d71-83d5-948d70b809f5' alt='Git-Hub-README-Button-3x' style="width: 180px; height: auto; object-fit: contain;"/></a></td>
    <td>Business enquiries & product feedback</td>
  </tr>
  <tr>
    <td><a href='https://cal.com/pinkbanana' target='_blank'><img class="schedule-button" src='https://github.com/langgenius/dify/assets/13230914/d1edd00a-d7e4-4513-be6c-e57038e143fd' alt='Git-Hub-README-Button-2x' style="width: 180px; height: auto; object-fit: contain;"/></a></td>
    <td>Contributions, issues & feature requests</td>
  </tr>
</table>

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)

| Point of Contact | Purpose |
| :----------------------------------------------------------: | :----------------------------------------------------------: |
| <a href='https://cal.com/guchenhe/15min' target='_blank'><img src='https://i.postimg.cc/fWBqSmjP/Git-Hub-README-Button-3x.png' border='0' alt='Git-Hub-README-Button-3x' height="60" width="214"/></a> | Business enquiries & product feedback. |
| <a href='https://cal.com/pinkbanana' target='_blank'><img src='https://i.postimg.cc/LsRTh87D/Git-Hub-README-Button-2x.png' border='0' alt='Git-Hub-README-Button-2x' height="60" width="225"/></a> | Contributions, issues & feature requests |

## Security Disclosure

@@ -21,6 +21,10 @@
    <img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web"></a>
</p>

<p align="center">
  <a href="https://trendshift.io/repositories/2152" target="_blank"><img src="https://trendshift.io/api/badge/repositories/2152" alt="langgenius%2Fdify | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</p>

<p align="center">
  <a href="https://mp.weixin.qq.com/s/TnyfIuH-tPi9o1KNjwVArw" target="_blank">
    Dify releases AI Agent capabilities: build GPTs and Assistants on top of different large language models
@@ -11,7 +11,8 @@ RUN apt-get update \

COPY requirements.txt /requirements.txt

RUN pip install --prefix=/pkg -r requirements.txt
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install --prefix=/pkg -r requirements.txt

# production stage
FROM base AS production

@@ -42,7 +42,7 @@ DEFAULTS = {
    'HOSTED_OPENAI_TRIAL_ENABLED': 'False',
    'HOSTED_OPENAI_TRIAL_MODELS': 'gpt-3.5-turbo,gpt-3.5-turbo-1106,gpt-3.5-turbo-instruct,gpt-3.5-turbo-16k,gpt-3.5-turbo-16k-0613,gpt-3.5-turbo-0613,gpt-3.5-turbo-0125,text-davinci-003',
    'HOSTED_OPENAI_PAID_ENABLED': 'False',
    'HOSTED_OPENAI_PAID_MODELS': 'gpt-4,gpt-4-turbo-preview,gpt-4-1106-preview,gpt-4-0125-preview,gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-16k-0613,gpt-3.5-turbo-1106,gpt-3.5-turbo-0613,gpt-3.5-turbo-0125,gpt-3.5-turbo-instruct,text-davinci-003',
    'HOSTED_OPENAI_PAID_MODELS': 'gpt-4,gpt-4-turbo-preview,gpt-4-turbo-2024-04-09,gpt-4-1106-preview,gpt-4-0125-preview,gpt-3.5-turbo,gpt-3.5-turbo-16k,gpt-3.5-turbo-16k-0613,gpt-3.5-turbo-1106,gpt-3.5-turbo-0613,gpt-3.5-turbo-0125,gpt-3.5-turbo-instruct,text-davinci-003',
    'HOSTED_AZURE_OPENAI_ENABLED': 'False',
    'HOSTED_AZURE_OPENAI_QUOTA_LIMIT': 200,
    'HOSTED_ANTHROPIC_QUOTA_LIMIT': 600000,
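
These comma-separated model lists are read back as plain strings from the environment; a helper along the following lines turns them into lists. This is a simplified sketch of what a `get_env`-style accessor does, not the exact Dify code.

```python
import os

DEFAULTS = {
    'HOSTED_OPENAI_PAID_MODELS': 'gpt-4,gpt-4-turbo-preview,gpt-3.5-turbo',
}


def get_env_list(key: str) -> list[str]:
    """Read a comma-separated env var, falling back to DEFAULTS."""
    raw = os.environ.get(key, DEFAULTS.get(key, ''))
    return [item.strip() for item in raw.split(',') if item.strip()]
```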
@@ -99,7 +99,7 @@ class Config:
        # ------------------------
        # General Configurations.
        # ------------------------
        self.CURRENT_VERSION = "0.6.1"
        self.CURRENT_VERSION = "0.6.2"
        self.COMMIT_SHA = get_env('COMMIT_SHA')
        self.EDITION = "SELF_HOSTED"
        self.DEPLOY_ENV = get_env('DEPLOY_ENV')

@@ -1,14 +1,11 @@
import json

from flask import current_app
from flask_restful import fields, marshal_with, Resource
from flask_restful import Resource, fields, marshal_with

from controllers.service_api import api
from controllers.service_api.app.error import AppUnavailableError
from controllers.service_api.wraps import validate_app_token
from extensions.ext_database import db
from models.model import App, AppModelConfig, AppMode
from models.tools import ApiToolProvider
from models.model import App, AppMode
from services.app_service import AppService


@@ -92,6 +89,16 @@ class AppMetaApi(Resource):
        """Get app meta"""
        return AppService().get_app_meta(app_model)


class AppInfoApi(Resource):
    @validate_app_token
    def get(self, app_model: App):
        """Get app information"""
        return {
            'name': app_model.name,
            'description': app_model.description
        }


api.add_resource(AppParameterApi, '/parameters')
api.add_resource(AppMetaApi, '/meta')
api.add_resource(AppInfoApi, '/info')

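
The freshly registered `/info` route can then be exercised like any other service-API endpoint. This sketch assumes a self-hosted instance at localhost, the `/v1` prefix the service API is mounted under, and a placeholder app token:

```python
import requests  # third-party: pip install requests

resp = requests.get(
    "http://localhost/v1/info",
    headers={"Authorization": "Bearer app-your-token"},  # placeholder token
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # expected shape: {'name': ..., 'description': ...}
```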
@@ -5,6 +5,7 @@ from datetime import datetime
from typing import Optional, Union, cast

from core.agent.entities import AgentEntity, AgentToolEntity
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
from core.app.apps.agent_chat.app_config_manager import AgentChatAppConfig
from core.app.apps.base_app_queue_manager import AppQueueManager
from core.app.apps.base_app_runner import AppRunner
@@ -14,6 +15,7 @@ from core.app.entities.app_invoke_entities import (
)
from core.callback_handler.agent_tool_callback_handler import DifyAgentCallbackHandler
from core.callback_handler.index_tool_callback_handler import DatasetIndexToolCallbackHandler
from core.file.message_file_parser import MessageFileParser
from core.memory.token_buffer_memory import TokenBufferMemory
from core.model_manager import ModelInstance
from core.model_runtime.entities.llm_entities import LLMUsage
@@ -22,6 +24,7 @@ from core.model_runtime.entities.message_entities import (
    PromptMessage,
    PromptMessageTool,
    SystemPromptMessage,
    TextPromptMessageContent,
    ToolPromptMessage,
    UserPromptMessage,
)
@@ -37,7 +40,7 @@ from core.tools.tool.dataset_retriever_tool import DatasetRetrieverTool
from core.tools.tool.tool import Tool
from core.tools.tool_manager import ToolManager
from extensions.ext_database import db
from models.model import Message, MessageAgentThought
from models.model import Conversation, Message, MessageAgentThought
from models.tools import ToolConversationVariables

logger = logging.getLogger(__name__)
@@ -45,6 +48,7 @@ logger = logging.getLogger(__name__)

class BaseAgentRunner(AppRunner):
    def __init__(self, tenant_id: str,
                 application_generate_entity: AgentChatAppGenerateEntity,
                 conversation: Conversation,
                 app_config: AgentChatAppConfig,
                 model_config: ModelConfigWithCredentialsEntity,
                 config: AgentEntity,
@@ -72,6 +76,7 @@ class BaseAgentRunner(AppRunner):
        """
        self.tenant_id = tenant_id
        self.application_generate_entity = application_generate_entity
        self.conversation = conversation
        self.app_config = app_config
        self.model_config = model_config
        self.config = config
@@ -118,6 +123,12 @@ class BaseAgentRunner(AppRunner):
        else:
            self.stream_tool_call = False

        # check if model supports vision
        if model_schema and ModelFeature.VISION in (model_schema.features or []):
            self.files = application_generate_entity.files
        else:
            self.files = []

    def _repack_app_generate_entity(self, app_generate_entity: AgentChatAppGenerateEntity) \
            -> AgentChatAppGenerateEntity:
        """
@@ -227,6 +238,34 @@ class BaseAgentRunner(AppRunner):

        return prompt_tool

    def _init_prompt_tools(self) -> tuple[dict[str, Tool], list[PromptMessageTool]]:
        """
        Init tools
        """
        tool_instances = {}
        prompt_messages_tools = []

        for tool in self.app_config.agent.tools if self.app_config.agent else []:
            try:
                prompt_tool, tool_entity = self._convert_tool_to_prompt_message_tool(tool)
            except Exception:
                # api tool may be deleted
                continue
            # save tool entity
            tool_instances[tool.tool_name] = tool_entity
            # save prompt tool
            prompt_messages_tools.append(prompt_tool)

        # convert dataset tools into ModelRuntime Tool format
        for dataset_tool in self.dataset_tools:
            prompt_tool = self._convert_dataset_retriever_tool_to_prompt_message_tool(dataset_tool)
            # save prompt tool
            prompt_messages_tools.append(prompt_tool)
            # save tool entity
            tool_instances[dataset_tool.identity.name] = dataset_tool

        return tool_instances, prompt_messages_tools

    def update_prompt_message_tool(self, tool: Tool, prompt_tool: PromptMessageTool) -> PromptMessageTool:
        """
        update prompt message tool
@@ -314,7 +353,7 @@ class BaseAgentRunner(AppRunner):
                           tool_name: str,
                           tool_input: Union[str, dict],
                           thought: str,
                           observation: Union[str, str],
                           observation: Union[str, dict],
                           tool_invoke_meta: Union[str, dict],
                           answer: str,
                           messages_ids: list[str],
@@ -412,15 +451,19 @@ class BaseAgentRunner(AppRunner):
        """
        result = []
        # check if there is a system message in the beginning of the conversation
        if prompt_messages and isinstance(prompt_messages[0], SystemPromptMessage):
            result.append(prompt_messages[0])
        for prompt_message in prompt_messages:
            if isinstance(prompt_message, SystemPromptMessage):
                result.append(prompt_message)

        messages: list[Message] = db.session.query(Message).filter(
            Message.conversation_id == self.message.conversation_id,
        ).order_by(Message.created_at.asc()).all()

        for message in messages:
            result.append(UserPromptMessage(content=message.query))
            if message.id == self.message.id:
                continue

            result.append(self.organize_agent_user_prompt(message))
            agent_thoughts: list[MessageAgentThought] = message.agent_thoughts
            if agent_thoughts:
                for agent_thought in agent_thoughts:
@@ -471,3 +514,32 @@ class BaseAgentRunner(AppRunner):
        db.session.close()

        return result

    def organize_agent_user_prompt(self, message: Message) -> UserPromptMessage:
        message_file_parser = MessageFileParser(
            tenant_id=self.tenant_id,
            app_id=self.app_config.app_id,
        )

        files = message.message_files
        if files:
            file_extra_config = FileUploadConfigManager.convert(message.app_model_config.to_dict())

            if file_extra_config:
                file_objs = message_file_parser.transform_message_files(
                    files,
                    file_extra_config
                )
            else:
                file_objs = []

            if not file_objs:
                return UserPromptMessage(content=message.query)
            else:
                prompt_message_contents = [TextPromptMessageContent(data=message.query)]
                for file_obj in file_objs:
                    prompt_message_contents.append(file_obj.prompt_message_content)

                return UserPromptMessage(content=prompt_message_contents)
        else:
            return UserPromptMessage(content=message.query)

@@ -1,33 +1,36 @@
import json
import re
from abc import ABC, abstractmethod
from collections.abc import Generator
from typing import Literal, Union
from typing import Union

from core.agent.base_agent_runner import BaseAgentRunner
from core.agent.entities import AgentPromptEntity, AgentScratchpadUnit
from core.agent.entities import AgentScratchpadUnit
from core.agent.output_parser.cot_output_parser import CotAgentOutputParser
from core.app.apps.base_app_queue_manager import PublishFrom
from core.app.entities.queue_entities import QueueAgentThoughtEvent, QueueMessageEndEvent, QueueMessageFileEvent
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from core.model_runtime.entities.message_entities import (
    AssistantPromptMessage,
    PromptMessage,
    PromptMessageTool,
    SystemPromptMessage,
    ToolPromptMessage,
    UserPromptMessage,
)
from core.model_runtime.utils.encoders import jsonable_encoder
from core.tools.entities.tool_entities import ToolInvokeMeta
from core.tools.tool.tool import Tool
from core.tools.tool_engine import ToolEngine
from models.model import Conversation, Message
from models.model import Message


class CotAgentRunner(BaseAgentRunner):
class CotAgentRunner(BaseAgentRunner, ABC):
    _is_first_iteration = True
    _ignore_observation_providers = ['wenxin']
    _historic_prompt_messages: list[PromptMessage] = None
    _agent_scratchpad: list[AgentScratchpadUnit] = None
    _instruction: str = None
    _query: str = None
    _prompt_messages_tools: list[PromptMessage] = None

    def run(self, conversation: Conversation,
            message: Message,
    def run(self, message: Message,
            query: str,
            inputs: dict[str, str],
            ) -> Union[Generator, LLMResult]:
@@ -36,9 +39,7 @@ class CotAgentRunner(BaseAgentRunner):
        """
        app_generate_entity = self.application_generate_entity
        self._repack_app_generate_entity(app_generate_entity)

        agent_scratchpad: list[AgentScratchpadUnit] = []
        self._init_agent_scratchpad(agent_scratchpad, self.history_prompt_messages)
        self._init_react_state(query)

        # check model mode
        if 'Observation' not in app_generate_entity.model_config.stop:
@@ -47,38 +48,19 @@ class CotAgentRunner(BaseAgentRunner):

        app_config = self.app_config

        # override inputs
        # init instruction
        inputs = inputs or {}
        instruction = app_config.prompt_template.simple_prompt_template
        instruction = self._fill_in_inputs_from_external_data_tools(instruction, inputs)
        self._instruction = self._fill_in_inputs_from_external_data_tools(instruction, inputs)

        iteration_step = 1
        max_iteration_steps = min(app_config.agent.max_iteration, 5) + 1

        prompt_messages = self.history_prompt_messages

        # convert tools into ModelRuntime Tool format
        prompt_messages_tools: list[PromptMessageTool] = []
        tool_instances = {}
        for tool in app_config.agent.tools if app_config.agent else []:
            try:
                prompt_tool, tool_entity = self._convert_tool_to_prompt_message_tool(tool)
            except Exception:
                # api tool may be deleted
                continue
            # save tool entity
            tool_instances[tool.tool_name] = tool_entity
            # save prompt tool
            prompt_messages_tools.append(prompt_tool)

        # convert dataset tools into ModelRuntime Tool format
        for dataset_tool in self.dataset_tools:
            prompt_tool = self._convert_dataset_retriever_tool_to_prompt_message_tool(dataset_tool)
            # save prompt tool
            prompt_messages_tools.append(prompt_tool)
            # save tool entity
            tool_instances[dataset_tool.identity.name] = dataset_tool
        tool_instances, self._prompt_messages_tools = self._init_prompt_tools()

        prompt_messages = self._organize_prompt_messages()

        function_call_state = True
        llm_usage = {
            'usage': None
@@ -103,7 +85,7 @@ class CotAgentRunner(BaseAgentRunner):

            if iteration_step == max_iteration_steps:
                # the last iteration, remove all tools
                prompt_messages_tools = []
                self._prompt_messages_tools = []

            message_file_ids = []

@@ -120,18 +102,8 @@ class CotAgentRunner(BaseAgentRunner):
                agent_thought_id=agent_thought.id
            ), PublishFrom.APPLICATION_MANAGER)

            # update prompt messages
            prompt_messages = self._organize_cot_prompt_messages(
                mode=app_generate_entity.model_config.mode,
                prompt_messages=prompt_messages,
                tools=prompt_messages_tools,
                agent_scratchpad=agent_scratchpad,
                agent_prompt_message=app_config.agent.prompt,
                instruction=instruction,
                input=query
            )

            # recalc llm max tokens
            prompt_messages = self._organize_prompt_messages()
            self.recalc_llm_max_tokens(self.model_config, prompt_messages)
            # invoke model
            chunks: Generator[LLMResultChunk, None, None] = model_instance.invoke_llm(
@@ -149,7 +121,7 @@ class CotAgentRunner(BaseAgentRunner):
                raise ValueError("failed to invoke llm")

            usage_dict = {}
            react_chunks = self._handle_stream_react(chunks, usage_dict)
            react_chunks = CotAgentOutputParser.handle_react_stream_output(chunks)
            scratchpad = AgentScratchpadUnit(
                agent_response='',
                thought='',
@@ -165,30 +137,12 @@ class CotAgentRunner(BaseAgentRunner):
            ), PublishFrom.APPLICATION_MANAGER)

            for chunk in react_chunks:
                if isinstance(chunk, dict):
                    scratchpad.agent_response += json.dumps(chunk)
                    try:
                        if scratchpad.action:
                            raise Exception("")
                        scratchpad.action_str = json.dumps(chunk)
                        scratchpad.action = AgentScratchpadUnit.Action(
                            action_name=chunk['action'],
                            action_input=chunk['action_input']
                        )
                    except:
                        scratchpad.thought += json.dumps(chunk)
                        yield LLMResultChunk(
                            model=self.model_config.model,
                            prompt_messages=prompt_messages,
                            system_fingerprint='',
                            delta=LLMResultChunkDelta(
                                index=0,
                                message=AssistantPromptMessage(
                                    content=json.dumps(chunk, ensure_ascii=False)  # if ensure_ascii=True, the text in webui maybe garbled text
                                ),
                                usage=None
                            )
                        )
                if isinstance(chunk, AgentScratchpadUnit.Action):
                    action = chunk
                    # detect action
                    scratchpad.agent_response += json.dumps(chunk.dict())
                    scratchpad.action_str = json.dumps(chunk.dict())
                    scratchpad.action = action
                else:
                    scratchpad.agent_response += chunk
                    scratchpad.thought += chunk
@@ -206,27 +160,29 @@ class CotAgentRunner(BaseAgentRunner):
            )

            scratchpad.thought = scratchpad.thought.strip() or 'I am thinking about how to help you'
            agent_scratchpad.append(scratchpad)

            self._agent_scratchpad.append(scratchpad)

            # get llm usage
            if 'usage' in usage_dict:
                increase_usage(llm_usage, usage_dict['usage'])
            else:
                usage_dict['usage'] = LLMUsage.empty_usage()

            self.save_agent_thought(agent_thought=agent_thought,
                                    tool_name=scratchpad.action.action_name if scratchpad.action else '',
                                    tool_input={
                                        scratchpad.action.action_name: scratchpad.action.action_input
                                    } if scratchpad.action else '',
                                    tool_invoke_meta={},
                                    thought=scratchpad.thought,
                                    observation='',
                                    answer=scratchpad.agent_response,
                                    messages_ids=[],
                                    llm_usage=usage_dict['usage'])
            self.save_agent_thought(
                agent_thought=agent_thought,
                tool_name=scratchpad.action.action_name if scratchpad.action else '',
                tool_input={
                    scratchpad.action.action_name: scratchpad.action.action_input
                } if scratchpad.action else {},
                tool_invoke_meta={},
                thought=scratchpad.thought,
                observation='',
                answer=scratchpad.agent_response,
                messages_ids=[],
                llm_usage=usage_dict['usage']
            )

            if scratchpad.action and scratchpad.action.action_name.lower() != "final answer":
            if not scratchpad.is_final():
                self.queue_manager.publish(QueueAgentThoughtEvent(
                    agent_thought_id=agent_thought.id
                ), PublishFrom.APPLICATION_MANAGER)
@@ -238,106 +194,43 @@ class CotAgentRunner(BaseAgentRunner):
                if scratchpad.action.action_name.lower() == "final answer":
                    # action is final answer, return final answer directly
                    try:
                        final_answer = scratchpad.action.action_input if \
                            isinstance(scratchpad.action.action_input, str) else \
                            json.dumps(scratchpad.action.action_input)
                        if isinstance(scratchpad.action.action_input, dict):
                            final_answer = json.dumps(scratchpad.action.action_input)
                        elif isinstance(scratchpad.action.action_input, str):
                            final_answer = scratchpad.action.action_input
                        else:
                            final_answer = f'{scratchpad.action.action_input}'
                    except json.JSONDecodeError:
                        final_answer = f'{scratchpad.action.action_input}'
                else:
                    function_call_state = True

                    # action is tool call, invoke tool
                    tool_call_name = scratchpad.action.action_name
                    tool_call_args = scratchpad.action.action_input
                    tool_instance = tool_instances.get(tool_call_name)
                    if not tool_instance:
                        answer = f"there is not a tool named {tool_call_name}"
                        self.save_agent_thought(
                            agent_thought=agent_thought,
                            tool_name='',
                            tool_input='',
                            tool_invoke_meta=ToolInvokeMeta.error_instance(
                                f"there is not a tool named {tool_call_name}"
                            ).to_dict(),
                            thought=None,
                            observation={
                                tool_call_name: answer
                            },
                            answer=answer,
                            messages_ids=[]
                        )
                        self.queue_manager.publish(QueueAgentThoughtEvent(
                            agent_thought_id=agent_thought.id
                        ), PublishFrom.APPLICATION_MANAGER)
                    else:
                        if isinstance(tool_call_args, str):
                            try:
                                tool_call_args = json.loads(tool_call_args)
                            except json.JSONDecodeError:
                                pass
                    tool_invoke_response, tool_invoke_meta = self._handle_invoke_action(
                        action=scratchpad.action,
                        tool_instances=tool_instances,
                        message_file_ids=message_file_ids
                    )
                    scratchpad.observation = tool_invoke_response
                    scratchpad.agent_response = tool_invoke_response

                        # invoke tool
                        tool_invoke_response, message_files, tool_invoke_meta = ToolEngine.agent_invoke(
                            tool=tool_instance,
                            tool_parameters=tool_call_args,
                            user_id=self.user_id,
                            tenant_id=self.tenant_id,
                            message=self.message,
                            invoke_from=self.application_generate_entity.invoke_from,
                            agent_tool_callback=self.agent_callback
                        )
                        # publish files
                        for message_file, save_as in message_files:
                            if save_as:
                                self.variables_pool.set_file(tool_name=tool_call_name, value=message_file.id, name=save_as)
                    self.save_agent_thought(
                        agent_thought=agent_thought,
                        tool_name=scratchpad.action.action_name,
                        tool_input={scratchpad.action.action_name: scratchpad.action.action_input},
                        thought=scratchpad.thought,
                        observation={scratchpad.action.action_name: tool_invoke_response},
                        tool_invoke_meta=tool_invoke_meta.to_dict(),
                        answer=scratchpad.agent_response,
                        messages_ids=message_file_ids,
                        llm_usage=usage_dict['usage']
                    )

                            # publish message file
                            self.queue_manager.publish(QueueMessageFileEvent(
                                message_file_id=message_file.id
                            ), PublishFrom.APPLICATION_MANAGER)
                            # add message file ids
                            message_file_ids.append(message_file.id)

                        # publish files
                        for message_file, save_as in message_files:
                            if save_as:
                                self.variables_pool.set_file(tool_name=tool_call_name,
                                                             value=message_file.id,
                                                             name=save_as)
                            self.queue_manager.publish(QueueMessageFileEvent(
                                message_file_id=message_file.id
                            ), PublishFrom.APPLICATION_MANAGER)

                        message_file_ids = [message_file.id for message_file, _ in message_files]

                        observation = tool_invoke_response

                        # save scratchpad
                        scratchpad.observation = observation

                        # save agent thought
                        self.save_agent_thought(
                            agent_thought=agent_thought,
                            tool_name=tool_call_name,
                            tool_input={
                                tool_call_name: tool_call_args
                            },
                            tool_invoke_meta={
                                tool_call_name: tool_invoke_meta.to_dict()
                            },
                            thought=None,
                            observation={
                                tool_call_name: observation
                            },
                            answer=scratchpad.agent_response,
                            messages_ids=message_file_ids,
                        )
                        self.queue_manager.publish(QueueAgentThoughtEvent(
                            agent_thought_id=agent_thought.id
                        ), PublishFrom.APPLICATION_MANAGER)
                    self.queue_manager.publish(QueueAgentThoughtEvent(
                        agent_thought_id=agent_thought.id
                    ), PublishFrom.APPLICATION_MANAGER)

                # update prompt tool message
                for prompt_tool in prompt_messages_tools:
                for prompt_tool in self._prompt_messages_tools:
                    self.update_prompt_message_tool(tool_instances[prompt_tool.name], prompt_tool)

                iteration_step += 1
@@ -379,96 +272,63 @@ class CotAgentRunner(BaseAgentRunner):
                system_fingerprint=''
            )), PublishFrom.APPLICATION_MANAGER)

    def _handle_stream_react(self, llm_response: Generator[LLMResultChunk, None, None], usage: dict) \
            -> Generator[Union[str, dict], None, None]:
        def parse_json(json_str):
    def _handle_invoke_action(self, action: AgentScratchpadUnit.Action,
                              tool_instances: dict[str, Tool],
                              message_file_ids: list[str]) -> tuple[str, ToolInvokeMeta]:
        """
        handle invoke action
        :param action: action
        :param tool_instances: tool instances
        :return: observation, meta
        """
        # action is tool call, invoke tool
        tool_call_name = action.action_name
        tool_call_args = action.action_input
        tool_instance = tool_instances.get(tool_call_name)

        if not tool_instance:
            answer = f"there is not a tool named {tool_call_name}"
            return answer, ToolInvokeMeta.error_instance(answer)

        if isinstance(tool_call_args, str):
            try:
                return json.loads(json_str.strip())
            except:
                return json_str

        def extra_json_from_code_block(code_block) -> Generator[Union[dict, str], None, None]:
            code_blocks = re.findall(r'```(.*?)```', code_block, re.DOTALL)
            if not code_blocks:
                return
            for block in code_blocks:
                json_text = re.sub(r'^[a-zA-Z]+\n', '', block.strip(), flags=re.MULTILINE)
                yield parse_json(json_text)

        code_block_cache = ''
        code_block_delimiter_count = 0
        in_code_block = False
        json_cache = ''
        json_quote_count = 0
        in_json = False
        got_json = False

        for response in llm_response:
            response = response.delta.message.content
            if not isinstance(response, str):
                continue
                tool_call_args = json.loads(tool_call_args)
            except json.JSONDecodeError:
                pass

            # stream
            index = 0
            while index < len(response):
                steps = 1
                delta = response[index:index+steps]
                if delta == '`':
                    code_block_cache += delta
                    code_block_delimiter_count += 1
                else:
                    if not in_code_block:
                        if code_block_delimiter_count > 0:
                            yield code_block_cache
                            code_block_cache = ''
                    else:
                        code_block_cache += delta
                    code_block_delimiter_count = 0
        # invoke tool
        tool_invoke_response, message_files, tool_invoke_meta = ToolEngine.agent_invoke(
            tool=tool_instance,
            tool_parameters=tool_call_args,
            user_id=self.user_id,
            tenant_id=self.tenant_id,
            message=self.message,
            invoke_from=self.application_generate_entity.invoke_from,
            agent_tool_callback=self.agent_callback
        )

                if code_block_delimiter_count == 3:
                    if in_code_block:
                        yield from extra_json_from_code_block(code_block_cache)
                        code_block_cache = ''

                    in_code_block = not in_code_block
                    code_block_delimiter_count = 0
        # publish files
        for message_file, save_as in message_files:
            if save_as:
                self.variables_pool.set_file(tool_name=tool_call_name, value=message_file.id, name=save_as)

                if not in_code_block:
                    # handle single json
                    if delta == '{':
                        json_quote_count += 1
                        in_json = True
                        json_cache += delta
                    elif delta == '}':
                        json_cache += delta
                        if json_quote_count > 0:
                            json_quote_count -= 1
                            if json_quote_count == 0:
                                in_json = False
                                got_json = True
                                index += steps
                                continue
                    else:
                        if in_json:
                            json_cache += delta
            # publish message file
            self.queue_manager.publish(QueueMessageFileEvent(
                message_file_id=message_file.id
            ), PublishFrom.APPLICATION_MANAGER)
            # add message file ids
            message_file_ids.append(message_file.id)

                    if got_json:
                        got_json = False
                        yield parse_json(json_cache)
                        json_cache = ''
                        json_quote_count = 0
                        in_json = False

                if not in_code_block and not in_json:
                    yield delta.replace('`', '')
        return tool_invoke_response, tool_invoke_meta

                index += steps

        if code_block_cache:
            yield code_block_cache

        if json_cache:
            yield parse_json(json_cache)
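
The hand-rolled streaming parser removed above is replaced by `CotAgentOutputParser.handle_react_stream_output`. As a rough illustration of the job such a parser does (a simplified, non-streaming sketch, not the actual implementation), a ReAct completion carries its action as a JSON blob that can be pulled out like this:

```python
import json
import re


def extract_action(text: str) -> dict | str:
    """Return the first JSON action blob found in a ReAct completion,
    or the raw text when no action is present (i.e. a plain thought)."""
    # actions are emitted either inside ``` fences or as a bare {...} blob
    for candidate in re.findall(r'```(?:json)?\s*(.*?)```', text, re.DOTALL) + [text]:
        match = re.search(r'\{.*\}', candidate, re.DOTALL)
        if not match:
            continue
        try:
            blob = json.loads(match.group(0))
        except json.JSONDecodeError:
            continue
        if isinstance(blob, dict) and 'action' in blob:
            return blob
    return text


print(extract_action(
    'Thought: use a tool\n```json\n{"action": "google_search", "action_input": {"query": "dify"}}\n```'
))
```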
    def _convert_dict_to_action(self, action: dict) -> AgentScratchpadUnit.Action:
        """
        convert dict to action
        """
        return AgentScratchpadUnit.Action(
            action_name=action['action'],
            action_input=action['action_input']
        )

    def _fill_in_inputs_from_external_data_tools(self, instruction: str, inputs: dict) -> str:
        """
@@ -482,15 +342,46 @@ class CotAgentRunner(BaseAgentRunner):

        return instruction

    def _init_agent_scratchpad(self,
                               agent_scratchpad: list[AgentScratchpadUnit],
                               messages: list[PromptMessage]
                               ) -> list[AgentScratchpadUnit]:
    def _init_react_state(self, query) -> None:
        """
        init agent scratchpad
        """
        self._query = query
        self._agent_scratchpad = []
        self._historic_prompt_messages = self._organize_historic_prompt_messages()

    @abstractmethod
    def _organize_prompt_messages(self) -> list[PromptMessage]:
        """
        organize prompt messages
        """

    def _format_assistant_message(self, agent_scratchpad: list[AgentScratchpadUnit]) -> str:
        """
        format assistant message
        """
        message = ''
        for scratchpad in agent_scratchpad:
            if scratchpad.is_final():
                message += f"Final Answer: {scratchpad.agent_response}"
            else:
                message += f"Thought: {scratchpad.thought}\n\n"
                if scratchpad.action_str:
                    message += f"Action: {scratchpad.action_str}\n\n"
                if scratchpad.observation:
                    message += f"Observation: {scratchpad.observation}\n\n"

        return message
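
`_format_assistant_message` linearizes a scratchpad back into ReAct-style transcript text. A toy run shows the shape of its output; the `Unit` record below is a hypothetical stand-in for the internal `AgentScratchpadUnit`, with a simplified `is_final` check:

```python
from dataclasses import dataclass


@dataclass
class Unit:  # hypothetical stand-in for AgentScratchpadUnit
    thought: str = ''
    action_str: str = ''
    observation: str = ''
    agent_response: str = ''

    def is_final(self) -> bool:
        # simplified: the real check inspects the parsed action name
        return not self.action_str and bool(self.agent_response)


def format_assistant_message(scratchpad: list[Unit]) -> str:
    message = ''
    for unit in scratchpad:
        if unit.is_final():
            message += f"Final Answer: {unit.agent_response}"
        else:
            message += f"Thought: {unit.thought}\n\n"
            if unit.action_str:
                message += f"Action: {unit.action_str}\n\n"
            if unit.observation:
                message += f"Observation: {unit.observation}\n\n"
    return message


print(format_assistant_message([
    Unit(thought='look it up', action_str='{"action": "google_search"}', observation='42'),
    Unit(agent_response='The answer is 42.'),
]))
```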
def _organize_historic_prompt_messages(self) -> list[PromptMessage]:
|
||||
"""
|
||||
organize historic prompt messages
|
||||
"""
|
||||
result: list[PromptMessage] = []
|
||||
scratchpad: list[AgentScratchpadUnit] = []
|
||||
current_scratchpad: AgentScratchpadUnit = None
|
||||
for message in messages:
|
||||
|
||||
for message in self.history_prompt_messages:
|
||||
if isinstance(message, AssistantPromptMessage):
|
||||
current_scratchpad = AgentScratchpadUnit(
|
||||
agent_response=message.content,
|
||||
@ -505,186 +396,29 @@ class CotAgentRunner(BaseAgentRunner):
|
||||
action_name=message.tool_calls[0].function.name,
|
||||
action_input=json.loads(message.tool_calls[0].function.arguments)
|
||||
)
|
||||
current_scratchpad.action_str = json.dumps(
|
||||
current_scratchpad.action.to_dict()
|
||||
)
|
||||
except:
|
||||
pass
|
||||
|
||||
agent_scratchpad.append(current_scratchpad)
|
||||
|
||||
scratchpad.append(current_scratchpad)
|
||||
elif isinstance(message, ToolPromptMessage):
|
||||
if current_scratchpad:
|
||||
current_scratchpad.observation = message.content
|
||||
elif isinstance(message, UserPromptMessage):
|
||||
result.append(message)
|
||||
|
||||
if scratchpad:
|
||||
result.append(AssistantPromptMessage(
|
||||
content=self._format_assistant_message(scratchpad)
|
||||
))
|
||||
|
||||
scratchpad = []
|
||||
|
||||
if scratchpad:
|
||||
result.append(AssistantPromptMessage(
|
||||
content=self._format_assistant_message(scratchpad)
|
||||
))
|
||||
|
||||
return agent_scratchpad
|
||||
|
||||
def _check_cot_prompt_messages(self, mode: Literal["completion", "chat"],
|
||||
agent_prompt_message: AgentPromptEntity,
|
||||
):
|
||||
"""
|
||||
check chain of thought prompt messages, a standard prompt message is like:
|
||||
Respond to the human as helpfully and accurately as possible.
|
||||
|
||||
{{instruction}}
|
||||
|
||||
You have access to the following tools:
|
||||
|
||||
{{tools}}
|
||||
|
||||
Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).
|
||||
Valid action values: "Final Answer" or {{tool_names}}
|
||||
|
||||
Provide only ONE action per $JSON_BLOB, as shown:
|
||||
|
||||
```
|
||||
{
|
||||
"action": $TOOL_NAME,
|
||||
"action_input": $ACTION_INPUT
|
||||
}
|
||||
```
|
||||
"""
|
||||
|
||||
# parse agent prompt message
|
||||
first_prompt = agent_prompt_message.first_prompt
|
||||
next_iteration = agent_prompt_message.next_iteration
|
||||
|
||||
        if not isinstance(first_prompt, str) or not isinstance(next_iteration, str):
            raise ValueError("first_prompt or next_iteration is required in CoT agent mode")

        # check instruction, tools, and tool_names slots
        if not first_prompt.find("{{instruction}}") >= 0:
            raise ValueError("{{instruction}} is required in first_prompt")
        if not first_prompt.find("{{tools}}") >= 0:
            raise ValueError("{{tools}} is required in first_prompt")
        if not first_prompt.find("{{tool_names}}") >= 0:
            raise ValueError("{{tool_names}} is required in first_prompt")

        if mode == "completion":
            if not first_prompt.find("{{query}}") >= 0:
                raise ValueError("{{query}} is required in first_prompt")
            if not first_prompt.find("{{agent_scratchpad}}") >= 0:
                raise ValueError("{{agent_scratchpad}} is required in first_prompt")

        if mode == "completion":
            if not next_iteration.find("{{observation}}") >= 0:
                raise ValueError("{{observation}} is required in next_iteration")

    def _convert_scratchpad_list_to_str(self, agent_scratchpad: list[AgentScratchpadUnit]) -> str:
        """
        convert agent scratchpad list to str
        """
        next_iteration = self.app_config.agent.prompt.next_iteration

        result = ''
        for scratchpad in agent_scratchpad:
            result += (scratchpad.thought or '') + (scratchpad.action_str or '') + \
                next_iteration.replace("{{observation}}", scratchpad.observation or 'It seems that no response is available')

        return result

    def _organize_cot_prompt_messages(self, mode: Literal["completion", "chat"],
                                      prompt_messages: list[PromptMessage],
                                      tools: list[PromptMessageTool],
                                      agent_scratchpad: list[AgentScratchpadUnit],
                                      agent_prompt_message: AgentPromptEntity,
                                      instruction: str,
                                      input: str,
                                      ) -> list[PromptMessage]:
        """
        organize chain-of-thought prompt messages; a standard prompt message looks like:

        Respond to the human as helpfully and accurately as possible.

        {{instruction}}

        You have access to the following tools:

        {{tools}}

        Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).
        Valid action values: "Final Answer" or {{tool_names}}

        Provide only ONE action per $JSON_BLOB, as shown:

        ```
        {{{{
          "action": $TOOL_NAME,
          "action_input": $ACTION_INPUT
        }}}}
        ```
        """
        self._check_cot_prompt_messages(mode, agent_prompt_message)

        # parse agent prompt message
        first_prompt = agent_prompt_message.first_prompt

        # parse tools
        tools_str = self._jsonify_tool_prompt_messages(tools)

        # parse tool names
        tool_names = '"' + '","'.join([tool.name for tool in tools]) + '"'

        # get system message
        system_message = first_prompt.replace("{{instruction}}", instruction) \
            .replace("{{tools}}", tools_str) \
            .replace("{{tool_names}}", tool_names)

        # organize prompt messages
        if mode == "chat":
            # override system message
            overridden = False
            prompt_messages = prompt_messages.copy()
            for prompt_message in prompt_messages:
                if isinstance(prompt_message, SystemPromptMessage):
                    prompt_message.content = system_message
                    overridden = True
                    break

            # convert tool prompt messages to user prompt messages
            for idx, prompt_message in enumerate(prompt_messages):
                if isinstance(prompt_message, ToolPromptMessage):
                    prompt_messages[idx] = UserPromptMessage(
                        content=prompt_message.content
                    )

            if not overridden:
                prompt_messages.insert(0, SystemPromptMessage(
                    content=system_message,
                ))

            # add assistant message
            if len(agent_scratchpad) > 0 and not self._is_first_iteration:
                prompt_messages.append(AssistantPromptMessage(
                    content=(agent_scratchpad[-1].thought or '') + (agent_scratchpad[-1].action_str or ''),
                ))

            # add user message
            if len(agent_scratchpad) > 0 and not self._is_first_iteration:
                prompt_messages.append(UserPromptMessage(
                    content=(agent_scratchpad[-1].observation or 'It seems that no response is available'),
                ))

            self._is_first_iteration = False

            return prompt_messages
        elif mode == "completion":
            # parse agent scratchpad
            agent_scratchpad_str = self._convert_scratchpad_list_to_str(agent_scratchpad)
            self._is_first_iteration = False
            # parse prompt messages
            return [UserPromptMessage(
                content=first_prompt.replace("{{instruction}}", instruction)
                    .replace("{{tools}}", tools_str)
                    .replace("{{tool_names}}", tool_names)
                    .replace("{{query}}", input)
                    .replace("{{agent_scratchpad}}", agent_scratchpad_str),
            )]
        else:
            raise ValueError(f"mode {mode} is not supported")

    def _jsonify_tool_prompt_messages(self, tools: list[PromptMessageTool]) -> str:
        """
        jsonify tool prompt messages
        """
        tools = jsonable_encoder(tools)
        try:
            return json.dumps(tools, ensure_ascii=False)
        except json.JSONDecodeError:
            return json.dumps(tools)
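To see what the slot substitution above produces, here is a minimal, self-contained sketch of the `{{tools}}` / `{{tool_names}}` replacement on a toy template. The template text and the plain-dict "tools" are illustrative stand-ins, not the shipped defaults:

```python
import json

# Illustrative stand-ins: a toy first_prompt and plain-dict tools.
first_prompt = "You can use these tools:\n{{tools}}\nValid actions: \"Final Answer\" or {{tool_names}}"
tools = [{"name": "weather", "description": "Look up the weather",
          "parameters": {"type": "object", "properties": {}}}]

tools_str = json.dumps(tools, ensure_ascii=False)  # what _jsonify_tool_prompt_messages returns
tool_names = '"' + '","'.join(t["name"] for t in tools) + '"'

system_message = first_prompt.replace("{{tools}}", tools_str) \
    .replace("{{tool_names}}", tool_names)
print(system_message)
```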
api/core/agent/cot_chat_agent_runner.py (new file, +71 lines)
@@ -0,0 +1,71 @@
import json

from core.agent.cot_agent_runner import CotAgentRunner
from core.model_runtime.entities.message_entities import (
    AssistantPromptMessage,
    PromptMessage,
    SystemPromptMessage,
    UserPromptMessage,
)
from core.model_runtime.utils.encoders import jsonable_encoder


class CotChatAgentRunner(CotAgentRunner):
    def _organize_system_prompt(self) -> SystemPromptMessage:
        """
        Organize system prompt
        """
        prompt_entity = self.app_config.agent.prompt
        first_prompt = prompt_entity.first_prompt

        system_prompt = first_prompt \
            .replace("{{instruction}}", self._instruction) \
            .replace("{{tools}}", json.dumps(jsonable_encoder(self._prompt_messages_tools))) \
            .replace("{{tool_names}}", ', '.join([tool.name for tool in self._prompt_messages_tools]))

        return SystemPromptMessage(content=system_prompt)

    def _organize_prompt_messages(self) -> list[PromptMessage]:
        """
        Organize prompt messages
        """
        # organize system prompt
        system_message = self._organize_system_prompt()

        # organize historic prompt messages
        historic_messages = self._historic_prompt_messages

        # organize current assistant messages
        agent_scratchpad = self._agent_scratchpad
        if not agent_scratchpad:
            assistant_messages = []
        else:
            assistant_message = AssistantPromptMessage(content='')
            for unit in agent_scratchpad:
                if unit.is_final():
                    assistant_message.content += f"Final Answer: {unit.agent_response}"
                else:
                    assistant_message.content += f"Thought: {unit.thought}\n\n"
                    if unit.action_str:
                        assistant_message.content += f"Action: {unit.action_str}\n\n"
                    if unit.observation:
                        assistant_message.content += f"Observation: {unit.observation}\n\n"

            assistant_messages = [assistant_message]

        # query messages
        query_messages = UserPromptMessage(content=self._query)

        if assistant_messages:
            messages = [
                system_message,
                *historic_messages,
                query_messages,
                *assistant_messages,
                UserPromptMessage(content='continue')
            ]
        else:
            messages = [system_message, *historic_messages, query_messages]

        # join all messages
        return messages
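To make the scratchpad folding above concrete, here is a minimal, runnable sketch of the same assembly logic. `Unit` is a simplified stand-in for `AgentScratchpadUnit`, not the actual dify entity; field names mirror the diff:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Unit:
    # Simplified stand-in for AgentScratchpadUnit.
    thought: Optional[str] = None
    action_str: Optional[str] = None
    observation: Optional[str] = None
    agent_response: Optional[str] = None

    def is_final(self) -> bool:
        return self.agent_response is not None

def fold_scratchpad(units: list) -> str:
    # Same folding rule as _organize_prompt_messages above.
    content = ''
    for unit in units:
        if unit.is_final():
            content += f"Final Answer: {unit.agent_response}"
        else:
            content += f"Thought: {unit.thought}\n\n"
            if unit.action_str:
                content += f"Action: {unit.action_str}\n\n"
            if unit.observation:
                content += f"Observation: {unit.observation}\n\n"
    return content

print(fold_scratchpad([
    Unit(thought="I should look up the weather",
         action_str='{"action": "weather", "action_input": "Paris"}',
         observation="22°C, sunny"),
    Unit(agent_response="It is 22°C and sunny in Paris."),
]))
```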
api/core/agent/cot_completion_agent_runner.py (new file, +69 lines)
@@ -0,0 +1,69 @@
import json

from core.agent.cot_agent_runner import CotAgentRunner
from core.model_runtime.entities.message_entities import AssistantPromptMessage, PromptMessage, UserPromptMessage
from core.model_runtime.utils.encoders import jsonable_encoder


class CotCompletionAgentRunner(CotAgentRunner):
    def _organize_instruction_prompt(self) -> str:
        """
        Organize instruction prompt
        """
        prompt_entity = self.app_config.agent.prompt
        first_prompt = prompt_entity.first_prompt

        system_prompt = first_prompt.replace("{{instruction}}", self._instruction) \
            .replace("{{tools}}", json.dumps(jsonable_encoder(self._prompt_messages_tools))) \
            .replace("{{tool_names}}", ', '.join([tool.name for tool in self._prompt_messages_tools]))

        return system_prompt

    def _organize_historic_prompt(self) -> str:
        """
        Organize historic prompt
        """
        historic_prompt_messages = self._historic_prompt_messages
        historic_prompt = ""

        for message in historic_prompt_messages:
            if isinstance(message, UserPromptMessage):
                historic_prompt += f"Question: {message.content}\n\n"
            elif isinstance(message, AssistantPromptMessage):
                historic_prompt += message.content + "\n\n"

        return historic_prompt

    def _organize_prompt_messages(self) -> list[PromptMessage]:
        """
        Organize prompt messages
        """
        # organize system prompt
        system_prompt = self._organize_instruction_prompt()

        # organize historic prompt messages
        historic_prompt = self._organize_historic_prompt()

        # organize current assistant messages
        agent_scratchpad = self._agent_scratchpad
        assistant_prompt = ''
        for unit in agent_scratchpad:
            if unit.is_final():
                assistant_prompt += f"Final Answer: {unit.agent_response}"
            else:
                assistant_prompt += f"Thought: {unit.thought}\n\n"
                if unit.action_str:
                    assistant_prompt += f"Action: {unit.action_str}\n\n"
                if unit.observation:
                    assistant_prompt += f"Observation: {unit.observation}\n\n"

        # query messages
        query_prompt = f"Question: {self._query}"

        # join all messages
        prompt = system_prompt \
            .replace("{{historic_messages}}", historic_prompt) \
            .replace("{{agent_scratchpad}}", assistant_prompt) \
            .replace("{{query}}", query_prompt)

        return [UserPromptMessage(content=prompt)]
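In completion mode everything collapses into a single user message. A quick sketch of the three-slot substitution; the template text and values are illustrative, not the shipped defaults:

```python
# Toy completion template with the three slots used by CotCompletionAgentRunner.
template = (
    "{{instruction}}\n\n"
    "Conversation so far:\n{{historic_messages}}"
    "{{agent_scratchpad}}"
    "{{query}}\nThought:"
)
prompt = template \
    .replace("{{instruction}}", "Answer concisely.") \
    .replace("{{historic_messages}}", "Question: hi\n\nHello!\n\n") \
    .replace("{{agent_scratchpad}}", "Thought: greet back\n\n") \
    .replace("{{query}}", "Question: how are you?")
print(prompt)  # the single completion prompt sent as one UserPromptMessage
```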
@@ -34,12 +34,29 @@ class AgentScratchpadUnit(BaseModel):
         action_name: str
         action_input: Union[dict, str]

+        def to_dict(self) -> dict:
+            """
+            Convert to dictionary.
+            """
+            return {
+                'action': self.action_name,
+                'action_input': self.action_input,
+            }
+
     agent_response: Optional[str] = None
     thought: Optional[str] = None
     action_str: Optional[str] = None
     observation: Optional[str] = None
     action: Optional[Action] = None

+    def is_final(self) -> bool:
+        """
+        Check if the scratchpad unit is final.
+        """
+        return self.action is None or (
+            'final' in self.action.action_name.lower() and
+            'answer' in self.action.action_name.lower()
+        )
+

 class AgentEntity(BaseModel):
     """
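The `is_final` predicate above is intentionally loose: any parsed action whose name contains both "final" and "answer" (case-insensitive) counts as terminal, and a unit with no parsed action at all is also treated as final. A quick standalone re-statement of the rule (not the Pydantic model itself):

```python
def is_final(action_name):
    if action_name is None:          # no parsed action -> treated as final
        return True
    name = action_name.lower()
    return 'final' in name and 'answer' in name

assert is_final("Final Answer")
assert is_final("final_answer")
assert is_final(None)
assert not is_final("weather")
```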
api/core/agent/fc_agent_runner.py
@@ -1,6 +1,7 @@
 import json
 import logging
 from collections.abc import Generator
+from copy import deepcopy
 from typing import Any, Union

 from core.agent.base_agent_runner import BaseAgentRunner
@@ -10,21 +11,21 @@ from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk,
 from core.model_runtime.entities.message_entities import (
     AssistantPromptMessage,
     PromptMessage,
-    PromptMessageTool,
+    PromptMessageContentType,
     SystemPromptMessage,
     TextPromptMessageContent,
     ToolPromptMessage,
     UserPromptMessage,
 )
 from core.tools.entities.tool_entities import ToolInvokeMeta
 from core.tools.tool_engine import ToolEngine
-from models.model import Conversation, Message, MessageAgentThought
+from models.model import Message

 logger = logging.getLogger(__name__)

 class FunctionCallAgentRunner(BaseAgentRunner):
-    def run(self, conversation: Conversation,
-            message: Message,
-            query: str,
+    def run(self,
+            message: Message, query: str, **kwargs: Any
             ) -> Generator[LLMResultChunk, None, None]:
         """
         Run FunctionCall agent application
@@ -35,40 +36,17 @@ class FunctionCallAgentRunner(BaseAgentRunner):

         prompt_template = app_config.prompt_template.simple_prompt_template or ''
-        prompt_messages = self.history_prompt_messages
-        prompt_messages = self.organize_prompt_messages(
-            prompt_template=prompt_template,
-            query=query,
-            prompt_messages=prompt_messages
-        )
+        prompt_messages = self._init_system_message(prompt_template, prompt_messages)
+        prompt_messages = self._organize_user_query(query, prompt_messages)

-        # convert tools into ModelRuntime Tool format
-        prompt_messages_tools: list[PromptMessageTool] = []
-        tool_instances = {}
-        for tool in app_config.agent.tools if app_config.agent else []:
-            try:
-                prompt_tool, tool_entity = self._convert_tool_to_prompt_message_tool(tool)
-            except Exception:
-                # api tool may be deleted
-                continue
-            # save tool entity
-            tool_instances[tool.tool_name] = tool_entity
-            # save prompt tool
-            prompt_messages_tools.append(prompt_tool)
-
-        # convert dataset tools into ModelRuntime Tool format
-        for dataset_tool in self.dataset_tools:
-            prompt_tool = self._convert_dataset_retriever_tool_to_prompt_message_tool(dataset_tool)
-            # save prompt tool
-            prompt_messages_tools.append(prompt_tool)
-            # save tool entity
-            tool_instances[dataset_tool.identity.name] = dataset_tool
+        tool_instances, prompt_messages_tools = self._init_prompt_tools()

         iteration_step = 1
         max_iteration_steps = min(app_config.agent.max_iteration, 5) + 1

         # continue to run until there is not any tool call
         function_call_state = True
-        agent_thoughts: list[MessageAgentThought] = []
         llm_usage = {
             'usage': None
         }
@@ -287,9 +265,7 @@ class FunctionCallAgentRunner(BaseAgentRunner):
                     }

                     tool_responses.append(tool_response)
-                    prompt_messages = self.organize_prompt_messages(
-                        prompt_template=prompt_template,
-                        query=None,
+                    prompt_messages = self._organize_assistant_message(
                         tool_call_id=tool_call_id,
                         tool_call_name=tool_call_name,
                         tool_response=tool_response['tool_response'],
@@ -324,6 +300,8 @@ class FunctionCallAgentRunner(BaseAgentRunner):

                 iteration_step += 1

+            prompt_messages = self._clear_user_prompt_image_messages(prompt_messages)
+
         self.update_db_variables(self.variables_pool, self.db_variables_pool)
         # publish end event
         self.queue_manager.publish(QueueMessageEndEvent(llm_result=LLMResult(
@@ -386,29 +364,68 @@ class FunctionCallAgentRunner(BaseAgentRunner):

         return tool_calls

-    def organize_prompt_messages(self, prompt_template: str,
-                                 query: str = None,
-                                 tool_call_id: str = None, tool_call_name: str = None, tool_response: str = None,
-                                 prompt_messages: list[PromptMessage] = None
-                                 ) -> list[PromptMessage]:
+    def _init_system_message(self, prompt_template: str, prompt_messages: list[PromptMessage] = None) -> list[PromptMessage]:
         """
-        Organize prompt messages
+        Initialize system message
         """
-        if not prompt_messages:
-            prompt_messages = [
+        if not prompt_messages and prompt_template:
+            return [
                 SystemPromptMessage(content=prompt_template),
-                UserPromptMessage(content=query),
             ]

+        if prompt_messages and not isinstance(prompt_messages[0], SystemPromptMessage) and prompt_template:
+            prompt_messages.insert(0, SystemPromptMessage(content=prompt_template))
+
+        return prompt_messages
+
+    def _organize_user_query(self, query, prompt_messages: list[PromptMessage] = None) -> list[PromptMessage]:
+        """
+        Organize user query
+        """
+        if self.files:
+            prompt_message_contents = [TextPromptMessageContent(data=query)]
+            for file_obj in self.files:
+                prompt_message_contents.append(file_obj.prompt_message_content)
+
+            prompt_messages.append(UserPromptMessage(content=prompt_message_contents))
         else:
-            if tool_response:
-                prompt_messages = prompt_messages.copy()
-                prompt_messages.append(
-                    ToolPromptMessage(
-                        content=tool_response,
-                        tool_call_id=tool_call_id,
-                        name=tool_call_name,
-                    )
-                )
+            prompt_messages.append(UserPromptMessage(content=query))

         return prompt_messages

+    def _organize_assistant_message(self, tool_call_id: str = None, tool_call_name: str = None, tool_response: str = None,
+                                    prompt_messages: list[PromptMessage] = None) -> list[PromptMessage]:
+        """
+        Organize assistant message
+        """
+        prompt_messages = deepcopy(prompt_messages)
+
+        if tool_response is not None:
+            prompt_messages.append(
+                ToolPromptMessage(
+                    content=tool_response,
+                    tool_call_id=tool_call_id,
+                    name=tool_call_name,
+                )
+            )
+
+        return prompt_messages
+
+    def _clear_user_prompt_image_messages(self, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
+        """
+        As for now, gpt supports both fc and vision at the first iteration.
+        We need to remove the image messages from the prompt messages at the first iteration.
+        """
+        prompt_messages = deepcopy(prompt_messages)
+
+        for prompt_message in prompt_messages:
+            if isinstance(prompt_message, UserPromptMessage):
+                if isinstance(prompt_message.content, list):
+                    prompt_message.content = '\n'.join([
+                        content.data if content.type == PromptMessageContentType.TEXT else
+                        '[image]' if content.type == PromptMessageContentType.IMAGE else
+                        '[file]'
+                        for content in prompt_message.content
+                    ])
+
+        return prompt_messages
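The effect of `_clear_user_prompt_image_messages` is easiest to see on a toy message: multi-part user content is collapsed to plain text with placeholders. A sketch using plain dicts instead of the PromptMessage content types (only a `type` and `data` field are assumed):

```python
parts = [
    {"type": "text", "data": "What is in this picture?"},
    {"type": "image", "data": "<base64...>"},
]
flattened = '\n'.join(
    p["data"] if p["type"] == "text" else
    '[image]' if p["type"] == "image" else
    '[file]'
    for p in parts
)
print(flattened)  # "What is in this picture?\n[image]"
```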
api/core/agent/output_parser/cot_output_parser.py (new file, +183 lines)
@@ -0,0 +1,183 @@
import json
import re
from collections.abc import Generator
from typing import Union

from core.agent.entities import AgentScratchpadUnit
from core.model_runtime.entities.llm_entities import LLMResultChunk


class CotAgentOutputParser:
    @classmethod
    def handle_react_stream_output(cls, llm_response: Generator[LLMResultChunk, None, None]) -> \
            Generator[Union[str, AgentScratchpadUnit.Action], None, None]:
        def parse_action(json_str):
            try:
                action = json.loads(json_str)
                action_name = None
                action_input = None

                for key, value in action.items():
                    if 'input' in key.lower():
                        action_input = value
                    else:
                        action_name = value

                if action_name is not None and action_input is not None:
                    return AgentScratchpadUnit.Action(
                        action_name=action_name,
                        action_input=action_input,
                    )
                else:
                    return json_str or ''
            except:
                return json_str or ''

        def extra_json_from_code_block(code_block) -> Generator[Union[dict, str], None, None]:
            code_blocks = re.findall(r'```(.*?)```', code_block, re.DOTALL)
            if not code_blocks:
                return
            for block in code_blocks:
                json_text = re.sub(r'^[a-zA-Z]+\n', '', block.strip(), flags=re.MULTILINE)
                yield parse_action(json_text)

        code_block_cache = ''
        code_block_delimiter_count = 0
        in_code_block = False
        json_cache = ''
        json_quote_count = 0
        in_json = False
        got_json = False

        action_cache = ''
        action_str = 'action:'
        action_idx = 0

        thought_cache = ''
        thought_str = 'thought:'
        thought_idx = 0

        for response in llm_response:
            response = response.delta.message.content
            if not isinstance(response, str):
                continue

            # stream
            index = 0
            while index < len(response):
                steps = 1
                delta = response[index:index+steps]
                last_character = response[index-1] if index > 0 else ''

                if delta == '`':
                    code_block_cache += delta
                    code_block_delimiter_count += 1
                else:
                    if not in_code_block:
                        if code_block_delimiter_count > 0:
                            yield code_block_cache
                            code_block_cache = ''
                    else:
                        code_block_cache += delta
                    code_block_delimiter_count = 0

                if not in_code_block and not in_json:
                    if delta.lower() == action_str[action_idx] and action_idx == 0:
                        if last_character not in ['\n', ' ', '']:
                            index += steps
                            yield delta
                            continue

                        action_cache += delta
                        action_idx += 1
                        if action_idx == len(action_str):
                            action_cache = ''
                            action_idx = 0
                        index += steps
                        continue
                    elif delta.lower() == action_str[action_idx] and action_idx > 0:
                        action_cache += delta
                        action_idx += 1
                        if action_idx == len(action_str):
                            action_cache = ''
                            action_idx = 0
                        index += steps
                        continue
                    else:
                        if action_cache:
                            yield action_cache
                            action_cache = ''
                            action_idx = 0

                    if delta.lower() == thought_str[thought_idx] and thought_idx == 0:
                        if last_character not in ['\n', ' ', '']:
                            index += steps
                            yield delta
                            continue

                        thought_cache += delta
                        thought_idx += 1
                        if thought_idx == len(thought_str):
                            thought_cache = ''
                            thought_idx = 0
                        index += steps
                        continue
                    elif delta.lower() == thought_str[thought_idx] and thought_idx > 0:
                        thought_cache += delta
                        thought_idx += 1
                        if thought_idx == len(thought_str):
                            thought_cache = ''
                            thought_idx = 0
                        index += steps
                        continue
                    else:
                        if thought_cache:
                            yield thought_cache
                            thought_cache = ''
                            thought_idx = 0

                if code_block_delimiter_count == 3:
                    if in_code_block:
                        yield from extra_json_from_code_block(code_block_cache)
                        code_block_cache = ''

                    in_code_block = not in_code_block
                    code_block_delimiter_count = 0

                if not in_code_block:
                    # handle single json
                    if delta == '{':
                        json_quote_count += 1
                        in_json = True
                        json_cache += delta
                    elif delta == '}':
                        json_cache += delta
                        if json_quote_count > 0:
                            json_quote_count -= 1
                            if json_quote_count == 0:
                                in_json = False
                                got_json = True
                                index += steps
                                continue
                    else:
                        if in_json:
                            json_cache += delta

                    if got_json:
                        got_json = False
                        yield parse_action(json_cache)
                        json_cache = ''
                        json_quote_count = 0
                        in_json = False

                if not in_code_block and not in_json:
                    yield delta.replace('`', '')

                index += steps

        if code_block_cache:
            yield code_block_cache

        if json_cache:
            yield parse_action(json_cache)
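A minimal way to exercise this parser is to feed it fake streamed chunks. The sketch below assumes the import path shown in the new file above; `SimpleNamespace` stands in for `LLMResultChunk`, since the parser only reads `.delta.message.content`:

```python
from types import SimpleNamespace

from core.agent.output_parser.cot_output_parser import CotAgentOutputParser

def fake_chunks(texts):
    # Stand-in chunks; only .delta.message.content is consumed by the parser.
    for t in texts:
        yield SimpleNamespace(delta=SimpleNamespace(message=SimpleNamespace(content=t)))

stream = fake_chunks([
    "Thought: I need the weather\n",
    '```json\n{"action": "weather", "action_input": "Paris"}\n```',
])
for piece in CotAgentOutputParser.handle_react_stream_output(stream):
    print(repr(piece))  # text deltas (with the "thought:" prefix swallowed), then an AgentScratchpadUnit.Action
```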
api/core/app/apps/agent_chat/app_runner.py
@@ -1,7 +1,8 @@
 import logging
 from typing import cast

-from core.agent.cot_agent_runner import CotAgentRunner
+from core.agent.cot_chat_agent_runner import CotChatAgentRunner
+from core.agent.cot_completion_agent_runner import CotCompletionAgentRunner
 from core.agent.entities import AgentEntity
 from core.agent.fc_agent_runner import FunctionCallAgentRunner
 from core.app.apps.agent_chat.app_config_manager import AgentChatAppConfig
@@ -11,8 +12,8 @@ from core.app.entities.app_invoke_entities import AgentChatAppGenerateEntity, Mo
 from core.app.entities.queue_entities import QueueAnnotationReplyEvent
 from core.memory.token_buffer_memory import TokenBufferMemory
 from core.model_manager import ModelInstance
-from core.model_runtime.entities.llm_entities import LLMUsage
-from core.model_runtime.entities.model_entities import ModelFeature
+from core.model_runtime.entities.llm_entities import LLMMode, LLMUsage
+from core.model_runtime.entities.model_entities import ModelFeature, ModelPropertyKey
 from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
 from core.moderation.base import ModerationException
 from core.tools.entities.tool_entities import ToolRuntimeVariablePool
@@ -207,48 +208,40 @@ class AgentChatAppRunner(AppRunner):

         # start agent runner
         if agent_entity.strategy == AgentEntity.Strategy.CHAIN_OF_THOUGHT:
-            assistant_cot_runner = CotAgentRunner(
-                tenant_id=app_config.tenant_id,
-                application_generate_entity=application_generate_entity,
-                app_config=app_config,
-                model_config=application_generate_entity.model_config,
-                config=agent_entity,
-                queue_manager=queue_manager,
-                message=message,
-                user_id=application_generate_entity.user_id,
-                memory=memory,
-                prompt_messages=prompt_message,
-                variables_pool=tool_variables,
-                db_variables=tool_conversation_variables,
-                model_instance=model_instance
-            )
-            invoke_result = assistant_cot_runner.run(
-                conversation=conversation,
-                message=message,
-                query=query,
-                inputs=inputs,
-            )
+            # check LLM mode
+            if model_schema.model_properties.get(ModelPropertyKey.MODE) == LLMMode.CHAT.value:
+                runner_cls = CotChatAgentRunner
+            elif model_schema.model_properties.get(ModelPropertyKey.MODE) == LLMMode.COMPLETION.value:
+                runner_cls = CotCompletionAgentRunner
+            else:
+                raise ValueError(f"Invalid LLM mode: {model_schema.model_properties.get(ModelPropertyKey.MODE)}")
         elif agent_entity.strategy == AgentEntity.Strategy.FUNCTION_CALLING:
-            assistant_fc_runner = FunctionCallAgentRunner(
-                tenant_id=app_config.tenant_id,
-                application_generate_entity=application_generate_entity,
-                app_config=app_config,
-                model_config=application_generate_entity.model_config,
-                config=agent_entity,
-                queue_manager=queue_manager,
-                message=message,
-                user_id=application_generate_entity.user_id,
-                memory=memory,
-                prompt_messages=prompt_message,
-                variables_pool=tool_variables,
-                db_variables=tool_conversation_variables,
-                model_instance=model_instance
-            )
-            invoke_result = assistant_fc_runner.run(
-                conversation=conversation,
-                message=message,
-                query=query,
-            )
+            runner_cls = FunctionCallAgentRunner
+        else:
+            raise ValueError(f"Invalid agent strategy: {agent_entity.strategy}")
+
+        runner = runner_cls(
+            tenant_id=app_config.tenant_id,
+            application_generate_entity=application_generate_entity,
+            conversation=conversation,
+            app_config=app_config,
+            model_config=application_generate_entity.model_config,
+            config=agent_entity,
+            queue_manager=queue_manager,
+            message=message,
+            user_id=application_generate_entity.user_id,
+            memory=memory,
+            prompt_messages=prompt_message,
+            variables_pool=tool_variables,
+            db_variables=tool_conversation_variables,
+            model_instance=model_instance
+        )
+
+        invoke_result = runner.run(
+            message=message,
+            query=query,
+            inputs=inputs,
+        )

         # handle invoke result
         self._handle_invoke_result(
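The selection above is a two-level dispatch: agent strategy first, then the model's LLM mode for chain-of-thought. Conceptually, and only as a sketch (the string keys are illustrative, not the actual enum values):

```python
def select_runner(strategy: str, llm_mode: str):
    # Conceptual restatement of the dispatch in AgentChatAppRunner; names mirror the diff.
    if strategy == "chain-of-thought":
        return {"chat": CotChatAgentRunner, "completion": CotCompletionAgentRunner}[llm_mode]
    if strategy == "function-calling":
        return FunctionCallAgentRunner
    raise ValueError(f"Invalid agent strategy: {strategy}")
```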
api/core/app/apps/chat/app_runner.py
@@ -156,6 +156,8 @@ class ChatAppRunner(AppRunner):

             dataset_retrieval = DatasetRetrieval()
             context = dataset_retrieval.retrieve(
+                app_id=app_record.id,
+                user_id=application_generate_entity.user_id,
                 tenant_id=app_record.tenant_id,
                 model_config=application_generate_entity.model_config,
                 config=app_config.dataset,

api/core/app/apps/completion/app_runner.py
@@ -116,6 +116,8 @@ class CompletionAppRunner(AppRunner):

             dataset_retrieval = DatasetRetrieval()
             context = dataset_retrieval.retrieve(
+                app_id=app_record.id,
+                user_id=application_generate_entity.user_id,
                 tenant_id=app_record.tenant_id,
                 model_config=application_generate_entity.model_config,
                 config=dataset_config,
@@ -99,6 +99,12 @@ model_credential_schema:
           show_on:
             - variable: __model_type
               value: llm
+        - label:
+            en_US: gpt-4-turbo-2024-04-09
+          value: gpt-4-turbo-2024-04-09
+          show_on:
+            - variable: __model_type
+              value: llm
         - label:
             en_US: gpt-4-0125-preview
           value: gpt-4-0125-preview

@@ -2,8 +2,6 @@ model: amazon.titan-text-express-v1
 label:
   en_US: Titan Text G1 - Express
 model_type: llm
-features:
-  - agent-thought
 model_properties:
   mode: chat
   context_size: 8192

@@ -2,8 +2,6 @@ model: amazon.titan-text-lite-v1
 label:
   en_US: Titan Text G1 - Lite
 model_type: llm
-features:
-  - agent-thought
 model_properties:
   mode: chat
   context_size: 4096

@@ -50,3 +50,4 @@ pricing:
   output: '0.024'
   unit: '0.001'
   currency: USD
+deprecated: true

@@ -22,7 +22,7 @@ parameter_rules:
     min: 0
     max: 500
     default: 0
-  - name: max_tokens_to_sample
+  - name: max_tokens
    use_template: max_tokens
    required: true
    default: 4096

@@ -8,9 +8,9 @@ model_properties:
 parameter_rules:
   - name: temperature
     use_template: temperature
-  - name: top_p
+  - name: p
     use_template: top_p
-  - name: top_k
+  - name: k
     label:
       zh_Hans: 取样数量
       en_US: Top k
@@ -19,7 +19,7 @@ parameter_rules:
       zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
       en_US: Only sample from the top K options for each subsequent token.
     required: false
-  - name: max_tokens_to_sample
+  - name: max_tokens
     use_template: max_tokens
     required: true
     default: 4096

@@ -503,7 +503,7 @@ class BedrockLargeLanguageModel(LargeLanguageModel):

         if model_prefix == "amazon":
             payload["textGenerationConfig"] = { **model_parameters }
-            payload["textGenerationConfig"]["stopSequences"] = ["User:"] + (stop if stop else [])
+            payload["textGenerationConfig"]["stopSequences"] = ["User:"]

             payload["inputText"] = self._convert_messages_to_prompt(prompt_messages, model_prefix)

@@ -513,10 +513,6 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
             payload["maxTokens"] = model_parameters.get("maxTokens")
             payload["prompt"] = self._convert_messages_to_prompt(prompt_messages, model_prefix)
-
-            # jurassic models only support a single stop sequence
-            if stop:
-                payload["stopSequences"] = stop[0]

             if model_parameters.get("presencePenalty"):
                 payload["presencePenalty"] = {model_parameters.get("presencePenalty")}
             if model_parameters.get("frequencyPenalty"):
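For amazon-prefixed models, the request body after this change is just the model parameters plus a fixed stop sequence and the converted prompt. A sketch with illustrative values (parameter names follow the Titan text API as used above):

```python
# Illustrative payload assembly for an amazon.* model after the change above.
model_parameters = {"maxTokenCount": 512, "temperature": 0.7}
payload = {"textGenerationConfig": {**model_parameters}}
payload["textGenerationConfig"]["stopSequences"] = ["User:"]  # user-supplied stops are no longer appended
payload["inputText"] = "User: hello\nBot:"                    # stand-in for the converted prompt
print(payload)
```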
@@ -1,3 +1,5 @@
+- command-r
+- command-r-plus
 - command-chat
 - command-light-chat
 - command-nightly-chat

@@ -31,7 +31,7 @@ parameter_rules:
     max: 500
   - name: max_tokens
     use_template: max_tokens
-    default: 256
+    default: 1024
     max: 4096
   - name: preamble_override
     label:

@@ -31,7 +31,7 @@ parameter_rules:
     max: 500
   - name: max_tokens
     use_template: max_tokens
-    default: 256
+    default: 1024
     max: 4096
   - name: preamble_override
     label:

@@ -31,7 +31,7 @@ parameter_rules:
     max: 500
   - name: max_tokens
     use_template: max_tokens
-    default: 256
+    default: 1024
     max: 4096
   - name: preamble_override
     label:

@@ -35,7 +35,7 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 256
+    default: 1024
     max: 4096
 pricing:
   input: '0.3'

@@ -35,7 +35,7 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 256
+    default: 1024
     max: 4096
 pricing:
   input: '0.3'

@@ -31,7 +31,7 @@ parameter_rules:
     max: 500
   - name: max_tokens
     use_template: max_tokens
-    default: 256
+    default: 1024
     max: 4096
   - name: preamble_override
     label:

@@ -35,7 +35,7 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 256
+    default: 1024
     max: 4096
 pricing:
   input: '1.0'
command-r-plus.yaml (new file, +45 lines)
@@ -0,0 +1,45 @@
model: command-r-plus
label:
  en_US: command-r-plus
model_type: llm
features:
  - multi-tool-call
  - agent-thought
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
    max: 5.0
  - name: p
    use_template: top_p
    default: 0.75
    min: 0.01
    max: 0.99
  - name: k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
    default: 0
    min: 0
    max: 500
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 1024
    max: 4096
pricing:
  input: '3'
  output: '15'
  unit: '0.000001'
  currency: USD

command-r.yaml (new file, +45 lines)
@@ -0,0 +1,45 @@
model: command-r
label:
  en_US: command-r
model_type: llm
features:
  - multi-tool-call
  - agent-thought
  - stream-tool-call
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
    max: 5.0
  - name: p
    use_template: top_p
    default: 0.75
    min: 0.01
    max: 0.99
  - name: k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
    default: 0
    min: 0
    max: 500
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 1024
    max: 4096
pricing:
  input: '0.5'
  output: '1.5'
  unit: '0.000001'
  currency: USD
@@ -35,7 +35,7 @@ parameter_rules:
     use_template: frequency_penalty
   - name: max_tokens
     use_template: max_tokens
-    default: 256
+    default: 1024
     max: 4096
 pricing:
   input: '1.0'
api/core/model_runtime/model_providers/cohere/llm/llm.py
@@ -1,20 +1,38 @@
 import json
 import logging
-from collections.abc import Generator
+from collections.abc import Generator, Iterator
 from typing import Optional, Union, cast

 import cohere
-from cohere.responses import Chat, Generations
-from cohere.responses.chat import StreamEnd, StreamingChat, StreamTextGeneration
-from cohere.responses.generation import StreamingGenerations, StreamingText
+from cohere import (
+    ChatMessage,
+    ChatStreamRequestToolResultsItem,
+    GenerateStreamedResponse,
+    GenerateStreamedResponse_StreamEnd,
+    GenerateStreamedResponse_StreamError,
+    GenerateStreamedResponse_TextGeneration,
+    Generation,
+    NonStreamedChatResponse,
+    StreamedChatResponse,
+    StreamedChatResponse_StreamEnd,
+    StreamedChatResponse_TextGeneration,
+    StreamedChatResponse_ToolCallsGeneration,
+    Tool,
+    ToolCall,
+    ToolParameterDefinitionsValue,
+)
+from cohere.core import RequestOptions

 from core.model_runtime.entities.llm_entities import LLMMode, LLMResult, LLMResultChunk, LLMResultChunkDelta
 from core.model_runtime.entities.message_entities import (
     AssistantPromptMessage,
     PromptMessage,
     PromptMessageContentType,
     PromptMessageRole,
     PromptMessageTool,
     SystemPromptMessage,
     TextPromptMessageContent,
     ToolPromptMessage,
     UserPromptMessage,
 )
 from core.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, I18nObject, ModelType
@@ -64,6 +82,7 @@ class CohereLargeLanguageModel(LargeLanguageModel):
             credentials=credentials,
             prompt_messages=prompt_messages,
             model_parameters=model_parameters,
+            tools=tools,
             stop=stop,
             stream=stream,
             user=user
@@ -159,19 +178,26 @@ class CohereLargeLanguageModel(LargeLanguageModel):
         if stop:
             model_parameters['end_sequences'] = stop

-        response = client.generate(
-            prompt=prompt_messages[0].content,
-            model=model,
-            stream=stream,
-            **model_parameters,
-        )
-
         if stream:
+            response = client.generate_stream(
+                prompt=prompt_messages[0].content,
+                model=model,
+                **model_parameters,
+                request_options=RequestOptions(max_retries=0)
+            )
+
             return self._handle_generate_stream_response(model, credentials, response, prompt_messages)
+        else:
+            response = client.generate(
+                prompt=prompt_messages[0].content,
+                model=model,
+                **model_parameters,
+                request_options=RequestOptions(max_retries=0)
+            )

-        return self._handle_generate_response(model, credentials, response, prompt_messages)
+            return self._handle_generate_response(model, credentials, response, prompt_messages)

-    def _handle_generate_response(self, model: str, credentials: dict, response: Generations,
+    def _handle_generate_response(self, model: str, credentials: dict, response: Generation,
                                   prompt_messages: list[PromptMessage]) \
             -> LLMResult:
         """
@@ -191,8 +217,8 @@ class CohereLargeLanguageModel(LargeLanguageModel):
         )

         # calculate num tokens
-        prompt_tokens = response.meta['billed_units']['input_tokens']
-        completion_tokens = response.meta['billed_units']['output_tokens']
+        prompt_tokens = int(response.meta.billed_units.input_tokens)
+        completion_tokens = int(response.meta.billed_units.output_tokens)

         # transform usage
         usage = self._calc_response_usage(model, credentials, prompt_tokens, completion_tokens)
@@ -207,7 +233,7 @@ class CohereLargeLanguageModel(LargeLanguageModel):

         return response

-    def _handle_generate_stream_response(self, model: str, credentials: dict, response: StreamingGenerations,
+    def _handle_generate_stream_response(self, model: str, credentials: dict, response: Iterator[GenerateStreamedResponse],
                                          prompt_messages: list[PromptMessage]) -> Generator:
         """
         Handle llm stream response
@@ -220,8 +246,8 @@ class CohereLargeLanguageModel(LargeLanguageModel):
         index = 1
         full_assistant_content = ''
         for chunk in response:
-            if isinstance(chunk, StreamingText):
-                chunk = cast(StreamingText, chunk)
+            if isinstance(chunk, GenerateStreamedResponse_TextGeneration):
+                chunk = cast(GenerateStreamedResponse_TextGeneration, chunk)
                 text = chunk.text

                 if text is None:
@@ -244,10 +270,16 @@ class CohereLargeLanguageModel(LargeLanguageModel):
                 )

                 index += 1
-            elif chunk is None:
+            elif isinstance(chunk, GenerateStreamedResponse_StreamEnd):
+                chunk = cast(GenerateStreamedResponse_StreamEnd, chunk)
+
                 # calculate num tokens
-                prompt_tokens = response.meta['billed_units']['input_tokens']
-                completion_tokens = response.meta['billed_units']['output_tokens']
+                prompt_tokens = self._num_tokens_from_messages(model, credentials, prompt_messages)
+                completion_tokens = self._num_tokens_from_messages(
+                    model,
+                    credentials,
+                    [AssistantPromptMessage(content=full_assistant_content)]
+                )

                 # transform usage
                 usage = self._calc_response_usage(model, credentials, prompt_tokens, completion_tokens)
@@ -258,14 +290,18 @@ class CohereLargeLanguageModel(LargeLanguageModel):
                     delta=LLMResultChunkDelta(
                         index=index,
                         message=AssistantPromptMessage(content=''),
-                        finish_reason=response.finish_reason,
+                        finish_reason=chunk.finish_reason,
                         usage=usage
                     )
                 )
                 break
+            elif isinstance(chunk, GenerateStreamedResponse_StreamError):
+                chunk = cast(GenerateStreamedResponse_StreamError, chunk)
+                raise InvokeBadRequestError(chunk.err)

     def _chat_generate(self, model: str, credentials: dict,
-                       prompt_messages: list[PromptMessage], model_parameters: dict, stop: Optional[list[str]] = None,
+                       prompt_messages: list[PromptMessage], model_parameters: dict,
+                       tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
                        stream: bool = True, user: Optional[str] = None) -> Union[LLMResult, Generator]:
         """
         Invoke llm chat model
@@ -274,6 +310,7 @@ class CohereLargeLanguageModel(LargeLanguageModel):
         :param credentials: credentials
         :param prompt_messages: prompt messages
         :param model_parameters: model parameters
+        :param tools: tools for tool calling
         :param stop: stop words
         :param stream: is stream response
         :param user: unique user id
@@ -282,31 +319,49 @@ class CohereLargeLanguageModel(LargeLanguageModel):
         # initialize client
         client = cohere.Client(credentials.get('api_key'))

         if user:
             model_parameters['user_name'] = user
         if stop:
             model_parameters['stop_sequences'] = stop

-        message, chat_histories = self._convert_prompt_messages_to_message_and_chat_histories(prompt_messages)
+        if tools:
+            if len(tools) == 1:
+                raise ValueError("Cohere tool call requires at least two tools to be specified.")
+
+            model_parameters['tools'] = self._convert_tools(tools)
+
+        message, chat_histories, tool_results \
+            = self._convert_prompt_messages_to_message_and_chat_histories(prompt_messages)
+
+        if tool_results:
+            model_parameters['tool_results'] = tool_results

         # chat model
         real_model = model
         if self.get_model_schema(model, credentials).fetch_from == FetchFrom.PREDEFINED_MODEL:
             real_model = model.removesuffix('-chat')

-        response = client.chat(
-            message=message,
-            chat_history=chat_histories,
-            model=real_model,
-            stream=stream,
-            **model_parameters,
-        )
-
         if stream:
-            return self._handle_chat_generate_stream_response(model, credentials, response, prompt_messages, stop)
+            response = client.chat_stream(
+                message=message,
+                chat_history=chat_histories,
+                model=real_model,
+                **model_parameters,
+                request_options=RequestOptions(max_retries=0)
+            )
+
+            return self._handle_chat_generate_stream_response(model, credentials, response, prompt_messages)
+        else:
+            response = client.chat(
+                message=message,
+                chat_history=chat_histories,
+                model=real_model,
+                **model_parameters,
+                request_options=RequestOptions(max_retries=0)
+            )

-        return self._handle_chat_generate_response(model, credentials, response, prompt_messages, stop)
+            return self._handle_chat_generate_response(model, credentials, response, prompt_messages)

-    def _handle_chat_generate_response(self, model: str, credentials: dict, response: Chat,
-                                       prompt_messages: list[PromptMessage], stop: Optional[list[str]] = None) \
+    def _handle_chat_generate_response(self, model: str, credentials: dict, response: NonStreamedChatResponse,
+                                       prompt_messages: list[PromptMessage]) \
             -> LLMResult:
         """
         Handle llm chat response
@@ -315,14 +370,27 @@ class CohereLargeLanguageModel(LargeLanguageModel):
         :param credentials: credentials
         :param response: response
         :param prompt_messages: prompt messages
-        :param stop: stop words
         :return: llm response
         """
         assistant_text = response.text

+        tool_calls = []
+        if response.tool_calls:
+            for cohere_tool_call in response.tool_calls:
+                tool_call = AssistantPromptMessage.ToolCall(
+                    id=cohere_tool_call.name,
+                    type='function',
+                    function=AssistantPromptMessage.ToolCall.ToolCallFunction(
+                        name=cohere_tool_call.name,
+                        arguments=json.dumps(cohere_tool_call.parameters)
+                    )
+                )
+                tool_calls.append(tool_call)
+
         # transform assistant message to prompt message
         assistant_prompt_message = AssistantPromptMessage(
-            content=assistant_text
+            content=assistant_text,
+            tool_calls=tool_calls
         )

         # calculate num tokens
@@ -332,44 +400,38 @@ class CohereLargeLanguageModel(LargeLanguageModel):
         # transform usage
         usage = self._calc_response_usage(model, credentials, prompt_tokens, completion_tokens)

-        if stop:
-            # enforce stop tokens
-            assistant_text = self.enforce_stop_tokens(assistant_text, stop)
-            assistant_prompt_message = AssistantPromptMessage(
-                content=assistant_text
-            )
-
         # transform response
         response = LLMResult(
             model=model,
             prompt_messages=prompt_messages,
             message=assistant_prompt_message,
-            usage=usage,
-            system_fingerprint=response.preamble
+            usage=usage
         )

         return response

-    def _handle_chat_generate_stream_response(self, model: str, credentials: dict, response: StreamingChat,
-                                              prompt_messages: list[PromptMessage],
-                                              stop: Optional[list[str]] = None) -> Generator:
+    def _handle_chat_generate_stream_response(self, model: str, credentials: dict,
+                                              response: Iterator[StreamedChatResponse],
+                                              prompt_messages: list[PromptMessage]) -> Generator:
         """
         Handle llm chat stream response

         :param model: model name
         :param response: response
         :param prompt_messages: prompt messages
-        :param stop: stop words
         :return: llm response chunk generator
         """

-        def final_response(full_text: str, index: int, finish_reason: Optional[str] = None,
-                           preamble: Optional[str] = None) -> LLMResultChunk:
+        def final_response(full_text: str,
+                           tool_calls: list[AssistantPromptMessage.ToolCall],
+                           index: int,
+                           finish_reason: Optional[str] = None) -> LLMResultChunk:
             # calculate num tokens
             prompt_tokens = self._num_tokens_from_messages(model, credentials, prompt_messages)

             full_assistant_prompt_message = AssistantPromptMessage(
-                content=full_text
+                content=full_text,
+                tool_calls=tool_calls
             )
             completion_tokens = self._num_tokens_from_messages(model, credentials, [full_assistant_prompt_message])

@@ -379,10 +441,9 @@ class CohereLargeLanguageModel(LargeLanguageModel):
             return LLMResultChunk(
                 model=model,
                 prompt_messages=prompt_messages,
-                system_fingerprint=preamble,
                 delta=LLMResultChunkDelta(
                     index=index,
-                    message=AssistantPromptMessage(content=''),
+                    message=AssistantPromptMessage(content='', tool_calls=tool_calls),
                     finish_reason=finish_reason,
                     usage=usage
                 )
@@ -390,9 +451,10 @@ class CohereLargeLanguageModel(LargeLanguageModel):

         index = 1
         full_assistant_content = ''
+        tool_calls = []
         for chunk in response:
-            if isinstance(chunk, StreamTextGeneration):
-                chunk = cast(StreamTextGeneration, chunk)
+            if isinstance(chunk, StreamedChatResponse_TextGeneration):
+                chunk = cast(StreamedChatResponse_TextGeneration, chunk)
                 text = chunk.text

                 if text is None:
@@ -403,12 +465,6 @@ class CohereLargeLanguageModel(LargeLanguageModel):
                     content=text
                 )

-                # stop
-                # notice: This logic can only cover few stop scenarios
-                if stop and text in stop:
-                    yield final_response(full_assistant_content, index, 'stop')
-                    break
-
                 full_assistant_content += text

                 yield LLMResultChunk(
@@ -421,39 +477,96 @@ class CohereLargeLanguageModel(LargeLanguageModel):
                 )

                 index += 1
-            elif isinstance(chunk, StreamEnd):
-                chunk = cast(StreamEnd, chunk)
-                yield final_response(full_assistant_content, index, chunk.finish_reason, response.preamble)
+            elif isinstance(chunk, StreamedChatResponse_ToolCallsGeneration):
+                chunk = cast(StreamedChatResponse_ToolCallsGeneration, chunk)
+                if chunk.tool_calls:
+                    for cohere_tool_call in chunk.tool_calls:
+                        tool_call = AssistantPromptMessage.ToolCall(
+                            id=cohere_tool_call.name,
+                            type='function',
+                            function=AssistantPromptMessage.ToolCall.ToolCallFunction(
+                                name=cohere_tool_call.name,
+                                arguments=json.dumps(cohere_tool_call.parameters)
+                            )
+                        )
+                        tool_calls.append(tool_call)
+            elif isinstance(chunk, StreamedChatResponse_StreamEnd):
+                chunk = cast(StreamedChatResponse_StreamEnd, chunk)
+                yield final_response(full_assistant_content, tool_calls, index, chunk.finish_reason)
                 index += 1

     def _convert_prompt_messages_to_message_and_chat_histories(self, prompt_messages: list[PromptMessage]) \
-            -> tuple[str, list[dict]]:
+            -> tuple[str, list[ChatMessage], list[ChatStreamRequestToolResultsItem]]:
         """
         Convert prompt messages to message and chat histories
         :param prompt_messages: prompt messages
         :return:
         """
         chat_histories = []
+        latest_tool_call_n_outputs = []
         for prompt_message in prompt_messages:
-            chat_histories.append(self._convert_prompt_message_to_dict(prompt_message))
+            if prompt_message.role == PromptMessageRole.ASSISTANT:
+                prompt_message = cast(AssistantPromptMessage, prompt_message)
+                if prompt_message.tool_calls:
+                    for tool_call in prompt_message.tool_calls:
+                        latest_tool_call_n_outputs.append(ChatStreamRequestToolResultsItem(
+                            call=ToolCall(
+                                name=tool_call.function.name,
+                                parameters=json.loads(tool_call.function.arguments)
+                            ),
+                            outputs=[]
+                        ))
+                else:
+                    cohere_prompt_message = self._convert_prompt_message_to_dict(prompt_message)
+                    if cohere_prompt_message:
+                        chat_histories.append(cohere_prompt_message)
+            elif prompt_message.role == PromptMessageRole.TOOL:
+                prompt_message = cast(ToolPromptMessage, prompt_message)
+                if latest_tool_call_n_outputs:
+                    i = 0
+                    for tool_call_n_outputs in latest_tool_call_n_outputs:
+                        if tool_call_n_outputs.call.name == prompt_message.tool_call_id:
+                            latest_tool_call_n_outputs[i] = ChatStreamRequestToolResultsItem(
+                                call=ToolCall(
+                                    name=tool_call_n_outputs.call.name,
+                                    parameters=tool_call_n_outputs.call.parameters
+                                ),
+                                outputs=[{
+                                    "result": prompt_message.content
+                                }]
+                            )
+                            break
+                        i += 1
+            else:
+                cohere_prompt_message = self._convert_prompt_message_to_dict(prompt_message)
+                if cohere_prompt_message:
+                    chat_histories.append(cohere_prompt_message)

+        if latest_tool_call_n_outputs:
+            new_latest_tool_call_n_outputs = []
+            for tool_call_n_outputs in latest_tool_call_n_outputs:
+                if tool_call_n_outputs.outputs:
+                    new_latest_tool_call_n_outputs.append(tool_call_n_outputs)
+
+            latest_tool_call_n_outputs = new_latest_tool_call_n_outputs

         # get latest message from chat histories and pop it
         if len(chat_histories) > 0:
             latest_message = chat_histories.pop()
-            message = latest_message['message']
+            message = latest_message.message
         else:
             raise ValueError('Prompt messages is empty')

-        return message, chat_histories
+        return message, chat_histories, latest_tool_call_n_outputs
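Note that the conversion always pops the last history entry to serve as the live `message` argument. A toy walk-through of that split, with a namedtuple standing in for `cohere.ChatMessage`:

```python
from collections import namedtuple

ChatMessage = namedtuple("ChatMessage", ["role", "message"])  # stand-in for cohere.ChatMessage

chat_histories = [
    ChatMessage("USER", "hi"),
    ChatMessage("CHATBOT", "Hello!"),
    ChatMessage("USER", "what's the weather?"),
]
latest = chat_histories.pop()               # becomes the `message` passed to client.chat(...)
print(latest.message)                       # "what's the weather?"
print([m.message for m in chat_histories])  # remaining history
```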
-    def _convert_prompt_message_to_dict(self, message: PromptMessage) -> dict:
+    def _convert_prompt_message_to_dict(self, message: PromptMessage) -> Optional[ChatMessage]:
         """
         Convert PromptMessage to dict for Cohere model
         """
         if isinstance(message, UserPromptMessage):
             message = cast(UserPromptMessage, message)
             if isinstance(message.content, str):
-                message_dict = {"role": "USER", "message": message.content}
+                chat_message = ChatMessage(role="USER", message=message.content)
             else:
                 sub_message_text = ''
                 for message_content in message.content:
@@ -461,20 +574,57 @@ class CohereLargeLanguageModel(LargeLanguageModel):
                     message_content = cast(TextPromptMessageContent, message_content)
                     sub_message_text += message_content.data

-                message_dict = {"role": "USER", "message": sub_message_text}
+                chat_message = ChatMessage(role="USER", message=sub_message_text)
         elif isinstance(message, AssistantPromptMessage):
             message = cast(AssistantPromptMessage, message)
-            message_dict = {"role": "CHATBOT", "message": message.content}
+            if not message.content:
+                return None
+            chat_message = ChatMessage(role="CHATBOT", message=message.content)
         elif isinstance(message, SystemPromptMessage):
             message = cast(SystemPromptMessage, message)
-            message_dict = {"role": "USER", "message": message.content}
+            chat_message = ChatMessage(role="USER", message=message.content)
+        elif isinstance(message, ToolPromptMessage):
+            return None
         else:
             raise ValueError(f"Got unknown type {message}")

-        if message.name:
-            message_dict["user_name"] = message.name
+        return chat_message

-        return message_dict
+    def _convert_tools(self, tools: list[PromptMessageTool]) -> list[Tool]:
+        """
+        Convert tools to Cohere model
+        """
+        cohere_tools = []
+        for tool in tools:
+            properties = tool.parameters['properties']
+            required_properties = tool.parameters['required']
+
+            parameter_definitions = {}
+            for p_key, p_val in properties.items():
+                required = False
+                if p_key in required_properties:
+                    required = True
+
+                desc = p_val['description']
+                if 'enum' in p_val:
+                    desc += (f"; Only accepts one of the following predefined options: "
+                             f"[{', '.join(p_val['enum'])}]")
+
+                parameter_definitions[p_key] = ToolParameterDefinitionsValue(
+                    description=desc,
+                    type=p_val['type'],
+                    required=required
+                )
+
+            cohere_tool = Tool(
+                name=tool.name,
+                description=tool.description,
+                parameter_definitions=parameter_definitions
+            )
+
+            cohere_tools.append(cohere_tool)
+
+        return cohere_tools

     def _num_tokens_from_string(self, model: str, credentials: dict, text: str) -> int:
         """
@@ -493,12 +643,16 @@ class CohereLargeLanguageModel(LargeLanguageModel):
             model=model
         )

-        return response.length
+        return len(response.tokens)

     def _num_tokens_from_messages(self, model: str, credentials: dict, messages: list[PromptMessage]) -> int:
         """Calculate num tokens Cohere model."""
-        messages = [self._convert_prompt_message_to_dict(m) for m in messages]
-        message_strs = [f"{message['role']}: {message['message']}" for message in messages]
+        calc_messages = []
+        for message in messages:
+            cohere_message = self._convert_prompt_message_to_dict(message)
+            if cohere_message:
+                calc_messages.append(cohere_message)
+        message_strs = [f"{message.role}: {message.message}" for message in calc_messages]
         message_str = "\n".join(message_strs)

         real_model = model
@@ -564,13 +718,21 @@ class CohereLargeLanguageModel(LargeLanguageModel):
         """
         return {
             InvokeConnectionError: [
-                cohere.CohereConnectionError
+                cohere.errors.service_unavailable_error.ServiceUnavailableError
             ],
+            InvokeServerUnavailableError: [
+                cohere.errors.internal_server_error.InternalServerError
+            ],
+            InvokeRateLimitError: [
+                cohere.errors.too_many_requests_error.TooManyRequestsError
+            ],
+            InvokeAuthorizationError: [
+                cohere.errors.unauthorized_error.UnauthorizedError,
+                cohere.errors.forbidden_error.ForbiddenError
+            ],
-            InvokeServerUnavailableError: [],
-            InvokeRateLimitError: [],
-            InvokeAuthorizationError: [],
             InvokeBadRequestError: [
-                cohere.CohereAPIError,
-                cohere.CohereError,
+                cohere.core.api_error.ApiError,
+                cohere.errors.bad_request_error.BadRequestError,
+                cohere.errors.not_found_error.NotFoundError,
             ]
         }
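To make `_convert_tools` concrete: a JSON-schema-style tool definition maps to Cohere's `parameter_definitions` field by field. A sketch of that mapping using plain dicts instead of the SDK types, with illustrative values:

```python
tool_parameters = {
    "properties": {
        "city": {"type": "str", "description": "City name", "enum": ["Paris", "Tokyo"]},
    },
    "required": ["city"],
}

parameter_definitions = {}
for p_key, p_val in tool_parameters["properties"].items():
    desc = p_val["description"]
    if "enum" in p_val:
        # Enum options are folded into the description, as in the diff above.
        desc += f"; Only accepts one of the following predefined options: [{', '.join(p_val['enum'])}]"
    parameter_definitions[p_key] = {
        "description": desc,
        "type": p_val["type"],
        "required": p_key in tool_parameters["required"],
    }
print(parameter_definitions)
```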
@@ -1,6 +1,7 @@
from typing import Optional

import cohere
from cohere.core import RequestOptions

from core.model_runtime.entities.rerank_entities import RerankDocument, RerankResult
from core.model_runtime.errors.invoke import (
@@ -44,19 +45,21 @@ class CohereRerankModel(RerankModel):

        # initialize client
        client = cohere.Client(credentials.get('api_key'))
        results = client.rerank(
        response = client.rerank(
            query=query,
            documents=docs,
            model=model,
            top_n=top_n
            top_n=top_n,
            return_documents=True,
            request_options=RequestOptions(max_retries=0)
        )

        rerank_documents = []
        for idx, result in enumerate(results):
        for idx, result in enumerate(response.results):
            # format document
            rerank_document = RerankDocument(
                index=result.index,
                text=result.document['text'],
                text=result.document.text,
                score=result.relevance_score,
            )

@@ -108,13 +111,21 @@ class CohereRerankModel(RerankModel):
        """
        return {
            InvokeConnectionError: [
                cohere.CohereConnectionError,
                cohere.errors.service_unavailable_error.ServiceUnavailableError
            ],
            InvokeServerUnavailableError: [
                cohere.errors.internal_server_error.InternalServerError
            ],
            InvokeRateLimitError: [
                cohere.errors.too_many_requests_error.TooManyRequestsError
            ],
            InvokeAuthorizationError: [
                cohere.errors.unauthorized_error.UnauthorizedError,
                cohere.errors.forbidden_error.ForbiddenError
            ],
            InvokeServerUnavailableError: [],
            InvokeRateLimitError: [],
            InvokeAuthorizationError: [],
            InvokeBadRequestError: [
                cohere.CohereAPIError,
                cohere.CohereError,
                cohere.core.api_error.ApiError,
                cohere.errors.bad_request_error.BadRequestError,
                cohere.errors.not_found_error.NotFoundError,
            ]
        }

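For orientation, a sketch of the v5-style rerank call the hunk above moves to; the API key and the model name are placeholders:

```python
import cohere
from cohere.core import RequestOptions

client = cohere.Client('YOUR_API_KEY')  # placeholder credential

response = client.rerank(
    query="What is the capital of France?",
    documents=["Paris is the capital of France.", "Berlin is in Germany."],
    model="rerank-english-v2.0",  # illustrative model name
    top_n=1,
    return_documents=True,  # so result.document.text is populated, as the diff relies on
    request_options=RequestOptions(max_retries=0),
)

for result in response.results:
    # v5 returns typed objects, hence result.document.text instead of result.document['text']
    print(result.index, result.relevance_score, result.document.text)
```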
@@ -3,7 +3,7 @@ from typing import Optional

import cohere
import numpy as np
from cohere.responses import Tokens
from cohere.core import RequestOptions

from core.model_runtime.entities.model_entities import PriceType
from core.model_runtime.entities.text_embedding_entities import EmbeddingUsage, TextEmbeddingResult
@@ -52,8 +52,8 @@ class CohereTextEmbeddingModel(TextEmbeddingModel):
                text=text
            )

            for j in range(0, tokenize_response.length, context_size):
                tokens += [tokenize_response.token_strings[j: j + context_size]]
            for j in range(0, len(tokenize_response), context_size):
                tokens += [tokenize_response[j: j + context_size]]
                indices += [i]

        batched_embeddings = []
@@ -127,9 +127,9 @@ class CohereTextEmbeddingModel(TextEmbeddingModel):
        except Exception as e:
            raise self._transform_invoke_error(e)

        return response.length
        return len(response)

    def _tokenize(self, model: str, credentials: dict, text: str) -> Tokens:
    def _tokenize(self, model: str, credentials: dict, text: str) -> list[str]:
        """
        Tokenize text
        :param model: model name
@@ -138,17 +138,19 @@ class CohereTextEmbeddingModel(TextEmbeddingModel):
        :return:
        """
        if not text:
            return Tokens([], [], {})
            return []

        # initialize client
        client = cohere.Client(credentials.get('api_key'))

        response = client.tokenize(
            text=text,
            model=model
            model=model,
            offline=False,
            request_options=RequestOptions(max_retries=0)
        )

        return response
        return response.token_strings

    def validate_credentials(self, model: str, credentials: dict) -> None:
        """
@@ -184,10 +186,11 @@ class CohereTextEmbeddingModel(TextEmbeddingModel):
        response = client.embed(
            texts=texts,
            model=model,
            input_type='search_document' if len(texts) > 1 else 'search_query'
            input_type='search_document' if len(texts) > 1 else 'search_query',
            request_options=RequestOptions(max_retries=1)
        )

        return response.embeddings, response.meta['billed_units']['input_tokens']
        return response.embeddings, int(response.meta.billed_units.input_tokens)

    def _calc_response_usage(self, model: str, credentials: dict, tokens: int) -> EmbeddingUsage:
        """
@@ -231,13 +234,21 @@ class CohereTextEmbeddingModel(TextEmbeddingModel):
        """
        return {
            InvokeConnectionError: [
                cohere.CohereConnectionError
                cohere.errors.service_unavailable_error.ServiceUnavailableError
            ],
            InvokeServerUnavailableError: [
                cohere.errors.internal_server_error.InternalServerError
            ],
            InvokeRateLimitError: [
                cohere.errors.too_many_requests_error.TooManyRequestsError
            ],
            InvokeAuthorizationError: [
                cohere.errors.unauthorized_error.UnauthorizedError,
                cohere.errors.forbidden_error.ForbiddenError
            ],
            InvokeServerUnavailableError: [],
            InvokeRateLimitError: [],
            InvokeAuthorizationError: [],
            InvokeBadRequestError: [
                cohere.CohereAPIError,
                cohere.CohereError,
                cohere.core.api_error.ApiError,
                cohere.errors.bad_request_error.BadRequestError,
                cohere.errors.not_found_error.NotFoundError,
            ]
        }

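The embedding hunks change `_tokenize` to return the `token_strings` list directly, so the batching loop now slices a flat list. A self-contained sketch of that slicing, with illustrative token strings:

```python
# Sketch of the batching logic after the change: _tokenize returns the token
# strings, and the loop splits them into context_size-sized chunks, matching
# the range/slice pattern in the diff above.
def batch_token_chunks(token_strings: list[str], context_size: int) -> list[list[str]]:
    """Split a tokenized text into chunks of at most context_size tokens."""
    return [token_strings[j: j + context_size]
            for j in range(0, len(token_strings), context_size)]

chunks = batch_token_chunks(["he", "llo", " wor", "ld"], context_size=2)
# -> [["he", "llo"], [" wor", "ld"]]
```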
@@ -0,0 +1,37 @@
model: gemini-1.5-pro-latest
label:
  en_US: Gemini 1.5 Pro
model_type: llm
features:
  - agent-thought
  - vision
model_properties:
  mode: chat
  context_size: 1048576
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: top_k
    label:
      zh_Hans: 取样数量
      en_US: Top k
    type: int
    help:
      zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
      en_US: Only sample from the top K options for each subsequent token.
    required: false
  - name: max_tokens_to_sample
    use_template: max_tokens
    required: true
    default: 8192
    min: 1
    max: 8192
  - name: response_format
    use_template: response_format
pricing:
  input: '0.00'
  output: '0.00'
  unit: '0.000001'
  currency: USD
@@ -132,15 +132,13 @@ class MoonshotLargeLanguageModel(OAIAPICompatLargeLanguageModel):
                    "id": function_call.id,
                    "type": function_call.type,
                    "function": {
                        "name": f"functions.{function_call.function.name}",
                        "name": function_call.function.name,
                        "arguments": function_call.function.arguments
                    }
                })
        elif isinstance(message, ToolPromptMessage):
            message = cast(ToolPromptMessage, message)
            message_dict = {"role": "tool", "content": message.content, "tool_call_id": message.tool_call_id}
            if not message.name.startswith("functions."):
                message.name = f"functions.{message.name}"
        elif isinstance(message, SystemPromptMessage):
            message = cast(SystemPromptMessage, message)
            message_dict = {"role": "system", "content": message.content}
@@ -238,11 +236,6 @@ class MoonshotLargeLanguageModel(OAIAPICompatLargeLanguageModel):
                if new_tool_call.type:
                    tool_call.type = new_tool_call.type
                if new_tool_call.function.name:
                    # remove the functions. prefix
                    if new_tool_call.function.name.startswith('functions.'):
                        parts = new_tool_call.function.name.split('functions.')
                        if len(parts) > 1:
                            new_tool_call.function.name = parts[1]
                    tool_call.function.name = new_tool_call.function.name
                if new_tool_call.function.arguments:
                    tool_call.function.arguments += new_tool_call.function.arguments

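These Moonshot hunks delete the `functions.` namespacing that earlier code added to tool names and then stripped back off streamed tool calls. A standalone sketch of the removed strip logic, for reference:

```python
# Minimal sketch of the prefix handling the Moonshot hunk removes: older code
# namespaced tool names as "functions.<name>" and stripped the prefix back off
# streamed tool calls. This mirrors that removed logic.
def strip_functions_prefix(name: str) -> str:
    if name.startswith('functions.'):
        parts = name.split('functions.')
        if len(parts) > 1:
            return parts[1]
    return name

assert strip_functions_prefix('functions.get_weather') == 'get_weather'
assert strip_functions_prefix('get_weather') == 'get_weather'
```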
@@ -1,4 +1,6 @@
- gpt-4
- gpt-4-turbo
- gpt-4-turbo-2024-04-09
- gpt-4-turbo-preview
- gpt-4-32k
- gpt-4-1106-preview

@@ -0,0 +1,57 @@
model: gpt-4-turbo-2024-04-09
label:
  zh_Hans: gpt-4-turbo-2024-04-09
  en_US: gpt-4-turbo-2024-04-09
model_type: llm
features:
  - multi-tool-call
  - agent-thought
  - stream-tool-call
  - vision
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 4096
  - name: seed
    label:
      zh_Hans: 种子
      en_US: Seed
    type: int
    help:
      zh_Hans: 如果指定,模型将尽最大努力进行确定性采样,使得重复的具有相同种子和参数的请求应该返回相同的结果。不能保证确定性,您应该参考 system_fingerprint
        响应参数来监视变化。
      en_US: If specified, model will make a best effort to sample deterministically,
        such that repeated requests with the same seed and parameters should return
        the same result. Determinism is not guaranteed, and you should refer to the
        system_fingerprint response parameter to monitor changes in the backend.
    required: false
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '0.01'
  output: '0.03'
  unit: '0.001'
  currency: USD
@@ -0,0 +1,57 @@
model: gpt-4-turbo
label:
  zh_Hans: gpt-4-turbo
  en_US: gpt-4-turbo
model_type: llm
features:
  - multi-tool-call
  - agent-thought
  - stream-tool-call
  - vision
model_properties:
  mode: chat
  context_size: 128000
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: top_p
    use_template: top_p
  - name: presence_penalty
    use_template: presence_penalty
  - name: frequency_penalty
    use_template: frequency_penalty
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 4096
  - name: seed
    label:
      zh_Hans: 种子
      en_US: Seed
    type: int
    help:
      zh_Hans: 如果指定,模型将尽最大努力进行确定性采样,使得重复的具有相同种子和参数的请求应该返回相同的结果。不能保证确定性,您应该参考 system_fingerprint
        响应参数来监视变化。
      en_US: If specified, model will make a best effort to sample deterministically,
        such that repeated requests with the same seed and parameters should return
        the same result. Determinism is not guaranteed, and you should refer to the
        system_fingerprint response parameter to monitor changes in the backend.
    required: false
  - name: response_format
    label:
      zh_Hans: 回复格式
      en_US: response_format
    type: string
    help:
      zh_Hans: 指定模型必须输出的格式
      en_US: specifying the format that the model must output
    required: false
    options:
      - text
      - json_object
pricing:
  input: '0.01'
  output: '0.03'
  unit: '0.001'
  currency: USD
@@ -547,6 +547,9 @@ class OpenAILargeLanguageModel(_CommonOpenAI, LargeLanguageModel):
        if user:
            extra_model_kwargs['user'] = user

        # clear illegal prompt messages
        prompt_messages = self._clear_illegal_prompt_messages(model, prompt_messages)

        # chat model
        response = client.chat.completions.create(
            messages=[self._convert_prompt_message_to_dict(m) for m in prompt_messages],
@@ -757,6 +760,31 @@ class OpenAILargeLanguageModel(_CommonOpenAI, LargeLanguageModel):

        return tool_call

    def _clear_illegal_prompt_messages(self, model: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
        """
        Clear illegal prompt messages for OpenAI API

        :param model: model name
        :param prompt_messages: prompt messages
        :return: cleaned prompt messages
        """
        checklist = ['gpt-4-turbo', 'gpt-4-turbo-2024-04-09']

        if model in checklist:
            # count how many user messages are there
            user_message_count = len([m for m in prompt_messages if isinstance(m, UserPromptMessage)])
            if user_message_count > 1:
                for prompt_message in prompt_messages:
                    if isinstance(prompt_message, UserPromptMessage):
                        if isinstance(prompt_message.content, list):
                            prompt_message.content = '\n'.join([
                                item.data if item.type == PromptMessageContentType.TEXT else
                                '[IMAGE]' if item.type == PromptMessageContentType.IMAGE else ''
                                for item in prompt_message.content
                            ])

        return prompt_messages

    def _convert_prompt_message_to_dict(self, message: PromptMessage) -> dict:
        """
        Convert PromptMessage to dict for OpenAI API

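`_clear_illegal_prompt_messages` flattens multi-part user content into plain text for the models in `checklist` when the conversation holds more than one user message. A standalone sketch of the flattening rule, using plain tuples in place of Dify's `PromptMessageContent` objects:

```python
# Sketch of the flattening rule from _clear_illegal_prompt_messages above:
# multi-part user content becomes plain text, with image parts replaced by a
# literal "[IMAGE]" marker. The (type, data) tuples are stand-ins for
# PromptMessageContent.
def flatten_content(parts: list[tuple[str, str]]) -> str:
    """parts is a list of (kind, data) tuples, kind in {'text', 'image'}."""
    return '\n'.join(
        data if kind == 'text' else '[IMAGE]' if kind == 'image' else ''
        for kind, data in parts
    )

print(flatten_content([('text', 'Describe this:'), ('image', 'https://example.com/cat.png')]))
# Describe this:
# [IMAGE]
```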
@@ -167,23 +167,27 @@ class OAIAPICompatLargeLanguageModel(_CommonOAI_API_Compat, LargeLanguageModel):
        """
        generate custom model entities from credentials
        """
        support_function_call = False
        features = []

        function_calling_type = credentials.get('function_calling_type', 'no_call')
        if function_calling_type == 'function_call':
            features = [ModelFeature.TOOL_CALL]
            support_function_call = True
            features.append(ModelFeature.TOOL_CALL)
            endpoint_url = credentials["endpoint_url"]
            # if not endpoint_url.endswith('/'):
            #     endpoint_url += '/'
            # if 'https://api.openai.com/v1/' == endpoint_url:
            #     features = [ModelFeature.STREAM_TOOL_CALL]
            #     features.append(ModelFeature.STREAM_TOOL_CALL)

        vision_support = credentials.get('vision_support', 'not_support')
        if vision_support == 'support':
            features.append(ModelFeature.VISION)

        entity = AIModelEntity(
            model=model,
            label=I18nObject(en_US=model),
            model_type=ModelType.LLM,
            fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
            features=features if support_function_call else [],
            features=features,
            model_properties={
                ModelPropertyKey.CONTEXT_SIZE: int(credentials.get('context_size', "4096")),
                ModelPropertyKey.MODE: credentials.get('mode'),
@@ -412,7 +416,7 @@ class OAIAPICompatLargeLanguageModel(_CommonOAI_API_Compat, LargeLanguageModel):
                    if chunk.startswith(':'):
                        continue
                    decoded_chunk = chunk.strip().lstrip('data: ').lstrip()
                    chunk_json = None

                    try:
                        chunk_json = json.loads(decoded_chunk)
                    # stream ended
@@ -616,7 +620,7 @@ class OAIAPICompatLargeLanguageModel(_CommonOAI_API_Compat, LargeLanguageModel):

        return message_dict

    def _num_tokens_from_string(self, model: str, text: str,
    def _num_tokens_from_string(self, model: str, text: Union[str, list[PromptMessageContent]],
                                tools: Optional[list[PromptMessageTool]] = None) -> int:
        """
        Approximate num tokens for model with gpt2 tokenizer.
@@ -626,7 +630,16 @@ class OAIAPICompatLargeLanguageModel(_CommonOAI_API_Compat, LargeLanguageModel):
        :param tools: tools for tool calling
        :return: number of tokens
        """
        num_tokens = self._get_num_tokens_by_gpt2(text)
        if isinstance(text, str):
            full_text = text
        else:
            full_text = ''
            for message_content in text:
                if message_content.type == PromptMessageContentType.TEXT:
                    message_content = cast(PromptMessageContent, message_content)
                    full_text += message_content.data

        num_tokens = self._get_num_tokens_by_gpt2(full_text)

        if tools:
            num_tokens += self._num_tokens_for_tools(tools)

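`_num_tokens_from_string` now accepts either a plain string or a list of content parts and counts tokens only over the text parts. A minimal sketch of that contract, with a whitespace tokenizer standing in for `_get_num_tokens_by_gpt2`:

```python
# Sketch of the widened _num_tokens_from_string contract: a str passes through,
# a list of content parts contributes only its text segments. The tokenizer
# here is a stand-in for Dify's GPT-2 based counter.
from typing import Union

def num_tokens(text: Union[str, list[dict]], tokenize=lambda s: s.split()) -> int:
    if isinstance(text, str):
        full_text = text
    else:
        full_text = ''.join(part['data'] for part in text if part['type'] == 'text')
    return len(tokenize(full_text))

assert num_tokens("hello world") == 2
assert num_tokens([{'type': 'text', 'data': 'hello world'},
                   {'type': 'image', 'data': '...'}]) == 2
```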
@@ -97,6 +97,25 @@ model_credential_schema:
      label:
        en_US: Not Support
        zh_Hans: 不支持
  - variable: vision_support
    show_on:
      - variable: __model_type
        value: llm
    label:
      zh_Hans: Vision 支持
      en_US: Vision Support
    type: select
    required: false
    default: no_support
    options:
      - value: support
        label:
          en_US: Support
          zh_Hans: 支持
      - value: no_support
        label:
          en_US: Not Support
          zh_Hans: 不支持
  - variable: stream_mode_delimiter
    label:
      zh_Hans: 流模式返回结果的分隔符

@@ -232,8 +232,8 @@ class SimplePromptTransform(PromptTransform):
                )
            ),
            max_token_limit=rest_tokens,
            ai_prefix=prompt_rules['human_prefix'] if 'human_prefix' in prompt_rules else 'Human',
            human_prefix=prompt_rules['assistant_prefix'] if 'assistant_prefix' in prompt_rules else 'Assistant'
            human_prefix=prompt_rules['human_prefix'] if 'human_prefix' in prompt_rules else 'Human',
            ai_prefix=prompt_rules['assistant_prefix'] if 'assistant_prefix' in prompt_rules else 'Assistant'
        )

        # get prompt

@@ -48,6 +48,9 @@ class Jieba(BaseKeyword):
                text = texts[i]
                if keywords_list:
                    keywords = keywords_list[i]
                    if not keywords:
                        keywords = keyword_table_handler.extract_keywords(text.page_content,
                                                                          self._config.max_keywords_per_chunk)
                else:
                    keywords = keyword_table_handler.extract_keywords(text.page_content, self._config.max_keywords_per_chunk)
                self._update_segment_keywords(self.dataset.id, text.metadata['doc_id'], list(keywords))

@@ -34,6 +34,7 @@ class CSVExtractor(BaseExtractor):

    def extract(self) -> list[Document]:
        """Load data into document objects."""
        docs = []
        try:
            with open(self._file_path, newline="", encoding=self._encoding) as csvfile:
                docs = self._read_from_file(csvfile)

@@ -1,59 +0,0 @@
import time
from collections.abc import Mapping
from typing import Any, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.chat_models.base import SimpleChatModel
from langchain.schema import AIMessage, BaseMessage, ChatGeneration, ChatResult


class FakeLLM(SimpleChatModel):
    """Fake ChatModel for testing purposes."""

    streaming: bool = False
    """Whether to stream the results or not."""
    response: str

    @property
    def _llm_type(self) -> str:
        return "fake-chat-model"

    def _call(
        self,
        messages: list[BaseMessage],
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        """First try to lookup in queries, else return 'foo' or 'bar'."""
        return self.response

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"response": self.response}

    def get_num_tokens(self, text: str) -> int:
        return 0

    def _generate(
        self,
        messages: list[BaseMessage],
        stop: Optional[list[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> ChatResult:
        output_str = self._call(messages, stop=stop, run_manager=run_manager, **kwargs)
        if self.streaming:
            for token in output_str:
                if run_manager:
                    run_manager.on_llm_new_token(token)
                time.sleep(0.01)

        message = AIMessage(content=output_str)
        generation = ChatGeneration(message=message)
        llm_output = {"token_usage": {
            'prompt_tokens': 0,
            'completion_tokens': 0,
            'total_tokens': 0,
        }}
        return ChatResult(generations=[generation], llm_output=llm_output)
@@ -1,46 +0,0 @@
from typing import Any, Optional

from langchain import LLMChain as LCLLMChain
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.schema import Generation, LLMResult
from langchain.schema.language_model import BaseLanguageModel

from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity
from core.entities.message_entities import lc_messages_to_prompt_messages
from core.model_manager import ModelInstance
from core.rag.retrieval.agent.fake_llm import FakeLLM


class LLMChain(LCLLMChain):
    model_config: ModelConfigWithCredentialsEntity
    """The language model instance to use."""
    llm: BaseLanguageModel = FakeLLM(response="")
    parameters: dict[str, Any] = {}

    def generate(
        self,
        input_list: list[dict[str, Any]],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> LLMResult:
        """Generate LLM result from inputs."""
        prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
        messages = prompts[0].to_messages()
        prompt_messages = lc_messages_to_prompt_messages(messages)

        model_instance = ModelInstance(
            provider_model_bundle=self.model_config.provider_model_bundle,
            model=self.model_config.model,
        )

        result = model_instance.invoke_llm(
            prompt_messages=prompt_messages,
            stream=False,
            stop=stop,
            model_parameters=self.parameters
        )

        generations = [
            [Generation(text=result.message.content)]
        ]

        return LLMResult(generations=generations)
@@ -1,179 +0,0 @@
from collections.abc import Sequence
from typing import Any, Optional, Union

from langchain.agents import BaseSingleActionAgent, OpenAIFunctionsAgent
from langchain.agents.openai_functions_agent.base import _format_intermediate_steps, _parse_ai_message
from langchain.callbacks.base import BaseCallbackManager
from langchain.callbacks.manager import Callbacks
from langchain.prompts.chat import BaseMessagePromptTemplate
from langchain.schema import AgentAction, AgentFinish, AIMessage, SystemMessage
from langchain.tools import BaseTool
from pydantic import root_validator

from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity
from core.entities.message_entities import lc_messages_to_prompt_messages
from core.model_manager import ModelInstance
from core.model_runtime.entities.message_entities import PromptMessageTool
from core.rag.retrieval.agent.fake_llm import FakeLLM


class MultiDatasetRouterAgent(OpenAIFunctionsAgent):
    """
    An Multi Dataset Retrieve Agent driven by Router.
    """
    model_config: ModelConfigWithCredentialsEntity

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    @root_validator
    def validate_llm(cls, values: dict) -> dict:
        return values

    def should_use_agent(self, query: str):
        """
        return should use agent

        :param query:
        :return:
        """
        return True

    def plan(
        self,
        intermediate_steps: list[tuple[AgentAction, str]],
        callbacks: Callbacks = None,
        **kwargs: Any,
    ) -> Union[AgentAction, AgentFinish]:
        """Given input, decided what to do.

        Args:
            intermediate_steps: Steps the LLM has taken to date, along with observations
            **kwargs: User inputs.

        Returns:
            Action specifying what tool to use.
        """
        if len(self.tools) == 0:
            return AgentFinish(return_values={"output": ''}, log='')
        elif len(self.tools) == 1:
            tool = next(iter(self.tools))
            rst = tool.run(tool_input={'query': kwargs['input']})
            # output = ''
            # rst_json = json.loads(rst)
            # for item in rst_json:
            #     output += f'{item["content"]}\n'
            return AgentFinish(return_values={"output": rst}, log=rst)

        if intermediate_steps:
            _, observation = intermediate_steps[-1]
            return AgentFinish(return_values={"output": observation}, log=observation)

        try:
            agent_decision = self.real_plan(intermediate_steps, callbacks, **kwargs)
            if isinstance(agent_decision, AgentAction):
                tool_inputs = agent_decision.tool_input
                if isinstance(tool_inputs, dict) and 'query' in tool_inputs and 'chat_history' not in kwargs:
                    tool_inputs['query'] = kwargs['input']
                    agent_decision.tool_input = tool_inputs
            else:
                agent_decision.return_values['output'] = ''
            return agent_decision
        except Exception as e:
            raise e

    def real_plan(
        self,
        intermediate_steps: list[tuple[AgentAction, str]],
        callbacks: Callbacks = None,
        **kwargs: Any,
    ) -> Union[AgentAction, AgentFinish]:
        """Given input, decided what to do.

        Args:
            intermediate_steps: Steps the LLM has taken to date, along with observations
            **kwargs: User inputs.

        Returns:
            Action specifying what tool to use.
        """
        agent_scratchpad = _format_intermediate_steps(intermediate_steps)
        selected_inputs = {
            k: kwargs[k] for k in self.prompt.input_variables if k != "agent_scratchpad"
        }
        full_inputs = dict(**selected_inputs, agent_scratchpad=agent_scratchpad)
        prompt = self.prompt.format_prompt(**full_inputs)
        messages = prompt.to_messages()
        prompt_messages = lc_messages_to_prompt_messages(messages)

        model_instance = ModelInstance(
            provider_model_bundle=self.model_config.provider_model_bundle,
            model=self.model_config.model,
        )

        tools = []
        for function in self.functions:
            tool = PromptMessageTool(
                **function
            )

            tools.append(tool)

        result = model_instance.invoke_llm(
            prompt_messages=prompt_messages,
            tools=tools,
            stream=False,
            model_parameters={
                'temperature': 0.2,
                'top_p': 0.3,
                'max_tokens': 1500
            }
        )

        ai_message = AIMessage(
            content=result.message.content or "",
            additional_kwargs={
                'function_call': {
                    'id': result.message.tool_calls[0].id,
                    **result.message.tool_calls[0].function.dict()
                } if result.message.tool_calls else None
            }
        )

        agent_decision = _parse_ai_message(ai_message)
        return agent_decision

    async def aplan(
        self,
        intermediate_steps: list[tuple[AgentAction, str]],
        callbacks: Callbacks = None,
        **kwargs: Any,
    ) -> Union[AgentAction, AgentFinish]:
        raise NotImplementedError()

    @classmethod
    def from_llm_and_tools(
        cls,
        model_config: ModelConfigWithCredentialsEntity,
        tools: Sequence[BaseTool],
        callback_manager: Optional[BaseCallbackManager] = None,
        extra_prompt_messages: Optional[list[BaseMessagePromptTemplate]] = None,
        system_message: Optional[SystemMessage] = SystemMessage(
            content="You are a helpful AI assistant."
        ),
        **kwargs: Any,
    ) -> BaseSingleActionAgent:
        prompt = cls.create_prompt(
            extra_prompt_messages=extra_prompt_messages,
            system_message=system_message,
        )
        return cls(
            model_config=model_config,
            llm=FakeLLM(response=''),
            prompt=prompt,
            tools=tools,
            callback_manager=callback_manager,
            **kwargs,
        )
@@ -1,259 +0,0 @@
import re
from collections.abc import Sequence
from typing import Any, Optional, Union, cast

from langchain import BasePromptTemplate, PromptTemplate
from langchain.agents import Agent, AgentOutputParser, StructuredChatAgent
from langchain.agents.structured_chat.base import HUMAN_MESSAGE_TEMPLATE
from langchain.agents.structured_chat.prompt import PREFIX, SUFFIX
from langchain.callbacks.base import BaseCallbackManager
from langchain.callbacks.manager import Callbacks
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate
from langchain.schema import AgentAction, AgentFinish, OutputParserException
from langchain.tools import BaseTool

from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity
from core.rag.retrieval.agent.llm_chain import LLMChain

FORMAT_INSTRUCTIONS = """Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).
The nouns in the format of "Thought", "Action", "Action Input", "Final Answer" must be expressed in English.
Valid "action" values: "Final Answer" or {tool_names}

Provide only ONE action per $JSON_BLOB, as shown:

```
{{{{
  "action": $TOOL_NAME,
  "action_input": $INPUT
}}}}
```

Follow this format:

Question: input question to answer
Thought: consider previous and subsequent steps
Action:
```
$JSON_BLOB
```
Observation: action result
... (repeat Thought/Action/Observation N times)
Thought: I know what to respond
Action:
```
{{{{
  "action": "Final Answer",
  "action_input": "Final response to human"
}}}}
```"""


class StructuredMultiDatasetRouterAgent(StructuredChatAgent):
    dataset_tools: Sequence[BaseTool]

    class Config:
        """Configuration for this pydantic object."""

        arbitrary_types_allowed = True

    def should_use_agent(self, query: str):
        """
        return should use agent
        Using the ReACT mode to determine whether an agent is needed is costly,
        so it's better to just use an Agent for reasoning, which is cheaper.

        :param query:
        :return:
        """
        return True

    def plan(
        self,
        intermediate_steps: list[tuple[AgentAction, str]],
        callbacks: Callbacks = None,
        **kwargs: Any,
    ) -> Union[AgentAction, AgentFinish]:
        """Given input, decided what to do.

        Args:
            intermediate_steps: Steps the LLM has taken to date,
                along with observations
            callbacks: Callbacks to run.
            **kwargs: User inputs.

        Returns:
            Action specifying what tool to use.
        """
        if len(self.dataset_tools) == 0:
            return AgentFinish(return_values={"output": ''}, log='')
        elif len(self.dataset_tools) == 1:
            tool = next(iter(self.dataset_tools))
            rst = tool.run(tool_input={'query': kwargs['input']})
            return AgentFinish(return_values={"output": rst}, log=rst)

        if intermediate_steps:
            _, observation = intermediate_steps[-1]
            return AgentFinish(return_values={"output": observation}, log=observation)

        full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)

        try:
            full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
        except Exception as e:
            raise e

        try:
            agent_decision = self.output_parser.parse(full_output)
            if isinstance(agent_decision, AgentAction):
                tool_inputs = agent_decision.tool_input
                if isinstance(tool_inputs, dict) and 'query' in tool_inputs:
                    tool_inputs['query'] = kwargs['input']
                    agent_decision.tool_input = tool_inputs
                elif isinstance(tool_inputs, str):
                    agent_decision.tool_input = kwargs['input']
            else:
                agent_decision.return_values['output'] = ''
            return agent_decision
        except OutputParserException:
            return AgentFinish({"output": "I'm sorry, the answer of model is invalid, "
                                          "I don't know how to respond to that."}, "")

    @classmethod
    def create_prompt(
        cls,
        tools: Sequence[BaseTool],
        prefix: str = PREFIX,
        suffix: str = SUFFIX,
        human_message_template: str = HUMAN_MESSAGE_TEMPLATE,
        format_instructions: str = FORMAT_INSTRUCTIONS,
        input_variables: Optional[list[str]] = None,
        memory_prompts: Optional[list[BasePromptTemplate]] = None,
    ) -> BasePromptTemplate:
        tool_strings = []
        for tool in tools:
            args_schema = re.sub("}", "}}}}", re.sub("{", "{{{{", str(tool.args)))
            tool_strings.append(f"{tool.name}: {tool.description}, args: {args_schema}")
        formatted_tools = "\n".join(tool_strings)
        unique_tool_names = set(tool.name for tool in tools)
        tool_names = ", ".join('"' + name + '"' for name in unique_tool_names)
        format_instructions = format_instructions.format(tool_names=tool_names)
        template = "\n\n".join([prefix, formatted_tools, format_instructions, suffix])
        if input_variables is None:
            input_variables = ["input", "agent_scratchpad"]
        _memory_prompts = memory_prompts or []
        messages = [
            SystemMessagePromptTemplate.from_template(template),
            *_memory_prompts,
            HumanMessagePromptTemplate.from_template(human_message_template),
        ]
        return ChatPromptTemplate(input_variables=input_variables, messages=messages)

    @classmethod
    def create_completion_prompt(
        cls,
        tools: Sequence[BaseTool],
        prefix: str = PREFIX,
        format_instructions: str = FORMAT_INSTRUCTIONS,
        input_variables: Optional[list[str]] = None,
    ) -> PromptTemplate:
        """Create prompt in the style of the zero shot agent.

        Args:
            tools: List of tools the agent will have access to, used to format the
                prompt.
            prefix: String to put before the list of tools.
            input_variables: List of input variables the final prompt will expect.

        Returns:
            A PromptTemplate with the template assembled from the pieces here.
        """
        suffix = """Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.
Question: {input}
Thought: {agent_scratchpad}
"""

        tool_strings = "\n".join([f"{tool.name}: {tool.description}" for tool in tools])
        tool_names = ", ".join([tool.name for tool in tools])
        format_instructions = format_instructions.format(tool_names=tool_names)
        template = "\n\n".join([prefix, tool_strings, format_instructions, suffix])
        if input_variables is None:
            input_variables = ["input", "agent_scratchpad"]
        return PromptTemplate(template=template, input_variables=input_variables)

    def _construct_scratchpad(
        self, intermediate_steps: list[tuple[AgentAction, str]]
    ) -> str:
        agent_scratchpad = ""
        for action, observation in intermediate_steps:
            agent_scratchpad += action.log
            agent_scratchpad += f"\n{self.observation_prefix}{observation}\n{self.llm_prefix}"

        if not isinstance(agent_scratchpad, str):
            raise ValueError("agent_scratchpad should be of type string.")
        if agent_scratchpad:
            llm_chain = cast(LLMChain, self.llm_chain)
            if llm_chain.model_config.mode == "chat":
                return (
                    f"This was your previous work "
                    f"(but I haven't seen any of it! I only see what "
                    f"you return as final answer):\n{agent_scratchpad}"
                )
            else:
                return agent_scratchpad
        else:
            return agent_scratchpad

    @classmethod
    def from_llm_and_tools(
        cls,
        model_config: ModelConfigWithCredentialsEntity,
        tools: Sequence[BaseTool],
        callback_manager: Optional[BaseCallbackManager] = None,
        output_parser: Optional[AgentOutputParser] = None,
        prefix: str = PREFIX,
        suffix: str = SUFFIX,
        human_message_template: str = HUMAN_MESSAGE_TEMPLATE,
        format_instructions: str = FORMAT_INSTRUCTIONS,
        input_variables: Optional[list[str]] = None,
        memory_prompts: Optional[list[BasePromptTemplate]] = None,
        **kwargs: Any,
    ) -> Agent:
        """Construct an agent from an LLM and tools."""
        cls._validate_tools(tools)
        if model_config.mode == "chat":
            prompt = cls.create_prompt(
                tools,
                prefix=prefix,
                suffix=suffix,
                human_message_template=human_message_template,
                format_instructions=format_instructions,
                input_variables=input_variables,
                memory_prompts=memory_prompts,
            )
        else:
            prompt = cls.create_completion_prompt(
                tools,
                prefix=prefix,
                format_instructions=format_instructions,
                input_variables=input_variables
            )

        llm_chain = LLMChain(
            model_config=model_config,
            prompt=prompt,
            callback_manager=callback_manager,
            parameters={
                'temperature': 0.2,
                'top_p': 0.3,
                'max_tokens': 1500
            }
        )
        tool_names = [tool.name for tool in tools]
        _output_parser = output_parser
        return cls(
            llm_chain=llm_chain,
            allowed_tools=tool_names,
            output_parser=_output_parser,
            dataset_tools=tools,
            **kwargs,
        )
@@ -1,117 +0,0 @@
import logging
from typing import Optional, Union

from langchain.agents import AgentExecutor as LCAgentExecutor
from langchain.agents import BaseMultiActionAgent, BaseSingleActionAgent
from langchain.callbacks.manager import Callbacks
from langchain.tools import BaseTool
from pydantic import BaseModel, Extra

from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity
from core.entities.agent_entities import PlanningStrategy
from core.entities.message_entities import prompt_messages_to_lc_messages
from core.helper import moderation
from core.memory.token_buffer_memory import TokenBufferMemory
from core.model_runtime.errors.invoke import InvokeError
from core.rag.retrieval.agent.multi_dataset_router_agent import MultiDatasetRouterAgent
from core.rag.retrieval.agent.output_parser.structured_chat import StructuredChatOutputParser
from core.rag.retrieval.agent.structed_multi_dataset_router_agent import StructuredMultiDatasetRouterAgent
from core.tools.tool.dataset_retriever.dataset_multi_retriever_tool import DatasetMultiRetrieverTool
from core.tools.tool.dataset_retriever.dataset_retriever_tool import DatasetRetrieverTool


class AgentConfiguration(BaseModel):
    strategy: PlanningStrategy
    model_config: ModelConfigWithCredentialsEntity
    tools: list[BaseTool]
    summary_model_config: Optional[ModelConfigWithCredentialsEntity] = None
    memory: Optional[TokenBufferMemory] = None
    callbacks: Callbacks = None
    max_iterations: int = 6
    max_execution_time: Optional[float] = None
    early_stopping_method: str = "generate"
    # `generate` will continue to complete the last inference after reaching the iteration limit or request time limit

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True


class AgentExecuteResult(BaseModel):
    strategy: PlanningStrategy
    output: Optional[str]
    configuration: AgentConfiguration


class AgentExecutor:
    def __init__(self, configuration: AgentConfiguration):
        self.configuration = configuration
        self.agent = self._init_agent()

    def _init_agent(self) -> Union[BaseSingleActionAgent, BaseMultiActionAgent]:
        if self.configuration.strategy == PlanningStrategy.ROUTER:
            self.configuration.tools = [t for t in self.configuration.tools
                                        if isinstance(t, DatasetRetrieverTool)
                                        or isinstance(t, DatasetMultiRetrieverTool)]
            agent = MultiDatasetRouterAgent.from_llm_and_tools(
                model_config=self.configuration.model_config,
                tools=self.configuration.tools,
                extra_prompt_messages=prompt_messages_to_lc_messages(self.configuration.memory.get_history_prompt_messages())
                if self.configuration.memory else None,
                verbose=True
            )
        elif self.configuration.strategy == PlanningStrategy.REACT_ROUTER:
            self.configuration.tools = [t for t in self.configuration.tools
                                        if isinstance(t, DatasetRetrieverTool)
                                        or isinstance(t, DatasetMultiRetrieverTool)]
            agent = StructuredMultiDatasetRouterAgent.from_llm_and_tools(
                model_config=self.configuration.model_config,
                tools=self.configuration.tools,
                output_parser=StructuredChatOutputParser(),
                verbose=True
            )
        else:
            raise NotImplementedError(f"Unknown Agent Strategy: {self.configuration.strategy}")

        return agent

    def should_use_agent(self, query: str) -> bool:
        return self.agent.should_use_agent(query)

    def run(self, query: str) -> AgentExecuteResult:
        moderation_result = moderation.check_moderation(
            self.configuration.model_config,
            query
        )

        if moderation_result:
            return AgentExecuteResult(
                output="I apologize for any confusion, but I'm an AI assistant to be helpful, harmless, and honest.",
                strategy=self.configuration.strategy,
                configuration=self.configuration
            )

        agent_executor = LCAgentExecutor.from_agent_and_tools(
            agent=self.agent,
            tools=self.configuration.tools,
            max_iterations=self.configuration.max_iterations,
            max_execution_time=self.configuration.max_execution_time,
            early_stopping_method=self.configuration.early_stopping_method,
            callbacks=self.configuration.callbacks
        )

        try:
            output = agent_executor.run(input=query)
        except InvokeError as ex:
            raise ex
        except Exception as ex:
            logging.exception("agent_executor run failed")
            output = None

        return AgentExecuteResult(
            output=output,
            strategy=self.configuration.strategy,
            configuration=self.configuration
        )
@@ -1,5 +1,7 @@
import threading
from typing import Optional, cast

from flask import Flask, current_app
from langchain.tools import BaseTool

from core.app.app_config.entities import DatasetEntity, DatasetRetrieveConfigEntity
@@ -7,17 +9,35 @@ from core.app.entities.app_invoke_entities import InvokeFrom, ModelConfigWithCre
from core.callback_handler.index_tool_callback_handler import DatasetIndexToolCallbackHandler
from core.entities.agent_entities import PlanningStrategy
from core.memory.token_buffer_memory import TokenBufferMemory
from core.model_runtime.entities.model_entities import ModelFeature
from core.model_manager import ModelInstance, ModelManager
from core.model_runtime.entities.message_entities import PromptMessageTool
from core.model_runtime.entities.model_entities import ModelFeature, ModelType
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from core.rag.retrieval.agent_based_dataset_executor import AgentConfiguration, AgentExecutor
from core.rag.datasource.retrieval_service import RetrievalService
from core.rag.models.document import Document
from core.rag.retrieval.router.multi_dataset_function_call_router import FunctionCallMultiDatasetRouter
from core.rag.retrieval.router.multi_dataset_react_route import ReactMultiDatasetRouter
from core.rerank.rerank import RerankRunner
from core.tools.tool.dataset_retriever.dataset_multi_retriever_tool import DatasetMultiRetrieverTool
from core.tools.tool.dataset_retriever.dataset_retriever_tool import DatasetRetrieverTool
from extensions.ext_database import db
from models.dataset import Dataset
from models.dataset import Dataset, DatasetQuery, DocumentSegment
from models.dataset import Document as DatasetDocument

default_retrieval_model = {
    'search_method': 'semantic_search',
    'reranking_enable': False,
    'reranking_model': {
        'reranking_provider_name': '',
        'reranking_model_name': ''
    },
    'top_k': 2,
    'score_threshold_enabled': False
}


class DatasetRetrieval:
    def retrieve(self, tenant_id: str,
    def retrieve(self, app_id: str, user_id: str, tenant_id: str,
                 model_config: ModelConfigWithCredentialsEntity,
                 config: DatasetEntity,
                 query: str,
@@ -27,6 +47,8 @@ class DatasetRetrieval:
                 memory: Optional[TokenBufferMemory] = None) -> Optional[str]:
        """
        Retrieve dataset.
        :param app_id: app_id
        :param user_id: user_id
        :param tenant_id: tenant id
        :param model_config: model config
        :param config: dataset config
@@ -38,12 +60,22 @@ class DatasetRetrieval:
        :return:
        """
        dataset_ids = config.dataset_ids
        if len(dataset_ids) == 0:
            return None
        retrieve_config = config.retrieve_config

        # check model is support tool calling
        model_type_instance = model_config.provider_model_bundle.model_type_instance
        model_type_instance = cast(LargeLanguageModel, model_type_instance)

        model_manager = ModelManager()
        model_instance = model_manager.get_model_instance(
            tenant_id=tenant_id,
            model_type=ModelType.LLM,
            provider=model_config.provider,
            model=model_config.model
        )

        # get model schema
        model_schema = model_type_instance.get_model_schema(
            model=model_config.model,
@@ -59,38 +91,291 @@ class DatasetRetrieval:
        if ModelFeature.TOOL_CALL in features \
                or ModelFeature.MULTI_TOOL_CALL in features:
            planning_strategy = PlanningStrategy.ROUTER
        available_datasets = []
        for dataset_id in dataset_ids:
            # get dataset from dataset id
            dataset = db.session.query(Dataset).filter(
                Dataset.tenant_id == tenant_id,
                Dataset.id == dataset_id
            ).first()

        dataset_retriever_tools = self.to_dataset_retriever_tool(
            # pass if dataset is not available
            if not dataset:
                continue

            # pass if dataset is not available
            if (dataset and dataset.available_document_count == 0
                    and dataset.available_document_count == 0):
                continue

            available_datasets.append(dataset)
        all_documents = []
        user_from = 'account' if invoke_from in [InvokeFrom.EXPLORE, InvokeFrom.DEBUGGER] else 'end_user'
        if retrieve_config.retrieve_strategy == DatasetRetrieveConfigEntity.RetrieveStrategy.SINGLE:
            all_documents = self.single_retrieve(app_id, tenant_id, user_id, user_from, available_datasets, query,
                                                 model_instance,
                                                 model_config, planning_strategy)
        elif retrieve_config.retrieve_strategy == DatasetRetrieveConfigEntity.RetrieveStrategy.MULTIPLE:
            all_documents = self.multiple_retrieve(app_id, tenant_id, user_id, user_from,
                                                   available_datasets, query, retrieve_config.top_k,
                                                   retrieve_config.score_threshold,
                                                   retrieve_config.reranking_model.get('reranking_provider_name'),
                                                   retrieve_config.reranking_model.get('reranking_model_name'))

        document_score_list = {}
        for item in all_documents:
            if 'score' in item.metadata and item.metadata['score']:
                document_score_list[item.metadata['doc_id']] = item.metadata['score']

        document_context_list = []
        index_node_ids = [document.metadata['doc_id'] for document in all_documents]
        segments = DocumentSegment.query.filter(
            DocumentSegment.dataset_id.in_(dataset_ids),
            DocumentSegment.completed_at.isnot(None),
            DocumentSegment.status == 'completed',
            DocumentSegment.enabled == True,
            DocumentSegment.index_node_id.in_(index_node_ids)
        ).all()

        if segments:
            index_node_id_to_position = {id: position for position, id in enumerate(index_node_ids)}
            sorted_segments = sorted(segments,
                                     key=lambda segment: index_node_id_to_position.get(segment.index_node_id,
                                                                                       float('inf')))
            for segment in sorted_segments:
                if segment.answer:
                    document_context_list.append(f'question:{segment.content} answer:{segment.answer}')
                else:
                    document_context_list.append(segment.content)
            if show_retrieve_source:
                context_list = []
                resource_number = 1
                for segment in sorted_segments:
                    dataset = Dataset.query.filter_by(
                        id=segment.dataset_id
                    ).first()
                    document = DatasetDocument.query.filter(DatasetDocument.id == segment.document_id,
                                                            DatasetDocument.enabled == True,
                                                            DatasetDocument.archived == False,
                                                            ).first()
                    if dataset and document:
                        source = {
                            'position': resource_number,
                            'dataset_id': dataset.id,
                            'dataset_name': dataset.name,
                            'document_id': document.id,
                            'document_name': document.name,
                            'data_source_type': document.data_source_type,
                            'segment_id': segment.id,
                            'retriever_from': invoke_from.to_source(),
                            'score': document_score_list.get(segment.index_node_id, None)
                        }

                        if invoke_from.to_source() == 'dev':
                            source['hit_count'] = segment.hit_count
                            source['word_count'] = segment.word_count
                            source['segment_position'] = segment.position
                            source['index_node_hash'] = segment.index_node_hash
                        if segment.answer:
                            source['content'] = f'question:{segment.content} \nanswer:{segment.answer}'
                        else:
                            source['content'] = segment.content
                        context_list.append(source)
                        resource_number += 1
                if hit_callback:
                    hit_callback.return_retriever_resource_info(context_list)

            return str("\n".join(document_context_list))
        return ''

    def single_retrieve(self, app_id: str,
                        tenant_id: str,
                        user_id: str,
                        user_from: str,
                        available_datasets: list,
                        query: str,
                        model_instance: ModelInstance,
                        model_config: ModelConfigWithCredentialsEntity,
                        planning_strategy: PlanningStrategy,
                        ):
        tools = []
        for dataset in available_datasets:
            description = dataset.description
            if not description:
                description = 'useful for when you want to answer queries about the ' + dataset.name

            description = description.replace('\n', '').replace('\r', '')
            message_tool = PromptMessageTool(
                name=dataset.id,
                description=description,
                parameters={
                    "type": "object",
                    "properties": {},
                    "required": [],
                }
            )
            tools.append(message_tool)
        dataset_id = None
        if planning_strategy == PlanningStrategy.REACT_ROUTER:
            react_multi_dataset_router = ReactMultiDatasetRouter()
            dataset_id = react_multi_dataset_router.invoke(query, tools, model_config, model_instance,
                                                           user_id, tenant_id)

        elif planning_strategy == PlanningStrategy.ROUTER:
            function_call_router = FunctionCallMultiDatasetRouter()
            dataset_id = function_call_router.invoke(query, tools, model_config, model_instance)

        if dataset_id:
            # get retrieval model config
            dataset = db.session.query(Dataset).filter(
                Dataset.id == dataset_id
            ).first()
            if dataset:
                retrieval_model_config = dataset.retrieval_model \
                    if dataset.retrieval_model else default_retrieval_model

                # get top k
                top_k = retrieval_model_config['top_k']
                # get retrieval method
                if dataset.indexing_technique == "economy":
                    retrival_method = 'keyword_search'
                else:
                    retrival_method = retrieval_model_config['search_method']
                # get reranking model
                reranking_model = retrieval_model_config['reranking_model'] \
                    if retrieval_model_config['reranking_enable'] else None
                # get score threshold
                score_threshold = .0
                score_threshold_enabled = retrieval_model_config.get("score_threshold_enabled")
                if score_threshold_enabled:
                    score_threshold = retrieval_model_config.get("score_threshold")

                results = RetrievalService.retrieve(retrival_method=retrival_method, dataset_id=dataset.id,
                                                    query=query,
                                                    top_k=top_k, score_threshold=score_threshold,
                                                    reranking_model=reranking_model)
                self._on_query(query, [dataset_id], app_id, user_from, user_id)
                if results:
                    self._on_retrival_end(results)
                return results
        return []

    def multiple_retrieve(self,
                          app_id: str,
                          tenant_id: str,
                          user_id: str,
                          user_from: str,
                          available_datasets: list,
                          query: str,
                          top_k: int,
                          score_threshold: float,
                          reranking_provider_name: str,
                          reranking_model_name: str):
        threads = []
        all_documents = []
        dataset_ids = [dataset.id for dataset in available_datasets]
        for dataset in available_datasets:
            retrieval_thread = threading.Thread(target=self._retriever, kwargs={
                'flask_app': current_app._get_current_object(),
                'dataset_id': dataset.id,
                'query': query,
                'top_k': top_k,
                'all_documents': all_documents,
            })
            threads.append(retrieval_thread)
            retrieval_thread.start()
        for thread in threads:
            thread.join()
        # do rerank for searched documents
        model_manager = ModelManager()
        rerank_model_instance = model_manager.get_model_instance(
            tenant_id=tenant_id,
            dataset_ids=dataset_ids,
            retrieve_config=retrieve_config,
            return_resource=show_retrieve_source,
            invoke_from=invoke_from,
            hit_callback=hit_callback
            provider=reranking_provider_name,
            model_type=ModelType.RERANK,
            model=reranking_model_name
        )

        if len(dataset_retriever_tools) == 0:
            return None
        rerank_runner = RerankRunner(rerank_model_instance)
        all_documents = rerank_runner.run(query, all_documents,
                                          score_threshold,
                                          top_k)
        self._on_query(query, dataset_ids, app_id, user_from, user_id)
        if all_documents:
            self._on_retrival_end(all_documents)
        return all_documents

        agent_configuration = AgentConfiguration(
            strategy=planning_strategy,
            model_config=model_config,
            tools=dataset_retriever_tools,
            memory=memory,
            max_iterations=10,
            max_execution_time=400.0,
            early_stopping_method="generate"
        )
    def _on_retrival_end(self, documents: list[Document]) -> None:
        """Handle retrival end."""
        for document in documents:
            query = db.session.query(DocumentSegment).filter(
                DocumentSegment.index_node_id == document.metadata['doc_id']
            )

        agent_executor = AgentExecutor(agent_configuration)
            # if 'dataset_id' in document.metadata:
            if 'dataset_id' in document.metadata:
                query = query.filter(DocumentSegment.dataset_id == document.metadata['dataset_id'])

        should_use_agent = agent_executor.should_use_agent(query)
        if not should_use_agent:
            return None
            # add hit count to document segment
            query.update(
                {DocumentSegment.hit_count: DocumentSegment.hit_count + 1},
                synchronize_session=False
            )

        result = agent_executor.run(query)
            db.session.commit()

        return result.output
    def _on_query(self, query: str, dataset_ids: list[str], app_id: str, user_from: str, user_id: str) -> None:
        """
        Handle query.
        """
        if not query:
            return
        for dataset_id in dataset_ids:
            dataset_query = DatasetQuery(
                dataset_id=dataset_id,
                content=query,
                source='app',
                source_app_id=app_id,
                created_by_role=user_from,
                created_by=user_id
            )
            db.session.add(dataset_query)
        db.session.commit()

    def _retriever(self, flask_app: Flask, dataset_id: str, query: str, top_k: int, all_documents: list):
        with flask_app.app_context():
            dataset = db.session.query(Dataset).filter(
                Dataset.id == dataset_id
            ).first()

            if not dataset:
                return []

            # get retrieval model , if the model is not setting , using default
            retrieval_model = dataset.retrieval_model if dataset.retrieval_model else default_retrieval_model

            if dataset.indexing_technique == "economy":
                # use keyword table query
                documents = RetrievalService.retrieve(retrival_method='keyword_search',
                                                      dataset_id=dataset.id,
                                                      query=query,
                                                      top_k=top_k
                                                      )
                if documents:
                    all_documents.extend(documents)
            else:
                if top_k > 0:
                    # retrieval source
                    documents = RetrievalService.retrieve(retrival_method=retrieval_model['search_method'],
                                                          dataset_id=dataset.id,
                                                          query=query,
                                                          top_k=top_k,
                                                          score_threshold=retrieval_model['score_threshold']
                                                          if retrieval_model['score_threshold_enabled'] else None,
                                                          reranking_model=retrieval_model['reranking_model']
                                                          if retrieval_model['reranking_enable'] else None
                                                          )

                    all_documents.extend(documents)

    def to_dataset_retriever_tool(self, tenant_id: str,
                                  dataset_ids: list[str],

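`multiple_retrieve` above fans out one thread per dataset, joins them all, and only then reranks the pooled documents. A condensed sketch of that fan-out/join/rerank shape; the `retrieve` and `rerank` callables are placeholders for `RetrievalService.retrieve` and `RerankRunner.run`:

```python
# Condensed sketch of the fan-out/join pattern in multiple_retrieve; error
# handling and Flask app-context plumbing are omitted.
import threading

def multiple_retrieve(dataset_ids, query, top_k, retrieve, rerank):
    all_documents, threads = [], []
    for dataset_id in dataset_ids:
        # bind dataset_id via a default argument to avoid late binding
        t = threading.Thread(target=lambda d=dataset_id: all_documents.extend(
            retrieve(dataset_id=d, query=query, top_k=top_k)))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()  # wait for every per-dataset retrieval before reranking
    return rerank(query, all_documents, top_k)
```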
@@ -12,8 +12,7 @@ from core.model_runtime.entities.llm_entities import LLMUsage
from core.model_runtime.entities.message_entities import PromptMessage, PromptMessageRole, PromptMessageTool
from core.prompt.advanced_prompt_transform import AdvancedPromptTransform
from core.prompt.entities.advanced_prompt_entities import ChatModelMessage
-from core.rag.retrieval.agent.output_parser.structured_chat import StructuredChatOutputParser
-from core.workflow.nodes.knowledge_retrieval.entities import KnowledgeRetrievalNodeData
from core.rag.retrieval.output_parser.structured_chat import StructuredChatOutputParser
from core.workflow.nodes.llm.llm_node import LLMNode

FORMAT_INSTRUCTIONS = """Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).

@@ -55,11 +54,10 @@ class ReactMultiDatasetRouter:
        self,
        query: str,
        dataset_tools: list[PromptMessageTool],
-        node_data: KnowledgeRetrievalNodeData,
        model_config: ModelConfigWithCredentialsEntity,
        model_instance: ModelInstance,
        user_id: str,
-        tenant_id: str,
        tenant_id: str
    ) -> Union[str, None]:
        """Given input, decided what to do.

@@ -72,7 +70,8 @@ class ReactMultiDatasetRouter:
            return dataset_tools[0].name

        try:
-            return self._react_invoke(query=query, node_data=node_data, model_config=model_config, model_instance=model_instance,
            return self._react_invoke(query=query, model_config=model_config,
                                      model_instance=model_instance,
                                      tools=dataset_tools, user_id=user_id, tenant_id=tenant_id)
        except Exception as e:
            return None

@@ -80,7 +79,6 @@ class ReactMultiDatasetRouter:
    def _react_invoke(
        self,
        query: str,
-        node_data: KnowledgeRetrievalNodeData,
        model_config: ModelConfigWithCredentialsEntity,
        model_instance: ModelInstance,
        tools: Sequence[PromptMessageTool],

@@ -121,7 +119,7 @@ class ReactMultiDatasetRouter:
            model_config=model_config
        )
        result_text, usage = self._invoke_llm(
-            node_data=node_data,
            completion_param=model_config.parameters,
            model_instance=model_instance,
            prompt_messages=prompt_messages,
            stop=stop,

@@ -134,10 +132,11 @@ class ReactMultiDatasetRouter:
            return agent_decision.tool
        return None

-    def _invoke_llm(self, node_data: KnowledgeRetrievalNodeData,
    def _invoke_llm(self, completion_param: dict,
                    model_instance: ModelInstance,
                    prompt_messages: list[PromptMessage],
-                    stop: list[str], user_id: str, tenant_id: str) -> tuple[str, LLMUsage]:
                    stop: list[str], user_id: str, tenant_id: str
                    ) -> tuple[str, LLMUsage]:
        """
        Invoke large language model
        :param node_data: node data

@@ -148,7 +147,7 @@ class ReactMultiDatasetRouter:
        """
        invoke_result = model_instance.invoke_llm(
            prompt_messages=prompt_messages,
-            model_parameters=node_data.single_retrieval_config.model.completion_params,
            model_parameters=completion_param,
            stop=stop,
            stream=True,
            user=user_id,

@@ -203,7 +202,8 @@ class ReactMultiDatasetRouter:
    ) -> list[ChatModelMessage]:
        tool_strings = []
        for tool in tools:
-            tool_strings.append(f"{tool.name}: {tool.description}, args: {{'query': {{'title': 'Query', 'description': 'Query for the dataset to be used to retrieve the dataset.', 'type': 'string'}}}}")
            tool_strings.append(
                f"{tool.name}: {tool.description}, args: {{'query': {{'title': 'Query', 'description': 'Query for the dataset to be used to retrieve the dataset.', 'type': 'string'}}}}")
        formatted_tools = "\n".join(tool_strings)
        unique_tool_names = set(tool.name for tool in tools)
        tool_names = ", ".join('"' + name + '"' for name in unique_tool_names)
@@ -38,8 +38,10 @@ Action:
```

Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.
{{historic_messages}}
Question: {{query}}
-Thought: {{agent_scratchpad}}"""
{{agent_scratchpad}}
Thought:"""

ENGLISH_REACT_COMPLETION_AGENT_SCRATCHPAD_TEMPLATES = """Observation: {{observation}}
Thought:"""
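The router prompt above asks the model for a single JSON "action blob" naming one dataset tool. A hedged sketch of extracting such a blob from a completion (function and regex are illustrative, not this repo's parser):

```python
import json
import re

def parse_action_blob(llm_text: str):
    """Pull the first Action:```$JSON_BLOB``` out of a ReAct-style completion."""
    match = re.search(r"Action:\s*```(?:json)?\s*(\{.*?\})\s*```", llm_text, re.DOTALL)
    if not match:
        return None
    blob = json.loads(match.group(1))
    return blob.get("action"), blob.get("action_input", {})

text = 'Thought: pick a tool\nAction:```{"action": "dataset-42", "action_input": {"query": "pricing"}}```'
print(parse_action_blob(text))  # ('dataset-42', {'query': 'pricing'})
```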
@@ -12,6 +12,7 @@ class BingSearchTool(BuiltinTool):

    def _invoke_bing(self,
                     user_id: str,
                     server_url: str,
                     subscription_key: str, query: str, limit: int,
                     result_type: str, market: str, lang: str,
                     filters: list[str]) -> Union[ToolInvokeMessage, list[ToolInvokeMessage]]:

@@ -26,7 +27,7 @@ class BingSearchTool(BuiltinTool):
        }

        query = quote(query)
-        server_url = f'{self.url}?q={query}&mkt={market_code}&count={limit}&responseFilter={",".join(filters)}'
        server_url = f'{server_url}?q={query}&mkt={market_code}&count={limit}&responseFilter={",".join(filters)}'
        response = get(server_url, headers=headers)

        if response.status_code != 200:

@@ -136,6 +137,7 @@ class BingSearchTool(BuiltinTool):

        self._invoke_bing(
            user_id='test',
            server_url=server_url,
            subscription_key=key,
            query=query,
            limit=limit,

@@ -188,6 +190,7 @@ class BingSearchTool(BuiltinTool):

        return self._invoke_bing(
            user_id=user_id,
            server_url=server_url,
            subscription_key=key,
            query=query,
            limit=limit,
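The fix threads the configured endpoint through `_invoke_bing` as a `server_url` parameter instead of the hard-coded `self.url`, so self-hosted or regional Bing endpoints are honored. A minimal sketch of the URL construction under that assumption (helper name is illustrative):

```python
from urllib.parse import quote

def build_bing_url(server_url: str, query: str, market_code: str,
                   limit: int, filters: list[str]) -> str:
    # percent-encode the user query, then append the Bing Web Search parameters
    q = quote(query)
    return (f'{server_url}?q={q}&mkt={market_code}'
            f'&count={limit}&responseFilter={",".join(filters)}')

print(build_bing_url('https://api.bing.microsoft.com/v7.0/search',
                     'dify workflows', 'en-US', 5, ['Webpages']))
```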
@@ -22,9 +22,6 @@ class ValueType(Enum):


class VariablePool:
-    variables_mapping = {}
-    user_inputs: dict
-    system_variables: dict[SystemVariable, Any]

    def __init__(self, system_variables: dict[SystemVariable, Any],
                 user_inputs: dict) -> None:

@@ -34,6 +31,7 @@ class VariablePool:
        #     'query': 'abc',
        #     'files': []
        # }
        self.variables_mapping = {}
        self.user_inputs = user_inputs
        self.system_variables = system_variables
        for system_variable, value in system_variables.items():
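This change fixes a classic Python pitfall: `variables_mapping = {}` declared at class level is a single dict shared by every instance, so variables written into one `VariablePool` would leak into all others; assigning inside `__init__` gives each pool its own dict. A minimal sketch of the bug (class names are illustrative):

```python
class Broken:
    mapping = {}  # one dict shared by the whole class

class Fixed:
    def __init__(self):
        self.mapping = {}  # fresh dict per instance

a, b = Broken(), Broken()
a.mapping['x'] = 1
print(b.mapping)  # {'x': 1} -- state leaked between instances

c, d = Fixed(), Fixed()
c.mapping['x'] = 1
print(d.mapping)  # {} -- isolated, as expected
```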
@@ -1,28 +1,21 @@
-import threading
from typing import Any, cast

-from flask import Flask, current_app

from core.app.app_config.entities import DatasetRetrieveConfigEntity
from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity
from core.entities.agent_entities import PlanningStrategy
from core.entities.model_entities import ModelStatus
from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError
from core.model_manager import ModelInstance, ModelManager
from core.model_runtime.entities.message_entities import PromptMessageTool
from core.model_runtime.entities.model_entities import ModelFeature, ModelType
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
-from core.rag.datasource.retrieval_service import RetrievalService
-from core.rerank.rerank import RerankRunner
from core.rag.retrieval.dataset_retrieval import DatasetRetrieval
from core.workflow.entities.base_node_data_entities import BaseNodeData
from core.workflow.entities.node_entities import NodeRunResult, NodeType
from core.workflow.entities.variable_pool import VariablePool
from core.workflow.nodes.base_node import BaseNode
from core.workflow.nodes.knowledge_retrieval.entities import KnowledgeRetrievalNodeData
from core.workflow.nodes.knowledge_retrieval.multi_dataset_function_call_router import FunctionCallMultiDatasetRouter
from core.workflow.nodes.knowledge_retrieval.multi_dataset_react_route import ReactMultiDatasetRouter
from extensions.ext_database import db
-from models.dataset import Dataset, DatasetQuery, Document, DocumentSegment
from models.dataset import Dataset, Document, DocumentSegment
from models.workflow import WorkflowNodeExecutionStatus

default_retrieval_model = {

@@ -106,10 +99,45 @@ class KnowledgeRetrievalNode(BaseNode):

            available_datasets.append(dataset)
        all_documents = []
        dataset_retrieval = DatasetRetrieval()
        if node_data.retrieval_mode == DatasetRetrieveConfigEntity.RetrieveStrategy.SINGLE.value:
-            all_documents = self._single_retrieve(available_datasets, node_data, query)
            # fetch model config
            model_instance, model_config = self._fetch_model_config(node_data)
            # check model is support tool calling
            model_type_instance = model_config.provider_model_bundle.model_type_instance
            model_type_instance = cast(LargeLanguageModel, model_type_instance)
            # get model schema
            model_schema = model_type_instance.get_model_schema(
                model=model_config.model,
                credentials=model_config.credentials
            )

            if model_schema:
                planning_strategy = PlanningStrategy.REACT_ROUTER
                features = model_schema.features
                if features:
                    if ModelFeature.TOOL_CALL in features \
                            or ModelFeature.MULTI_TOOL_CALL in features:
                        planning_strategy = PlanningStrategy.ROUTER
                all_documents = dataset_retrieval.single_retrieve(
                    available_datasets=available_datasets,
                    tenant_id=self.tenant_id,
                    user_id=self.user_id,
                    app_id=self.app_id,
                    user_from=self.user_from.value,
                    query=query,
                    model_config=model_config,
                    model_instance=model_instance,
                    planning_strategy=planning_strategy
                )
        elif node_data.retrieval_mode == DatasetRetrieveConfigEntity.RetrieveStrategy.MULTIPLE.value:
-            all_documents = self._multiple_retrieve(available_datasets, node_data, query)
            all_documents = dataset_retrieval.multiple_retrieve(self.app_id, self.tenant_id, self.user_id,
                                                                self.user_from.value,
                                                                available_datasets, query,
                                                                node_data.multiple_retrieval_config.top_k,
                                                                node_data.multiple_retrieval_config.score_threshold,
                                                                node_data.multiple_retrieval_config.reranking_model.provider,
                                                                node_data.multiple_retrieval_config.reranking_model.model)

        context_list = []
        if all_documents:

@@ -184,87 +212,6 @@ class KnowledgeRetrievalNode(BaseNode):
            variable_mapping['query'] = node_data.query_variable_selector
        return variable_mapping
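The node above defaults to the ReAct router and upgrades to the native function-call router only when the model schema advertises tool calling, mirroring the feature check in the hunk. A hedged sketch of that selection logic (enum values and feature strings are illustrative):

```python
from enum import Enum

class PlanningStrategy(Enum):
    ROUTER = 'router'              # native function/tool calling
    REACT_ROUTER = 'react_router'  # prompt-based ReAct fallback

def pick_strategy(model_features: list[str]) -> PlanningStrategy:
    # prefer native tool calls when the model schema declares support
    if 'tool-call' in model_features or 'multi-tool-call' in model_features:
        return PlanningStrategy.ROUTER
    return PlanningStrategy.REACT_ROUTER

print(pick_strategy(['multi-tool-call']))  # PlanningStrategy.ROUTER
print(pick_strategy([]))                   # PlanningStrategy.REACT_ROUTER
```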
-    def _single_retrieve(self, available_datasets, node_data, query):
-        tools = []
-        for dataset in available_datasets:
-            description = dataset.description
-            if not description:
-                description = 'useful for when you want to answer queries about the ' + dataset.name
-
-            description = description.replace('\n', '').replace('\r', '')
-            message_tool = PromptMessageTool(
-                name=dataset.id,
-                description=description,
-                parameters={
-                    "type": "object",
-                    "properties": {},
-                    "required": [],
-                }
-            )
-            tools.append(message_tool)
-        # fetch model config
-        model_instance, model_config = self._fetch_model_config(node_data)
-        # check model is support tool calling
-        model_type_instance = model_config.provider_model_bundle.model_type_instance
-        model_type_instance = cast(LargeLanguageModel, model_type_instance)
-        # get model schema
-        model_schema = model_type_instance.get_model_schema(
-            model=model_config.model,
-            credentials=model_config.credentials
-        )
-
-        if not model_schema:
-            return None
-        planning_strategy = PlanningStrategy.REACT_ROUTER
-        features = model_schema.features
-        if features:
-            if ModelFeature.TOOL_CALL in features \
-                    or ModelFeature.MULTI_TOOL_CALL in features:
-                planning_strategy = PlanningStrategy.ROUTER
-        dataset_id = None
-        if planning_strategy == PlanningStrategy.REACT_ROUTER:
-            react_multi_dataset_router = ReactMultiDatasetRouter()
-            dataset_id = react_multi_dataset_router.invoke(query, tools, node_data, model_config, model_instance,
-                                                           self.user_id, self.tenant_id)
-
-        elif planning_strategy == PlanningStrategy.ROUTER:
-            function_call_router = FunctionCallMultiDatasetRouter()
-            dataset_id = function_call_router.invoke(query, tools, model_config, model_instance)
-        if dataset_id:
-            # get retrieval model config
-            dataset = db.session.query(Dataset).filter(
-                Dataset.id == dataset_id
-            ).first()
-            if dataset:
-                retrieval_model_config = dataset.retrieval_model \
-                    if dataset.retrieval_model else default_retrieval_model
-
-                # get top k
-                top_k = retrieval_model_config['top_k']
-                # get retrieval method
-                if dataset.indexing_technique == "economy":
-                    retrival_method = 'keyword_search'
-                else:
-                    retrival_method = retrieval_model_config['search_method']
-                # get reranking model
-                reranking_model = retrieval_model_config['reranking_model'] \
-                    if retrieval_model_config['reranking_enable'] else None
-                # get score threshold
-                score_threshold = .0
-                score_threshold_enabled = retrieval_model_config.get("score_threshold_enabled")
-                if score_threshold_enabled:
-                    score_threshold = retrieval_model_config.get("score_threshold")
-
-                results = RetrievalService.retrieve(retrival_method=retrival_method, dataset_id=dataset.id,
-                                                    query=query,
-                                                    top_k=top_k, score_threshold=score_threshold,
-                                                    reranking_model=reranking_model)
-                self._on_query(query, [dataset_id])
-                if results:
-                    self._on_retrival_end(results)
-                return results
-        return []

    def _fetch_model_config(self, node_data: KnowledgeRetrievalNodeData) -> tuple[
            ModelInstance, ModelConfigWithCredentialsEntity]:
        """

@@ -335,112 +282,3 @@ class KnowledgeRetrievalNode(BaseNode):
            parameters=completion_params,
            stop=stop,
        )

-    def _multiple_retrieve(self, available_datasets, node_data, query):
-        threads = []
-        all_documents = []
-        dataset_ids = [dataset.id for dataset in available_datasets]
-        for dataset in available_datasets:
-            retrieval_thread = threading.Thread(target=self._retriever, kwargs={
-                'flask_app': current_app._get_current_object(),
-                'dataset_id': dataset.id,
-                'query': query,
-                'top_k': node_data.multiple_retrieval_config.top_k,
-                'all_documents': all_documents,
-            })
-            threads.append(retrieval_thread)
-            retrieval_thread.start()
-        for thread in threads:
-            thread.join()
-        # do rerank for searched documents
-        model_manager = ModelManager()
-        rerank_model_instance = model_manager.get_model_instance(
-            tenant_id=self.tenant_id,
-            provider=node_data.multiple_retrieval_config.reranking_model.provider,
-            model_type=ModelType.RERANK,
-            model=node_data.multiple_retrieval_config.reranking_model.model
-        )
-
-        rerank_runner = RerankRunner(rerank_model_instance)
-        all_documents = rerank_runner.run(query, all_documents,
-                                          node_data.multiple_retrieval_config.score_threshold,
-                                          node_data.multiple_retrieval_config.top_k)
-        self._on_query(query, dataset_ids)
-        if all_documents:
-            self._on_retrival_end(all_documents)
-        return all_documents
-
-    def _on_retrival_end(self, documents: list[Document]) -> None:
-        """Handle retrival end."""
-        for document in documents:
-            query = db.session.query(DocumentSegment).filter(
-                DocumentSegment.index_node_id == document.metadata['doc_id']
-            )
-
-            # if 'dataset_id' in document.metadata:
-            if 'dataset_id' in document.metadata:
-                query = query.filter(DocumentSegment.dataset_id == document.metadata['dataset_id'])
-
-            # add hit count to document segment
-            query.update(
-                {DocumentSegment.hit_count: DocumentSegment.hit_count + 1},
-                synchronize_session=False
-            )
-
-        db.session.commit()
-
-    def _on_query(self, query: str, dataset_ids: list[str]) -> None:
-        """
-        Handle query.
-        """
-        if not query:
-            return
-        for dataset_id in dataset_ids:
-            dataset_query = DatasetQuery(
-                dataset_id=dataset_id,
-                content=query,
-                source='app',
-                source_app_id=self.app_id,
-                created_by_role=self.user_from.value,
-                created_by=self.user_id
-            )
-            db.session.add(dataset_query)
-        db.session.commit()
-
-    def _retriever(self, flask_app: Flask, dataset_id: str, query: str, top_k: int, all_documents: list):
-        with flask_app.app_context():
-            dataset = db.session.query(Dataset).filter(
-                Dataset.tenant_id == self.tenant_id,
-                Dataset.id == dataset_id
-            ).first()
-
-            if not dataset:
-                return []
-
-            # get retrieval model , if the model is not setting , using default
-            retrieval_model = dataset.retrieval_model if dataset.retrieval_model else default_retrieval_model
-
-            if dataset.indexing_technique == "economy":
-                # use keyword table query
-                documents = RetrievalService.retrieve(retrival_method='keyword_search',
-                                                      dataset_id=dataset.id,
-                                                      query=query,
-                                                      top_k=top_k)
-                if documents:
-                    all_documents.extend(documents)
-            else:
-                if top_k > 0:
-                    # retrieval source
-                    documents = RetrievalService.retrieve(retrival_method=retrieval_model['search_method'],
-                                                          dataset_id=dataset.id,
-                                                          query=query,
-                                                          top_k=top_k,
-                                                          score_threshold=retrieval_model['score_threshold']
-                                                          if retrieval_model['score_threshold_enabled'] else None,
-                                                          reranking_model=retrieval_model['reranking_model']
-                                                          if retrieval_model['reranking_enable'] else None)
-
-                    all_documents.extend(documents)
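The removed `_multiple_retrieve` (now living in `DatasetRetrieval.multiple_retrieve`) fans one thread out per dataset, has each worker append into a shared list, joins them all, then reranks the merged pool; list `extend` is safe here under CPython's GIL, and the Flask app context must be pushed manually inside each worker. A hedged sketch of the fan-out/join shape with the retrieval call stubbed out:

```python
import threading

def retrieve_one(dataset_id: str, query: str, all_documents: list) -> None:
    # stand-in for RetrievalService.retrieve(...)
    all_documents.extend([f'{dataset_id}:{query}:doc'])

def retrieve_all(dataset_ids: list[str], query: str) -> list[str]:
    all_documents: list[str] = []
    threads = [
        threading.Thread(target=retrieve_one, args=(d, query, all_documents))
        for d in dataset_ids
    ]
    for t in threads:
        t.start()
    for t in threads:  # wait for every retriever before reranking
        t.join()
    return all_documents  # this merged pool is what gets reranked next

print(retrieve_all(['ds-a', 'ds-b'], 'pricing'))
```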
@@ -10,7 +10,7 @@ from core.file.file_obj import FileVar
from core.memory.token_buffer_memory import TokenBufferMemory
from core.model_manager import ModelInstance, ModelManager
from core.model_runtime.entities.llm_entities import LLMUsage
-from core.model_runtime.entities.message_entities import PromptMessage
from core.model_runtime.entities.message_entities import PromptMessage, PromptMessageContentType
from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from core.model_runtime.utils.encoders import jsonable_encoder

@@ -434,6 +434,22 @@ class LLMNode(BaseNode):
        )
        stop = model_config.stop

        vision_enabled = node_data.vision.enabled
        for prompt_message in prompt_messages:
            if not isinstance(prompt_message.content, str):
                prompt_message_content = []
                for content_item in prompt_message.content:
                    if vision_enabled and content_item.type == PromptMessageContentType.IMAGE:
                        prompt_message_content.append(content_item)
                    elif content_item.type == PromptMessageContentType.TEXT:
                        prompt_message_content.append(content_item)

                if len(prompt_message_content) > 1:
                    prompt_message.content = prompt_message_content
                elif (len(prompt_message_content) == 1
                        and prompt_message_content[0].type == PromptMessageContentType.TEXT):
                    prompt_message.content = prompt_message_content[0].data

        return prompt_messages, stop

    @classmethod
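The added loop drops image parts from multimodal messages whenever the node's vision switch is off, and collapses a lone text part back into a plain string so text-only models receive the shape they expect. A hedged standalone sketch of the same filtering (the `Part` type is illustrative):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Part:
    type: str  # 'text' or 'image'
    data: str

def filter_content(content: Union[str, list[Part]], vision_enabled: bool) -> Union[str, list[Part]]:
    if isinstance(content, str):
        return content
    kept = [p for p in content
            if p.type == 'text' or (vision_enabled and p.type == 'image')]
    if len(kept) == 1 and kept[0].type == 'text':
        return kept[0].data  # collapse to a plain string for text-only models
    return kept

msg = [Part('text', 'describe this'), Part('image', '<base64>')]
print(filter_content(msg, vision_enabled=False))  # 'describe this'
print(filter_content(msg, vision_enabled=True))   # both parts kept
```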
@@ -34,20 +34,27 @@ redis[hiredis]~=5.0.3
openpyxl==3.1.2
chardet~=5.1.0
python-docx~=1.1.0
-pypdfium2==4.16.0
pypdfium2~=4.17.0
resend~=0.7.0
pyjwt~=2.8.0
anthropic~=0.23.1
newspaper3k==0.2.8
-google-api-python-client==2.90.0
wikipedia==1.4.0
readabilipy==0.2.0
google-ai-generativelanguage==0.6.1
google-api-core==2.18.0
google-api-python-client==2.90.0
google-auth==2.29.0
google-auth-httplib2==0.2.0
google-generativeai==0.5.0
google-search-results==2.4.2
googleapis-common-protos==1.63.0
replicate~=0.22.0
websocket-client~=1.7.0
dashscope[tokenizer]~=1.14.0
huggingface_hub~=0.16.4
-transformers~=4.31.0
transformers~=4.35.0
tokenizers~=0.15.0
pandas==1.5.3
xinference-client==0.9.4
safetensors==0.3.2

@@ -55,13 +62,12 @@ zhipuai==1.0.7
werkzeug~=3.0.1
pymilvus==2.3.0
qdrant-client==1.7.3
-cohere~=4.44
cohere~=5.2.4
pyyaml~=6.0.1
numpy~=1.25.2
unstructured[docx,pptx,msg,md,ppt]~=0.10.27
bs4~=0.0.1
markdown~=3.5.1
-google-generativeai~=0.3.2
httpx[socks]~=0.24.1
matplotlib~=3.8.2
yfinance~=0.2.35

@@ -75,4 +81,4 @@ twilio==9.0.0
qrcode~=7.4.2
azure-storage-blob==12.9.0
azure-identity==1.15.0
-lxml==5.1.0
lxml==5.1.0
@@ -1046,73 +1046,11 @@ class SegmentService:
                    credentials=embedding_model.credentials,
                    texts=[content]
                )
-            max_position = db.session.query(func.max(DocumentSegment.position)).filter(
-                DocumentSegment.document_id == document.id
-            ).scalar()
-            segment_document = DocumentSegment(
-                tenant_id=current_user.current_tenant_id,
-                dataset_id=document.dataset_id,
-                document_id=document.id,
-                index_node_id=doc_id,
-                index_node_hash=segment_hash,
-                position=max_position + 1 if max_position else 1,
-                content=content,
-                word_count=len(content),
-                tokens=tokens,
-                status='completed',
-                indexing_at=datetime.datetime.utcnow(),
-                completed_at=datetime.datetime.utcnow(),
-                created_by=current_user.id
-            )
-            if document.doc_form == 'qa_model':
-                segment_document.answer = args['answer']
-
-            db.session.add(segment_document)
-            db.session.commit()
-
-            # save vector index
-            try:
-                VectorService.create_segments_vector([args['keywords']], [segment_document], dataset)
-            except Exception as e:
-                logging.exception("create segment index failed")
-                segment_document.enabled = False
-                segment_document.disabled_at = datetime.datetime.utcnow()
-                segment_document.status = 'error'
-                segment_document.error = str(e)
-                db.session.commit()
-            segment = db.session.query(DocumentSegment).filter(DocumentSegment.id == segment_document.id).first()
-            return segment
-
-    @classmethod
-    def multi_create_segment(cls, segments: list, document: Document, dataset: Dataset):
-        embedding_model = None
-        if dataset.indexing_technique == 'high_quality':
-            model_manager = ModelManager()
-            embedding_model = model_manager.get_model_instance(
-                tenant_id=current_user.current_tenant_id,
-                provider=dataset.embedding_model_provider,
-                model_type=ModelType.TEXT_EMBEDDING,
-                model=dataset.embedding_model
-            )
-        max_position = db.session.query(func.max(DocumentSegment.position)).filter(
-            DocumentSegment.document_id == document.id
-        ).scalar()
-        pre_segment_data_list = []
-        segment_data_list = []
-        keywords_list = []
-        for segment_item in segments:
-            content = segment_item['content']
-            doc_id = str(uuid.uuid4())
-            segment_hash = helper.generate_text_hash(content)
-            tokens = 0
-            if dataset.indexing_technique == 'high_quality' and embedding_model:
-                # calc embedding use tokens
-                model_type_instance = cast(TextEmbeddingModel, embedding_model.model_type_instance)
-                tokens = model_type_instance.get_num_tokens(
-                    model=embedding_model.model,
-                    credentials=embedding_model.credentials,
-                    texts=[content]
-                )
            lock_name = 'add_segment_lock_document_id_{}'.format(document.id)
            with redis_client.lock(lock_name, timeout=600):
                max_position = db.session.query(func.max(DocumentSegment.position)).filter(
                    DocumentSegment.document_id == document.id
                ).scalar()
                segment_document = DocumentSegment(
                    tenant_id=current_user.current_tenant_id,
                    dataset_id=document.dataset_id,

@@ -1129,25 +1067,91 @@ class SegmentService:
                    created_by=current_user.id
                )
                if document.doc_form == 'qa_model':
-                    segment_document.answer = segment_item['answer']
                    segment_document.answer = args['answer']

                db.session.add(segment_document)
-                segment_data_list.append(segment_document)
                db.session.commit()

-                pre_segment_data_list.append(segment_document)
-                keywords_list.append(segment_item['keywords'])
-
-        try:
-            # save vector index
-            VectorService.create_segments_vector(keywords_list, pre_segment_data_list, dataset)
-        except Exception as e:
-            logging.exception("create segment index failed")
-            for segment_document in segment_data_list:
                # save vector index
                try:
                    VectorService.create_segments_vector([args['keywords']], [segment_document], dataset)
                except Exception as e:
                    logging.exception("create segment index failed")
                    segment_document.enabled = False
                    segment_document.disabled_at = datetime.datetime.utcnow()
                    segment_document.status = 'error'
                    segment_document.error = str(e)
                    db.session.commit()
-                db.session.commit()
-        return segment_data_list
                segment = db.session.query(DocumentSegment).filter(DocumentSegment.id == segment_document.id).first()
                return segment

    @classmethod
    def multi_create_segment(cls, segments: list, document: Document, dataset: Dataset):
        lock_name = 'multi_add_segment_lock_document_id_{}'.format(document.id)
        with redis_client.lock(lock_name, timeout=600):
            embedding_model = None
            if dataset.indexing_technique == 'high_quality':
                model_manager = ModelManager()
                embedding_model = model_manager.get_model_instance(
                    tenant_id=current_user.current_tenant_id,
                    provider=dataset.embedding_model_provider,
                    model_type=ModelType.TEXT_EMBEDDING,
                    model=dataset.embedding_model
                )
            max_position = db.session.query(func.max(DocumentSegment.position)).filter(
                DocumentSegment.document_id == document.id
            ).scalar()
            pre_segment_data_list = []
            segment_data_list = []
            keywords_list = []
            for segment_item in segments:
                content = segment_item['content']
                doc_id = str(uuid.uuid4())
                segment_hash = helper.generate_text_hash(content)
                tokens = 0
                if dataset.indexing_technique == 'high_quality' and embedding_model:
                    # calc embedding use tokens
                    model_type_instance = cast(TextEmbeddingModel, embedding_model.model_type_instance)
                    tokens = model_type_instance.get_num_tokens(
                        model=embedding_model.model,
                        credentials=embedding_model.credentials,
                        texts=[content]
                    )
                segment_document = DocumentSegment(
                    tenant_id=current_user.current_tenant_id,
                    dataset_id=document.dataset_id,
                    document_id=document.id,
                    index_node_id=doc_id,
                    index_node_hash=segment_hash,
                    position=max_position + 1 if max_position else 1,
                    content=content,
                    word_count=len(content),
                    tokens=tokens,
                    status='completed',
                    indexing_at=datetime.datetime.utcnow(),
                    completed_at=datetime.datetime.utcnow(),
                    created_by=current_user.id
                )
                if document.doc_form == 'qa_model':
                    segment_document.answer = segment_item['answer']
                db.session.add(segment_document)
                segment_data_list.append(segment_document)

                pre_segment_data_list.append(segment_document)
                keywords_list.append(segment_item['keywords'])

            try:
                # save vector index
                VectorService.create_segments_vector(keywords_list, pre_segment_data_list, dataset)
            except Exception as e:
                logging.exception("create segment index failed")
                for segment_document in segment_data_list:
                    segment_document.enabled = False
                    segment_document.disabled_at = datetime.datetime.utcnow()
                    segment_document.status = 'error'
                    segment_document.error = str(e)
                db.session.commit()
            return segment_data_list

    @classmethod
    def update_segment(cls, args: dict, segment: DocumentSegment, document: Document, dataset: Dataset):
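Both creation paths now compute `max(position)` and insert inside a Redis lock keyed by the document id, so two concurrent "add segment" requests can no longer read the same max position and write duplicate positions. A hedged sketch of the pattern (connection setup is illustrative):

```python
import redis

redis_client = redis.Redis()  # illustrative connection

def add_segment(document_id: str) -> None:
    lock_name = 'add_segment_lock_document_id_{}'.format(document_id)
    # blocks until acquired; auto-releases after 600s if the worker dies
    with redis_client.lock(lock_name, timeout=600):
        # the read-modify-write is safe only while the lock is held:
        # max_position = SELECT max(position) ...
        # INSERT ... position = max_position + 1
        pass
```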
@@ -34,7 +34,7 @@ class VectorService:
            keyword = Keyword(dataset)

            if keywords_list and len(keywords_list) > 0:
-                keyword.add_texts(documents, keyword_list=keywords_list)
                keyword.add_texts(documents, keywords_list=keywords_list)
            else:
                keyword.add_texts(documents)
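A one-character fix: the call passed `keyword_list=` while the receiving method expects `keywords_list=`, so every keyword-bearing call raised at runtime. A minimal sketch of how Python surfaces this class of bug (function is illustrative):

```python
def add_texts(documents, keywords_list=None):
    # keyword data only arrives if the argument name matches exactly
    return (documents, keywords_list)

try:
    add_texts(['doc'], keyword_list=[['kw']])  # misspelled keyword argument
except TypeError as e:
    print(e)  # add_texts() got an unexpected keyword argument 'keyword_list'

print(add_texts(['doc'], keywords_list=[['kw']]))  # correct spelling works
```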
@@ -46,16 +46,16 @@ def clean_dataset_task(dataset_id: str, tenant_id: str, indexing_technique: str,

        if documents is None or len(documents) == 0:
            logging.info(click.style('No documents found for dataset: {}'.format(dataset_id), fg='green'))
-            return
        else:
            logging.info(click.style('Cleaning documents for dataset: {}'.format(dataset_id), fg='green'))
            index_processor = IndexProcessorFactory(doc_form).init_index_processor()
            index_processor.clean(dataset, None)

-        index_processor = IndexProcessorFactory(doc_form).init_index_processor()
-        index_processor.clean(dataset, None)
            for document in documents:
                db.session.delete(document)

-        for document in documents:
-            db.session.delete(document)

-        for segment in segments:
-            db.session.delete(segment)
            for segment in segments:
                db.session.delete(segment)

        db.session.query(DatasetProcessRule).filter(DatasetProcessRule.dataset_id == dataset_id).delete()
        db.session.query(DatasetQuery).filter(DatasetQuery.dataset_id == dataset_id).delete()
@@ -2,7 +2,7 @@ version: '3'
services:
  # API service
  api:
-    image: langgenius/dify-api:0.6.1
    image: langgenius/dify-api:0.6.2
    restart: always
    environment:
      # Startup mode, 'api' starts the API server.

@@ -150,7 +150,7 @@ services:
  # worker service
  # The Celery worker for processing the queue.
  worker:
-    image: langgenius/dify-api:0.6.1
    image: langgenius/dify-api:0.6.2
    restart: always
    environment:
      # Startup mode, 'worker' starts the Celery worker for processing the queue.

@@ -232,7 +232,7 @@ services:

  # Frontend web application.
  web:
-    image: langgenius/dify-web:0.6.1
    image: langgenius/dify-web:0.6.2
    restart: always
    environment:
      EDITION: SELF_HOSTED
@@ -15,6 +15,10 @@ COPY yarn.lock .

RUN yarn install --frozen-lockfile

# if you located in China, you can use taobao registry to speed up
# RUN yarn install --frozen-lockfile --registry https://registry.npm.taobao.org/


# build resources
FROM base as builder
@@ -17,7 +17,7 @@ import Switch from '@/app/components/base/switch'
import { ChangeType, InputVarType } from '@/app/components/workflow/types'

const TEXT_MAX_LENGTH = 256
-const PARAGRAPH_MAX_LENGTH = 1024
const PARAGRAPH_MAX_LENGTH = 1032 * 32

export type IConfigModalProps = {
  isCreate?: boolean
@@ -112,7 +112,7 @@ const DebugItem: FC<DebugItemProps> = ({
      </div>
      <div style={{ height: 'calc(100% - 40px)' }}>
        {
-          mode === 'chat' && currentProvider && currentModel && currentModel.status === ModelStatusEnum.active && (
          (mode === 'chat' || mode === 'agent-chat') && currentProvider && currentModel && currentModel.status === ModelStatusEnum.active && (
            <ChatItem modelAndParameter={modelAndParameter} />
          )
        }
@@ -27,6 +27,7 @@ const DebugWithMultipleModel = () => {
    checkCanSend,
  } = useDebugWithMultipleModelContext()
  const { eventEmitter } = useEventEmitterContextContext()
  const isChatMode = mode === 'chat' || mode === 'agent-chat'

  const handleSend = useCallback((message: string, files?: VisionFile[]) => {
    if (checkCanSend && !checkCanSend())

@@ -97,7 +98,7 @@ const DebugWithMultipleModel = () => {
        className={`
          grow mb-3 relative px-6 overflow-auto
        `}
-        style={{ height: mode === 'chat' ? 'calc(100% - 60px)' : '100%' }}
        style={{ height: isChatMode ? 'calc(100% - 60px)' : '100%' }}
      >
        {
          multipleModelConfigs.map((modelConfig, index) => (

@@ -121,7 +122,7 @@ const DebugWithMultipleModel = () => {
        }
      </div>
      {
-        mode === 'chat' && (
        isChatMode && (
          <div className='shrink-0 pb-4 px-6'>
            <ChatInput
              onSend={handleSend}
@@ -229,7 +229,7 @@ export const useChat = (

    // answer
    const responseItem: ChatItem = {
-      id: `${Date.now()}`,
      id: placeholderAnswerId,
      content: '',
      agent_thoughts: [],
      message_files: [],
@@ -31,7 +31,7 @@ const VariableValueBlock = () => {
    if (matchArr === null)
      return null

-    const hashtagLength = matchArr[3].length + 4
    const hashtagLength = matchArr[0].length
    const startOffset = matchArr.index
    const endOffset = startOffset + hashtagLength
    return {
@@ -1,5 +1,5 @@
export function getHashtagRegexString(): string {
-  const hashtag = '(\{)(\{)([a-zA-Z_][a-zA-Z0-9_]{0,29})(\})(\})'
  const hashtag = '\\{\\{[a-zA-Z_][a-zA-Z0-9_]{0,29}\\}\\}'

  return hashtag
}
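The old pattern relied on capture groups (hence `matchArr[3].length + 4`) and under-escaped the braces inside a string literal; the new pattern escapes `{{` and `}}` properly and uses no groups, which is why the companion change above can simply take `matchArr[0].length`, the length of the whole match. A hedged sketch of the same pattern in Python's `re` (the behavior, not this file's API):

```python
import re

# matches {{identifier}} with 1-30 identifier characters
HASHTAG = re.compile(r'\{\{[a-zA-Z_][a-zA-Z0-9_]{0,29}\}\}')

m = HASHTAG.search('Hello {{user_name}}!')
print(m.group(0))       # {{user_name}}
print(len(m.group(0)))  # whole-match length, no group arithmetic needed
```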
@@ -56,7 +56,9 @@ const TagInput: FC<TagInputProps> = ({
      }

      onChange([...items, valueTrimed])
-      setValue('')
      setTimeout(() => {
        setValue('')
      })
    }
  }
@@ -1,11 +1,10 @@
import type { FC } from 'react'
-import React, { useCallback, useEffect, useMemo } from 'react'
import React, { useCallback, useEffect, useMemo, useRef, useState } from 'react'
import useSWR from 'swr'
import { useRouter } from 'next/navigation'
import { useTranslation } from 'react-i18next'
import { omit } from 'lodash-es'
import { ArrowRightIcon } from '@heroicons/react/24/solid'
-import { useGetState } from 'ahooks'
import cn from 'classnames'
import s from './index.module.css'
import { FieldInfo } from '@/app/components/datasets/documents/detail/metadata'

@@ -89,27 +88,35 @@ const EmbeddingProcess: FC<Props> = ({ datasetId, batchId, documents = [], index

  const getFirstDocument = documents[0]

-  const [indexingStatusBatchDetail, setIndexingStatusDetail, getIndexingStatusDetail] = useGetState<IndexingStatusResponse[]>([])
  const [indexingStatusBatchDetail, setIndexingStatusDetail] = useState<IndexingStatusResponse[]>([])
  const fetchIndexingStatus = async () => {
    const status = await doFetchIndexingStatus({ datasetId, batchId })
    setIndexingStatusDetail(status.data)
    return status.data
  }

-  const [_, setRunId, getRunId] = useGetState<ReturnType<typeof setInterval>>()
  // const [_, setRunId, getRunId] = useGetState<ReturnType<typeof setInterval>>()
  const [runId, setRunId] = useState<ReturnType<typeof setInterval>>()
  const runIdRef = useRef(runId)
  const getRunId = () => runIdRef.current
  useEffect(() => {
    runIdRef.current = runId
  }, [runId])
  const stopQueryStatus = () => {
    clearInterval(getRunId())
    setRunId(undefined)
  }

  const startQueryStatus = () => {
-    const runId = setInterval(() => {
-      const indexingStatusBatchDetail = getIndexingStatusDetail()
-      const isCompleted = indexingStatusBatchDetail.every(indexingStatusDetail => ['completed', 'error'].includes(indexingStatusDetail.indexing_status))
-      if (isCompleted) {
-        stopQueryStatus()
-      }
-      fetchIndexingStatus()
    const runId = setInterval(async () => {
      // It's so strange that the interval can't be cleared after the clearInterval called. And the runId is current.
      if (!getRunId())
        return

      const indexingStatusBatchDetail = await fetchIndexingStatus()
      const isCompleted = indexingStatusBatchDetail.every(indexingStatusDetail => ['completed', 'error'].includes(indexingStatusDetail.indexing_status))
      if (isCompleted)
        stopQueryStatus()
    }, 2500)
    setRunId(runId)
  }

@@ -221,11 +228,11 @@ const EmbeddingProcess: FC<Props> = ({ datasetId, batchId, documents = [], index
                indexingStatusDetail.indexing_status === 'completed' && s.success,
              )}>
                {isSourceEmbedding(indexingStatusDetail) && (
-                  <div className={s.progressbar} style={{ width: `${getSourcePercent(indexingStatusDetail)}%` }}/>
                  <div className={s.progressbar} style={{ width: `${getSourcePercent(indexingStatusDetail)}%` }} />
                )}
                <div className={`${s.info} grow`}>
                  {getSourceType(indexingStatusDetail.id) === DataSourceType.FILE && (
-                    <div className={cn(s.fileIcon, s[getFileType(getSourceName(indexingStatusDetail.id))])}/>
                    <div className={cn(s.fileIcon, s[getFileType(getSourceName(indexingStatusDetail.id))])} />
                  )}
                  {getSourceType(indexingStatusDetail.id) === DataSourceType.NOTION && (
                    <NotionIcon
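The polling rewrite keeps the live interval id in a ref so the tick callback always sees the current value, guards each tick (`if (!getRunId()) return`) so a tick already queued when the interval was cleared does no work, and awaits a fresh fetch instead of reading possibly stale state. A hedged sketch of the same guard-then-work polling shape in Python (class and timings are illustrative, not a port of this component):

```python
import threading

class Poller:
    def __init__(self, interval_s: float = 2.5):
        self.interval_s = interval_s
        self._timer = None  # the "runId": currently scheduled tick, or None

    def _tick(self):
        if self._timer is None:  # guard: a stale tick after stop() does nothing
            return
        status = self.fetch_status()  # always fetch fresh state, never cache it
        if status in ('completed', 'error'):
            self.stop()
            return
        self._schedule()  # re-arm for the next tick

    def _schedule(self):
        self._timer = threading.Timer(self.interval_s, self._tick)
        self._timer.start()

    def fetch_status(self) -> str:
        return 'completed'  # stand-in for the real status request

    def start(self):
        self._schedule()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()
        self._timer = None

Poller().start()
```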
@@ -43,7 +43,10 @@ export const useNodesInteractions = () => {
  const workflowStore = useWorkflowStore()
  const nodesExtraData = useNodesExtraData()
  const { handleSyncWorkflowDraft } = useNodesSyncDraft()
-  const { getAfterNodesInSameBranch } = useWorkflow()
  const {
    getAfterNodesInSameBranch,
    getTreeLeafNodes,
  } = useWorkflow()
  const { getNodesReadOnly } = useNodesReadOnly()
  const dragNodeStartPosition = useRef({ x: 0, y: 0 } as { x: number; y: number })
  const connectingNodeRef = useRef<{ nodeId: string; handleType: HandleType } | null>(null)

@@ -313,6 +316,13 @@ export const useNodesInteractions = () => {
      setEdges,
    } = store.getState()
    const nodes = getNodes()
    const targetNode = nodes.find(node => node.id === target!)
    if (targetNode && targetNode?.data.type === BlockEnum.VariableAssigner) {
      const treeNodes = getTreeLeafNodes(target!)

      if (!treeNodes.find(treeNode => treeNode.id === source))
        return
    }
    const needDeleteEdges = edges.filter((edge) => {
      if (edge.source === source) {
        if (edge.sourceHandle)

@@ -368,7 +378,7 @@ export const useNodesInteractions = () => {
    })
    setEdges(newEdges)
    handleSyncWorkflowDraft()
-  }, [store, handleSyncWorkflowDraft, getNodesReadOnly])
  }, [store, handleSyncWorkflowDraft, getNodesReadOnly, getTreeLeafNodes])

  const handleNodeConnectStart = useCallback<OnConnectStart>((_, { nodeId, handleType }) => {
    if (nodeId && handleType) {
@@ -79,8 +79,7 @@ const Node: FC<NodeProps<EndNodeType>> = ({
            </div>
          </div>
-          <div className='text-xs font-normal text-gray-700'>
-            <div className='ml-0.5 text-xs font-normal text-gray-500 capitalize'>{getVarType(node?.id || '', value_selector)}</div>
          <div className='max-w-[42px] ml-0.5 text-xs font-normal text-gray-500 capitalize truncate' title={getVarType(node?.id || '', value_selector)}>{getVarType(node?.id || '', value_selector)}</div>
        </div>
      </div>
    )
@@ -0,0 +1,107 @@ (new file)
'use client'
import type { FC } from 'react'
import React, { useEffect, useState } from 'react'
import { uniqueId } from 'lodash-es'
import { useTranslation } from 'react-i18next'
import type { PromptItem } from '../../../types'
import Editor from '@/app/components/workflow/nodes/_base/components/prompt/editor'
import TypeSelector from '@/app/components/workflow/nodes/_base/components/selector'
import TooltipPlus from '@/app/components/base/tooltip-plus'
import { HelpCircle } from '@/app/components/base/icons/src/vender/line/general'
import { PromptRole } from '@/models/debug'

const i18nPrefix = 'workflow.nodes.llm'

type Props = {
  readOnly: boolean
  id: string
  canRemove: boolean
  isChatModel: boolean
  isChatApp: boolean
  payload: PromptItem
  handleChatModeMessageRoleChange: (role: PromptRole) => void
  onPromptChange: (p: string) => void
  onRemove: () => void
  isShowContext: boolean
  hasSetBlockStatus: {
    context: boolean
    history: boolean
    query: boolean
  }
  availableVars: any
  availableNodes: any
}

const roleOptions = [
  {
    label: 'system',
    value: PromptRole.system,
  },
  {
    label: 'user',
    value: PromptRole.user,
  },
  {
    label: 'assistant',
    value: PromptRole.assistant,
  },
]

const ConfigPromptItem: FC<Props> = ({
  readOnly,
  id,
  canRemove,
  handleChatModeMessageRoleChange,
  isChatModel,
  isChatApp,
  payload,
  onPromptChange,
  onRemove,
  isShowContext,
  hasSetBlockStatus,
  availableVars,
  availableNodes,
}) => {
  const { t } = useTranslation()
  const [instanceId, setInstanceId] = useState(uniqueId())
  useEffect(() => {
    setInstanceId(`${id}-${uniqueId()}`)
  }, [id])
  return (
    <Editor
      instanceId={instanceId}
      key={instanceId}
      title={
        <div className='relative left-1 flex items-center'>
          <TypeSelector
            value={payload.role as string}
            options={roleOptions}
            onChange={handleChatModeMessageRoleChange}
            triggerClassName='text-xs font-semibold text-gray-700 uppercase'
            itemClassName='text-[13px] font-medium text-gray-700'
          />
          <TooltipPlus
            popupContent={
              <div className='max-w-[180px]'>{t(`${i18nPrefix}.roleDescription.${payload.role}`)}</div>
            }
          >
            <HelpCircle className='w-3.5 h-3.5 text-gray-400' />
          </TooltipPlus>
        </div>
      }
      value={payload.text}
      onChange={onPromptChange}
      readOnly={readOnly}
      showRemove={canRemove}
      onRemove={onRemove}
      isChatModel={isChatModel}
      isChatApp={isChatApp}
      isShowContext={isShowContext}
      hasSetBlockStatus={hasSetBlockStatus}
      nodesOutputVars={availableVars}
      availableNodes={availableNodes}
    />
  )
}
export default React.memo(ConfigPromptItem)
@@ -6,11 +6,9 @@ import produce from 'immer'
import type { PromptItem, ValueSelector, Var } from '../../../types'
import { PromptRole } from '../../../types'
import useAvailableVarList from '../../_base/hooks/use-available-var-list'
import ConfigPromptItem from './config-prompt-item'
import Editor from '@/app/components/workflow/nodes/_base/components/prompt/editor'
import AddButton from '@/app/components/workflow/nodes/_base/components/add-button'
-import TypeSelector from '@/app/components/workflow/nodes/_base/components/selector'
-import TooltipPlus from '@/app/components/base/tooltip-plus'
-import { HelpCircle } from '@/app/components/base/icons/src/vender/line/general'
const i18nPrefix = 'workflow.nodes.llm'

type Props = {

@@ -58,21 +56,6 @@ const ConfigPrompt: FC<Props> = ({
    }
  }, [onChange, payload])

-  const roleOptions = [
-    {
-      label: 'system',
-      value: PromptRole.system,
-    },
-    {
-      label: 'user',
-      value: PromptRole.user,
-    },
-    {
-      label: 'assistant',
-      value: PromptRole.assistant,
-    },
-  ]

  const handleChatModeMessageRoleChange = useCallback((index: number) => {
    return (role: PromptRole) => {
      const newPrompt = produce(payload as PromptItem[], (draft) => {

@@ -117,37 +100,20 @@ const ConfigPrompt: FC<Props> = ({
            {
              (payload as PromptItem[]).map((item, index) => {
                return (
-                  <Editor
-                    instanceId={`${nodeId}-chat-workflow-llm-prompt-editor-${item.role}-${index}`}
-                    key={index}
-                    title={
-                      <div className='relative left-1 flex items-center'>
-                        <TypeSelector
-                          value={item.role as string}
-                          options={roleOptions}
-                          onChange={handleChatModeMessageRoleChange(index)}
-                          triggerClassName='text-xs font-semibold text-gray-700 uppercase'
-                          itemClassName='text-[13px] font-medium text-gray-700'
-                        />
-                        <TooltipPlus
-                          popupContent={
-                            <div className='max-w-[180px]'>{t(`${i18nPrefix}.roleDescription.${item.role}`)}</div>
-                          }
-                        >
-                          <HelpCircle className='w-3.5 h-3.5 text-gray-400' />
-                        </TooltipPlus>
-                      </div>
-                    }
-                    value={item.text}
-                    onChange={handleChatModePromptChange(index)}
                  <ConfigPromptItem
                    key={`${payload.length}-${index}`}
                    canRemove={payload.length > 1}
                    readOnly={readOnly}
-                    showRemove={(payload as PromptItem[]).length > 1}
-                    onRemove={handleRemove(index)}
                    id={`${payload.length}-${index}`}
                    handleChatModeMessageRoleChange={handleChatModeMessageRoleChange(index)}
                    isChatModel={isChatModel}
                    isChatApp={isChatApp}
                    payload={item}
                    onPromptChange={handleChatModePromptChange(index)}
                    onRemove={handleRemove(index)}
                    isShowContext={isShowContext}
                    hasSetBlockStatus={hasSetBlockStatus}
-                    nodesOutputVars={availableVars}
                    availableVars={availableVars}
                    availableNodes={availableNodes}
                  />
                )
@@ -6,6 +6,7 @@ import produce from 'immer'
import RemoveButton from '../../../_base/components/remove-button'
import VarReferencePicker from '@/app/components/workflow/nodes/_base/components/variable/var-reference-picker'
import type { ValueSelector, Var } from '@/app/components/workflow/types'
import { VarType as VarKindType } from '@/app/components/workflow/nodes/tool/types'

type Props = {
  readonly: boolean

@@ -71,6 +72,7 @@ const VarList: FC<Props> = ({
            onOpen={handleOpen(index)}
            onlyLeafNodeVar={onlyLeafNodeVar}
            filterVar={filterVar}
            defaultVarKindType={VarKindType.variable}
          />
          {!readonly && (
            <RemoveButton
@@ -59,12 +59,12 @@ const Node: FC<NodeProps<VariableAssignerNodeType>> = (props) => {
                  type={(node?.data.type as BlockEnum) || BlockEnum.Start}
                />
              </div>
-              <div className='mx-0.5 text-xs font-medium text-gray-700'>{node?.data.title}</div>
              <div className='max-w-[85px] truncate mx-0.5 text-xs font-medium text-gray-700' title={node?.data.title}>{node?.data.title}</div>
              <Line3 className='mr-0.5'></Line3>
            </div>
            <div className='flex items-center text-primary-600'>
              <Variable02 className='w-3.5 h-3.5' />
-              <div className='ml-0.5 text-xs font-medium'>{varName}</div>
              <div className='max-w-[75px] truncate ml-0.5 text-xs font-medium' title={varName}>{varName}</div>
            </div>
            {/* <div className='ml-0.5 text-xs font-normal text-gray-500'>{output_type}</div> */}
          </div>
@@ -96,7 +96,7 @@ const NodePanel: FC<Props> = ({ nodeInfo, hideInfo = false }) => {
          <div className={cn('px-[10px] py-1', hideInfo && '!px-2 !py-0.5')}>
            <CodeEditor
              readOnly
-              title={<div>INPUT</div>}
              title={<div>{t('workflow.common.input').toLocaleUpperCase()}</div>}
              language={CodeLanguage.json}
              value={nodeInfo.inputs}
              isJSONStringifyBeauty

@@ -107,7 +107,7 @@ const NodePanel: FC<Props> = ({ nodeInfo, hideInfo = false }) => {
          <div className={cn('px-[10px] py-1', hideInfo && '!px-2 !py-0.5')}>
            <CodeEditor
              readOnly
-              title={<div>PROCESS DATA</div>}
              title={<div>{t('workflow.common.processData').toLocaleUpperCase()}</div>}
              language={CodeLanguage.json}
              value={nodeInfo.process_data}
              isJSONStringifyBeauty

@@ -118,7 +118,7 @@ const NodePanel: FC<Props> = ({ nodeInfo, hideInfo = false }) => {
          <div className={cn('px-[10px] py-1', hideInfo && '!px-2 !py-0.5')}>
            <CodeEditor
              readOnly
-              title={<div>OUTPUT</div>}
              title={<div>{t('workflow.common.output').toLocaleUpperCase()}</div>}
              language={CodeLanguage.json}
              value={nodeInfo.outputs}
              isJSONStringifyBeauty
@@ -1,5 +1,6 @@
'use client'
import type { FC } from 'react'
import { useTranslation } from 'react-i18next'
import StatusPanel from './status'
import MetaData from './meta'
import CodeEditor from '@/app/components/workflow/nodes/_base/components/editor/code-editor'

@@ -33,6 +34,7 @@ const ResultPanel: FC<ResultPanelProps> = ({
  steps,
  showSteps,
}) => {
  const { t } = useTranslation()
  return (
    <div className='bg-white py-2'>
      <div className='px-4 py-2'>

@@ -46,7 +48,7 @@ const ResultPanel: FC<ResultPanelProps> = ({
      <div className='px-4 py-2 flex flex-col gap-2'>
        <CodeEditor
          readOnly
-          title={<div>INPUT</div>}
          title={<div>{t('workflow.common.input').toLocaleUpperCase()}</div>}
          language={CodeLanguage.json}
          value={inputs}
          isJSONStringifyBeauty

@@ -54,7 +56,7 @@ const ResultPanel: FC<ResultPanelProps> = ({
        {process_data && (
          <CodeEditor
            readOnly
-            title={<div>PROCESS DATA</div>}
            title={<div>{t('workflow.common.processData').toLocaleUpperCase()}</div>}
            language={CodeLanguage.json}
            value={process_data}
            isJSONStringifyBeauty

@@ -63,7 +65,7 @@ const ResultPanel: FC<ResultPanelProps> = ({
        {(outputs || status === 'running') && (
          <CodeEditor
            readOnly
-            title={<div>OUTPUT</div>}
            title={<div>{t('workflow.common.output').toLocaleUpperCase()}</div>}
            language={CodeLanguage.json}
            value={outputs}
            isJSONStringifyBeauty
@@ -59,7 +59,7 @@ const getCycleEdges = (nodes: Node[], edges: Edge[]) => {
  }

  for (const edge of edges)
-    adjaList[edge.source].push(edge.target)
    adjaList[edge.source]?.push(edge.target)

  for (let i = 0; i < nodes.length; i++) {
    if (color[nodes[i].id] === WHITE)

@@ -143,14 +143,14 @@ export const initialEdges = (edges: Edge[], nodes: Node[]) => {
    if (!edge.targetHandle)
      edge.targetHandle = 'target'

-    if (!edge.data?.sourceType) {
    if (!edge.data?.sourceType && edge.source) {
      edge.data = {
        ...edge.data,
        sourceType: nodesMap[edge.source].data.type!,
      } as any
    }

-    if (!edge.data?.targetType) {
    if (!edge.data?.targetType && edge.target) {
      edge.data = {
        ...edge.data,
        targetType: nodesMap[edge.target].data.type!,

@@ -214,14 +214,19 @@ export const getNodesConnectedSourceOrTargetHandleIdsMap = (changes: ConnectedSourceOrTargetNodesChange[], nodes: Node[]) => {
      type,
    } = change
    const sourceNode = nodes.find(node => node.id === edge.source)!
-    nodesConnectedSourceOrTargetHandleIdsMap[sourceNode.id] = nodesConnectedSourceOrTargetHandleIdsMap[sourceNode.id] || {
-      _connectedSourceHandleIds: [...(sourceNode?.data._connectedSourceHandleIds || [])],
-      _connectedTargetHandleIds: [...(sourceNode?.data._connectedTargetHandleIds || [])],
    if (sourceNode) {
      nodesConnectedSourceOrTargetHandleIdsMap[sourceNode.id] = nodesConnectedSourceOrTargetHandleIdsMap[sourceNode.id] || {
        _connectedSourceHandleIds: [...(sourceNode?.data._connectedSourceHandleIds || [])],
        _connectedTargetHandleIds: [...(sourceNode?.data._connectedTargetHandleIds || [])],
      }
    }

    const targetNode = nodes.find(node => node.id === edge.target)!
-    nodesConnectedSourceOrTargetHandleIdsMap[targetNode.id] = nodesConnectedSourceOrTargetHandleIdsMap[targetNode.id] || {
-      _connectedSourceHandleIds: [...(targetNode?.data._connectedSourceHandleIds || [])],
-      _connectedTargetHandleIds: [...(targetNode?.data._connectedTargetHandleIds || [])],
    if (targetNode) {
      nodesConnectedSourceOrTargetHandleIdsMap[targetNode.id] = nodesConnectedSourceOrTargetHandleIdsMap[targetNode.id] || {
        _connectedSourceHandleIds: [...(targetNode?.data._connectedSourceHandleIds || [])],
        _connectedTargetHandleIds: [...(targetNode?.data._connectedTargetHandleIds || [])],
      }
    }

    if (sourceNode) {
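`getCycleEdges` walks the adjacency list with the classic white/grey/black DFS coloring, and the `?.` fix above lets the walk tolerate edges whose source node is missing from the list. A hedged sketch of the same coloring scheme in Python, simplified to report whether any cycle exists (names illustrative):

```python
WHITE, GRAY, BLACK = 0, 1, 2

def has_cycle(adj: dict[str, list[str]]) -> bool:
    color = {node: WHITE for node in adj}

    def dfs(node: str) -> bool:
        color[node] = GRAY               # node is on the current DFS path
        for nxt in adj.get(node, []):    # .get mirrors the ?. null-guard
            if color.get(nxt, WHITE) == GRAY:
                return True              # back-edge found: cycle
            if color.get(nxt, WHITE) == WHITE and nxt in adj and dfs(nxt):
                return True
        color[node] = BLACK              # fully explored
        return False

    return any(color[n] == WHITE and dfs(n) for n in adj)

print(has_cycle({'a': ['b'], 'b': ['c'], 'c': ['a']}))  # True
print(has_cycle({'a': ['b'], 'b': ['c'], 'c': []}))     # False
```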
web/i18n/de-DE/app-annotation.ts (new file, 87 lines)
@@ -0,0 +1,87 @@
const translation = {
  title: 'Anmerkungen',
  name: 'Antwort Anmerkung',
  editBy: 'Antwort bearbeitet von {{author}}',
  noData: {
    title: 'Keine Anmerkungen',
    description: 'Sie können Anmerkungen während des App-Debuggings bearbeiten oder hier Anmerkungen in großen Mengen importieren für eine hochwertige Antwort.',
  },
  table: {
    header: {
      question: 'Frage',
      answer: 'Antwort',
      createdAt: 'erstellt am',
      hits: 'Treffer',
      actions: 'Aktionen',
      addAnnotation: 'Anmerkung hinzufügen',
      bulkImport: 'Massenimport',
      bulkExport: 'Massenexport',
      clearAll: 'Alle Anmerkungen löschen',
    },
  },
  editModal: {
    title: 'Antwort Anmerkung bearbeiten',
    queryName: 'Benutzeranfrage',
    answerName: 'Geschichtenerzähler Bot',
    yourAnswer: 'Ihre Antwort',
    answerPlaceholder: 'Geben Sie hier Ihre Antwort ein',
    yourQuery: 'Ihre Anfrage',
    queryPlaceholder: 'Geben Sie hier Ihre Anfrage ein',
    removeThisCache: 'Diese Anmerkung entfernen',
    createdAt: 'Erstellt am',
  },
  addModal: {
    title: 'Antwort Anmerkung hinzufügen',
    queryName: 'Frage',
    answerName: 'Antwort',
    answerPlaceholder: 'Antwort hier eingeben',
    queryPlaceholder: 'Anfrage hier eingeben',
    createNext: 'Eine weitere annotierte Antwort hinzufügen',
  },
  batchModal: {
    title: 'Massenimport',
    csvUploadTitle: 'Ziehen Sie Ihre CSV-Datei hierher oder ',
    browse: 'durchsuchen',
    tip: 'Die CSV-Datei muss der folgenden Struktur entsprechen:',
    question: 'Frage',
    answer: 'Antwort',
    contentTitle: 'Inhaltsabschnitt',
    content: 'Inhalt',
    template: 'Laden Sie die Vorlage hier herunter',
    cancel: 'Abbrechen',
    run: 'Batch ausführen',
    runError: 'Batch-Ausführung fehlgeschlagen',
    processing: 'In Batch-Verarbeitung',
    completed: 'Import abgeschlossen',
    error: 'Importfehler',
    ok: 'OK',
  },
  errorMessage: {
    answerRequired: 'Antwort erforderlich',
    queryRequired: 'Frage erforderlich',
  },
  viewModal: {
    annotatedResponse: 'Antwort Anmerkung',
    hitHistory: 'Trefferhistorie',
    hit: 'Treffer',
    hits: 'Treffer',
    noHitHistory: 'Keine Trefferhistorie',
  },
  hitHistoryTable: {
    query: 'Anfrage',
    match: 'Übereinstimmung',
    response: 'Antwort',
    source: 'Quelle',
    score: 'Punktzahl',
    time: 'Zeit',
  },
  initSetup: {
    title: 'Initialeinrichtung Antwort Anmerkung',
    configTitle: 'Einrichtung Antwort Anmerkung',
    confirmBtn: 'Speichern & Aktivieren',
    configConfirmBtn: 'Speichern',
  },
  embeddingModelSwitchTip: 'Anmerkungstext-Vektorisierungsmodell, das Wechseln von Modellen wird neu eingebettet, was zusätzliche Kosten verursacht.',
}

export default translation
82
web/i18n/de-DE/app-api.ts
Normal file
82
web/i18n/de-DE/app-api.ts
Normal file
@ -0,0 +1,82 @@
const translation = {
  apiServer: 'API Server',
  apiKey: 'API Schlüssel',
  status: 'Status',
  disabled: 'Deaktiviert',
  ok: 'In Betrieb',
  copy: 'Kopieren',
  copied: 'Kopiert',
  play: 'Abspielen',
  pause: 'Pause',
  playing: 'Wiedergabe',
  merMaind: {
    rerender: 'Neu rendern',
  },
  never: 'Nie',
  apiKeyModal: {
    apiSecretKey: 'API Geheimschlüssel',
    apiSecretKeyTips: 'Um Missbrauch der API zu verhindern, schützen Sie Ihren API Schlüssel. Vermeiden Sie es, ihn als Klartext im Frontend-Code zu verwenden. :)',
    createNewSecretKey: 'Neuen Geheimschlüssel erstellen',
    secretKey: 'Geheimschlüssel',
    created: 'ERSTELLT',
    lastUsed: 'ZULETZT VERWENDET',
    generateTips: 'Bewahren Sie diesen Schlüssel an einem sicheren und zugänglichen Ort auf.',
  },
  actionMsg: {
    deleteConfirmTitle: 'Diesen Geheimschlüssel löschen?',
    deleteConfirmTips: 'Diese Aktion kann nicht rückgängig gemacht werden.',
    ok: 'OK',
  },
  completionMode: {
    title: 'Completion App API',
    info: 'Für die Erzeugung von hochwertigem Text, wie z.B. Artikel, Zusammenfassungen und Übersetzungen, verwenden Sie die Completion-Messages API mit Benutzereingaben. Die Texterzeugung basiert auf den Modellparametern und Vorlagen für Aufforderungen in Dify Prompt Engineering.',
    createCompletionApi: 'Completion Nachricht erstellen',
    createCompletionApiTip: 'Erstellen Sie eine Completion Nachricht, um den Frage-Antwort-Modus zu unterstützen.',
    inputsTips: '(Optional) Geben Sie Benutzereingabefelder als Schlüssel-Wert-Paare an, die Variablen in Prompt Eng. entsprechen. Schlüssel ist der Variablenname, Wert ist der Parameterwert. Wenn der Feldtyp Select ist, muss der übermittelte Wert eine der voreingestellten Optionen sein.',
    queryTips: 'Textinhalt der Benutzereingabe.',
    blocking: 'Blockierender Typ, wartet auf die Fertigstellung der Ausführung und gibt Ergebnisse zurück. (Anfragen können unterbrochen werden, wenn der Prozess lang ist)',
    streaming: 'Streaming Rückgaben. Implementierung der Streaming-Rückgabe basierend auf SSE (Server-Sent Events).',
    messageFeedbackApi: 'Nachrichtenfeedback (Like)',
    messageFeedbackApiTip: 'Bewerten Sie empfangene Nachrichten im Namen der Endbenutzer mit Likes oder Dislikes. Diese Daten sind auf der Seite Logs & Annotations sichtbar und werden für zukünftige Modellanpassungen verwendet.',
    messageIDTip: 'Nachrichten-ID',
    ratingTip: 'like oder dislike, null ist rückgängig machen',
    parametersApi: 'Anwendungsparameterinformationen abrufen',
    parametersApiTip: 'Abrufen konfigurierter Eingabeparameter, einschließlich Variablennamen, Feldnamen, Typen und Standardwerten. Typischerweise verwendet, um diese Felder in einem Formular anzuzeigen oder Standardwerte nach dem Laden des Clients auszufüllen.',
  },
  chatMode: {
    title: 'Chat App API',
    info: 'Für vielseitige Gesprächsanwendungen im Q&A-Format rufen Sie die chat-messages API auf, um einen Dialog zu initiieren. Führen Sie laufende Gespräche fort, indem Sie die zurückgegebene conversation_id übergeben. Antwortparameter und -vorlagen hängen von den Einstellungen in Dify Prompt Eng. ab.',
    createChatApi: 'Chatnachricht erstellen',
    createChatApiTip: 'Eine neue Konversationsnachricht erstellen oder einen bestehenden Dialog fortsetzen.',
    inputsTips: '(Optional) Geben Sie Benutzereingabefelder als Schlüssel-Wert-Paare an, die Variablen in Prompt Eng. entsprechen. Schlüssel ist der Variablenname, Wert ist der Parameterwert. Wenn der Feldtyp Select ist, muss der übermittelte Wert eine der voreingestellten Optionen sein.',
    queryTips: 'Inhalt der Benutzereingabe/Frage',
    blocking: 'Blockierender Typ, wartet auf die Fertigstellung der Ausführung und gibt Ergebnisse zurück. (Anfragen können unterbrochen werden, wenn der Prozess lang ist)',
    streaming: 'Streaming Rückgaben. Implementierung der Streaming-Rückgabe basierend auf SSE (Server-Sent Events).',
    conversationIdTip: '(Optional) Konversations-ID: für erstmalige Konversation leer lassen; conversation_id aus dem Kontext übergeben, um den Dialog fortzusetzen.',
    messageFeedbackApi: 'Nachrichtenfeedback des Endbenutzers, like',
    messageFeedbackApiTip: 'Bewerten Sie empfangene Nachrichten im Namen der Endbenutzer mit Likes oder Dislikes. Diese Daten sind auf der Seite Logs & Annotations sichtbar und werden für zukünftige Modellanpassungen verwendet.',
    messageIDTip: 'Nachrichten-ID',
    ratingTip: 'like oder dislike, null ist rückgängig machen',
    chatMsgHistoryApi: 'Chatverlaufsnachricht abrufen',
    chatMsgHistoryApiTip: 'Die erste Seite gibt die neuesten `limit` Einträge in umgekehrter Reihenfolge zurück.',
    chatMsgHistoryConversationIdTip: 'Konversations-ID',
    chatMsgHistoryFirstId: 'ID des ersten Chat-Datensatzes auf der aktuellen Seite. Standardmäßig keiner.',
    chatMsgHistoryLimit: 'Wie viele Chats in einer Anfrage zurückgegeben werden',
    conversationsListApi: 'Konversationsliste abrufen',
    conversationsListApiTip: 'Ruft die Sitzungsliste des aktuellen Benutzers ab. Standardmäßig werden die letzten 20 Sitzungen zurückgegeben.',
    conversationsListFirstIdTip: 'Die ID des letzten Datensatzes auf der aktuellen Seite, standardmäßig keine.',
    conversationsListLimitTip: 'Wie viele Chats in einer Anfrage zurückgegeben werden',
    conversationRenamingApi: 'Konversation umbenennen',
    conversationRenamingApiTip: 'Konversationen umbenennen; der Name wird in Mehrsitzungs-Client-Schnittstellen angezeigt.',
    conversationRenamingNameTip: 'Neuer Name',
    parametersApi: 'Anwendungsparameterinformationen abrufen',
    parametersApiTip: 'Abrufen konfigurierter Eingabeparameter, einschließlich Variablennamen, Feldnamen, Typen und Standardwerten. Typischerweise verwendet, um diese Felder in einem Formular anzuzeigen oder Standardwerte nach dem Laden des Clients auszufüllen.',
  },
  develop: {
    requestBody: 'Anfragekörper',
    pathParams: 'Pfadparameter',
    query: 'Anfrage',
  },
}

export default translation
409  web/i18n/de-DE/app-debug.ts  Normal file
@@ -0,0 +1,409 @@
const translation = {
  pageTitle: {
    line1: 'PROMPT',
    line2: 'Engineering',
  },
  orchestrate: 'Orchestrieren',
  promptMode: {
    simple: 'Wechseln Sie in den Expertenmodus, um das gesamte PROMPT zu bearbeiten',
    advanced: 'Expertenmodus',
    switchBack: 'Zurückwechseln',
    advancedWarning: {
      title: 'Sie haben in den Expertenmodus gewechselt, und sobald Sie das PROMPT ändern, können Sie NICHT zum Basis-Modus zurückkehren.',
      description: 'Im Expertenmodus können Sie das gesamte PROMPT bearbeiten.',
      learnMore: 'Mehr erfahren',
      ok: 'OK',
    },
    operation: {
      addMessage: 'Nachricht hinzufügen',
    },
    contextMissing: 'Komponente fehlt, die Wirksamkeit des Prompts könnte schlecht sein.',
  },
  operation: {
    applyConfig: 'Veröffentlichen',
    resetConfig: 'Zurücksetzen',
    debugConfig: 'Debuggen',
    addFeature: 'Funktion hinzufügen',
    automatic: 'Automatisch',
    stopResponding: 'Antworten stoppen',
    agree: 'gefällt mir',
    disagree: 'gefällt mir nicht',
    cancelAgree: 'Gefällt mir zurücknehmen',
    cancelDisagree: 'Gefällt mir nicht zurücknehmen',
    userAction: 'Benutzer ',
  },
  notSetAPIKey: {
    title: 'LLM-Anbieterschlüssel wurde nicht festgelegt',
    trailFinished: 'Testversion beendet',
    description: 'Der LLM-Anbieterschlüssel wurde nicht festgelegt und muss vor dem Debuggen festgelegt werden.',
    settingBtn: 'Zu den Einstellungen gehen',
  },
  trailUseGPT4Info: {
    title: 'Unterstützt derzeit kein gpt-4',
    description: 'Um gpt-4 zu verwenden, bitte API-Schlüssel festlegen.',
  },
  feature: {
    groupChat: {
      title: 'Chatverbesserung',
      description: 'Voreinstellungen für Konversationen zu Apps hinzufügen kann die Benutzererfahrung verbessern.',
    },
    groupExperience: {
      title: 'Erfahrungsverbesserung',
    },
    conversationOpener: {
      title: 'Gesprächseröffnungen',
      description: 'In einer Chat-App wird der erste Satz, den die KI aktiv an den Benutzer richtet, üblicherweise als Begrüßung verwendet.',
    },
    suggestedQuestionsAfterAnswer: {
      title: 'Nachfolgefragen',
      description: 'Das Einrichten von Vorschlägen für nächste Fragen kann den Chat für Benutzer verbessern.',
      resDes: '3 Vorschläge für die nächste Benutzerfrage.',
      tryToAsk: 'Versuchen Sie zu fragen',
    },
    moreLikeThis: {
      title: 'Mehr davon',
      description: 'Mehrere Texte gleichzeitig generieren und dann bearbeiten und weiter generieren',
      generateNumTip: 'Anzahl der generierten Texte pro Durchgang',
      tip: 'Die Verwendung dieser Funktion verursacht zusätzliche Token-Kosten',
    },
    speechToText: {
      title: 'Sprache zu Text',
      description: 'Einmal aktiviert, können Sie Spracheingabe verwenden.',
      resDes: 'Spracheingabe ist aktiviert',
    },
    textToSpeech: {
      title: 'Text zu Sprache',
      description: 'Einmal aktiviert, kann Text in Sprache umgewandelt werden.',
      resDes: 'Text zu Audio ist aktiviert',
    },
    citation: {
      title: 'Zitate und Urheberangaben',
      description: 'Einmal aktiviert, zeigen Sie das Quelldokument und den zugeordneten Abschnitt des generierten Inhalts an.',
      resDes: 'Zitate und Urheberangaben sind aktiviert',
    },
    annotation: {
      title: 'Annotation Antwort',
      description: 'Sie können manuell hochwertige Antworten zum Cache hinzufügen für bevorzugte Übereinstimmung mit ähnlichen Benutzerfragen.',
      resDes: 'Annotationsantwort ist aktiviert',
      scoreThreshold: {
        title: 'Schwellenwert',
        description: 'Wird verwendet, um den Ähnlichkeitsschwellenwert für die Annotation Antwort einzustellen.',
        easyMatch: 'Einfache Übereinstimmung',
        accurateMatch: 'Genaue Übereinstimmung',
      },
      matchVariable: {
        title: 'Übereinstimmungsvariable',
        choosePlaceholder: 'Wählen Sie Übereinstimmungsvariable',
      },
      cacheManagement: 'Annotationen',
      cached: 'Annotiert',
      remove: 'Entfernen',
      removeConfirm: 'Diese Annotation löschen?',
      add: 'Annotation hinzufügen',
      edit: 'Annotation bearbeiten',
    },
    dataSet: {
      title: 'Kontext',
      noData: 'Sie können Wissen als Kontext importieren',
      words: 'Wörter',
      textBlocks: 'Textblöcke',
      selectTitle: 'Wählen Sie Referenzwissen',
      selected: 'Wissen ausgewählt',
      noDataSet: 'Kein Wissen gefunden',
      toCreate: 'Erstellen gehen',
      notSupportSelectMulti: 'Unterstützt derzeit nur ein Wissen',
      queryVariable: {
        title: 'Abfragevariable',
        tip: 'Diese Variable wird als Eingabe für die Kontextabfrage verwendet, um kontextbezogene Informationen in Bezug auf die Eingabe dieser Variable zu erhalten.',
        choosePlaceholder: 'Wählen Sie Abfragevariable',
        noVar: 'Keine Variablen',
        noVarTip: 'Bitte erstellen Sie eine Variable im Variablenbereich',
        unableToQueryDataSet: 'Konnte das Wissen nicht abfragen',
        unableToQueryDataSetTip: 'Konnte das Wissen nicht erfolgreich abfragen, bitte wählen Sie eine Kontextabfragevariable im Kontextbereich.',
        ok: 'OK',
        contextVarNotEmpty: 'Kontextabfragevariable darf nicht leer sein',
        deleteContextVarTitle: 'Variable „{{varName}}“ löschen?',
        deleteContextVarTip: 'Diese Variable wurde als Kontextabfragevariable festgelegt und deren Entfernung wird die normale Verwendung des Wissens beeinträchtigen. Wenn Sie sie trotzdem löschen müssen, wählen Sie sie bitte im Kontextbereich erneut.',
      },
    },
    tools: {
      title: 'Werkzeuge',
      tips: 'Werkzeuge bieten eine standardisierte API-Aufrufmethode, die Benutzereingaben oder Variablen als Anfrageparameter für die Abfrage externer Daten als Kontext verwendet.',
      toolsInUse: '{{count}} Werkzeuge in Verwendung',
      modal: {
        title: 'Werkzeug',
        toolType: {
          title: 'Werkzeugtyp',
          placeholder: 'Bitte wählen Sie den Werkzeugtyp',
        },
        name: {
          title: 'Name',
          placeholder: 'Bitte geben Sie den Namen ein',
        },
        variableName: {
          title: 'Variablenname',
          placeholder: 'Bitte geben Sie den Variablennamen ein',
        },
      },
    },
    conversationHistory: {
      title: 'Konversationsverlauf',
      description: 'Präfixnamen für Konversationsrollen festlegen',
      tip: 'Der Konversationsverlauf ist nicht aktiviert, bitte fügen Sie <histories> im Prompt oben ein.',
      learnMore: 'Mehr erfahren',
      editModal: {
        title: 'Konversationsrollennamen bearbeiten',
        userPrefix: 'Benutzerpräfix',
        assistantPrefix: 'Assistentenpräfix',
      },
    },
    toolbox: {
      title: 'WERKZEUGKASTEN',
    },
    moderation: {
      title: 'Inhaltsmoderation',
      description: 'Sichern Sie die Ausgabe des Modells durch Verwendung der Moderations-API oder durch Pflege einer Liste sensibler Wörter.',
      allEnabled: 'INHALT von EINGABE/AUSGABE aktiviert',
      inputEnabled: 'INHALT von EINGABE aktiviert',
      outputEnabled: 'INHALT von AUSGABE aktiviert',
      modal: {
        title: 'Einstellungen zur Inhaltsmoderation',
        provider: {
          title: 'Anbieter',
          openai: 'OpenAI-Moderation',
          openaiTip: {
            prefix: 'OpenAI-Moderation erfordert einen konfigurierten OpenAI-API-Schlüssel in den ',
            suffix: '.',
          },
          keywords: 'Schlüsselwörter',
        },
        keywords: {
          tip: 'Jeweils eine pro Zeile, getrennt durch Zeilenumbrüche. Bis zu 100 Zeichen pro Zeile.',
          placeholder: 'Jeweils eine pro Zeile, getrennt durch Zeilenumbrüche',
          line: 'Zeile',
        },
        content: {
          input: 'INHALT der EINGABE moderieren',
          output: 'INHALT der AUSGABE moderieren',
          preset: 'Voreingestellte Antworten',
          placeholder: 'Inhalt der voreingestellten Antworten hier',
          condition: 'Moderation von INHALT der EINGABE und AUSGABE mindestens eine aktiviert',
          fromApi: 'Voreingestellte Antworten werden durch API zurückgegeben',
          errorMessage: 'Voreingestellte Antworten dürfen nicht leer sein',
          supportMarkdown: 'Markdown unterstützt',
        },
        openaiNotConfig: {
          before: 'OpenAI-Moderation erfordert einen konfigurierten OpenAI-API-Schlüssel in den',
          after: '',
        },
      },
    },
  },
  automatic: {
    title: 'Automatisierte Anwendungsorchestrierung',
    description: 'Beschreiben Sie Ihr Szenario, Dify wird eine Anwendung für Sie orchestrieren.',
    intendedAudience: 'Wer ist die Zielgruppe?',
    intendedAudiencePlaceHolder: 'z.B. Student',
    solveProblem: 'Welche Probleme hoffen sie, dass KI für sie lösen kann?',
    solveProblemPlaceHolder: 'z.B. Erkenntnisse extrahieren und Informationen aus langen Berichten und Artikeln zusammenfassen',
    generate: 'Generieren',
    audiencesRequired: 'Zielgruppe erforderlich',
    problemRequired: 'Problem erforderlich',
    resTitle: 'Wir haben die folgende Anwendung für Sie orchestriert.',
    apply: 'Diese Orchestrierung anwenden',
    noData: 'Beschreiben Sie Ihren Anwendungsfall links, die Orchestrierungsvorschau wird hier angezeigt.',
    loading: 'Orchestrieren der Anwendung für Sie...',
    overwriteTitle: 'Bestehende Konfiguration überschreiben?',
    overwriteMessage: 'Das Anwenden dieser Orchestrierung wird die bestehende Konfiguration überschreiben.',
  },
  resetConfig: {
    title: 'Zurücksetzen bestätigen?',
    message:
      'Zurücksetzen verwirft Änderungen und stellt die zuletzt veröffentlichte Konfiguration wieder her.',
  },
  errorMessage: {
    nameOfKeyRequired: 'Name des Schlüssels: {{key}} erforderlich',
    valueOfVarRequired: '{{key}} Wert darf nicht leer sein',
    queryRequired: 'Anfragetext ist erforderlich.',
    waitForResponse:
      'Bitte warten Sie auf die Antwort auf die vorherige Nachricht, um abzuschließen.',
    waitForBatchResponse:
      'Bitte warten Sie auf die Antwort auf die Stapelaufgabe, um abzuschließen.',
    notSelectModel: 'Bitte wählen Sie ein Modell',
    waitForImgUpload: 'Bitte warten Sie, bis das Bild hochgeladen ist',
  },
  chatSubTitle: 'Anweisungen',
  completionSubTitle: 'Vor-Prompt',
  promptTip:
    'Prompts leiten KI-Antworten mit Anweisungen und Einschränkungen. Fügen Sie Variablen wie {{input}} ein. Dieses Prompt wird den Benutzern nicht angezeigt.',
  formattingChangedTitle: 'Formatierung geändert',
  formattingChangedText:
    'Die Änderung der Formatierung wird den Debug-Bereich zurücksetzen, sind Sie sicher?',
  variableTitle: 'Variablen',
  variableTip:
    'Benutzer füllen Variablen in einem Formular aus, automatisches Ersetzen von Variablen im Prompt.',
  notSetVar: 'Variablen ermöglichen es Benutzern, Aufforderungswörter oder Eröffnungsbemerkungen einzuführen, wenn sie Formulare ausfüllen. Sie könnten versuchen, "{{input}}" im Prompt einzugeben.',
  autoAddVar: 'Im Vor-Prompt referenzierte undefinierte Variablen, möchten Sie sie im Benutzereingabeformular hinzufügen?',
  variableTable: {
    key: 'Variablenschlüssel',
    name: 'Name des Benutzereingabefelds',
    optional: 'Optional',
    type: 'Eingabetyp',
    action: 'Aktionen',
    typeString: 'String',
    typeSelect: 'Auswählen',
  },
  varKeyError: {
    canNoBeEmpty: 'Variablenschlüssel darf nicht leer sein',
    tooLong: 'Variablenschlüssel: {{key}} zu lang. Darf nicht länger als 30 Zeichen sein',
    notValid: 'Variablenschlüssel: {{key}} ist ungültig. Darf nur Buchstaben, Zahlen und Unterstriche enthalten',
    notStartWithNumber: 'Variablenschlüssel: {{key}} darf nicht mit einer Zahl beginnen',
    keyAlreadyExists: 'Variablenschlüssel: :{{key}} existiert bereits',
  },
  otherError: {
    promptNoBeEmpty: 'Prompt darf nicht leer sein',
    historyNoBeEmpty: 'Konversationsverlauf muss im Prompt gesetzt sein',
    queryNoBeEmpty: 'Anfrage muss im Prompt gesetzt sein',
  },
  variableConig: {
    modalTitle: 'Feldeinstellungen',
    description: 'Einstellung für Variable {{varName}}',
    fieldType: 'Feldtyp',
    string: 'Kurztext',
    paragraph: 'Absatz',
    select: 'Auswählen',
    notSet: 'Nicht gesetzt, versuchen Sie, {{input}} im Vor-Prompt zu tippen',
    stringTitle: 'Formular-Textfeldoptionen',
    maxLength: 'Maximale Länge',
    options: 'Optionen',
    addOption: 'Option hinzufügen',
    apiBasedVar: 'API-basierte Variable',
  },
  vision: {
    name: 'Vision',
    description: 'Vision zu aktivieren ermöglicht es dem Modell, Bilder aufzunehmen und Fragen dazu zu beantworten.',
    settings: 'Einstellungen',
    visionSettings: {
      title: 'Vision-Einstellungen',
      resolution: 'Auflösung',
      resolutionTooltip: `Niedrige Auflösung ermöglicht es dem Modell, eine Bildversion mit niedriger Auflösung von 512 x 512 zu erhalten und das Bild mit einem Budget von 65 Tokens darzustellen. Dies ermöglicht schnellere Antworten des API und verbraucht weniger Eingabetokens für Anwendungsfälle, die kein hohes Detail benötigen.
      \n
      Hohe Auflösung ermöglicht zunächst, dass das Modell das Bild mit niedriger Auflösung sieht und dann detaillierte Ausschnitte von Eingabebildern als 512px Quadrate basierend auf der Größe des Eingabebildes erstellt. Jeder der detaillierten Ausschnitte verwendet das doppelte Token-Budget für insgesamt 129 Tokens.`,
      high: 'Hoch',
      low: 'Niedrig',
      uploadMethod: 'Upload-Methode',
      both: 'Beides',
      localUpload: 'Lokaler Upload',
      url: 'URL',
      uploadLimit: 'Upload-Limit',
    },
  },
  voice: {
    name: 'Stimme',
    defaultDisplay: 'Standardstimme',
    description: 'Text-zu-Sprache-Stimmeinstellungen',
    settings: 'Einstellungen',
    voiceSettings: {
      title: 'Stimmeinstellungen',
      language: 'Sprache',
      resolutionTooltip: 'Text-zu-Sprache unterstützte Sprache.',
      voice: 'Stimme',
    },
  },
  openingStatement: {
    title: 'Gesprächseröffner',
    add: 'Hinzufügen',
    writeOpner: 'Eröffnung schreiben',
    placeholder: 'Schreiben Sie hier Ihre Eröffnungsnachricht, Sie können Variablen verwenden, versuchen Sie {{Variable}} zu tippen.',
    openingQuestion: 'Eröffnungsfragen',
    noDataPlaceHolder:
      'Den Dialog mit dem Benutzer zu beginnen, kann helfen, in konversationellen Anwendungen eine engere Verbindung mit ihnen herzustellen.',
    varTip: 'Sie können Variablen verwenden, versuchen Sie {{Variable}} zu tippen',
    tooShort: 'Für die Erzeugung von Eröffnungsbemerkungen für das Gespräch werden mindestens 20 Wörter des Anfangsprompts benötigt.',
    notIncludeKey: 'Das Anfangsprompt enthält nicht die Variable: {{key}}. Bitte fügen Sie sie dem Anfangsprompt hinzu.',
  },
  modelConfig: {
    model: 'Modell',
    setTone: 'Ton der Antworten festlegen',
    title: 'Modell und Parameter',
    modeType: {
      chat: 'Chat',
      completion: 'Vollständig',
    },
  },
  inputs: {
    title: 'Debug und Vorschau',
    noPrompt: 'Versuchen Sie, etwas Prompt im Vor-Prompt-Eingabefeld zu schreiben',
    userInputField: 'Benutzereingabefeld',
    noVar: 'Füllen Sie den Wert der Variable aus, der bei jedem Start einer neuen Sitzung automatisch im Prompt ersetzt wird.',
    chatVarTip:
      'Füllen Sie den Wert der Variable aus, der bei jedem Start einer neuen Sitzung automatisch im Prompt ersetzt wird',
    completionVarTip:
      'Füllen Sie den Wert der Variable aus, der bei jeder Einreichung einer Frage automatisch in den Prompt-Wörtern ersetzt wird.',
    previewTitle: 'Prompt-Vorschau',
    queryTitle: 'Anfrageinhalt',
    queryPlaceholder: 'Bitte geben Sie den Anfragetext ein.',
    run: 'AUSFÜHREN',
  },
  result: 'Ausgabetext',
  datasetConfig: {
    settingTitle: 'Abfragen-Einstellungen',
    retrieveOneWay: {
      title: 'N-zu-1-Abfrage',
      description: 'Basierend auf Benutzerabsicht und Beschreibungen des Wissens wählt der Agent autonom das beste Wissen für die Abfrage aus. Am besten für Anwendungen mit deutlichen, begrenzten Wissensgebieten.',
    },
    retrieveMultiWay: {
      title: 'Mehrwegabfrage',
      description: 'Basierend auf Benutzerabsicht werden Abfragen über alle Wissensbereiche hinweg durchgeführt, relevante Texte aus Mehrfachquellen abgerufen und die besten Ergebnisse, die der Benutzerabfrage entsprechen, nach einer Neubewertung ausgewählt. Konfiguration des Rerank-Modell-APIs erforderlich.',
    },
    rerankModelRequired: 'Rerank-Modell erforderlich',
    params: 'Parameter',
    top_k: 'Top K',
    top_kTip: 'Wird verwendet, um Abschnitte zu filtern, die am ähnlichsten zu Benutzerfragen sind. Das System wird auch dynamisch den Wert von Top K anpassen, entsprechend max_tokens des ausgewählten Modells.',
    score_threshold: 'Schwellenwert',
    score_thresholdTip: 'Wird verwendet, um den Ähnlichkeitsschwellenwert für die Abschnittsfilterung einzustellen.',
    retrieveChangeTip: 'Das Ändern des Indexmodus und des Abfragemodus kann Anwendungen beeinflussen, die mit diesem Wissen verbunden sind.',
  },
  debugAsSingleModel: 'Als Einzelmodell debuggen',
  debugAsMultipleModel: 'Als Mehrfachmodelle debuggen',
  duplicateModel: 'Duplizieren',
  publishAs: 'Veröffentlichen als',
  assistantType: {
    name: 'Assistententyp',
    chatAssistant: {
      name: 'Basisassistent',
      description: 'Erstellen eines chatbasierten Assistenten mit einem Großsprachmodell',
    },
    agentAssistant: {
      name: 'Agentenassistent',
      description: 'Erstellen eines intelligenten Agenten, der autonom Werkzeuge wählen kann, um Aufgaben zu erfüllen',
    },
  },
  agent: {
    agentMode: 'Agentenmodus',
    agentModeDes: 'Den Typ des Inferenzmodus für den Agenten festlegen',
    agentModeType: {
      ReACT: 'ReAct',
      functionCall: 'Funktionsaufruf',
    },
    setting: {
      name: 'Agenten-Einstellungen',
      description: 'Agentenassistenten-Einstellungen ermöglichen die Festlegung des Agentenmodus und erweiterte Funktionen wie integrierte Prompts, nur verfügbar im Agententyp.',
      maximumIterations: {
        name: 'Maximale Iterationen',
        description: 'Begrenzt die Anzahl der Iterationen, die ein Agentenassistent ausführen kann',
      },
    },
    buildInPrompt: 'Eingebautes Prompt',
    firstPrompt: 'Erstes Prompt',
    nextIteration: 'Nächste Iteration',
    promptPlaceholder: 'Schreiben Sie hier Ihr Prompt',
    tools: {
      name: 'Werkzeuge',
      description: 'Die Verwendung von Werkzeugen kann die Fähigkeiten von LLM erweitern, z.B. das Internet durchsuchen oder wissenschaftliche Berechnungen durchführen',
      enabled: 'Aktiviert',
    },
  },
}

export default translation
69  web/i18n/de-DE/app-log.ts  Normal file
@@ -0,0 +1,69 @@
const translation = {
  title: 'Protokolle',
  description: 'Die Protokolle zeichnen den Betriebsstatus der Anwendung auf, einschließlich Benutzereingaben und KI-Antworten.',
  dateTimeFormat: 'MM/DD/YYYY hh:mm A',
  table: {
    header: {
      time: 'Zeit',
      endUser: 'Endbenutzer',
      input: 'Eingabe',
      output: 'Ausgabe',
      summary: 'Titel',
      messageCount: 'Nachrichtenzahl',
      userRate: 'Benutzerbewertung',
      adminRate: 'Op. Bewertung',
    },
    pagination: {
      previous: 'Vorherige',
      next: 'Nächste',
    },
    empty: {
      noChat: 'Noch keine Konversation',
      noOutput: 'Keine Ausgabe',
      element: {
        title: 'Ist da jemand?',
        content: 'Beobachten und annotieren Sie hier die Interaktionen zwischen Endbenutzern und KI-Anwendungen, um die Genauigkeit der KI kontinuierlich zu verbessern. Sie können versuchen, die Web-App selbst <shareLink>zu teilen</shareLink> oder <testLink>zu testen</testLink>, und dann zu dieser Seite zurückkehren.',
      },
    },
  },
  detail: {
    time: 'Zeit',
    conversationId: 'Konversations-ID',
    promptTemplate: 'Prompt-Vorlage',
    promptTemplateBeforeChat: 'Prompt-Vorlage vor dem Chat · Als Systemnachricht',
    annotationTip: 'Verbesserungen markiert von {{user}}',
    timeConsuming: '',
    second: 's',
    tokenCost: 'Verbrauchte Token',
    loading: 'lädt',
    operation: {
      like: 'gefällt mir',
      dislike: 'gefällt mir nicht',
      addAnnotation: 'Verbesserung hinzufügen',
      editAnnotation: 'Verbesserung bearbeiten',
      annotationPlaceholder: 'Geben Sie die erwartete Antwort ein, die Sie möchten, dass die KI antwortet, welche für die Feinabstimmung des Modells und die kontinuierliche Verbesserung der Qualität der Textgenerierung in Zukunft verwendet werden kann.',
    },
    variables: 'Variablen',
    uploadImages: 'Hochgeladene Bilder',
  },
  filter: {
    period: {
      today: 'Heute',
      last7days: 'Letzte 7 Tage',
      last4weeks: 'Letzte 4 Wochen',
      last3months: 'Letzte 3 Monate',
      last12months: 'Letzte 12 Monate',
      monthToDate: 'Monat bis heute',
      quarterToDate: 'Quartal bis heute',
      yearToDate: 'Jahr bis heute',
      allTime: 'Gesamte Zeit',
    },
    annotation: {
      all: 'Alle',
      annotated: 'Markierte Verbesserungen ({{count}} Elemente)',
      not_annotated: 'Nicht annotiert',
    },
  },
}

export default translation
139  web/i18n/de-DE/app-overview.ts  Normal file
@@ -0,0 +1,139 @@
const translation = {
  welcome: {
    firstStepTip: 'Um zu beginnen,',
    enterKeyTip: 'geben Sie unten Ihren OpenAI-API-Schlüssel ein',
    getKeyTip: 'Holen Sie sich Ihren API-Schlüssel vom OpenAI-Dashboard',
    placeholder: 'Ihr OpenAI-API-Schlüssel (z.B. sk-xxxx)',
  },
  apiKeyInfo: {
    cloud: {
      trial: {
        title: 'Sie nutzen das Testkontingent von {{providerName}}.',
        description: 'Das Testkontingent wird für Ihre Testnutzung bereitgestellt. Bevor das Testkontingent aufgebraucht ist, richten Sie bitte Ihren eigenen Modellanbieter ein oder kaufen zusätzliches Kontingent.',
      },
      exhausted: {
        title: 'Ihr Testkontingent wurde aufgebraucht, bitte richten Sie Ihren APIKey ein.',
        description: 'Ihr Testkontingent ist aufgebraucht. Bitte richten Sie Ihren eigenen Modellanbieter ein oder kaufen zusätzliches Kontingent.',
      },
    },
    selfHost: {
      title: {
        row1: 'Um zu beginnen,',
        row2: 'richten Sie zuerst Ihren Modellanbieter ein.',
      },
    },
    callTimes: 'Aufrufzeiten',
    usedToken: 'Verwendetes Token',
    setAPIBtn: 'Zum Einrichten des Modellanbieters gehen',
    tryCloud: 'Oder probieren Sie die Cloud-Version von Dify mit kostenlosem Angebot aus',
  },
  overview: {
    title: 'Übersicht',
    appInfo: {
      explanation: 'Einsatzbereite AI-WebApp',
      accessibleAddress: 'Öffentliche URL',
      preview: 'Vorschau',
      regenerate: 'Regenerieren',
      preUseReminder: 'Bitte aktivieren Sie WebApp, bevor Sie fortfahren.',
      settings: {
        entry: 'Einstellungen',
        title: 'WebApp-Einstellungen',
        webName: 'WebApp-Name',
        webDesc: 'WebApp-Beschreibung',
        webDescTip: 'Dieser Text wird auf der Clientseite angezeigt und bietet grundlegende Anleitungen zur Verwendung der Anwendung',
        webDescPlaceholder: 'Geben Sie die Beschreibung der WebApp ein',
        language: 'Sprache',
        more: {
          entry: 'Mehr Einstellungen anzeigen',
          copyright: 'Urheberrecht',
          copyRightPlaceholder: 'Geben Sie den Namen des Autors oder der Organisation ein',
          privacyPolicy: 'Datenschutzrichtlinie',
          privacyPolicyPlaceholder: 'Geben Sie den Link zur Datenschutzrichtlinie ein',
          privacyPolicyTip: 'Hilft Besuchern zu verstehen, welche Daten die Anwendung sammelt, siehe Difys <privacyPolicyLink>Datenschutzrichtlinie</privacyPolicyLink>.',
        },
      },
      embedded: {
        entry: 'Eingebettet',
        title: 'Einbetten auf der Website',
        explanation: 'Wählen Sie die Art und Weise, wie die Chat-App auf Ihrer Website eingebettet wird',
        iframe: 'Um die Chat-App an einer beliebigen Stelle auf Ihrer Website hinzuzufügen, fügen Sie diesen iframe in Ihren HTML-Code ein.',
        scripts: 'Um eine Chat-App unten rechts auf Ihrer Website hinzuzufügen, fügen Sie diesen Code in Ihren HTML-Code ein.',
        chromePlugin: 'Installieren Sie die Dify Chatbot Chrome-Erweiterung',
        copied: 'Kopiert',
        copy: 'Kopieren',
      },
      qrcode: {
        title: 'QR-Code zum Teilen',
        scan: 'Teilen Sie die Anwendung per Scan',
        download: 'QR-Code herunterladen',
      },
      customize: {
        way: 'Art',
        entry: 'Anpassen',
        title: 'AI-WebApp anpassen',
        explanation: 'Sie können das Frontend der Web-App an Ihre Szenarien und Stilbedürfnisse anpassen.',
        way1: {
          name: 'Forken Sie den Client-Code, ändern Sie ihn und deployen Sie ihn auf Vercel (empfohlen)',
          step1: 'Forken Sie den Client-Code und ändern Sie ihn',
          step1Tip: 'Klicken Sie hier, um den Quellcode in Ihr GitHub-Konto zu forken und den Code zu ändern',
          step1Operation: 'Dify-WebClient',
          step2: 'Deployen auf Vercel',
          step2Tip: 'Klicken Sie hier, um das Repository in Vercel zu importieren und zu deployen',
          step2Operation: 'Repository importieren',
          step3: 'Umgebungsvariablen konfigurieren',
          step3Tip: 'Fügen Sie die folgenden Umgebungsvariablen in Vercel hinzu',
        },
        way2: {
          name: 'Clientseitigen Code schreiben, um die API aufzurufen, und ihn auf einem Server deployen',
          operation: 'Dokumentation',
        },
      },
    },
    apiInfo: {
      title: 'Backend-Service-API',
      explanation: 'Einfach in Ihre Anwendung integrierbar',
      accessibleAddress: 'Service-API-Endpunkt',
      doc: 'API-Referenz',
    },
    status: {
      running: 'In Betrieb',
      disable: 'Deaktivieren',
    },
  },
  analysis: {
    title: 'Analyse',
    ms: 'ms',
    tokenPS: 'Token/s',
    totalMessages: {
      title: 'Gesamtnachrichten',
      explanation: 'Tägliche AI-Interaktionszählung; Prompt-Engineering/Debugging ausgenommen.',
    },
    activeUsers: {
      title: 'Aktive Benutzer',
      explanation: 'Einzigartige Benutzer, die mit AI Q&A führen; Prompt-Engineering/Debugging ausgenommen.',
    },
    tokenUsage: {
      title: 'Token-Verbrauch',
      explanation: 'Spiegelt den täglichen Token-Verbrauch des Sprachmodells für die Anwendung wider, nützlich für Kostenkontrollzwecke.',
      consumed: 'Verbraucht',
    },
    avgSessionInteractions: {
      title: 'Durchschn. Sitzungsinteraktionen',
      explanation: 'Fortlaufende Benutzer-KI-Kommunikationszählung; für konversationsbasierte Apps.',
    },
    userSatisfactionRate: {
      title: 'Benutzerzufriedenheitsrate',
      explanation: 'Die Anzahl der Likes pro 1.000 Nachrichten. Dies zeigt den Anteil der Antworten an, mit denen die Benutzer sehr zufrieden sind.',
    },
    avgResponseTime: {
      title: 'Durchschn. Antwortzeit',
      explanation: 'Zeit (ms) für die AI, um zu verarbeiten/antworten; für textbasierte Apps.',
    },
    tps: {
      title: 'Token-Ausgabegeschwindigkeit',
      explanation: 'Misst die Leistung des LLM. Zählt die Token-Ausgabegeschwindigkeit des LLM vom Beginn der Anfrage bis zum Abschluss der Ausgabe.',
    },
  },
}

export default translation
54  web/i18n/de-DE/app.ts  Normal file
@@ -0,0 +1,54 @@
const translation = {
  createApp: 'Neue App erstellen',
  types: {
    all: 'Alle',
    assistant: 'Assistent',
    completion: 'Vervollständigung',
  },
  modes: {
    completion: 'Textgenerator',
    chat: 'Basisassistent',
  },
  createFromConfigFile: 'App aus Konfigurationsdatei erstellen',
  deleteAppConfirmTitle: 'Diese App löschen?',
  deleteAppConfirmContent:
    'Das Löschen der App ist unwiderruflich. Nutzer werden keinen Zugang mehr zu Ihrer App haben, und alle Prompt-Konfigurationen und Logs werden dauerhaft gelöscht.',
  appDeleted: 'App gelöscht',
  appDeleteFailed: 'Löschen der App fehlgeschlagen',
  join: 'Treten Sie der Gemeinschaft bei',
  communityIntro:
    'Diskutieren Sie mit Teammitgliedern, Mitwirkenden und Entwicklern auf verschiedenen Kanälen.',
  roadmap: 'Sehen Sie unseren Fahrplan',
  appNamePlaceholder: 'Bitte geben Sie den Namen der App ein',
  newApp: {
    startToCreate: 'Lassen Sie uns mit Ihrer neuen App beginnen',
    captionName: 'App-Symbol & Name',
    captionAppType: 'Welchen Typ von App möchten Sie erstellen?',
    previewDemo: 'Vorschau-Demo',
    chatApp: 'Assistent',
    chatAppIntro:
      'Ich möchte eine Chat-basierte Anwendung bauen. Diese App verwendet ein Frage-Antwort-Format und ermöglicht mehrere Runden kontinuierlicher Konversation.',
    agentAssistant: 'Neuer Agentenassistent',
    completeApp: 'Textgenerator',
    completeAppIntro:
      'Ich möchte eine Anwendung erstellen, die hochwertigen Text basierend auf Aufforderungen generiert, wie z.B. das Erstellen von Artikeln, Zusammenfassungen, Übersetzungen und mehr.',
    showTemplates: 'Ich möchte aus einer Vorlage wählen',
    hideTemplates: 'Zurück zur Modusauswahl',
    Create: 'Erstellen',
    Cancel: 'Abbrechen',
    nameNotEmpty: 'Name darf nicht leer sein',
    appTemplateNotSelected: 'Bitte wählen Sie eine Vorlage',
    appTypeRequired: 'Bitte wählen Sie einen App-Typ',
    appCreated: 'App erstellt',
    appCreateFailed: 'Erstellen der App fehlgeschlagen',
  },
  editApp: {
    startToEdit: 'App bearbeiten',
  },
  emoji: {
    ok: 'OK',
    cancel: 'Abbrechen',
  },
}

export default translation
115  web/i18n/de-DE/billing.ts  Normal file
@@ -0,0 +1,115 @@
const translation = {
  currentPlan: 'Aktueller Tarif',
  upgradeBtn: {
    plain: 'Tarif Upgraden',
    encourage: 'Jetzt Upgraden',
    encourageShort: 'Upgraden',
  },
  viewBilling: 'Abrechnung und Abonnements verwalten',
  buyPermissionDeniedTip: 'Bitte kontaktieren Sie Ihren Unternehmensadministrator, um zu abonnieren',
  plansCommon: {
    title: 'Wählen Sie einen Tarif, der zu Ihnen passt',
    yearlyTip: 'Erhalten Sie 2 Monate kostenlos durch jährliches Abonnieren!',
    mostPopular: 'Am beliebtesten',
    planRange: {
      monthly: 'Monatlich',
      yearly: 'Jährlich',
    },
    month: 'Monat',
    year: 'Jahr',
    save: 'Sparen ',
    free: 'Kostenlos',
    currentPlan: 'Aktueller Tarif',
    contractSales: 'Vertrieb kontaktieren',
    contractOwner: 'Teammanager kontaktieren',
    startForFree: 'Kostenlos starten',
    getStartedWith: 'Beginnen Sie mit ',
    contactSales: 'Vertrieb kontaktieren',
    talkToSales: 'Mit dem Vertrieb sprechen',
    modelProviders: 'Modellanbieter',
    teamMembers: 'Teammitglieder',
    buildApps: 'Apps bauen',
    vectorSpace: 'Vektorraum',
    vectorSpaceBillingTooltip: 'Jedes 1MB kann ungefähr 1,2 Millionen Zeichen an vektorisierten Daten speichern (geschätzt mit OpenAI Embeddings, variiert je nach Modell).',
    vectorSpaceTooltip: 'Vektorraum ist das Langzeitspeichersystem, das erforderlich ist, damit LLMs Ihre Daten verstehen können.',
    documentsUploadQuota: 'Dokumenten-Upload-Kontingent',
    documentProcessingPriority: 'Priorität der Dokumentenverarbeitung',
    documentProcessingPriorityTip: 'Für eine höhere Dokumentenverarbeitungspriorität, bitte Ihren Tarif upgraden.',
    documentProcessingPriorityUpgrade: 'Mehr Daten mit höherer Genauigkeit bei schnelleren Geschwindigkeiten verarbeiten.',
    priority: {
      'standard': 'Standard',
      'priority': 'Priorität',
      'top-priority': 'Höchste Priorität',
    },
    logsHistory: 'Protokollverlauf',
    customTools: 'Benutzerdefinierte Werkzeuge',
    unavailable: 'Nicht verfügbar',
    days: 'Tage',
    unlimited: 'Unbegrenzt',
    support: 'Support',
    supportItems: {
      communityForums: 'Community-Foren',
      emailSupport: 'E-Mail-Support',
      priorityEmail: 'Priorisierter E-Mail- und Chat-Support',
      logoChange: 'Logo-Änderung',
      SSOAuthentication: 'SSO-Authentifizierung',
      personalizedSupport: 'Persönlicher Support',
      dedicatedAPISupport: 'Dedizierter API-Support',
      customIntegration: 'Benutzerdefinierte Integration und Support',
      ragAPIRequest: 'RAG-API-Anfragen',
      bulkUpload: 'Massenupload von Dokumenten',
      agentMode: 'Agentenmodus',
      workflow: 'Workflow',
    },
    comingSoon: 'Demnächst',
    member: 'Mitglied',
    memberAfter: 'Mitglied',
    messageRequest: {
      title: 'Nachrichtenguthaben',
      tooltip: 'Nachrichtenaufrufkontingente für verschiedene Tarife unter Verwendung von OpenAI-Modellen (außer gpt4). Nachrichten über dem Limit verwenden Ihren OpenAI-API-Schlüssel.',
    },
    annotatedResponse: {
      title: 'Kontingentgrenzen für Annotationen',
      tooltip: 'Manuelle Bearbeitung und Annotation von Antworten bieten anpassbare, hochwertige Frage-Antwort-Fähigkeiten für Apps. (Nur anwendbar in Chat-Apps)',
    },
    ragAPIRequestTooltip: 'Bezieht sich auf die Anzahl der API-Aufrufe, die nur die Wissensdatenbankverarbeitungsfähigkeiten von Dify aufrufen.',
    receiptInfo: 'Nur der Teaminhaber und der Teamadministrator können abonnieren und Abrechnungsinformationen einsehen',
  },
  plans: {
    sandbox: {
      name: 'Sandbox',
      description: '200 mal GPT kostenlos testen',
      includesTitle: 'Beinhaltet:',
    },
    professional: {
      name: 'Professionell',
      description: 'Für Einzelpersonen und kleine Teams, um mehr Leistung erschwinglich freizuschalten.',
      includesTitle: 'Alles im kostenlosen Tarif, plus:',
    },
    team: {
      name: 'Team',
      description: 'Zusammenarbeiten ohne Grenzen und Top-Leistung genießen.',
      includesTitle: 'Alles im Professionell-Tarif, plus:',
    },
    enterprise: {
      name: 'Unternehmen',
      description: 'Erhalten Sie volle Fähigkeiten und Unterstützung für großangelegte, missionskritische Systeme.',
      includesTitle: 'Alles im Team-Tarif, plus:',
    },
  },
  vectorSpace: {
    fullTip: 'Vektorraum ist voll.',
    fullSolution: 'Upgraden Sie Ihren Tarif, um mehr Speicherplatz zu erhalten.',
  },
  apps: {
    fullTipLine1: 'Upgraden Sie Ihren Tarif, um',
    fullTipLine2: 'mehr Apps zu bauen.',
  },
  annotatedResponse: {
    fullTipLine1: 'Upgraden Sie Ihren Tarif, um',
    fullTipLine2: 'mehr Konversationen zu annotieren.',
    quotaTitle: 'Kontingent für Annotation-Antworten',
  },
}

export default translation
505  web/i18n/de-DE/common.ts  Normal file
@@ -0,0 +1,505 @@
const translation = {
  api: {
    success: 'Erfolg',
    actionSuccess: 'Aktion erfolgreich',
    saved: 'Gespeichert',
    create: 'Erstellt',
    remove: 'Entfernt',
  },
  operation: {
    create: 'Erstellen',
    confirm: 'Bestätigen',
    cancel: 'Abbrechen',
    clear: 'Leeren',
    save: 'Speichern',
    edit: 'Bearbeiten',
    add: 'Hinzufügen',
    added: 'Hinzugefügt',
    refresh: 'Neustart',
    reset: 'Zurücksetzen',
    search: 'Suchen',
    change: 'Ändern',
    remove: 'Entfernen',
    send: 'Senden',
    copy: 'Kopieren',
    lineBreak: 'Zeilenumbruch',
    sure: 'Ich bin sicher',
    download: 'Herunterladen',
    delete: 'Löschen',
    settings: 'Einstellungen',
    setup: 'Einrichten',
    getForFree: 'Kostenlos erhalten',
    reload: 'Neu laden',
    ok: 'OK',
    log: 'Protokoll',
    learnMore: 'Mehr erfahren',
    params: 'Parameter',
  },
  placeholder: {
    input: 'Bitte eingeben',
    select: 'Bitte auswählen',
  },
  voice: {
    language: {
      zhHans: 'Chinesisch',
      enUS: 'Englisch',
      deDE: 'Deutsch',
      frFR: 'Französisch',
      esES: 'Spanisch',
      itIT: 'Italienisch',
      thTH: 'Thailändisch',
      idID: 'Indonesisch',
      jaJP: 'Japanisch',
      koKR: 'Koreanisch',
      ptBR: 'Portugiesisch',
      ruRU: 'Russisch',
      ukUA: 'Ukrainisch',
    },
  },
  unit: {
    char: 'Zeichen',
  },
  actionMsg: {
    noModification: 'Im Moment keine Änderungen.',
    modifiedSuccessfully: 'Erfolgreich geändert',
    modifiedUnsuccessfully: 'Änderung nicht erfolgreich',
    copySuccessfully: 'Erfolgreich kopiert',
    paySucceeded: 'Zahlung erfolgreich',
    payCancelled: 'Zahlung abgebrochen',
    generatedSuccessfully: 'Erfolgreich generiert',
    generatedUnsuccessfully: 'Generierung nicht erfolgreich',
  },
  model: {
    params: {
      temperature: 'Temperatur',
      temperatureTip:
        'Kontrolliert Zufälligkeit: Eine niedrigere Temperatur führt zu weniger zufälligen Ergebnissen. Nähert sich die Temperatur null, wird das Modell deterministisch und repetitiv.',
      top_p: 'Top P',
      top_pTip:
        'Kontrolliert Diversität über Nukleus-Sampling: 0,5 bedeutet, dass die Hälfte aller wahrscheinlichkeitsgewichteten Optionen berücksichtigt wird.',
      presence_penalty: 'Präsenz-Strafe',
      presence_penaltyTip:
        'Wie stark neue Tokens basierend darauf bestraft werden, ob sie bereits im Text erschienen sind.\nErhöht die Wahrscheinlichkeit des Modells, über neue Themen zu sprechen.',
      frequency_penalty: 'Häufigkeitsstrafe',
      frequency_penaltyTip:
        'Wie stark neue Tokens basierend auf ihrer bisherigen Häufigkeit im Text bestraft werden.\nVerringert die Wahrscheinlichkeit des Modells, denselben Satz wortwörtlich zu wiederholen.',
      max_tokens: 'Maximale Token',
      max_tokensTip:
        'Begrenzt die maximale Länge der Antwort in Token. \nGrößere Werte können den Platz für Eingabeaufforderungen, Chat-Logs und Wissen begrenzen. \nEs wird empfohlen, dies unter zwei Dritteln zu setzen\ngpt-4-1106-Vorschau, gpt-4-vision-Vorschau maximale Token (Eingabe 128k Ausgabe 4k)',
      maxTokenSettingTip: 'Ihre Einstellung für maximale Token ist hoch, was den Platz für Eingabeaufforderungen, Abfragen und Daten potenziell begrenzen kann. Erwägen Sie, dies unter 2/3 zu setzen.',
      setToCurrentModelMaxTokenTip: 'Maximale Token auf 80 % der maximalen Token des aktuellen Modells {{maxToken}} aktualisiert.',
      stop_sequences: 'Stop-Sequenzen',
      stop_sequencesTip: 'Bis zu vier Sequenzen, bei denen die API die Generierung weiterer Token stoppt. Der zurückgegebene Text wird die Stop-Sequenz nicht enthalten.',
      stop_sequencesPlaceholder: 'Sequenz eingeben und Tab drücken',
    },
    tone: {
      Creative: 'Kreativ',
      Balanced: 'Ausgewogen',
      Precise: 'Präzise',
      Custom: 'Benutzerdefiniert',
    },
    addMoreModel: 'Gehen Sie zu den Einstellungen, um mehr Modelle hinzuzufügen',
  },
  menus: {
    status: 'Beta',
    explore: 'Erkunden',
    apps: 'Studio',
    plugins: 'Plugins',
    pluginsTips: 'Integrieren Sie Plugins von Drittanbietern oder erstellen Sie ChatGPT-kompatible KI-Plugins.',
    datasets: 'Wissen',
    datasetsTips: 'BALD VERFÜGBAR: Importieren Sie Ihre eigenen Textdaten oder schreiben Sie Daten in Echtzeit über Webhook, um den LLM-Kontext zu verbessern.',
    newApp: 'Neue App',
    newDataset: 'Wissen erstellen',
    tools: 'Werkzeuge',
  },
  userProfile: {
    settings: 'Einstellungen',
    workspace: 'Arbeitsbereich',
    createWorkspace: 'Arbeitsbereich erstellen',
    helpCenter: 'Hilfe',
    roadmapAndFeedback: 'Feedback',
    community: 'Gemeinschaft',
    about: 'Über',
    logout: 'Abmelden',
  },
  settings: {
    accountGroup: 'KONTO',
    workplaceGroup: 'ARBEITSBEREICH',
    account: 'Mein Konto',
    members: 'Mitglieder',
    billing: 'Abrechnung',
    integrations: 'Integrationen',
    language: 'Sprache',
    provider: 'Modellanbieter',
    dataSource: 'Datenquelle',
    plugin: 'Plugins',
    apiBasedExtension: 'API-Erweiterung',
  },
  account: {
    avatar: 'Avatar',
    name: 'Name',
    email: 'E-Mail',
    password: 'Passwort',
    passwordTip: 'Sie können ein dauerhaftes Passwort festlegen, wenn Sie keine temporären Anmeldecodes verwenden möchten',
    setPassword: 'Ein Passwort festlegen',
    resetPassword: 'Passwort zurücksetzen',
    currentPassword: 'Aktuelles Passwort',
    newPassword: 'Neues Passwort',
    confirmPassword: 'Passwort bestätigen',
    notEqual: 'Die Passwörter sind unterschiedlich.',
    langGeniusAccount: 'Dify-Konto',
    langGeniusAccountTip: 'Ihr Dify-Konto und zugehörige Benutzerdaten.',
    editName: 'Namen bearbeiten',
    showAppLength: '{{length}} Apps anzeigen',
  },
  members: {
    team: 'Team',
    invite: 'Hinzufügen',
    name: 'NAME',
    lastActive: 'ZULETZT AKTIV',
    role: 'ROLLEN',
    pending: 'Ausstehend...',
    owner: 'Eigentümer',
    admin: 'Admin',
    adminTip: 'Kann Apps erstellen & Team-Einstellungen verwalten',
    normal: 'Normal',
    normalTip: 'Kann nur Apps verwenden, kann keine Apps erstellen',
    inviteTeamMember: 'Teammitglied hinzufügen',
    inviteTeamMemberTip: 'Sie können direkt nach der Anmeldung auf Ihre Teamdaten zugreifen.',
    email: 'E-Mail',
    emailInvalid: 'Ungültiges E-Mail-Format',
    emailPlaceholder: 'Bitte E-Mails eingeben',
    sendInvite: 'Einladung senden',
    invitedAsRole: 'Eingeladen als {{role}}-Benutzer',
    invitationSent: 'Einladung gesendet',
    invitationSentTip: 'Einladung gesendet, und sie können sich bei Dify anmelden, um auf Ihre Teamdaten zuzugreifen.',
    invitationLink: 'Einladungslink',
    failedinvitationEmails: 'Die folgenden Benutzer wurden nicht erfolgreich eingeladen',
    ok: 'OK',
    removeFromTeam: 'Vom Team entfernen',
    removeFromTeamTip: 'Wird den Teamzugang entfernen',
    setAdmin: 'Als Administrator einstellen',
    setMember: 'Als normales Mitglied einstellen',
    disinvite: 'Einladung widerrufen',
    deleteMember: 'Mitglied löschen',
    you: '(Du)',
  },
  integrations: {
    connected: 'Verbunden',
    google: 'Google',
    googleAccount: 'Mit Google-Konto anmelden',
    github: 'GitHub',
    githubAccount: 'Mit GitHub-Konto anmelden',
    connect: 'Verbinden',
  },
  language: {
    displayLanguage: 'Anzeigesprache',
    timezone: 'Zeitzone',
  },
  provider: {
    apiKey: 'API-Schlüssel',
    enterYourKey: 'Geben Sie hier Ihren API-Schlüssel ein',
    invalidKey: 'Ungültiger OpenAI API-Schlüssel',
    validatedError: 'Validierung fehlgeschlagen: ',
    validating: 'Schlüssel wird validiert...',
    saveFailed: 'API-Schlüssel speichern fehlgeschlagen',
    apiKeyExceedBill: 'Dieser API-SCHLÜSSEL verfügt über kein verfügbares Kontingent, bitte lesen',
    addKey: 'Schlüssel hinzufügen',
    comingSoon: 'Demnächst verfügbar',
    editKey: 'Bearbeiten',
    invalidApiKey: 'Ungültiger API-Schlüssel',
    azure: {
      apiBase: 'API-Basis',
      apiBasePlaceholder: 'Die API-Basis-URL Ihres Azure OpenAI-Endpunkts.',
      apiKey: 'API-Schlüssel',
      apiKeyPlaceholder: 'Geben Sie hier Ihren API-Schlüssel ein',
      helpTip: 'Azure OpenAI Service kennenlernen',
    },
    openaiHosted: {
      openaiHosted: 'Gehostetes OpenAI',
      onTrial: 'IN PROBE',
      exhausted: 'KONTINGENT ERSCHÖPFT',
      desc: 'Der OpenAI-Hostingdienst von Dify ermöglicht es Ihnen, Modelle wie GPT-3.5 zu verwenden. Bevor Ihr Probe-Kontingent aufgebraucht ist, müssen Sie andere Modellanbieter einrichten.',
      callTimes: 'Anrufzeiten',
      usedUp: 'Probe-Kontingent aufgebraucht. Eigenen Modellanbieter hinzufügen.',
      useYourModel: 'Derzeit wird eigener Modellanbieter verwendet.',
      close: 'Schließen',
    },
    anthropicHosted: {
      anthropicHosted: 'Anthropic Claude',
      onTrial: 'IN PROBE',
      exhausted: 'KONTINGENT ERSCHÖPFT',
      desc: 'Leistungsstarkes Modell, das bei einer Vielzahl von Aufgaben von anspruchsvollen Dialogen und kreativer Inhalteerstellung bis hin zu detaillierten Anweisungen hervorragend ist.',
      callTimes: 'Anrufzeiten',
      usedUp: 'Testkontingent aufgebraucht. Eigenen Modellanbieter hinzufügen.',
      useYourModel: 'Derzeit wird eigener Modellanbieter verwendet.',
      close: 'Schließen',
    },
    anthropic: {
      using: 'Die Einbettungsfähigkeit verwendet',
      enableTip: 'Um das Anthropische Modell zu aktivieren, müssen Sie sich zuerst mit OpenAI oder Azure OpenAI Service verbinden.',
      notEnabled: 'Nicht aktiviert',
      keyFrom: 'Holen Sie Ihren API-Schlüssel von Anthropic',
    },
    encrypted: {
      front: 'Ihr API-SCHLÜSSEL wird verschlüsselt und mit',
      back: ' Technologie gespeichert.',
    },
  },
  modelProvider: {
    notConfigured: 'Das Systemmodell wurde noch nicht vollständig konfiguriert, und einige Funktionen sind möglicherweise nicht verfügbar.',
    systemModelSettings: 'Systemmodell-Einstellungen',
    systemModelSettingsLink: 'Warum ist es notwendig, ein Systemmodell einzurichten?',
    selectModel: 'Wählen Sie Ihr Modell',
    setupModelFirst: 'Bitte richten Sie zuerst Ihr Modell ein',
    systemReasoningModel: {
      key: 'System-Reasoning-Modell',
      tip: 'Legen Sie das Standardinferenzmodell fest, das für die Erstellung von Anwendungen verwendet wird, sowie Funktionen wie die Generierung von Dialognamen und die Vorschlagserstellung für die nächste Frage, die auch das Standardinferenzmodell verwenden.',
    },
    embeddingModel: {
      key: 'Einbettungsmodell',
      tip: 'Legen Sie das Standardmodell für die Dokumenteneinbettungsverarbeitung des Wissens fest, sowohl die Wiederherstellung als auch der Import des Wissens verwenden dieses Einbettungsmodell für die Vektorisierungsverarbeitung. Ein Wechsel wird dazu führen, dass die Vektordimension zwischen dem importierten Wissen und der Frage inkonsistent ist, was zu einem Wiederherstellungsfehler führt. Um einen Wiederherstellungsfehler zu vermeiden, wechseln Sie dieses Modell bitte nicht willkürlich.',
      required: 'Einbettungsmodell ist erforderlich',
    },
    speechToTextModel: {
      key: 'Sprach-zu-Text-Modell',
      tip: 'Legen Sie das Standardmodell für die Spracheingabe in Konversationen fest.',
    },
    ttsModel: {
      key: 'Text-zu-Sprache-Modell',
      tip: 'Legen Sie das Standardmodell für die Text-zu-Sprache-Eingabe in Konversationen fest.',
    },
    rerankModel: {
      key: 'Rerank-Modell',
      tip: 'Rerank-Modell wird die Kandidatendokumentenliste basierend auf der semantischen Übereinstimmung mit der Benutzeranfrage neu ordnen und die Ergebnisse der semantischen Rangordnung verbessern',
    },
    quota: 'Kontingent',
    searchModel: 'Suchmodell',
    noModelFound: 'Kein Modell für {{model}} gefunden',
    models: 'Modelle',
    showMoreModelProvider: 'Zeige mehr Modellanbieter',
    selector: {
      tip: 'Dieses Modell wurde entfernt. Bitte fügen Sie ein Modell hinzu oder wählen Sie ein anderes Modell.',
      emptyTip: 'Keine verfügbaren Modelle',
      emptySetting: 'Bitte gehen Sie zu den Einstellungen, um zu konfigurieren',
      rerankTip: 'Bitte richten Sie das Rerank-Modell ein',
    },
    card: {
      quota: 'KONTINGENT',
      onTrial: 'In Probe',
      paid: 'Bezahlt',
      quotaExhausted: 'Kontingent erschöpft',
      callTimes: 'Anrufzeiten',
      tokens: 'Token',
      buyQuota: 'Kontingent kaufen',
      priorityUse: 'Priorisierte Nutzung',
      removeKey: 'API-Schlüssel entfernen',
      tip: 'Der bezahlten Kontingent wird Vorrang gegeben. Das Testkontingent wird nach dem Verbrauch des bezahlten Kontingents verwendet.',
    },
    item: {
      deleteDesc: '{{modelName}} werden als System-Reasoning-Modelle verwendet. Einige Funktionen stehen nach der Entfernung nicht zur Verfügung. Bitte bestätigen.',
      freeQuota: 'KOSTENLOSES KONTINGENT',
    },
    addApiKey: 'Fügen Sie Ihren API-Schlüssel hinzu',
    invalidApiKey: 'Ungültiger API-Schlüssel',
    encrypted: {
      front: 'Ihr API-SCHLÜSSEL wird verschlüsselt und mit',
      back: ' Technologie gespeichert.',
    },
    freeQuota: {
      howToEarn: 'Wie zu verdienen',
    },
    addMoreModelProvider: 'MEHR MODELLANBIETER HINZUFÜGEN',
    addModel: 'Modell hinzufügen',
    modelsNum: '{{num}} Modelle',
    showModels: 'Modelle anzeigen',
    showModelsNum: 'Zeige {{num}} Modelle',
    collapse: 'Einklappen',
    config: 'Konfigurieren',
    modelAndParameters: 'Modell und Parameter',
    model: 'Modell',
    featureSupported: '{{feature}} unterstützt',
    callTimes: 'Anrufzeiten',
    credits: 'Nachrichtenguthaben',
    buyQuota: 'Kontingent kaufen',
    getFreeTokens: 'Kostenlose Token erhalten',
    priorityUsing: 'Bevorzugte Nutzung',
    deprecated: 'Veraltet',
    confirmDelete: 'Löschung bestätigen?',
    quotaTip: 'Verbleibende verfügbare kostenlose Token',
    loadPresets: 'Voreinstellungen laden',
    parameters: 'PARAMETER',
  },
  dataSource: {
    add: 'Eine Datenquelle hinzufügen',
    connect: 'Verbinden',
    notion: {
      title: 'Notion',
      description: 'Notion als Datenquelle für das Wissen verwenden.',
      connectedWorkspace: 'Verbundener Arbeitsbereich',
      addWorkspace: 'Arbeitsbereich hinzufügen',
      connected: 'Verbunden',
      disconnected: 'Getrennt',
      changeAuthorizedPages: 'Autorisierte Seiten ändern',
      pagesAuthorized: 'Autorisierte Seiten',
      sync: 'Synchronisieren',
      remove: 'Entfernen',
      selector: {
        pageSelected: 'Ausgewählte Seiten',
        searchPages: 'Seiten suchen...',
        noSearchResult: 'Keine Suchergebnisse',
        addPages: 'Seiten hinzufügen',
        preview: 'VORSCHAU',
      },
    },
  },
  plugin: {
    serpapi: {
      apiKey: 'API-Schlüssel',
      apiKeyPlaceholder: 'Geben Sie Ihren API-Schlüssel ein',
      keyFrom: 'Holen Sie Ihren SerpAPI-Schlüssel von der SerpAPI-Kontoseite',
    },
  },
  apiBasedExtension: {
    title: 'API-Erweiterungen bieten zentralisiertes API-Management und vereinfachen die Konfiguration für eine einfache Verwendung in Difys Anwendungen.',
    link: 'Erfahren Sie, wie Sie Ihre eigene API-Erweiterung entwickeln.',
    linkUrl: 'https://docs.dify.ai/features/extension/api_based_extension',
    add: 'API-Erweiterung hinzufügen',
    selector: {
      title: 'API-Erweiterung',
      placeholder: 'Bitte wählen Sie API-Erweiterung',
      manage: 'API-Erweiterung verwalten',
    },
    modal: {
      title: 'API-Erweiterung hinzufügen',
      editTitle: 'API-Erweiterung bearbeiten',
      name: {
        title: 'Name',
        placeholder: 'Bitte geben Sie den Namen ein',
      },
      apiEndpoint: {
        title: 'API-Endpunkt',
        placeholder: 'Bitte geben Sie den API-Endpunkt ein',
      },
      apiKey: {
        title: 'API-Schlüssel',
        placeholder: 'Bitte geben Sie den API-Schlüssel ein',
        lengthError: 'Die Länge des API-Schlüssels darf nicht weniger als 5 Zeichen betragen',
      },
    },
    type: 'Typ',
  },
  about: {
    changeLog: 'Änderungsprotokoll',
    updateNow: 'Jetzt aktualisieren',
    nowAvailable: 'Dify {{version}} ist jetzt verfügbar.',
    latestAvailable: 'Dify {{version}} ist die neueste verfügbare Version.',
  },
  appMenus: {
    overview: 'Übersicht',
    promptEng: 'Orchestrieren',
    apiAccess: 'API-Zugriff',
    logAndAnn: 'Protokolle & Ank.',
  },
  environment: {
    testing: 'TESTEN',
    development: 'ENTWICKLUNG',
  },
  appModes: {
    completionApp: 'Textgenerator',
    chatApp: 'Chat-App',
  },
  datasetMenus: {
    documents: 'Dokumente',
    hitTesting: 'Wiederherstellungstest',
    settings: 'Einstellungen',
    emptyTip: 'Das Wissen wurde nicht zugeordnet, bitte gehen Sie zur Anwendung oder zum Plug-in, um die Zuordnung abzuschließen.',
    viewDoc: 'Dokumentation anzeigen',
    relatedApp: 'verbundene Apps',
  },
  voiceInput: {
    speaking: 'Sprechen Sie jetzt...',
    converting: 'Umwandlung in Text...',
    notAllow: 'Mikrofon nicht autorisiert',
  },
  modelName: {
    'gpt-3.5-turbo': 'GPT-3.5-Turbo',
    'gpt-3.5-turbo-16k': 'GPT-3.5-Turbo-16K',
    'gpt-4': 'GPT-4',
    'gpt-4-32k': 'GPT-4-32K',
    'text-davinci-003': 'Text-Davinci-003',
    'text-embedding-ada-002': 'Text-Embedding-Ada-002',
|
||||
'whisper-1': 'Flüstern-1',
|
||||
'claude-instant-1': 'Claude-Instant',
|
||||
'claude-2': 'Claude-2',
|
||||
},
|
||||
chat: {
|
||||
renameConversation: 'Konversation umbenennen',
|
||||
conversationName: 'Konversationsname',
|
||||
conversationNamePlaceholder: 'Bitte geben Sie den Konversationsnamen ein',
|
||||
conversationNameCanNotEmpty: 'Konversationsname erforderlich',
|
||||
citation: {
|
||||
title: 'ZITIERUNGEN',
|
||||
linkToDataset: 'Link zum Wissen',
|
||||
characters: 'Zeichen:',
|
||||
hitCount: 'Abrufanzahl:',
|
||||
vectorHash: 'Vektorhash:',
|
||||
hitScore: 'Abrufwertung:',
|
||||
},
|
||||
},
|
||||
promptEditor: {
|
||||
placeholder: 'Schreiben Sie hier Ihr Aufforderungswort, geben Sie \'{\' ein, um eine Variable einzufügen, geben Sie \'/\' ein, um einen Aufforderungs-Inhaltsblock einzufügen',
|
||||
context: {
|
||||
item: {
|
||||
title: 'Kontext',
|
||||
desc: 'Kontextvorlage einfügen',
|
||||
},
|
||||
modal: {
|
||||
title: '{{num}} Wissen im Kontext',
|
||||
add: 'Kontext hinzufügen',
|
||||
footer: 'Sie können Kontexte im unten stehenden Kontextabschnitt verwalten.',
|
||||
},
|
||||
},
|
||||
history: {
|
||||
item: {
|
||||
title: 'Konversationsgeschichte',
|
||||
desc: 'Vorlage für historische Nachricht einfügen',
|
||||
},
|
||||
modal: {
|
||||
title: 'BEISPIEL',
|
||||
user: 'Hallo',
|
||||
assistant: 'Hallo! Wie kann ich Ihnen heute helfen?',
|
||||
edit: 'Konversationsrollennamen bearbeiten',
|
||||
},
|
||||
},
|
||||
variable: {
|
||||
item: {
|
||||
title: 'Variablen & Externe Werkzeuge',
|
||||
desc: 'Variablen & Externe Werkzeuge einfügen',
|
||||
},
|
||||
modal: {
|
||||
add: 'Neue Variable',
|
||||
addTool: 'Neues Werkzeug',
|
||||
},
|
||||
},
|
||||
query: {
|
||||
item: {
|
||||
title: 'Abfrage',
|
||||
desc: 'Benutzerabfragevorlage einfügen',
|
||||
},
|
||||
},
|
||||
existed: 'Bereits im Aufforderungstext vorhanden',
|
||||
},
|
||||
imageUploader: {
|
||||
uploadFromComputer: 'Vom Computer hochladen',
|
||||
uploadFromComputerReadError: 'Bildlesung fehlgeschlagen, bitte versuchen Sie es erneut.',
|
||||
uploadFromComputerUploadError: 'Bildupload fehlgeschlagen, bitte erneut hochladen.',
|
||||
uploadFromComputerLimit: 'Hochgeladene Bilder dürfen {{size}} MB nicht überschreiten',
|
||||
pasteImageLink: 'Bildlink einfügen',
|
||||
pasteImageLinkInputPlaceholder: 'Bildlink hier einfügen',
|
||||
pasteImageLinkInvalid: 'Ungültiger Bildlink',
|
||||
imageUpload: 'Bild-Upload',
|
||||
},
|
||||
}
|
||||
|
||||
export default translation
|
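The dictionaries above are plain TypeScript modules whose string values use i18next-style `{{placeholder}}` interpolation (see `modelsNum` or `about.nowAvailable`). A minimal consumption sketch, assuming an i18next setup; the import path and the resource wiring are illustrative, not Dify's actual bootstrap code:

// Minimal sketch (assumptions: i18next is the consumer and this module is
// mounted as the default 'translation' namespace; neither is shown in the diff).
import i18next from 'i18next'
import common from './web/i18n/de-DE/common' // hypothetical path to the module above

await i18next.init({
  lng: 'de-DE',
  resources: {
    'de-DE': { translation: common },
  },
})

i18next.t('modelProvider.modelsNum', { num: 3 }) // -> '3 Modelle'
i18next.t('about.nowAvailable', { version: '0.5.0' }) // -> 'Dify 0.5.0 ist jetzt verfügbar.'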
30  web/i18n/de-DE/custom.ts  (Normal file)
@@ -0,0 +1,30 @@
const translation = {
  custom: 'Anpassung',
  upgradeTip: {
    prefix: 'Erweitere deinen Plan auf',
    suffix: 'um deine Marke anzupassen.',
  },
  webapp: {
    title: 'WebApp-Marke anpassen',
    removeBrand: 'Entferne Powered by Dify',
    changeLogo: 'Ändere das Powered-by-Markenbild',
    changeLogoTip: 'SVG- oder PNG-Format mit einer Mindestgröße von 40x40px',
  },
  app: {
    title: 'App-Kopfzeilen-Marke anpassen',
    changeLogoTip: 'SVG- oder PNG-Format mit einer Mindestgröße von 80x80px',
  },
  upload: 'Hochladen',
  uploading: 'Wird hochgeladen',
  uploadedFail: 'Bild-Upload fehlgeschlagen, bitte erneut hochladen.',
  change: 'Ändern',
  apply: 'Anwenden',
  restore: 'Standardeinstellungen wiederherstellen',
  customize: {
    contactUs: ' kontaktiere uns ',
    prefix: 'Um das Markenlogo innerhalb der App anzupassen, bitte',
    suffix: 'um auf die Enterprise-Edition zu upgraden.',
  },
}

export default translation
130  web/i18n/de-DE/dataset-creation.ts  (Normal file)
@@ -0,0 +1,130 @@
const translation = {
  steps: {
    header: {
      creation: 'Wissen erstellen',
      update: 'Daten hinzufügen',
    },
    one: 'Datenquelle wählen',
    two: 'Textvorverarbeitung und Bereinigung',
    three: 'Ausführen und beenden',
  },
  error: {
    unavailable: 'Dieses Wissen ist nicht verfügbar',
  },
  stepOne: {
    filePreview: 'Dateivorschau',
    pagePreview: 'Seitenvorschau',
    dataSourceType: {
      file: 'Import aus Textdatei',
      notion: 'Synchronisation aus Notion',
      web: 'Synchronisation von Webseite',
    },
    uploader: {
      title: 'Textdatei hochladen',
      button: 'Datei hierher ziehen oder',
      browse: 'Durchsuchen',
      tip: 'Unterstützt {{supportTypes}}. Maximal {{size}} MB pro Datei.',
      validation: {
        typeError: 'Dateityp nicht unterstützt',
        size: 'Datei zu groß. Maximum ist {{size}} MB',
        count: 'Mehrere Dateien nicht unterstützt',
        filesNumber: 'Sie haben das Limit für die Stapelverarbeitung von {{filesNumber}} Dateien erreicht.',
      },
      cancel: 'Abbrechen',
      change: 'Ändern',
      failed: 'Hochladen fehlgeschlagen',
    },
    notionSyncTitle: 'Notion ist nicht verbunden',
    notionSyncTip: 'Um mit Notion zu synchronisieren, muss zuerst eine Verbindung zu Notion hergestellt werden.',
    connect: 'Zur Verbindung gehen',
    button: 'Weiter',
    emptyDatasetCreation: 'Ich möchte ein leeres Wissen erstellen',
    modal: {
      title: 'Ein leeres Wissen erstellen',
      tip: 'Ein leeres Wissen enthält keine Dokumente; Sie können jederzeit Dokumente hochladen.',
      input: 'Wissensname',
      placeholder: 'Bitte eingeben',
      nameNotEmpty: 'Name darf nicht leer sein',
      nameLengthInvaild: 'Name muss zwischen 1 und 40 Zeichen lang sein',
      cancelButton: 'Abbrechen',
      confirmButton: 'Erstellen',
      failed: 'Erstellung fehlgeschlagen',
    },
  },
  stepTwo: {
    segmentation: 'Chunk-Einstellungen',
    auto: 'Automatisch',
    autoDescription: 'Stellt Chunk- und Vorverarbeitungsregeln automatisch ein. Für unerfahrene Benutzer empfohlen.',
    custom: 'Benutzerdefiniert',
    customDescription: 'Chunk-Regeln, Chunk-Länge, Vorverarbeitungsregeln usw. anpassen.',
    separator: 'Segmenttrennzeichen',
    separatorPlaceholder: 'Zum Beispiel neuer Absatz (\\n) oder spezielles Trennzeichen (wie "***")',
    maxLength: 'Maximale Chunk-Länge',
    overlap: 'Chunk-Überlappung',
    overlapTip: 'Eine Chunk-Überlappung erhält die semantische Relevanz zwischen benachbarten Chunks und verbessert so die Abrufqualität. Empfohlen sind 10%-25% der maximalen Chunk-Länge.',
    overlapCheck: 'Die Chunk-Überlappung darf nicht größer als die maximale Chunk-Länge sein',
    rules: 'Textvorverarbeitungsregeln',
    removeExtraSpaces: 'Mehrfache Leerzeichen, Zeilenumbrüche und Tabulatoren ersetzen',
    removeUrlEmails: 'Alle URLs und E-Mail-Adressen löschen',
    removeStopwords: 'Stoppwörter wie "ein", "eine", "der" entfernen',
    preview: 'Bestätigen & Vorschau',
    reset: 'Zurücksetzen',
    indexMode: 'Indexmodus',
    qualified: 'Hohe Qualität',
    recommend: 'Empfohlen',
    qualifiedTip: 'Ruft die Standard-Einbettungsschnittstelle des Systems auf, um bei Benutzerabfragen eine höhere Genauigkeit zu bieten.',
    warning: 'Bitte richten Sie zuerst den API-Schlüssel des Modellanbieters ein.',
    click: 'Zu den Einstellungen gehen',
    economical: 'Ökonomisch',
    economicalTip: 'Verwendet Offline-Vektor-Engines, Schlagwortindizes usw.; geringere Genauigkeit, dafür kein Tokenverbrauch',
    QATitle: 'Segmentierung im Frage-und-Antwort-Format',
    QATip: 'Das Aktivieren dieser Option verbraucht mehr Token',
    QALanguage: 'Segmentierung verwenden',
    emstimateCost: 'Schätzung',
    emstimateSegment: 'Geschätzte Chunks',
    segmentCount: 'Chunks',
    calculating: 'Berechnung läuft...',
    fileSource: 'Dokumente vorverarbeiten',
    notionSource: 'Seiten vorverarbeiten',
    other: 'und weitere ',
    fileUnit: ' Dateien',
    notionUnit: ' Seiten',
    previousStep: 'Vorheriger Schritt',
    nextStep: 'Speichern & Verarbeiten',
    save: 'Speichern & Verarbeiten',
    cancel: 'Abbrechen',
    sideTipTitle: 'Warum segmentieren und vorverarbeiten?',
    sideTipP1: 'Bei der Verarbeitung von Textdaten sind Segmentierung und Bereinigung zwei wichtige Vorverarbeitungsschritte.',
    sideTipP2: 'Die Segmentierung teilt langen Text in Absätze, damit Modelle ihn besser verstehen. Das verbessert die Qualität und Relevanz der Modellergebnisse.',
    sideTipP3: 'Die Bereinigung entfernt unnötige Zeichen und Formatierungen und macht das Wissen sauberer und leichter zu parsen.',
    sideTipP4: 'Richtige Segmentierung und Bereinigung verbessern die Modellleistung und liefern genauere und wertvollere Ergebnisse.',
    previewTitle: 'Vorschau',
    previewTitleButton: 'Vorschau',
    previewButton: 'Zum Frage-und-Antwort-Format umschalten',
    previewSwitchTipStart: 'Die aktuelle Chunk-Vorschau ist im Textformat; ein Wechsel zur Vorschau im Frage-und-Antwort-Format verbraucht',
    previewSwitchTipEnd: ' zusätzliche Token',
    characters: 'Zeichen',
    indexSettedTip: 'Um die Indexmethode zu ändern, gehen Sie bitte zu den ',
    retrivalSettedTip: 'Um die Abrufmethode zu ändern, gehen Sie bitte zu den ',
    datasetSettingLink: 'Wissenseinstellungen.',
  },
  stepThree: {
    creationTitle: '🎉 Wissen erstellt',
    creationContent: 'Wir haben das Wissen automatisch benannt; Sie können es jederzeit ändern',
    label: 'Wissensname',
    additionTitle: '🎉 Dokument hochgeladen',
    additionP1: 'Das Dokument wurde zum Wissen hinzugefügt',
    additionP2: '; Sie finden es in der Dokumentenliste des Wissens.',
    stop: 'Verarbeitung stoppen',
    resume: 'Verarbeitung fortsetzen',
    navTo: 'Zum Dokument gehen',
    sideTipTitle: 'Was kommt als Nächstes',
    sideTipContent: 'Nachdem das Dokument indiziert wurde, kann das Wissen als Kontext in eine Anwendung integriert werden. Die Kontexteinstellung finden Sie auf der Orchestrierungsseite für Eingabeaufforderungen. Sie können es auch als eigenständiges ChatGPT-Indexierungs-Plugin veröffentlichen.',
    modelTitle: 'Sind Sie sicher, dass Sie die Einbettung stoppen möchten?',
    modelContent: 'Wenn Sie die Verarbeitung später fortsetzen möchten, machen Sie dort weiter, wo Sie aufgehört haben.',
    modelButtonConfirm: 'Bestätigen',
    modelButtonCancel: 'Abbrechen',
  },
}

export default translation
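The `stepTwo.overlapTip` string above recommends a chunk overlap of 10%-25% of the maximum chunk length, and `overlapCheck` rejects overlaps larger than the chunk length itself. A small illustrative helper that turns that guidance into numbers; a sketch of the stated rule, not code from Dify:

// Hypothetical helper: derives the 10%-25% overlap range recommended by
// 'stepTwo.overlapTip' and enforces the 'overlapCheck' constraint.
function recommendedOverlap(maxChunkLength: number): { min: number; max: number } {
  return {
    min: Math.floor(maxChunkLength * 0.10),
    max: Math.floor(maxChunkLength * 0.25),
  }
}

function isOverlapValid(overlap: number, maxChunkLength: number): boolean {
  // 'overlapCheck': the overlap must not exceed the maximum chunk length.
  return overlap <= maxChunkLength
}

recommendedOverlap(500) // -> { min: 50, max: 125 }
isOverlapValid(600, 500) // -> false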
349  web/i18n/de-DE/dataset-documents.ts  (Normal file)
@@ -0,0 +1,349 @@
const translation = {
  list: {
    title: 'Dokumente',
    desc: 'Alle Dateien des Wissens werden hier angezeigt, und das gesamte Wissen kann mit Dify-Zitaten verknüpft oder über das Chat-Plugin indiziert werden.',
    addFile: 'Datei hinzufügen',
    addPages: 'Seiten hinzufügen',
    table: {
      header: {
        fileName: 'DATEINAME',
        words: 'WÖRTER',
        hitCount: 'ABRUFANZAHL',
        uploadTime: 'HOCHLADEZEIT',
        status: 'STATUS',
        action: 'AKTION',
      },
    },
    action: {
      uploadFile: 'Neue Datei hochladen',
      settings: 'Segment-Einstellungen',
      addButton: 'Chunk hinzufügen',
      add: 'Einen Chunk hinzufügen',
      batchAdd: 'Batch hinzufügen',
      archive: 'Archivieren',
      unarchive: 'Archivierung aufheben',
      delete: 'Löschen',
      enableWarning: 'Archivierte Dateien können nicht aktiviert werden',
      sync: 'Synchronisieren',
    },
    index: {
      enable: 'Aktivieren',
      disable: 'Deaktivieren',
      all: 'Alle',
      enableTip: 'Die Datei kann indiziert werden',
      disableTip: 'Die Datei kann nicht indiziert werden',
    },
    status: {
      queuing: 'In Warteschlange',
      indexing: 'Indizierung',
      paused: 'Pausiert',
      error: 'Fehler',
      available: 'Verfügbar',
      enabled: 'Aktiviert',
      disabled: 'Deaktiviert',
      archived: 'Archiviert',
    },
    empty: {
      title: 'Es gibt noch keine Dokumente',
      upload: {
        tip: 'Sie können Dateien hochladen oder von der Website bzw. von Web-Apps wie Notion, GitHub usw. synchronisieren.',
      },
      sync: {
        tip: 'Dify lädt periodisch Dateien aus Ihrem Notion herunter und schließt die Verarbeitung ab.',
      },
    },
    delete: {
      title: 'Sind Sie sicher, dass Sie löschen möchten?',
      content: 'Wenn Sie die Verarbeitung später fortsetzen müssen, machen Sie dort weiter, wo Sie aufgehört haben',
    },
    batchModal: {
      title: 'Chunks in Batch hinzufügen',
      csvUploadTitle: 'Ziehen Sie Ihre CSV-Datei hierher oder ',
      browse: 'durchsuchen',
      tip: 'Die CSV-Datei muss der folgenden Struktur entsprechen:',
      question: 'Frage',
      answer: 'Antwort',
      contentTitle: 'Chunk-Inhalt',
      content: 'Inhalt',
      template: 'Laden Sie die Vorlage hier herunter',
      cancel: 'Abbrechen',
      run: 'Batch ausführen',
      runError: 'Batch-Ausführung fehlgeschlagen',
      processing: 'In Batch-Verarbeitung',
      completed: 'Import abgeschlossen',
      error: 'Importfehler',
      ok: 'OK',
    },
  },
  metadata: {
    title: 'Metadaten',
    desc: 'Das Kennzeichnen von Dokumenten mit Metadaten ermöglicht es der KI, zeitnah auf sie zuzugreifen, und legt die Quelle der Referenzen für die Benutzer offen.',
    dateTimeFormat: 'MMMM D, YYYY hh:mm A',
    docTypeSelectTitle: 'Bitte wählen Sie einen Dokumenttyp',
    docTypeChangeTitle: 'Dokumenttyp ändern',
    docTypeSelectWarning:
      'Wenn der Dokumenttyp geändert wird, bleiben die bereits ausgefüllten Metadaten nicht erhalten',
    firstMetaAction: 'Los geht\'s',
    placeholder: {
      add: 'Hinzufügen ',
      select: 'Auswählen ',
    },
    source: {
      upload_file: 'Datei hochladen',
      notion: 'Von Notion synchronisieren',
      github: 'Von GitHub synchronisieren',
    },
    type: {
      book: 'Buch',
      webPage: 'Webseite',
      paper: 'Aufsatz',
      socialMediaPost: 'Social-Media-Beitrag',
      personalDocument: 'Persönliches Dokument',
      businessDocument: 'Geschäftsdokument',
      IMChat: 'IM-Chat',
      wikipediaEntry: 'Wikipedia-Eintrag',
      notion: 'Von Notion synchronisieren',
      github: 'Von GitHub synchronisieren',
      technicalParameters: 'Technische Parameter',
    },
    field: {
      processRule: {
        processDoc: 'Dokument verarbeiten',
        segmentRule: 'Chunk-Regel',
        segmentLength: 'Chunk-Länge',
        processClean: 'Textbereinigung',
      },
      book: {
        title: 'Titel',
        language: 'Sprache',
        author: 'Autor',
        publisher: 'Verlag',
        publicationDate: 'Veröffentlichungsdatum',
        ISBN: 'ISBN',
        category: 'Kategorie',
      },
      webPage: {
        title: 'Titel',
        url: 'URL',
        language: 'Sprache',
        authorPublisher: 'Autor/Verlag',
        publishDate: 'Veröffentlichungsdatum',
        topicsKeywords: 'Themen/Schlüsselwörter',
        description: 'Beschreibung',
      },
      paper: {
        title: 'Titel',
        language: 'Sprache',
        author: 'Autor',
        publishDate: 'Veröffentlichungsdatum',
        journalConferenceName: 'Zeitschrift/Konferenzname',
        volumeIssuePage: 'Band/Ausgabe/Seite',
        DOI: 'DOI',
        topicsKeywords: 'Themen/Schlüsselwörter',
        abstract: 'Zusammenfassung',
      },
      socialMediaPost: {
        platform: 'Plattform',
        authorUsername: 'Autor/Benutzername',
        publishDate: 'Veröffentlichungsdatum',
        postURL: 'Beitrags-URL',
        topicsTags: 'Themen/Tags',
      },
      personalDocument: {
        title: 'Titel',
        author: 'Autor',
        creationDate: 'Erstellungsdatum',
        lastModifiedDate: 'Letztes Änderungsdatum',
        documentType: 'Dokumenttyp',
        tagsCategory: 'Tags/Kategorie',
      },
      businessDocument: {
        title: 'Titel',
        author: 'Autor',
        creationDate: 'Erstellungsdatum',
        lastModifiedDate: 'Letztes Änderungsdatum',
        documentType: 'Dokumenttyp',
        departmentTeam: 'Abteilung/Team',
      },
      IMChat: {
        chatPlatform: 'Chat-Plattform',
        chatPartiesGroupName: 'Chat-Parteien/Gruppenname',
        participants: 'Teilnehmer',
        startDate: 'Startdatum',
        endDate: 'Enddatum',
        topicsKeywords: 'Themen/Schlüsselwörter',
        fileType: 'Dateityp',
      },
      wikipediaEntry: {
        title: 'Titel',
        language: 'Sprache',
        webpageURL: 'Webseiten-URL',
        editorContributor: 'Editor/Beitragender',
        lastEditDate: 'Letztes Bearbeitungsdatum',
        summaryIntroduction: 'Zusammenfassung/Einführung',
      },
      notion: {
        title: 'Titel',
        language: 'Sprache',
        author: 'Autor',
        createdTime: 'Erstellungszeit',
        lastModifiedTime: 'Letzte Änderungszeit',
        url: 'URL',
        tag: 'Tag',
        description: 'Beschreibung',
      },
      github: {
        repoName: 'Repository-Name',
        repoDesc: 'Repository-Beschreibung',
        repoOwner: 'Repository-Eigentümer',
        fileName: 'Dateiname',
        filePath: 'Dateipfad',
        programmingLang: 'Programmiersprache',
        url: 'URL',
        license: 'Lizenz',
        lastCommitTime: 'Letzte Commit-Zeit',
        lastCommitAuthor: 'Letzter Commit-Autor',
      },
      originInfo: {
        originalFilename: 'Originaldateiname',
        originalFileSize: 'Originaldateigröße',
        uploadDate: 'Hochladedatum',
        lastUpdateDate: 'Letztes Änderungsdatum',
        source: 'Quelle',
      },
      technicalParameters: {
        segmentSpecification: 'Chunk-Spezifikation',
        segmentLength: 'Chunk-Länge',
        avgParagraphLength: 'Durchschn. Absatzlänge',
        paragraphs: 'Absätze',
        hitCount: 'Abrufanzahl',
        embeddingTime: 'Einbettungszeit',
        embeddedSpend: 'Einbettungsausgaben',
      },
    },
    languageMap: {
      zh: 'Chinesisch',
      en: 'Englisch',
      es: 'Spanisch',
      fr: 'Französisch',
      de: 'Deutsch',
      ja: 'Japanisch',
      ko: 'Koreanisch',
      ru: 'Russisch',
      ar: 'Arabisch',
      pt: 'Portugiesisch',
      it: 'Italienisch',
      nl: 'Niederländisch',
      pl: 'Polnisch',
      sv: 'Schwedisch',
      tr: 'Türkisch',
      he: 'Hebräisch',
      hi: 'Hindi',
      da: 'Dänisch',
      fi: 'Finnisch',
      no: 'Norwegisch',
      hu: 'Ungarisch',
      el: 'Griechisch',
      cs: 'Tschechisch',
      th: 'Thai',
      id: 'Indonesisch',
    },
    categoryMap: {
      book: {
        fiction: 'Fiktion',
        biography: 'Biografie',
        history: 'Geschichte',
        science: 'Wissenschaft',
        technology: 'Technologie',
        education: 'Bildung',
        philosophy: 'Philosophie',
        religion: 'Religion',
        socialSciences: 'Sozialwissenschaften',
        art: 'Kunst',
        travel: 'Reisen',
        health: 'Gesundheit',
        selfHelp: 'Selbsthilfe',
        businessEconomics: 'Wirtschaft',
        cooking: 'Kochen',
        childrenYoungAdults: 'Kinder & Jugendliche',
        comicsGraphicNovels: 'Comics & Grafische Romane',
        poetry: 'Poesie',
        drama: 'Drama',
        other: 'Andere',
      },
      personalDoc: {
        notes: 'Notizen',
        blogDraft: 'Blog-Entwurf',
        diary: 'Tagebuch',
        researchReport: 'Forschungsbericht',
        bookExcerpt: 'Buchauszug',
        schedule: 'Zeitplan',
        list: 'Liste',
        projectOverview: 'Projektübersicht',
        photoCollection: 'Fotosammlung',
        creativeWriting: 'Kreatives Schreiben',
        codeSnippet: 'Code-Snippet',
        designDraft: 'Design-Entwurf',
        personalResume: 'Persönlicher Lebenslauf',
        other: 'Andere',
      },
      businessDoc: {
        meetingMinutes: 'Sitzungsprotokolle',
        researchReport: 'Forschungsbericht',
        proposal: 'Vorschlag',
        employeeHandbook: 'Mitarbeiterhandbuch',
        trainingMaterials: 'Schulungsmaterialien',
        requirementsDocument: 'Anforderungsdokumentation',
        designDocument: 'Design-Dokument',
        productSpecification: 'Produktspezifikation',
        financialReport: 'Finanzbericht',
        marketAnalysis: 'Marktanalyse',
        projectPlan: 'Projektplan',
        teamStructure: 'Teamstruktur',
        policiesProcedures: 'Richtlinien & Verfahren',
        contractsAgreements: 'Verträge & Vereinbarungen',
        emailCorrespondence: 'E-Mail-Korrespondenz',
        other: 'Andere',
      },
    },
  },
  embedding: {
    processing: 'Einbettungsverarbeitung...',
    paused: 'Einbettung pausiert',
    completed: 'Einbettung abgeschlossen',
    error: 'Einbettungsfehler',
    docName: 'Dokument vorbereiten',
    mode: 'Segmentierungsregel',
    segmentLength: 'Chunk-Länge',
    textCleaning: 'Textvorverarbeitung und -bereinigung',
    segments: 'Absätze',
    highQuality: 'Hochwertiger Modus',
    economy: 'Ökonomischer Modus',
    estimate: 'Geschätzter Verbrauch',
    stop: 'Verarbeitung stoppen',
    resume: 'Verarbeitung fortsetzen',
    automatic: 'Automatisch',
    custom: 'Benutzerdefiniert',
    previewTip: 'Die Absatzvorschau ist nach Abschluss der Einbettung verfügbar',
  },
  segment: {
    paragraphs: 'Absätze',
    keywords: 'Schlüsselwörter',
    addKeyWord: 'Schlüsselwort hinzufügen',
    keywordError: 'Die maximale Länge des Schlüsselworts beträgt 20 Zeichen',
    characters: 'Zeichen',
    hitCount: 'Abrufanzahl',
    vectorHash: 'Vektor-Hash: ',
    questionPlaceholder: 'Frage hier hinzufügen',
    questionEmpty: 'Frage darf nicht leer sein',
    answerPlaceholder: 'Antwort hier hinzufügen',
    answerEmpty: 'Antwort darf nicht leer sein',
    contentPlaceholder: 'Inhalt hier hinzufügen',
    contentEmpty: 'Inhalt darf nicht leer sein',
    newTextSegment: 'Neues Textsegment',
    newQaSegment: 'Neues Q&A-Segment',
    delete: 'Diesen Chunk löschen?',
  },
}

export default translation
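The `list.batchModal` strings above define the batch-import format: a CSV whose header row carries the two columns named by the `question` and `answer` keys ('Frage', 'Antwort'). The downloadable in-app template is authoritative; the sketch below only illustrates the shape, with a deliberately naive parser that does not handle embedded newlines:

// Illustrative only: the shape implied by 'list.batchModal', not Dify's importer.
const exampleCsv = [
  'Frage,Antwort',
  'Was ist Dify?,Eine Plattform zur Entwicklung von LLM-Anwendungen.',
  'Welche Datenquellen werden unterstützt?,"Dateien, Notion und Webseiten."',
].join('\n')

function parseQaCsv(csv: string): Array<{ question: string; answer: string }> {
  return csv
    .split('\n')
    .slice(1) // skip the header row
    .map((line) => {
      const [question, ...rest] = line.split(',')
      // Re-join and strip surrounding quotes so commas inside quoted answers survive.
      return { question, answer: rest.join(',').replace(/^"|"$/g, '') }
    })
}

parseQaCsv(exampleCsv) // -> two { question, answer } records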
28  web/i18n/de-DE/dataset-hit-testing.ts  (Normal file)
@@ -0,0 +1,28 @@
const translation = {
  title: 'Abruf-Test',
  desc: 'Testen Sie die Treffereffektivität des Wissens anhand des gegebenen Abfragetextes.',
  dateTimeFormat: 'MM/DD/YYYY hh:mm A',
  recents: 'Kürzlich',
  table: {
    header: {
      source: 'Quelle',
      text: 'Text',
      time: 'Zeit',
    },
  },
  input: {
    title: 'Quelltext',
    placeholder: 'Bitte geben Sie einen Text ein; ein kurzer, aussagekräftiger Satz wird empfohlen.',
    countWarning: 'Bis zu 200 Zeichen.',
    indexWarning: 'Nur für Wissen hoher Qualität.',
    testing: 'Testen',
  },
  hit: {
    title: 'ABRUFPARAGRAFEN',
    emptyTip: 'Die Ergebnisse des Abruf-Tests werden hier angezeigt',
  },
  noRecentTip: 'Keine aktuellen Abfrageergebnisse',
  viewChart: 'VEKTORDIAGRAMM ansehen',
}

export default translation
33  web/i18n/de-DE/dataset-settings.ts  (Normal file)
@@ -0,0 +1,33 @@
const translation = {
  title: 'Wissenseinstellungen',
  desc: 'Hier können Sie die Eigenschaften und Arbeitsweisen des Wissens anpassen.',
  form: {
    name: 'Wissensname',
    namePlaceholder: 'Bitte geben Sie den Namen des Wissens ein',
    nameError: 'Name darf nicht leer sein',
    desc: 'Wissensbeschreibung',
    descInfo: 'Bitte schreiben Sie eine klare textuelle Beschreibung, um den Inhalt des Wissens zu umreißen. Diese Beschreibung dient als Grundlage für die Auswahl aus mehreren Wissensdatenbanken bei der Inferenz.',
    descPlaceholder: 'Beschreiben Sie, was in diesem Wissen enthalten ist. Eine detaillierte Beschreibung ermöglicht es der KI, zeitnah auf den Inhalt des Wissens zuzugreifen. Wenn leer, verwendet Dify die Standard-Trefferstrategie.',
    descWrite: 'Erfahren Sie, wie man eine gute Wissensbeschreibung schreibt.',
    permissions: 'Berechtigungen',
    permissionsOnlyMe: 'Nur ich',
    permissionsAllMember: 'Alle Teammitglieder',
    indexMethod: 'Indexierungsmethode',
    indexMethodHighQuality: 'Hohe Qualität',
    indexMethodHighQualityTip: 'Ruft die Einbettungsschnittstelle von OpenAI für die Verarbeitung auf, um bei Benutzerabfragen eine höhere Genauigkeit zu bieten.',
    indexMethodEconomy: 'Ökonomisch',
    indexMethodEconomyTip: 'Verwendet Offline-Vektor-Engines, Schlagwortindizes usw.; geringere Genauigkeit, dafür kein Tokenverbrauch',
    embeddingModel: 'Einbettungsmodell',
    embeddingModelTip: 'Um das Einbettungsmodell zu ändern, gehen Sie bitte zu den ',
    embeddingModelTipLink: 'Einstellungen',
    retrievalSetting: {
      title: 'Abrufeinstellung',
      learnMore: 'Mehr erfahren',
      description: ' über die Abrufmethode.',
      longDescription: ' über die Abrufmethode; dies kann jederzeit in den Wissenseinstellungen geändert werden.',
    },
    save: 'Speichern',
  },
}

export default translation
47  web/i18n/de-DE/dataset.ts  (Normal file)
@@ -0,0 +1,47 @@
const translation = {
  knowledge: 'Wissen',
  documentCount: ' Dokumente',
  wordCount: 'k Wörter',
  appCount: ' verknüpfte Apps',
  createDataset: 'Wissen erstellen',
  createDatasetIntro: 'Importiere deine eigenen Textdaten oder schreibe Daten in Echtzeit über einen Webhook, um den LLM-Kontext zu verbessern.',
  deleteDatasetConfirmTitle: 'Dieses Wissen löschen?',
  deleteDatasetConfirmContent:
    'Das Löschen des Wissens ist unwiderruflich. Benutzer können nicht mehr auf Ihr Wissen zugreifen, und alle Eingabeaufforderungen, Konfigurationen und Protokolle werden dauerhaft gelöscht.',
  datasetDeleted: 'Wissen gelöscht',
  datasetDeleteFailed: 'Löschen des Wissens fehlgeschlagen',
  didYouKnow: 'Wusstest du schon?',
  intro1: 'Das Wissen kann in die Dify-Anwendung ',
  intro2: 'als Kontext',
  intro3: ',',
  intro4: 'oder es ',
  intro5: 'kann erstellt werden',
  intro6: ' als ein eigenständiges ChatGPT-Index-Plugin zum Veröffentlichen',
  unavailable: 'Nicht verfügbar',
  unavailableTip: 'Das Einbettungsmodell ist nicht verfügbar; das Standard-Einbettungsmodell muss konfiguriert werden',
  datasets: 'WISSEN',
  datasetsApi: 'API',
  retrieval: {
    semantic_search: {
      title: 'Vektorsuche',
      description: 'Erzeuge Abfrage-Einbettungen und suche nach dem Textstück, dessen Vektorrepräsentation der Abfrage am ähnlichsten ist.',
    },
    full_text_search: {
      title: 'Volltextsuche',
      description: 'Indiziere alle Begriffe im Dokument, sodass Benutzer nach beliebigen Begriffen suchen und den relevanten Textabschnitt finden können, der diese Begriffe enthält.',
    },
    hybrid_search: {
      title: 'Hybridsuche',
      description: 'Führe Volltextsuche und Vektorsuche gleichzeitig aus und ordne die Ergebnisse neu, um die beste Übereinstimmung für die Abfrage des Benutzers auszuwählen. Die Konfiguration der Rerank-Modell-API ist erforderlich.',
      recommend: 'Empfohlen',
    },
    invertedIndex: {
      title: 'Invertierter Index',
      description: 'Ein invertierter Index ist eine Struktur für effiziente Abfragen. Er ist nach Begriffen organisiert; jeder Begriff verweist auf Dokumente oder Webseiten, die ihn enthalten.',
    },
    change: 'Ändern',
    changeRetrievalMethod: 'Abrufmethode ändern',
  },
}

export default translation
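The `retrieval` block above describes three strategies: vector search, full-text search, and a hybrid that runs both and re-ranks the combined results. Per `hybrid_search.description`, Dify's re-ranking relies on a rerank model API; the sketch below instead uses reciprocal rank fusion (RRF), a common model-free merge, purely to illustrate why two retrieval passes need a combining step:

// Illustrative RRF merge of two ranked result lists; not Dify's rerank model.
function reciprocalRankFusion(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>()
  for (const ranking of rankings) {
    ranking.forEach((docId, rank) => {
      scores.set(docId, (scores.get(docId) ?? 0) + 1 / (k + rank + 1))
    })
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([docId]) => docId)
}

const vectorHits = ['doc3', 'doc1', 'doc7'] // ranking from semantic_search
const fullTextHits = ['doc1', 'doc5', 'doc3'] // ranking from full_text_search
reciprocalRankFusion([vectorHits, fullTextHits])
// -> ['doc1', 'doc3', 'doc5', 'doc7']: documents found by both passes rise to the top.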
Some files were not shown because too many files have changed in this diff.