Compare commits

..

3 Commits

417 changed files with 2368 additions and 11153 deletions

View File

@ -3,8 +3,8 @@
cd web && npm install
pipx install poetry
echo 'alias start-api="cd /workspaces/dify/api && poetry run python -m flask run --host 0.0.0.0 --port=5001 --debug"' >> ~/.bashrc
echo 'alias start-worker="cd /workspaces/dify/api && poetry run python -m celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail,ops_trace,app_deletion"' >> ~/.bashrc
echo 'alias start-api="cd /workspaces/dify/api && flask run --host 0.0.0.0 --port=5001 --debug"' >> ~/.bashrc
echo 'alias start-worker="cd /workspaces/dify/api && celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail,ops_trace,app_deletion"' >> ~/.bashrc
echo 'alias start-web="cd /workspaces/dify/web && npm run dev"' >> ~/.bashrc
echo 'alias start-containers="cd /workspaces/dify/docker && docker-compose -f docker-compose.middleware.yaml -p dify up -d"' >> ~/.bashrc

View File

@ -9,7 +9,7 @@ body:
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
- label: "请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true

View File

@ -9,7 +9,7 @@ body:
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
- label: "请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true

View File

@ -9,7 +9,7 @@ body:
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
- label: "请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true

View File

@ -14,7 +14,7 @@ body:
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
- label: "请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
@ -22,6 +22,7 @@ body:
- type: input
attributes:
label: Dify version
placeholder: 0.6.11
description: See about section in Dify console
validations:
required: true

View File

@ -12,7 +12,7 @@ body:
required: true
- label: I confirm that I am using English to submit report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
- label: "请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true

View File

@ -12,7 +12,7 @@ body:
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
- label: "请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true

View File

@ -12,13 +12,14 @@ body:
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:"
- label: "请务必使用英文提交 Issue否则会被关闭。谢谢:"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: input
attributes:
label: Dify version
placeholder: 0.3.21
description: Hover over system tray icon or look at Settings
validations:
required: true

View File

@ -21,7 +21,6 @@ jobs:
python-version:
- "3.10"
- "3.11"
- "3.12"
steps:
- name: Checkout code

.gitignore vendored
View File

@ -155,7 +155,6 @@ docker-legacy/volumes/milvus/*
docker-legacy/volumes/chroma/*
docker/volumes/app/storage/*
docker/volumes/certbot/*
docker/volumes/db/data/*
docker/volumes/redis/data/*
docker/volumes/weaviate/*

View File

@ -1,7 +1,7 @@
Dify にコントリビュートしたいとお考えなのですね。それは素晴らしいことです。
私たちは、LLM アプリケーションの構築と管理のための最も直感的なワークフローを設計するという壮大な野望を持っています。人数も資金も限られている新興企業として、コミュニティからの支援は本当に重要です。
私たちは現状を鑑み、機敏かつ迅速に開発をする必要がありますが、同時にあなたのようなコントリビューターの方々に、可能な限りスムーズな貢献体験をしていただきたいと思っています。そのためにこのコントリビュートガイドを作成しました。
私たちは現状を鑑み、機敏かつ迅速に開発をする必要がありますが、同時にあなたのようなコントリビューターの方々に、可能な限りスムーズな貢献体験をしていただきたいと思っています。そのためにこのコントリビュートガイドを作成しました。
コードベースやコントリビュータの方々と私たちがどのように仕事をしているのかに慣れていただき、楽しいパートにすぐに飛び込めるようにすることが目的です。
このガイドは Dify そのものと同様に、継続的に改善されています。実際のプロジェクトに遅れをとることがあるかもしれませんが、ご理解のほどよろしくお願いいたします。
@ -14,13 +14,13 @@ Dify にコントリビュートしたいとお考えなのですね。それは
### 機能リクエスト
* 新しい機能要望を出す場合は、提案する機能が何を実現するものなのかを説明し、可能な限り多くのコンテキストを含めてください。[@perzeusss](https://github.com/perzeuss)は、あなたの要望を書き出すのに役立つ [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) を作ってくれました。気軽に試してみてください。
* 新しい機能要望を出す場合は、提案する機能が何を実現するものなのかを説明し、可能な限り多くのコンテキストを含めてください。[@perzeusss](https://github.com/perzeuss)は、あなたの要望を書き出すのに役立つ [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) を作ってくれました。気軽に試してみてください。
* 既存の課題から 1 つ選びたい場合は、その下にコメントを書いてください。
関連する方向で作業しているチームメンバーが参加します。すべてが良好であれば、コーディングを開始する許可が与えられます。私たちが変更を提案した場合にあなたの作業が無駄になることがないよう、それまでこの機能の作業を控えていただくようお願いいたします。
関連する方向で作業しているチームメンバーが参加します。すべてが良好であれば、コーディングを開始する許可が与えられます。私たちが変更を提案した場合にあなたの作業が無駄になることがないよう、それまでこの機能の作業を控えていただくようお願いいたします。
提案された機能がどの分野に属するかによって、あなたは異なるチーム・メンバーと話をするかもしれません。以下は、各チームメンバーが現在取り組んでいる分野の概要です。
提案された機能がどの分野に属するかによって、あなたは異なるチーム・メンバーと話をするかもしれません。以下は、各チームメンバーが現在取り組んでいる分野の概要です。
| Member | Scope |
| --------------------------------------------------------------------------------------- | ------------------------------------ |
@ -153,7 +153,7 @@ Dify のバックエンドは[Flask](https://flask.palletsprojects.com/en/3.0.x/
いよいよ、私たちのリポジトリにプルリクエスト (PR) を提出する時が来ました。主要な機能については、まず `deploy/dev` ブランチにマージしてテストしてから `main` ブランチにマージします。
マージ競合などの問題が発生した場合、またはプル リクエストを開く方法がわからない場合は、[GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests) をチェックしてみてください。
これで完了です!あなたの PR がマージされると、[README](https://github.com/langgenius/dify/blob/main/README.md) にコントリビューターとして紹介されます。
これで完了です!あなたの PR がマージされると、[README](https://github.com/langgenius/dify/blob/main/README.md) にコントリビューターとして紹介されます。
## ヘルプを得る

View File

@ -37,7 +37,6 @@
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
@ -65,7 +64,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DELL·E, Stable Diffusion and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You could continuously improve prompts, datasets, and models based on production data and annotations.

View File

@ -37,7 +37,6 @@
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
<div style="text-align: right;">
@ -57,7 +56,7 @@
**4. خط أنابيب RAG**: قدرات RAG الواسعة التي تغطي كل شيء من استيعاب الوثائق إلى الاسترجاع، مع الدعم الفوري لاستخراج النص من ملفات PDF و PPT وتنسيقات الوثائق الشائعة الأخرى.
**5. قدرات الوكيل**: يمكنك تعريف الوكلاء بناءً على أمر وظيفة LLM أو ReAct، وإضافة أدوات مدمجة أو مخصصة للوكيل. توفر Dify أكثر من 50 أداة مدمجة لوكلاء الذكاء الاصطناعي، مثل البحث في Google و DALL·E وStable Diffusion و WolframAlpha.
**5. قدرات الوكيل**: يمكنك تعريف الوكلاء بناءً على أمر وظيفة LLM أو ReAct، وإضافة أدوات مدمجة أو مخصصة للوكيل. توفر Dify أكثر من 50 أداة مدمجة لوكلاء الذكاء الاصطناعي، مثل البحث في Google و DELL·E وStable Diffusion و WolframAlpha.
**6. الـ LLMOps**: راقب وتحلل سجلات التطبيق والأداء على مر الزمن. يمكنك تحسين الأوامر والبيانات والنماذج باستمرار استنادًا إلى البيانات الإنتاجية والتعليقات.

View File

@ -36,7 +36,6 @@
<a href="./README_KL.md"><img alt="上个月的提交次数" src="https://img.shields.io/badge/法语-d9d9d9"></a>
<a href="./README_FR.md"><img alt="上个月的提交次数" src="https://img.shields.io/badge/克林贡语-d9d9d9"></a>
<a href="./README_KR.md"><img alt="上个月的提交次数" src="https://img.shields.io/badge/韓國語-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</div>
@ -70,7 +69,7 @@ Dify 是一个开源的 LLM 应用开发平台。其直观的界面结合了 AI
广泛的 RAG 功能,涵盖从文档摄入到检索的所有内容,支持从 PDF、PPT 和其他常见文档格式中提取文本的开箱即用的支持。
**5. Agent 智能体**:
您可以基于 LLM 函数调用或 ReAct 定义 Agent并为 Agent 添加预构建或自定义工具。Dify 为 AI Agent 提供了50多种内置工具如谷歌搜索、DALL·E、Stable Diffusion 和 WolframAlpha 等。
您可以基于 LLM 函数调用或 ReAct 定义 Agent并为 Agent 添加预构建或自定义工具。Dify 为 AI Agent 提供了50多种内置工具如谷歌搜索、DELL·E、Stable Diffusion 和 WolframAlpha 等。
**6. LLMOps**:
随时间监视和分析应用程序日志和性能。您可以根据生产数据和标注持续改进提示、数据集和模型。

View File

@ -36,7 +36,6 @@
<a href="./README_KL.md"><img alt="Actividad de Commits el último mes" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="Actividad de Commits el último mes" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="Actividad de Commits el último mes" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
#
@ -70,7 +69,7 @@ Dify es una plataforma de desarrollo de aplicaciones de LLM de código abierto.
**5. Capacidades de agente**:
Puedes definir agentes basados en LLM Function Calling o ReAct, y agregar herramientas preconstruidas o personalizadas para el agente. Dify proporciona más de 50 herramientas integradas para agentes de IA, como Búsqueda de Google, DALL·E, Difusión Estable y WolframAlpha.
Puedes definir agentes basados en LLM Function Calling o ReAct, y agregar herramientas preconstruidas o personalizadas para el agente. Dify proporciona más de 50 herramientas integradas para agentes de IA, como Búsqueda de Google, DELL·E, Difusión Estable y WolframAlpha.
**6. LLMOps**:
Supervisa y analiza registros de aplicaciones y rendimiento a lo largo del tiempo. Podrías mejorar continuamente prompts, conjuntos de datos y modelos basados en datos de producción y anotaciones.
@ -256,4 +255,4 @@ Para proteger tu privacidad, evita publicar problemas de seguridad en GitHub. En
## Licencia
Este repositorio está disponible bajo la [Licencia de Código Abierto de Dify](LICENSE), que es esencialmente Apache 2.0 con algunas restricciones adicionales.
Este repositorio está disponible bajo la [Licencia de Código Abierto de Dify](LICENSE), que es esencialmente Apache 2.0 con algunas restricciones adicionales.

View File

@ -36,7 +36,6 @@
<a href="./README_KL.md"><img alt="Commits le mois dernier" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="Commits le mois dernier" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="Commits le mois dernier" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
#
@ -70,7 +69,7 @@ Dify est une plateforme de développement d'applications LLM open source. Son in
**5. Capacités d'agent**:
Vous pouvez définir des agents basés sur l'appel de fonction LLM ou ReAct, et ajouter des outils pré-construits ou personnalisés pour l'agent. Dify fournit plus de 50 outils intégrés pour les agents d'IA, tels que la recherche Google, DALL·E, Stable Diffusion et WolframAlpha.
Vous pouvez définir des agents basés sur l'appel de fonction LLM ou ReAct, et ajouter des outils pré-construits ou personnalisés pour l'agent. Dify fournit plus de 50 outils intégrés pour les agents d'IA, tels que la recherche Google, DELL·E, Stable Diffusion et WolframAlpha.
**6. LLMOps**:
Surveillez et analysez les journaux d'application et les performances au fil du temps. Vous pouvez continuellement améliorer les prompts, les ensembles de données et les modèles en fonction des données de production et des annotations.

View File

@ -36,7 +36,6 @@
<a href="./README_KL.md"><img alt="先月のコミット" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="先月のコミット" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="先月のコミット" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
#
@ -69,7 +68,7 @@ DifyはオープンソースのLLMアプリケーション開発プラットフ
ドキュメントの取り込みから検索までをカバーする広範なRAG機能ができます。ほかにもPDF、PPT、その他の一般的なドキュメントフォーマットからのテキスト抽出のサーポイントも提供します。
**5. エージェント機能**:
LLM Function CallingやReActに基づくエージェントの定義が可能で、AIエージェント用のプリビルトまたはカスタムツールを追加できます。Difyには、Google検索、DALL·E、Stable Diffusion、WolframAlphaなどのAIエージェント用の50以上の組み込みツールが提供します。
LLM Function CallingやReActに基づくエージェントの定義が可能で、AIエージェント用のプリビルトまたはカスタムツールを追加できます。Difyには、Google検索、DELL·E、Stable Diffusion、WolframAlphaなどのAIエージェント用の50以上の組み込みツールが提供します。
**6. LLMOps**:
アプリケーションのログやパフォーマンスを監視と分析し、生産のデータと注釈に基づいて、プロンプト、データセット、モデルを継続的に改善できます。

View File

@ -36,7 +36,6 @@
<a href="./README_KL.md"><img alt="Commits last month" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_FR.md"><img alt="Commits last month" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="Commits last month" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
#
@ -68,7 +67,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DELL·E, Stable Diffusion and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You could continuously improve prompts, datasets, and models based on production data and annotations.
@ -256,4 +255,4 @@ To protect your privacy, please avoid posting security issues on GitHub. Instead
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.

View File

@ -36,7 +36,6 @@
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="한국어 README" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
@ -64,7 +63,7 @@
문서 수집부터 검색까지 모든 것을 다루며, PDF, PPT 및 기타 일반적인 문서 형식에서 텍스트 추출을 위한 기본 지원이 포함되어 있는 광범위한 RAG 기능을 제공합니다.
**5. 에이전트 기능**:
LLM 함수 호출 또는 ReAct를 기반으로 에이전트를 정의하고 에이전트에 대해 사전 구축된 도구나 사용자 정의 도구를 추가할 수 있습니다. Dify는 Google Search, DALL·E, Stable Diffusion, WolframAlpha 등 AI 에이전트를 위한 50개 이상의 내장 도구를 제공합니다.
LLM 함수 호출 또는 ReAct를 기반으로 에이전트를 정의하고 에이전트에 대해 사전 구축된 도구나 사용자 정의 도구를 추가할 수 있습니다. Dify는 Google Search, DELL·E, Stable Diffusion, WolframAlpha 등 AI 에이전트를 위한 50개 이상의 내장 도구를 제공합니다.
**6. LLMOps**:
시간 경과에 따른 애플리케이션 로그와 성능을 모니터링하고 분석합니다. 생산 데이터와 주석을 기반으로 프롬프트, 데이터세트, 모델을 지속적으로 개선할 수 있습니다.

View File

@ -1,253 +0,0 @@
![cover-v5-optimized](https://github.com/langgenius/dify/assets/13230914/f9e19af5-61ba-4119-b926-d10c4c06ebab)
<p align="center">
<a href="https://cloud.dify.ai">Dify Bulut</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">Kendi Sunucunuzda Barındırma</a> ·
<a href="https://docs.dify.ai">Dokümantasyon</a> ·
<a href="https://cal.com/guchenhe/60-min-meeting">Kurumsal Sorgu</a>
</p>
<p align="center">
<a href="https://dify.ai" target="_blank">
<img alt="Statik Rozet" src="https://img.shields.io/badge/Ürün-F04438"></a>
<a href="https://dify.ai/pricing" target="_blank">
<img alt="Statik Rozet" src="https://img.shields.io/badge/ücretsiz-fiyatlandırma?logo=free&color=%20%23155EEF&label=fiyatlandirma&labelColor=%20%23528bff"></a>
<a href="https://discord.gg/FngNHpbcY7" target="_blank">
<img src="https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb"
alt="Discord'da sohbet et"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="Twitter'da takip et"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Çekmeleri" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
<img alt="Geçen ay yapılan commitler" src="https://img.shields.io/github/commit-activity/m/langgenius/dify?labelColor=%20%2332b583&color=%20%2312b76a"></a>
<a href="https://github.com/langgenius/dify/" target="_blank">
<img alt="Kapatılan sorunlar" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=kapatilan%20sorunlar&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
<a href="https://github.com/langgenius/dify/discussions/" target="_blank">
<img alt="Tartışma gönderileri" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
</p>
<p align="center">
<a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-d9d9d9"></a>
<a href="./README_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
<a href="./README_JA.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
<a href="./README_ES.md"><img alt="README en Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
</p>
Dify, açık kaynaklı bir LLM uygulama geliştirme platformudur. Sezgisel arayüzü, AI iş akışı, RAG pipeline'ı, ajan yetenekleri, model yönetimi, gözlemlenebilirlik özellikleri ve daha fazlasını birleştirerek, prototipten üretime hızlıca geçmenizi sağlar. İşte temel özelliklerin bir listesi:
</br> </br>
**1. Workflow**:
Görsel bir arayüz üzerinde güçlü AI iş akışları oluşturun ve test edin, aşağıdaki tüm özellikleri ve daha fazlasını kullanarak.
https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
**2. Kapsamlı model desteği**:
Çok sayıda çıkarım sağlayıcısı ve kendi kendine barındırılan çözümlerden yüzlerce özel / açık kaynaklı LLM ile sorunsuz entegrasyon sağlar. GPT, Mistral, Llama3 ve OpenAI API uyumlu tüm modelleri kapsar. Desteklenen model sağlayıcılarının tam listesine [buradan](https://docs.dify.ai/getting-started/readme/model-providers) ulaşabilirsiniz.
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. Prompt IDE**:
Komut istemlerini oluşturmak, model performansını karşılaştırmak ve sohbet tabanlı uygulamalara metin-konuşma gibi ek özellikler eklemek için kullanıcı dostu bir arayüz.
**4. RAG Pipeline**:
Belge alımından bilgi çekmeye kadar geniş kapsamlı RAG yetenekleri. PDF'ler, PPT'ler ve diğer yaygın belge formatlarından metin çıkarma için hazır destek sunar.
**5. Ajan yetenekleri**:
LLM Fonksiyon Çağırma veya ReAct'a dayalı ajanlar tanımlayabilir ve bu ajanlara önceden hazırlanmış veya özel araçlar ekleyebilirsiniz. Dify, AI ajanları için Google Arama, DALL·E, Stable Diffusion ve WolframAlpha gibi 50'den fazla yerleşik araç sağlar.
**6. LLMOps**:
Uygulama loglarını ve performans metriklerini zaman içinde izleme ve analiz etme imkanı. Üretim ortamından elde edilen verilere ve kullanıcı geri bildirimlerine dayanarak, prompt'ları, veri setlerini ve modelleri sürekli olarak optimize edebilirsiniz. Bu sayede, AI uygulamanızın performansını ve doğruluğunu sürekli olarak artırabilirsiniz.
**7. Hizmet Olarak Backend**:
Dify'ın tüm özellikleri ilgili API'lerle birlikte gelir, böylece Dify'ı kendi iş mantığınıza kolayca entegre edebilirsiniz.
## Özellik karşılaştırması
<table style="width: 100%;">
<tr>
<th align="center">Özellik</th>
<th align="center">Dify.AI</th>
<th align="center">LangChain</th>
<th align="center">Flowise</th>
<th align="center">OpenAI Assistants API</th>
</tr>
<tr>
<td align="center">Programlama Yaklaşımı</td>
<td align="center">API + Uygulama odaklı</td>
<td align="center">Python Kodu</td>
<td align="center">Uygulama odaklı</td>
<td align="center">API odaklı</td>
</tr>
<tr>
<td align="center">Desteklenen LLM'ler</td>
<td align="center">Zengin Çeşitlilik</td>
<td align="center">Zengin Çeşitlilik</td>
<td align="center">Zengin Çeşitlilik</td>
<td align="center">Yalnızca OpenAI</td>
</tr>
<tr>
<td align="center">RAG Motoru</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Ajan</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">İş Akışı</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Gözlemlenebilirlik</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Kurumsal Özellikler (SSO/Erişim kontrolü)</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Yerel Dağıtım</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
</table>
## Dify'ı Kullanma
- **Cloud </br>**
Herkesin sıfır kurulumla denemesi için bir [Dify Cloud](https://dify.ai) hizmeti sunuyoruz. Bu hizmet, kendi kendine dağıtılan versiyonun tüm yeteneklerini sağlar ve sandbox planında 200 ücretsiz GPT-4 çağrısı içerir.
- **Dify Topluluk Sürümünü Kendi Sunucunuzda Barındırma</br>**
Bu [başlangıç kılavuzu](#quick-start) ile Dify'ı kendi ortamınızda hızlıca çalıştırın.
Daha fazla referans ve detaylı talimatlar için [dokümantasyonumuzu](https://docs.dify.ai) kullanın.
- **Kurumlar / organizasyonlar için Dify</br>**
Ek kurumsal odaklı özellikler sunuyoruz. Kurumsal ihtiyaçları görüşmek için [bizimle bir toplantı planlayın](https://cal.com/guchenhe/30min) veya [bize bir e-posta gönderin](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry). </br>
> AWS kullanan startuplar ve küçük işletmeler için, [AWS Marketplace'deki Dify Premium'a](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) göz atın ve tek tıklamayla kendi AWS VPC'nize dağıtın. Bu, özel logo ve marka ile uygulamalar oluşturma seçeneğine sahip uygun fiyatlı bir AMI teklifdir.
## Güncel Kalma
GitHub'da Dify'a yıldız verin ve yeni sürümlerden anında haberdar olun.
![bizi-yıldızlayın](https://github.com/langgenius/dify/assets/13230914/b823edc1-6388-4e25-ad45-2f6b187adbb4)
## Hızlı başlangıç
> Dify'ı kurmadan önce, makinenizin aşağıdaki minimum sistem gereksinimlerini karşıladığından emin olun:
>
>- CPU >= 2 Çekirdek
>- RAM >= 4GB
</br>
Dify sunucusunu başlatmanın en kolay yolu, [docker-compose.yml](docker/docker-compose.yaml) dosyamızı çalıştırmaktır. Kurulum komutunu çalıştırmadan önce, makinenizde [Docker](https://docs.docker.com/get-docker/) ve [Docker Compose](https://docs.docker.com/compose/install/)'un kurulu olduğundan emin olun:
```bash
cd docker
cp .env.example .env
docker compose up -d
```
Çalıştırdıktan sonra, tarayıcınızda [http://localhost/install](http://localhost/install) adresinden Dify kontrol paneline erişebilir ve başlangıç ayarları sürecini başlatabilirsiniz.
> Eğer Dify'a katkıda bulunmak veya ek geliştirmeler yapmak isterseniz, [kaynak koddan dağıtım kılavuzumuza](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code) başvurun.
## Sonraki adımlar
Yapılandırmayı özelleştirmeniz gerekiyorsa, lütfen [.env.example](docker/.env.example) dosyamızdaki yorumlara bakın ve `.env` dosyanızdaki ilgili değerleri güncelleyin. Ayrıca, spesifik dağıtım ortamınıza ve gereksinimlerinize bağlı olarak `docker-compose.yaml` dosyasının kendisinde de, imaj sürümlerini, port eşlemelerini veya hacim bağlantılarını değiştirmek gibi ayarlamalar yapmanız gerekebilir. Herhangi bir değişiklik yaptıktan sonra, lütfen `docker-compose up -d` komutunu tekrar çalıştırın. Kullanılabilir tüm ortam değişkenlerinin tam listesini [burada](https://docs.dify.ai/getting-started/install-self-hosted/environments) bulabilirsiniz.
Yüksek kullanılabilirliğe sahip bir kurulum yapılandırmak isterseniz, Dify'ın Kubernetes üzerine dağıtılmasına olanak tanıyan topluluk katkılı [Helm Charts](https://helm.sh/) ve YAML dosyaları mevcuttur.
- [@LeoQuote tarafından Helm Chart](https://github.com/douban/charts/tree/master/charts/dify)
- [@BorisPolonsky tarafından Helm Chart](https://github.com/BorisPolonsky/dify-helm)
- [@Winson-030 tarafından YAML dosyası](https://github.com/Winson-030/dify-kubernetes)
#### Dağıtım için Terraform Kullanımı
##### Azure Global
[Terraform](https://www.terraform.io/) kullanarak Dify'ı Azure'a tek tıklamayla dağıtın.
- [@nikawang tarafından Azure Terraform](https://github.com/nikawang/dify-azure-terraform)
## Katkıda Bulunma
Kod katkısında bulunmak isteyenler için [Katkı Kılavuzumuza](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) bakabilirsiniz.
Aynı zamanda, lütfen Dify'ı sosyal medyada, etkinliklerde ve konferanslarda paylaşarak desteklemeyi düşünün.
> Dify'ı Mandarin veya İngilizce dışındaki dillere çevirmemize yardımcı olacak katkıda bulunanlara ihtiyacımız var. Yardımcı olmakla ilgileniyorsanız, lütfen daha fazla bilgi için [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) dosyasına bakın ve [Discord Topluluk Sunucumuzdaki](https://discord.gg/8Tpq4AcN9c) `global-users` kanalında bize bir yorum bırakın.
**Katkıda Bulunanlar**
<a href="https://github.com/langgenius/dify/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langgenius/dify" />
</a>
## Topluluk & iletişim
* [Github Tartışmaları](https://github.com/langgenius/dify/discussions). En uygun: geri bildirim paylaşmak ve soru sormak için.
* [GitHub Sorunları](https://github.com/langgenius/dify/issues). En uygun: Dify.AI kullanırken karşılaştığınız hatalar ve özellik önerileri için. [Katkı Kılavuzumuza](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) bakın.
* [Discord](https://discord.gg/FngNHpbcY7). En uygun: uygulamalarınızı paylaşmak ve toplulukla vakit geçirmek için.
* [Twitter](https://twitter.com/dify_ai). En uygun: uygulamalarınızı paylaşmak ve toplulukla vakit geçirmek için.
Veya doğrudan bir ekip üyesiyle toplantı planlayın:
<table>
<tr>
<th>İletişim Noktası</th>
<th>Amaç</th>
</tr>
<tr>
<td><a href='https://cal.com/guchenhe/15min' target='_blank'><img class="schedule-button" src='https://github.com/langgenius/dify/assets/13230914/9ebcd111-1205-4d71-83d5-948d70b809f5' alt='Git-Hub-README-Button-3x' style="width: 180px; height: auto; object-fit: contain;"/></a></td>
<td>İş sorgulamaları & ürün geri bildirimleri</td>
</tr>
<tr>
<td><a href='https://cal.com/pinkbanana' target='_blank'><img class="schedule-button" src='https://github.com/langgenius/dify/assets/13230914/d1edd00a-d7e4-4513-be6c-e57038e143fd' alt='Git-Hub-README-Button-2x' style="width: 180px; height: auto; object-fit: contain;"/></a></td>
<td>Katkılar, sorunlar & özellik istekleri</td>
</tr>
</table>
## Star history
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## Güvenlik açıklaması
Gizliliğinizi korumak için, lütfen güvenlik sorunlarını GitHub'da paylaşmaktan kaçının. Bunun yerine, sorularınızı security@dify.ai adresine gönderin ve size daha detaylı bir cevap vereceğiz.
## Lisans
Bu depo, temel olarak Apache 2.0 lisansı ve birkaç ek kısıtlama içeren [Dify Açık Kaynak Lisansı](LICENSE) altında kullanıma sunulmuştur.

View File

@ -183,7 +183,6 @@ UPLOAD_IMAGE_FILE_SIZE_LIMIT=10
# Model Configuration
MULTIMODAL_SEND_IMAGE_FORMAT=base64
PROMPT_GENERATION_MAX_TOKENS=512
# Mail configuration, support: resend, smtp
MAIL_TYPE=
@ -217,7 +216,6 @@ UNSTRUCTURED_API_KEY=
SSRF_PROXY_HTTP_URL=
SSRF_PROXY_HTTPS_URL=
SSRF_DEFAULT_MAX_RETRIES=3
BATCH_UPLOAD_LIMIT=10
KEYWORD_DATA_SOURCE_TYPE=database

View File

@ -41,12 +41,8 @@ ENV TZ=UTC
WORKDIR /app/api
RUN apt-get update \
&& apt-get install -y --no-install-recommends curl nodejs libgmp-dev libmpfr-dev libmpc-dev \
&& echo "deb http://deb.debian.org/debian testing main" > /etc/apt/sources.list \
&& apt-get update \
# For Security
&& apt-get install -y --no-install-recommends zlib1g=1:1.3.dfsg+really1.3.1-1 expat=2.6.2-1 libldap-2.5-0=2.5.18+dfsg-2 perl=5.38.2-5 libsqlite3-0=3.46.0-1 \
&& apt-get autoremove -y \
&& apt-get install -y --no-install-recommends curl wget vim nodejs ffmpeg libgmp-dev libmpfr-dev libmpc-dev \
&& apt-get autoremove \
&& rm -rf /var/lib/apt/lists/*
# Copy Python environment and packages

View File

@ -12,8 +12,7 @@
```bash
cd ../docker
cp middleware.env.example middleware.env
# change the profile to other vector database if you are not using weaviate
docker compose -f docker-compose.middleware.yaml --profile weaviate -p dify up -d
docker compose -f docker-compose.middleware.yaml -p dify up -d
cd ../api
```

View File

@ -261,7 +261,6 @@ def after_request(response):
@app.route('/health')
def health():
return Response(json.dumps({
'pid': os.getpid(),
'status': 'ok',
'version': app.config['CURRENT_VERSION']
}), status=200, content_type="application/json")
@ -285,7 +284,6 @@ def threads():
})
return {
'pid': os.getpid(),
'thread_num': num_threads,
'threads': thread_list
}
@ -295,7 +293,6 @@ def threads():
def pool_stat():
engine = db.engine
return {
'pid': os.getpid(),
'pool_size': engine.pool.size(),
'checked_in_connections': engine.pool.checkedin(),
'checked_out_connections': engine.pool.checkedout(),
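The hunk above toggles whether the worker's process id is included in the monitoring payloads of `/health`, the thread dump, and the DB pool stats. A minimal sketch of probing the health endpoint; the path comes from the hunk, while the host and port are assumptions:

```python
import json
import urllib.request

# Probe the Flask health endpoint shown in the hunk; host/port are assumptions.
with urllib.request.urlopen("http://localhost:5001/health") as resp:
    payload = json.load(resp)

# Depending on which side of the diff is deployed, 'pid' may or may not be present.
print(payload.get("pid"), payload["status"], payload["version"])
```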

View File

@ -249,7 +249,8 @@ def migrate_knowledge_vector_database():
create_count = 0
skipped_count = 0
total_count = 0
vector_type = dify_config.VECTOR_STORE
config = current_app.config
vector_type = config.get('VECTOR_STORE')
page = 1
while True:
try:
@ -483,7 +484,8 @@ def convert_to_agent_apps():
@click.option('--field', default='metadata.doc_id', prompt=False, help='index field , default is metadata.doc_id.')
def add_qdrant_doc_id_index(field: str):
click.echo(click.style('Start add qdrant doc_id index.', fg='green'))
vector_type = dify_config.VECTOR_STORE
config = current_app.config
vector_type = config.get('VECTOR_STORE')
if vector_type != "qdrant":
click.echo(click.style('Sorry, only support qdrant vector store.', fg='red'))
return
@ -500,15 +502,13 @@ def add_qdrant_doc_id_index(field: str):
from core.rag.datasource.vdb.qdrant.qdrant_vector import QdrantConfig
for binding in bindings:
if dify_config.QDRANT_URL is None:
raise ValueError('Qdrant url is required.')
qdrant_config = QdrantConfig(
endpoint=dify_config.QDRANT_URL,
api_key=dify_config.QDRANT_API_KEY,
endpoint=config.get('QDRANT_URL'),
api_key=config.get('QDRANT_API_KEY'),
root_path=current_app.root_path,
timeout=dify_config.QDRANT_CLIENT_TIMEOUT,
grpc_port=dify_config.QDRANT_GRPC_PORT,
prefer_grpc=dify_config.QDRANT_GRPC_ENABLED
timeout=config.get('QDRANT_CLIENT_TIMEOUT'),
grpc_port=config.get('QDRANT_GRPC_PORT'),
prefer_grpc=config.get('QDRANT_GRPC_ENABLED')
)
try:
client = qdrant_client.QdrantClient(**qdrant_config.to_qdrant_params())
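This file alternates between two ways of reading configuration: attribute access on the typed `dify_config` settings object and `current_app.config.get(...)` lookups. A minimal standalone sketch of both patterns, assuming a pydantic-settings model with the `VECTOR_STORE` field named in the hunk; everything else here is illustrative:

```python
from flask import Flask, current_app
from pydantic_settings import BaseSettings


class DifyConfig(BaseSettings):
    # Typed, validated field; the name mirrors the hunk, the default is assumed.
    VECTOR_STORE: str = "weaviate"


dify_config = DifyConfig()

app = Flask(__name__)
app.config.from_mapping(VECTOR_STORE=dify_config.VECTOR_STORE)

# Attribute access: validated once at startup, misspelled names fail loudly.
print(dify_config.VECTOR_STORE)

# Mapping access: needs an application context and silently yields None for missing keys.
with app.app_context():
    print(current_app.config.get("VECTOR_STORE"))
```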

View File

@ -64,6 +64,4 @@ class DifyConfig(
return f'{self.HTTP_REQUEST_NODE_MAX_TEXT_SIZE / 1024 / 1024:.2f}MB'
SSRF_PROXY_HTTP_URL: str | None = None
SSRF_PROXY_HTTPS_URL: str | None = None
MODERATION_BUFFER_SIZE: int = Field(default=300, description='The buffer size for moderation.')
SSRF_PROXY_HTTPS_URL: str | None = None

View File

@ -1,5 +1,4 @@
from typing import Any, Optional
from urllib.parse import quote_plus
from pydantic import Field, NonNegativeInt, PositiveInt, computed_field
from pydantic_settings import BaseSettings
@ -105,7 +104,7 @@ class DatabaseConfig:
).strip("&")
db_extras = f"?{db_extras}" if db_extras else ""
return (f"{self.SQLALCHEMY_DATABASE_URI_SCHEME}://"
f"{quote_plus(self.DB_USERNAME)}:{quote_plus(self.DB_PASSWORD)}@{self.DB_HOST}:{self.DB_PORT}/{self.DB_DATABASE}"
f"{self.DB_USERNAME}:{self.DB_PASSWORD}@{self.DB_HOST}:{self.DB_PORT}/{self.DB_DATABASE}"
f"{db_extras}")
SQLALCHEMY_POOL_SIZE: NonNegativeInt = Field(
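One side of this hunk wraps the database credentials in `urllib.parse.quote_plus` when assembling the SQLAlchemy URI; the other interpolates them raw. A small worked example with a hypothetical password containing URI-reserved characters shows what the escaping buys:

```python
from urllib.parse import quote_plus

user = "dify"
password = "p@ss:w/rd"  # hypothetical password with URI-reserved characters

# Interpolated raw, the '@' and ':' inside the password confuse URI parsing.
raw_uri = f"postgresql://{user}:{password}@localhost:5432/dify"

# quote_plus percent-encodes the reserved characters, keeping the URI well-formed.
safe_uri = f"postgresql://{quote_plus(user)}:{quote_plus(password)}@localhost:5432/dify"
print(safe_uri)  # postgresql://dify:p%40ss%3Aw%2Frd@localhost:5432/dify
```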

View File

@ -9,7 +9,7 @@ class PackagingInfo(BaseSettings):
CURRENT_VERSION: str = Field(
description='Dify version',
default='0.6.16',
default='0.6.15',
)
COMMIT_SHA: str = Field(

View File

@ -15,7 +15,6 @@ language_timezone_mapping = {
'ro-RO': 'Europe/Bucharest',
'pl-PL': 'Europe/Warsaw',
'hi-IN': 'Asia/Kolkata',
'tr-TR': 'Europe/Istanbul',
}
languages = list(language_timezone_mapping.keys())

View File

@ -23,7 +23,8 @@ class AnnotationReplyActionApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def post(self, app_id, action):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
app_id = str(app_id)
@ -46,7 +47,8 @@ class AppAnnotationSettingDetailApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
app_id = str(app_id)
@ -59,7 +61,8 @@ class AppAnnotationSettingUpdateApi(Resource):
@login_required
@account_initialization_required
def post(self, app_id, annotation_setting_id):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
app_id = str(app_id)
@ -79,7 +82,8 @@ class AnnotationReplyActionStatusApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def get(self, app_id, job_id, action):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
job_id = str(job_id)
@ -106,7 +110,8 @@ class AnnotationListApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
page = request.args.get('page', default=1, type=int)
@ -130,7 +135,8 @@ class AnnotationExportApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
app_id = str(app_id)
@ -148,7 +154,8 @@ class AnnotationCreateApi(Resource):
@cloud_edition_billing_resource_check('annotation')
@marshal_with(annotation_fields)
def post(self, app_id):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
app_id = str(app_id)
@ -167,7 +174,8 @@ class AnnotationUpdateDeleteApi(Resource):
@cloud_edition_billing_resource_check('annotation')
@marshal_with(annotation_fields)
def post(self, app_id, annotation_id):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
app_id = str(app_id)
@ -183,7 +191,8 @@ class AnnotationUpdateDeleteApi(Resource):
@login_required
@account_initialization_required
def delete(self, app_id, annotation_id):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
app_id = str(app_id)
@ -198,7 +207,8 @@ class AnnotationBatchImportApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def post(self, app_id):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
app_id = str(app_id)
@ -222,7 +232,8 @@ class AnnotationBatchImportStatusApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check('annotation')
def get(self, app_id, job_id):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
job_id = str(job_id)
@ -248,7 +259,8 @@ class AnnotationHitHistoryListApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id, annotation_id):
if not current_user.is_editor:
# The role of the current user in the table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
page = request.args.get('page', default=1, type=int)
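Every handler in this file repeats the same role gate, switching between `current_user.is_editor` and `current_user.is_admin_or_owner`. A minimal sketch of hoisting that check into a decorator; the property name comes from the hunk, while the decorator itself is only an illustration, not code from this repository:

```python
from functools import wraps

from flask_login import current_user
from werkzeug.exceptions import Forbidden


def admin_or_owner_required(view):
    """Reject the request unless the current account is an admin or owner."""
    @wraps(view)
    def decorated(*args, **kwargs):
        # Same gate as in the handlers above, expressed once.
        if not current_user.is_admin_or_owner:
            raise Forbidden()
        return view(*args, **kwargs)
    return decorated
```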

View File

@ -22,7 +22,7 @@ from fields.conversation_fields import (
)
from libs.helper import datetime_string
from libs.login import login_required
from models.model import AppMode, Conversation, EndUser, Message, MessageAnnotation
from models.model import AppMode, Conversation, Message, MessageAnnotation
class CompletionConversationApi(Resource):
@ -143,7 +143,7 @@ class ChatConversationApi(Resource):
@get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT])
@marshal_with(conversation_with_summary_pagination_fields)
def get(self, app_model):
if not current_user.is_editor:
if not current_user.is_admin_or_owner:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument('keyword', type=str, location='args')
@ -156,31 +156,19 @@ class ChatConversationApi(Resource):
parser.add_argument('limit', type=int_range(1, 100), required=False, default=20, location='args')
args = parser.parse_args()
subquery = (
db.session.query(
Conversation.id.label('conversation_id'),
EndUser.session_id.label('from_end_user_session_id')
)
.outerjoin(EndUser, Conversation.from_end_user_id == EndUser.id)
.subquery()
)
query = db.select(Conversation).where(Conversation.app_id == app_model.id)
if args['keyword']:
keyword_filter = '%{}%'.format(args['keyword'])
query = query.join(
Message, Message.conversation_id == Conversation.id,
).join(
subquery, subquery.c.conversation_id == Conversation.id
Message, Message.conversation_id == Conversation.id
).filter(
or_(
Message.query.ilike(keyword_filter),
Message.answer.ilike(keyword_filter),
Conversation.name.ilike(keyword_filter),
Conversation.introduction.ilike(keyword_filter),
subquery.c.from_end_user_session_id.ilike(keyword_filter)
Message.query.ilike('%{}%'.format(args['keyword'])),
Message.answer.ilike('%{}%'.format(args['keyword'])),
Conversation.name.ilike('%{}%'.format(args['keyword'])),
Conversation.introduction.ilike('%{}%'.format(args['keyword'])),
),
)
account = current_user
@ -245,7 +233,7 @@ class ChatConversationDetailApi(Resource):
@get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT])
@marshal_with(conversation_detail_fields)
def get(self, app_model, conversation_id):
if not current_user.is_editor:
if not current_user.is_admin_or_owner:
raise Forbidden()
conversation_id = str(conversation_id)

View File

@ -1,5 +1,3 @@
import os
from flask_login import current_user
from flask_restful import Resource, reqparse
@ -30,15 +28,13 @@ class RuleGenerateApi(Resource):
args = parser.parse_args()
account = current_user
PROMPT_GENERATION_MAX_TOKENS = int(os.getenv('PROMPT_GENERATION_MAX_TOKENS', '512'))
try:
rules = LLMGenerator.generate_rule_config(
tenant_id=account.current_tenant_id,
instruction=args['instruction'],
model_config=args['model_config'],
no_variable=args['no_variable'],
rule_config_max_tokens=PROMPT_GENERATION_MAX_TOKENS
no_variable=args['no_variable']
)
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)
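One side of this hunk reads `PROMPT_GENERATION_MAX_TOKENS` from the environment at request time. Worth noting that `os.getenv` returns a string (or `None`), so the fallback is passed as a string and the result is cast:

```python
import os

# Environment values are strings, hence the string default and the int() cast.
PROMPT_GENERATION_MAX_TOKENS = int(os.getenv('PROMPT_GENERATION_MAX_TOKENS', '512'))
```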

View File

@ -149,7 +149,8 @@ class MessageAnnotationApi(Resource):
@get_app_model
@marshal_with(annotation_fields)
def post(self, app_model):
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
parser = reqparse.RequestParser()

View File

@ -17,6 +17,8 @@ from ..wraps import account_initialization_required
def get_oauth_providers():
with current_app.app_context():
if not dify_config.NOTION_CLIENT_ID or not dify_config.NOTION_CLIENT_SECRET:
return {}
notion_oauth = NotionOAuth(client_id=dify_config.NOTION_CLIENT_ID,
client_secret=dify_config.NOTION_CLIENT_SECRET,
redirect_uri=dify_config.CONSOLE_API_URL + '/console/api/oauth/data-source/callback/notion')

View File

@ -71,7 +71,7 @@ class ResetPasswordApi(Resource):
# AccountService.update_password(account, new_password)
# todo: Send email
# MAILCHIMP_API_KEY = dify_config.MAILCHIMP_TRANSACTIONAL_API_KEY
# MAILCHIMP_API_KEY = current_app.config['MAILCHIMP_TRANSACTIONAL_API_KEY']
# mailchimp = MailchimpTransactional(MAILCHIMP_API_KEY)
# message = {
@ -92,7 +92,7 @@ class ResetPasswordApi(Resource):
# 'message': message,
# # required for transactional email
# ' settings': {
# 'sandbox_mode': dify_config.MAILCHIMP_SANDBOX_MODE,
# 'sandbox_mode': current_app.config['MAILCHIMP_SANDBOX_MODE'],
# },
# })

View File

@ -189,6 +189,8 @@ class DatasetApi(Resource):
dataset = DatasetService.get_dataset(dataset_id_str)
if dataset is None:
raise NotFound("Dataset not found.")
# check user's model setting
DatasetService.check_dataset_model_setting(dataset)
parser = reqparse.RequestParser()
parser.add_argument('name', nullable=False,
@ -213,13 +215,6 @@ class DatasetApi(Resource):
args = parser.parse_args()
data = request.get_json()
# check embedding model setting
if data.get('indexing_technique') == 'high_quality':
DatasetService.check_embedding_model_setting(dataset.tenant_id,
data.get('embedding_model_provider'),
data.get('embedding_model')
)
# The role of the current user in the ta table must be admin, owner, editor, or dataset_operator
DatasetPermissionService.check_permission(
current_user, dataset, data.get('permission'), data.get('partial_member_list')
@ -238,8 +233,7 @@ class DatasetApi(Resource):
DatasetPermissionService.update_partial_member_list(
tenant_id, dataset_id_str, data.get('partial_member_list')
)
# clear partial member list when permission is only_me or all_team_members
elif data.get('permission') == 'only_me' or data.get('permission') == 'all_team_members':
else:
DatasetPermissionService.clear_partial_member_list(dataset_id_str)
partial_member_list = DatasetPermissionService.get_dataset_partial_member_list(dataset_id_str)

View File

@ -223,7 +223,8 @@ class DatasetDocumentSegmentAddApi(Resource):
document = DocumentService.get_document(dataset_id, document_id)
if not document:
raise NotFound('Document not found.')
if not current_user.is_editor:
# The role of the current user in the ta table must be admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
# check embedding model setting
if dataset.indexing_technique == 'high_quality':
@ -346,7 +347,7 @@ class DatasetDocumentSegmentUpdateApi(Resource):
if not segment:
raise NotFound('Segment not found.')
# The role of the current user in the ta table must be admin or owner
if not current_user.is_editor:
if not current_user.is_admin_or_owner:
raise Forbidden()
try:
DatasetService.check_dataset_permission(dataset, current_user)

View File

@ -19,7 +19,7 @@ def inner_api_only(view):
# get header 'X-Inner-Api-Key'
inner_api_key = request.headers.get('X-Inner-Api-Key')
if not inner_api_key or inner_api_key != dify_config.INNER_API_KEY:
abort(401)
abort(404)
return view(*args, **kwargs)
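The guard above toggles between `abort(401)` and `abort(404)` when the `X-Inner-Api-Key` header is missing or wrong: 401 signals an authentication failure, while 404 hides the existence of the internal route altogether. A hypothetical probe of such an endpoint, assuming a local API on port 5001 and an illustrative path; only the header name comes from the hunk:

```python
import urllib.request
from urllib.error import HTTPError

# Path, host, and port are assumptions; only the header name comes from the hunk.
req = urllib.request.Request(
    "http://localhost:5001/inner/api/example",
    headers={"X-Inner-Api-Key": "wrong-key"},
)
try:
    urllib.request.urlopen(req)
except HTTPError as exc:
    print(exc.code)  # 401 on one side of the diff, 404 on the other
```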

View File

@ -53,7 +53,7 @@ class ConversationDetailApi(Resource):
ConversationService.delete(app_model, conversation_id, end_user)
except services.errors.conversation.ConversationNotExistsError:
raise NotFound("Conversation Not Exists.")
return {'result': 'success'}, 200
return {"result": "success"}, 204
class ConversationRenameApi(Resource):

View File

@ -29,21 +29,22 @@ from services.app_generate_service import AppGenerateService
logger = logging.getLogger(__name__)
workflow_run_fields = {
'id': fields.String,
'workflow_id': fields.String,
'status': fields.String,
'inputs': fields.Raw,
'outputs': fields.Raw,
'error': fields.String,
'total_steps': fields.Integer,
'total_tokens': fields.Integer,
'created_at': fields.DateTime,
'finished_at': fields.DateTime,
'elapsed_time': fields.Float,
}
class WorkflowRunDetailApi(Resource):
class WorkflowRunApi(Resource):
workflow_run_fields = {
'id': fields.String,
'workflow_id': fields.String,
'status': fields.String,
'inputs': fields.Raw,
'outputs': fields.Raw,
'error': fields.String,
'total_steps': fields.Integer,
'total_tokens': fields.Integer,
'created_at': fields.DateTime,
'finished_at': fields.DateTime,
'elapsed_time': fields.Float,
}
@validate_app_token
@marshal_with(workflow_run_fields)
def get(self, app_model: App, workflow_id: str):
@ -56,7 +57,7 @@ class WorkflowRunDetailApi(Resource):
workflow_run = db.session.query(WorkflowRun).filter(WorkflowRun.id == workflow_id).first()
return workflow_run
class WorkflowRunApi(Resource):
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.JSON, required=True))
def post(self, app_model: App, end_user: EndUser):
"""
@ -116,6 +117,5 @@ class WorkflowTaskStopApi(Resource):
}
api.add_resource(WorkflowRunApi, '/workflows/run')
api.add_resource(WorkflowRunDetailApi, '/workflows/run/<string:workflow_id>')
api.add_resource(WorkflowRunApi, '/workflows/run/<string:workflow_id>', '/workflows/run')
api.add_resource(WorkflowTaskStopApi, '/workflows/tasks/<string:task_id>/stop')
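One side of this hunk registers a single `WorkflowRunApi` under both `/workflows/run/<string:workflow_id>` and `/workflows/run`, while the other splits the detail view into a separate `WorkflowRunDetailApi`. A minimal sketch of the single-resource variant with Flask-RESTful; the class name and routes echo the hunk, the handler bodies are placeholders:

```python
from flask import Flask
from flask_restful import Api, Resource

app = Flask(__name__)
api = Api(app)


class WorkflowRunApi(Resource):
    def get(self, workflow_id=None):
        # GET /workflows/run/<workflow_id> fetches a single run.
        return {"id": workflow_id}

    def post(self, workflow_id=None):
        # POST /workflows/run starts a run; workflow_id stays None on this rule.
        return {"status": "started"}


# One resource bound to two URL rules, as in the hunk above.
api.add_resource(WorkflowRunApi, '/workflows/run/<string:workflow_id>', '/workflows/run')
```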

View File

@ -79,7 +79,6 @@ class CotAgentRunner(BaseAgentRunner, ABC):
llm_usage.completion_tokens += usage.completion_tokens
llm_usage.prompt_price += usage.prompt_price
llm_usage.completion_price += usage.completion_price
llm_usage.total_price += usage.total_price
model_instance = self.model_instance

View File

@ -62,7 +62,6 @@ class FunctionCallAgentRunner(BaseAgentRunner):
llm_usage.completion_tokens += usage.completion_tokens
llm_usage.prompt_price += usage.prompt_price
llm_usage.completion_price += usage.completion_price
llm_usage.total_price += usage.total_price
model_instance = self.model_instance

View File

@ -91,8 +91,7 @@ class DatasetConfigManager:
top_k=dataset_configs.get('top_k', 4),
score_threshold=dataset_configs.get('score_threshold'),
reranking_model=dataset_configs.get('reranking_model'),
weights=dataset_configs.get('weights'),
reranking_enabled=dataset_configs.get('reranking_enabled', True),
weights=dataset_configs.get('weights')
)
)
@ -121,6 +120,12 @@ class DatasetConfigManager:
if not isinstance(config["dataset_configs"], dict):
raise ValueError("dataset_configs must be of object type")
if config["dataset_configs"]['retrieval_model'] == 'multiple':
if not config["dataset_configs"]['reranking_model']:
raise ValueError("reranking_model has not been set")
if not isinstance(config["dataset_configs"]['reranking_model'], dict):
raise ValueError("reranking_model must be of object type")
if not isinstance(config["dataset_configs"], dict):
raise ValueError("dataset_configs must be of object type")

View File

@ -158,11 +158,10 @@ class DatasetRetrieveConfigEntity(BaseModel):
retrieve_strategy: RetrieveStrategy
top_k: Optional[int] = None
score_threshold: Optional[float] = .0
score_threshold: Optional[float] = None
rerank_mode: Optional[str] = 'reranking_model'
reranking_model: Optional[dict] = None
weights: Optional[dict] = None
reranking_enabled: Optional[bool] = True

View File

@ -5,12 +5,7 @@ import queue
import re
import threading
from core.app.entities.queue_entities import (
QueueAgentMessageEvent,
QueueLLMChunkEvent,
QueueNodeSucceededEvent,
QueueTextChunkEvent,
)
from core.app.entities.queue_entities import QueueAgentMessageEvent, QueueLLMChunkEvent, QueueTextChunkEvent
from core.model_manager import ModelManager
from core.model_runtime.entities.model_entities import ModelType
@ -93,8 +88,6 @@ class AppGeneratorTTSPublisher:
self.msg_text += message.event.chunk.delta.message.content
elif isinstance(message.event, QueueTextChunkEvent):
self.msg_text += message.event.text
elif isinstance(message.event, QueueNodeSucceededEvent):
self.msg_text += message.event.outputs.get('output', '')
self.last_message = message
sentence_arr, text_tmp = self._extract_sentence(self.msg_text)
if len(sentence_arr) >= min(self.MAX_SENTENCE, 7):

View File

@ -244,12 +244,7 @@ class AdvancedChatAppGenerateTaskPipeline(BasedGenerateTaskPipeline, WorkflowCyc
:return:
"""
for message in self._queue_manager.listen():
if hasattr(message.event, 'metadata') and message.event.metadata.get('is_answer_previous_node', False) and publisher:
publisher.publish(message=message)
elif (hasattr(message.event, 'execution_metadata')
and message.event.execution_metadata
and message.event.execution_metadata.get('is_answer_previous_node', False)
and publisher):
if publisher:
publisher.publish(message=message)
event = message.event

View File

@ -110,8 +110,7 @@ class AgentChatAppGenerator(MessageBasedAppGenerator):
)
# get tracing instance
user_id = user.id if isinstance(user, Account) else user.session_id
trace_manager = TraceQueueManager(app_model.id, user_id)
trace_manager = TraceQueueManager(app_model.id)
# init application generate entity
application_generate_entity = AgentChatAppGenerateEntity(

View File

@ -5,9 +5,9 @@ from collections.abc import Generator
from enum import Enum
from typing import Any
from flask import current_app
from sqlalchemy.orm import DeclarativeMeta
from configs import dify_config
from core.app.entities.app_invoke_entities import InvokeFrom
from core.app.entities.queue_entities import (
AppQueueEvent,
@ -48,7 +48,7 @@ class AppQueueManager:
:return:
"""
# wait for APP_MAX_EXECUTION_TIME seconds to stop listen
listen_timeout = dify_config.APP_MAX_EXECUTION_TIME
listen_timeout = current_app.config.get("APP_MAX_EXECUTION_TIME")
start_time = time.time()
last_ping_time = 0
while True:

View File

@ -74,8 +74,7 @@ class WorkflowAppGenerator(BaseAppGenerator):
)
# get tracing instance
user_id = user.id if isinstance(user, Account) else user.session_id
trace_manager = TraceQueueManager(app_model.id, user_id)
trace_manager = TraceQueueManager(app_model.id)
# init application generate entity
application_generate_entity = WorkflowAppGenerateEntity(

View File

@ -1,21 +1,8 @@
from .segment_group import SegmentGroup
from .segments import (
ArrayAnySegment,
FileSegment,
FloatSegment,
IntegerSegment,
NoneSegment,
ObjectSegment,
Segment,
StringSegment,
)
from .segments import NoneSegment, Segment
from .types import SegmentType
from .variables import (
ArrayAnyVariable,
ArrayFileVariable,
ArrayNumberVariable,
ArrayObjectVariable,
ArrayStringVariable,
ArrayVariable,
FileVariable,
FloatVariable,
IntegerVariable,
@ -33,21 +20,11 @@ __all__ = [
'SecretVariable',
'FileVariable',
'StringVariable',
'ArrayAnyVariable',
'ArrayVariable',
'Variable',
'SegmentType',
'SegmentGroup',
'Segment',
'NoneSegment',
'NoneVariable',
'IntegerSegment',
'FloatSegment',
'ObjectSegment',
'ArrayAnySegment',
'FileSegment',
'StringSegment',
'ArrayStringVariable',
'ArrayNumberVariable',
'ArrayObjectVariable',
'ArrayFileVariable',
]

View File

@ -3,25 +3,14 @@ from typing import Any
from core.file.file_obj import FileVar
from .segments import (
ArrayAnySegment,
FileSegment,
FloatSegment,
IntegerSegment,
NoneSegment,
ObjectSegment,
Segment,
StringSegment,
)
from .segments import Segment, StringSegment
from .types import SegmentType
from .variables import (
ArrayFileVariable,
ArrayNumberVariable,
ArrayObjectVariable,
ArrayStringVariable,
ArrayVariable,
FileVariable,
FloatVariable,
IntegerVariable,
NoneVariable,
ObjectVariable,
SecretVariable,
StringVariable,
@ -39,48 +28,40 @@ def build_variable_from_mapping(m: Mapping[str, Any], /) -> Variable:
match value_type:
case SegmentType.STRING:
return StringVariable.model_validate(m)
case SegmentType.SECRET:
return SecretVariable.model_validate(m)
case SegmentType.NUMBER if isinstance(value, int):
return IntegerVariable.model_validate(m)
case SegmentType.NUMBER if isinstance(value, float):
return FloatVariable.model_validate(m)
case SegmentType.SECRET:
return SecretVariable.model_validate(m)
case SegmentType.NUMBER if not isinstance(value, float | int):
raise ValueError(f'invalid number value {value}')
case SegmentType.FILE:
return FileVariable.model_validate(m)
case SegmentType.OBJECT if isinstance(value, dict):
return ObjectVariable.model_validate(
{**m, 'value': {k: build_variable_from_mapping(v) for k, v in value.items()}}
)
case SegmentType.ARRAY_STRING if isinstance(value, list):
return ArrayStringVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
case SegmentType.ARRAY_NUMBER if isinstance(value, list):
return ArrayNumberVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
case SegmentType.ARRAY_OBJECT if isinstance(value, list):
return ArrayObjectVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
case SegmentType.ARRAY_FILE if isinstance(value, list):
return ArrayFileVariable.model_validate({**m, 'value': [build_variable_from_mapping(v) for v in value]})
raise ValueError(f'not supported value type {value_type}')
def build_segment(value: Any, /) -> Segment:
def build_anonymous_variable(value: Any, /) -> Variable:
if value is None:
return NoneSegment()
return NoneVariable(name='anonymous')
if isinstance(value, str):
return StringSegment(value=value)
return StringVariable(name='anonymous', value=value)
if isinstance(value, int):
return IntegerSegment(value=value)
return IntegerVariable(name='anonymous', value=value)
if isinstance(value, float):
return FloatSegment(value=value)
return FloatVariable(name='anonymous', value=value)
if isinstance(value, dict):
# TODO: Limit the depth of the object
obj = {k: build_segment(v) for k, v in value.items()}
return ObjectSegment(value=obj)
obj = {k: build_anonymous_variable(v) for k, v in value.items()}
return ObjectVariable(name='anonymous', value=obj)
if isinstance(value, list):
# TODO: Limit the depth of the array
elements = [build_segment(v) for v in value]
return ArrayAnySegment(value=elements)
elements = [build_anonymous_variable(v) for v in value]
return ArrayVariable(name='anonymous', value=elements)
if isinstance(value, FileVar):
return FileSegment(value=value)
return FileVariable(name='anonymous', value=value)
raise ValueError(f'not supported value {value}')
def build_segment(value: Any, /) -> Segment:
if isinstance(value, str):
return StringSegment(value=value)
raise ValueError(f'not supported value {value}')
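Illustrative sketch, separate from the diff above: how a value-to-variable factory of this shape dispatches on Python types. The Var dataclass is a simplified stand-in, not the project's Segment/Variable classes.

from dataclasses import dataclass
from typing import Any

@dataclass
class Var:
    name: str
    value: Any
    value_type: str

def build_anonymous_var(value: Any) -> Var:
    # mirrors the isinstance chain shown above, defaulting the name to 'anonymous'
    if value is None:
        return Var('anonymous', None, 'none')
    if isinstance(value, str):
        return Var('anonymous', value, 'string')
    if isinstance(value, int):
        return Var('anonymous', value, 'number')
    if isinstance(value, float):
        return Var('anonymous', value, 'number')
    if isinstance(value, dict):
        return Var('anonymous', {k: build_anonymous_var(v) for k, v in value.items()}, 'object')
    if isinstance(value, list):
        return Var('anonymous', [build_anonymous_var(v) for v in value], 'array')
    raise ValueError(f'not supported value {value}')

print(build_anonymous_var({'a': [1, 'x']}).value_type)  # 'object'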

View File

@ -1,18 +1,17 @@
import re
from core.app.segments import SegmentGroup, factory
from core.workflow.entities.variable_pool import VariablePool
from . import SegmentGroup, factory
VARIABLE_PATTERN = re.compile(r'\{\{#([a-zA-Z0-9_]{1,50}(?:\.[a-zA-Z_][a-zA-Z0-9_]{0,29}){1,10})#\}\}')
def convert_template(*, template: str, variable_pool: VariablePool):
parts = re.split(VARIABLE_PATTERN, template)
segments = []
for part in filter(lambda x: x, parts):
for part in parts:
if '.' in part and (value := variable_pool.get(part.split('.'))):
segments.append(value)
else:
segments.append(factory.build_segment(part))
return SegmentGroup(value=segments)
return SegmentGroup(segments=segments)
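Illustrative sketch, separate from the diff above, of why convert_template works: re.split with a capturing group interleaves literal text with the captured variable selectors. The pattern is copied from the file; the template string is made up.

import re

VARIABLE_PATTERN = re.compile(r'\{\{#([a-zA-Z0-9_]{1,50}(?:\.[a-zA-Z_][a-zA-Z0-9_]{0,29}){1,10})#\}\}')

template = 'Hello {{#sys.user_name#}}, today is {{#env.day#}}.'
parts = re.split(VARIABLE_PATTERN, template)
print(parts)  # ['Hello ', 'sys.user_name', ', today is ', 'env.day', '.']
# The captured selectors contain a '.', so they can be looked up in the variable pool;
# the remaining literal chunks are wrapped as plain string segments.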

View File

@ -1,22 +1,19 @@
from pydantic import BaseModel
from .segments import Segment
from .types import SegmentType
class SegmentGroup(Segment):
value_type: SegmentType = SegmentType.GROUP
value: list[Segment]
class SegmentGroup(BaseModel):
segments: list[Segment]
@property
def text(self):
return ''.join([segment.text for segment in self.value])
return ''.join([segment.text for segment in self.segments])
@property
def log(self):
return ''.join([segment.log for segment in self.value])
return ''.join([segment.log for segment in self.segments])
@property
def markdown(self):
return ''.join([segment.markdown for segment in self.value])
def to_object(self):
return [segment.to_object() for segment in self.value]
return ''.join([segment.markdown for segment in self.segments])

View File

@ -1,11 +1,7 @@
import json
from collections.abc import Mapping, Sequence
from typing import Any
from pydantic import BaseModel, ConfigDict, field_validator
from core.file.file_obj import FileVar
from .types import SegmentType
@ -38,6 +34,12 @@ class Segment(BaseModel):
return str(self.value)
def to_object(self) -> Any:
if isinstance(self.value, Segment):
return self.value.to_object()
if isinstance(self.value, list):
return [v.to_object() for v in self.value]
if isinstance(self.value, dict):
return {k: v.to_object() for k, v in self.value.items()}
return self.value
@ -61,80 +63,3 @@ class NoneSegment(Segment):
class StringSegment(Segment):
value_type: SegmentType = SegmentType.STRING
value: str
class FloatSegment(Segment):
value_type: SegmentType = SegmentType.NUMBER
value: float
class IntegerSegment(Segment):
value_type: SegmentType = SegmentType.NUMBER
value: int
class FileSegment(Segment):
value_type: SegmentType = SegmentType.FILE
# TODO: embed FileVar in this model.
value: FileVar
@property
def markdown(self) -> str:
return self.value.to_markdown()
class ObjectSegment(Segment):
value_type: SegmentType = SegmentType.OBJECT
value: Mapping[str, Segment]
@property
def text(self) -> str:
# TODO: Process variables.
return json.dumps(self.model_dump()['value'], ensure_ascii=False)
@property
def log(self) -> str:
# TODO: Process variables.
return json.dumps(self.model_dump()['value'], ensure_ascii=False, indent=2)
@property
def markdown(self) -> str:
# TODO: Use markdown code block
return json.dumps(self.model_dump()['value'], ensure_ascii=False, indent=2)
def to_object(self):
return {k: v.to_object() for k, v in self.value.items()}
class ArraySegment(Segment):
@property
def markdown(self) -> str:
return '\n'.join(['- ' + item.markdown for item in self.value])
def to_object(self):
return [v.to_object() for v in self.value]
class ArrayAnySegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_ANY
value: Sequence[Segment]
class ArrayStringSegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_STRING
value: Sequence[StringSegment]
class ArrayNumberSegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_NUMBER
value: Sequence[FloatSegment | IntegerSegment]
class ArrayObjectSegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_OBJECT
value: Sequence[ObjectSegment]
class ArrayFileSegment(ArraySegment):
value_type: SegmentType = SegmentType.ARRAY_FILE
value: Sequence[FileSegment]

View File

@ -6,12 +6,6 @@ class SegmentType(str, Enum):
NUMBER = 'number'
STRING = 'string'
SECRET = 'secret'
ARRAY_ANY = 'array[any]'
ARRAY_STRING = 'array[string]'
ARRAY_NUMBER = 'array[number]'
ARRAY_OBJECT = 'array[object]'
ARRAY_FILE = 'array[file]'
ARRAY = 'array'
OBJECT = 'object'
FILE = 'file'
GROUP = 'group'

View File

@ -1,21 +1,12 @@
import json
from collections.abc import Mapping, Sequence
from pydantic import Field
from core.file.file_obj import FileVar
from core.helper import encrypter
from .segments import (
ArrayAnySegment,
ArrayFileSegment,
ArrayNumberSegment,
ArrayObjectSegment,
ArrayStringSegment,
FileSegment,
FloatSegment,
IntegerSegment,
NoneSegment,
ObjectSegment,
Segment,
StringSegment,
)
from .segments import NoneSegment, Segment, StringSegment
from .types import SegmentType
@ -29,47 +20,59 @@ class Variable(Segment):
description="Unique identity for variable. It's only used by environment variables now.",
)
name: str
description: str = Field(default='', description='Description of the variable.')
class StringVariable(StringSegment, Variable):
pass
class FloatVariable(FloatSegment, Variable):
pass
class FloatVariable(Variable):
value_type: SegmentType = SegmentType.NUMBER
value: float
class IntegerVariable(IntegerSegment, Variable):
pass
class IntegerVariable(Variable):
value_type: SegmentType = SegmentType.NUMBER
value: int
class FileVariable(FileSegment, Variable):
pass
class ObjectVariable(Variable):
value_type: SegmentType = SegmentType.OBJECT
value: Mapping[str, Variable]
@property
def text(self) -> str:
# TODO: Process variables.
return json.dumps(self.model_dump()['value'], ensure_ascii=False)
@property
def log(self) -> str:
# TODO: Process variables.
return json.dumps(self.model_dump()['value'], ensure_ascii=False, indent=2)
@property
def markdown(self) -> str:
# TODO: Use markdown code block
return json.dumps(self.model_dump()['value'], ensure_ascii=False, indent=2)
class ObjectVariable(ObjectSegment, Variable):
pass
class ArrayVariable(Variable):
value_type: SegmentType = SegmentType.ARRAY
value: Sequence[Variable]
@property
def markdown(self) -> str:
return '\n'.join(['- ' + item.markdown for item in self.value])
class ArrayAnyVariable(ArrayAnySegment, Variable):
pass
class FileVariable(Variable):
value_type: SegmentType = SegmentType.FILE
# TODO: embed FileVar in this model.
value: FileVar
class ArrayStringVariable(ArrayStringSegment, Variable):
pass
class ArrayNumberVariable(ArrayNumberSegment, Variable):
pass
class ArrayObjectVariable(ArrayObjectSegment, Variable):
pass
class ArrayFileVariable(ArrayFileSegment, Variable):
pass
@property
def markdown(self) -> str:
return self.value.to_markdown()
class SecretVariable(StringVariable):

View File

@ -131,7 +131,6 @@ class WorkflowCycleManage(WorkflowIterationCycleManage):
TraceTaskName.WORKFLOW_TRACE,
workflow_run=workflow_run,
conversation_id=conversation_id,
user_id=trace_manager.user_id,
)
)
@ -174,7 +173,6 @@ class WorkflowCycleManage(WorkflowIterationCycleManage):
TraceTaskName.WORKFLOW_TRACE,
workflow_run=workflow_run,
conversation_id=conversation_id,
user_id=trace_manager.user_id,
)
)

View File

@ -6,7 +6,8 @@ import os
import time
from typing import Optional
from configs import dify_config
from flask import current_app
from extensions.ext_storage import storage
IMAGE_EXTENSIONS = ['jpg', 'jpeg', 'png', 'webp', 'gif', 'svg']
@ -22,7 +23,7 @@ class UploadFileParser:
if upload_file.extension not in IMAGE_EXTENSIONS:
return None
if dify_config.MULTIMODAL_SEND_IMAGE_FORMAT == 'url' or force_url:
if current_app.config['MULTIMODAL_SEND_IMAGE_FORMAT'] == 'url' or force_url:
return cls.get_signed_temp_image_url(upload_file.id)
else:
# get image file base64
@ -43,13 +44,13 @@ class UploadFileParser:
:param upload_file: UploadFile object
:return:
"""
base_url = dify_config.FILES_URL
base_url = current_app.config.get('FILES_URL')
image_preview_url = f'{base_url}/files/{upload_file_id}/image-preview'
timestamp = str(int(time.time()))
nonce = os.urandom(16).hex()
data_to_sign = f"image-preview|{upload_file_id}|{timestamp}|{nonce}"
secret_key = dify_config.SECRET_KEY.encode()
secret_key = current_app.config['SECRET_KEY'].encode()
sign = hmac.new(secret_key, data_to_sign.encode(), hashlib.sha256).digest()
encoded_sign = base64.urlsafe_b64encode(sign).decode()
@ -67,7 +68,7 @@ class UploadFileParser:
:return:
"""
data_to_sign = f"image-preview|{upload_file_id}|{timestamp}|{nonce}"
secret_key = dify_config.SECRET_KEY.encode()
secret_key = current_app.config['SECRET_KEY'].encode()
recalculated_sign = hmac.new(secret_key, data_to_sign.encode(), hashlib.sha256).digest()
recalculated_encoded_sign = base64.urlsafe_b64encode(recalculated_sign).decode()
@ -76,4 +77,4 @@ class UploadFileParser:
return False
current_time = int(time.time())
return current_time - int(timestamp) <= dify_config.FILES_ACCESS_TIMEOUT
return current_time - int(timestamp) <= current_app.config.get('FILES_ACCESS_TIMEOUT')
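Self-contained sketch of the signed image-preview URL scheme shown above: sign "image-preview|<file_id>|<timestamp>|<nonce>" with HMAC-SHA256, then verify the signature and the timestamp window. The secret key and timeout values here are placeholders, not the deployment's real configuration.

import base64
import hashlib
import hmac
import os
import time

SECRET_KEY = b'example-secret-key'   # assumed placeholder; the real code reads it from config
FILES_ACCESS_TIMEOUT = 300           # assumed placeholder, in seconds

def sign(upload_file_id: str) -> tuple[str, str, str]:
    timestamp = str(int(time.time()))
    nonce = os.urandom(16).hex()
    data_to_sign = f"image-preview|{upload_file_id}|{timestamp}|{nonce}"
    digest = hmac.new(SECRET_KEY, data_to_sign.encode(), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest).decode(), timestamp, nonce

def verify(upload_file_id: str, timestamp: str, nonce: str, signature: str) -> bool:
    data_to_sign = f"image-preview|{upload_file_id}|{timestamp}|{nonce}"
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET_KEY, data_to_sign.encode(), hashlib.sha256).digest()
    ).decode()
    if not hmac.compare_digest(expected, signature):
        return False
    return int(time.time()) - int(timestamp) <= FILES_ACCESS_TIMEOUT

sig, ts, nonce = sign('file-123')
print(verify('file-123', ts, nonce, sig))  # True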

View File

@ -107,11 +107,11 @@ class CodeExecutor:
response = response.json()
except:
raise CodeExecutionException('Failed to parse response')
if (code := response.get('code')) != 0:
raise CodeExecutionException(f"Got error code: {code}. Got error msg: {response.get('message')}")
response = CodeExecutionResponse(**response)
if response.code != 0:
raise CodeExecutionException(response.message)
if response.data.error:
raise CodeExecutionException(response.data.error)
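Hedged sketch of the parse-then-check pattern on one side of this hunk: load the JSON into a pydantic model before inspecting its fields. The field names code, message, and data.error appear in the hunk; the model definitions and the stdout field are assumed for illustration.

from typing import Optional
from pydantic import BaseModel

class CodeExecutionData(BaseModel):
    stdout: Optional[str] = None   # assumed field, for illustration only
    error: Optional[str] = None

class CodeExecutionResponse(BaseModel):
    code: int
    message: str = ''
    data: CodeExecutionData

raw = {'code': 0, 'message': '', 'data': {'stdout': 'hi', 'error': None}}
response = CodeExecutionResponse(**raw)
if response.code != 0:
    raise RuntimeError(response.message)
if response.data.error:
    raise RuntimeError(response.data.error)
print(response.data.stdout)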

View File

@ -13,10 +13,18 @@ def get_position_map(folder_path: str, *, file_name: str = "_position.yaml") ->
:param file_name: the YAML file name, defaults to '_position.yaml'
:return: a dict with name as key and index as value
"""
position_file_path = os.path.join(folder_path, file_name)
yaml_content = load_yaml_file(file_path=position_file_path, default_value=[])
positions = [item.strip() for item in yaml_content if item and isinstance(item, str) and item.strip()]
return {name: index for index, name in enumerate(positions)}
position_file_name = os.path.join(folder_path, file_name)
if not position_file_name or not os.path.exists(position_file_name):
return {}
positions = load_yaml_file(position_file_name, ignore_error=True)
position_map = {}
index = 0
for _, name in enumerate(positions):
if name and isinstance(name, str):
position_map[name.strip()] = index
index += 1
return position_map
def sort_by_position_map(
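Illustrative sketch of what a position map produces: a _position.yaml listing names in display order becomes a {name: index} dict, skipping empty entries. PyYAML is used here directly for brevity; the project wraps YAML loading in its own helper.

import yaml  # PyYAML

yaml_content = yaml.safe_load("- openai\n- anthropic\n-\n- cohere\n")
positions = [item.strip() for item in yaml_content if item and isinstance(item, str) and item.strip()]
position_map = {name: index for index, name in enumerate(positions)}
print(position_map)  # {'openai': 0, 'anthropic': 1, 'cohere': 2}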

View File

@ -1,75 +1,48 @@
"""
Proxy requests to avoid SSRF
"""
import logging
import os
import time
import httpx
SSRF_PROXY_ALL_URL = os.getenv('SSRF_PROXY_ALL_URL', '')
SSRF_PROXY_HTTP_URL = os.getenv('SSRF_PROXY_HTTP_URL', '')
SSRF_PROXY_HTTPS_URL = os.getenv('SSRF_PROXY_HTTPS_URL', '')
SSRF_DEFAULT_MAX_RETRIES = int(os.getenv('SSRF_DEFAULT_MAX_RETRIES', '3'))
proxies = {
'http://': SSRF_PROXY_HTTP_URL,
'https://': SSRF_PROXY_HTTPS_URL
} if SSRF_PROXY_HTTP_URL and SSRF_PROXY_HTTPS_URL else None
BACKOFF_FACTOR = 0.5
STATUS_FORCELIST = [429, 500, 502, 503, 504]
def make_request(method, url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
if "allow_redirects" in kwargs:
allow_redirects = kwargs.pop("allow_redirects")
if "follow_redirects" not in kwargs:
kwargs["follow_redirects"] = allow_redirects
retries = 0
while retries <= max_retries:
try:
if SSRF_PROXY_ALL_URL:
response = httpx.request(method=method, url=url, proxy=SSRF_PROXY_ALL_URL, **kwargs)
elif proxies:
response = httpx.request(method=method, url=url, proxies=proxies, **kwargs)
else:
response = httpx.request(method=method, url=url, **kwargs)
if response.status_code not in STATUS_FORCELIST:
return response
else:
logging.warning(f"Received status code {response.status_code} for URL {url} which is in the force list")
except httpx.RequestError as e:
logging.warning(f"Request to URL {url} failed on attempt {retries + 1}: {e}")
retries += 1
if retries <= max_retries:
time.sleep(BACKOFF_FACTOR * (2 ** (retries - 1)))
raise Exception(f"Reached maximum retries ({max_retries}) for URL {url}")
def make_request(method, url, **kwargs):
if SSRF_PROXY_ALL_URL:
return httpx.request(method=method, url=url, proxy=SSRF_PROXY_ALL_URL, **kwargs)
elif proxies:
return httpx.request(method=method, url=url, proxies=proxies, **kwargs)
else:
return httpx.request(method=method, url=url, **kwargs)
def get(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('GET', url, max_retries=max_retries, **kwargs)
def get(url, **kwargs):
return make_request('GET', url, **kwargs)
def post(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('POST', url, max_retries=max_retries, **kwargs)
def post(url, **kwargs):
return make_request('POST', url, **kwargs)
def put(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('PUT', url, max_retries=max_retries, **kwargs)
def put(url, **kwargs):
return make_request('PUT', url, **kwargs)
def patch(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('PATCH', url, max_retries=max_retries, **kwargs)
def patch(url, **kwargs):
return make_request('PATCH', url, **kwargs)
def delete(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('DELETE', url, max_retries=max_retries, **kwargs)
def delete(url, **kwargs):
return make_request('DELETE', url, **kwargs)
def head(url, max_retries=SSRF_DEFAULT_MAX_RETRIES, **kwargs):
return make_request('HEAD', url, max_retries=max_retries, **kwargs)
def head(url, **kwargs):
return make_request('HEAD', url, **kwargs)
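Minimal sketch of the retry-with-backoff wrapper shown on one side of this hunk: retry on connection errors and on 429/5xx responses, sleeping BACKOFF_FACTOR * 2**(attempt - 1) between attempts. Constants mirror the hunk; the function name here is arbitrary.

import logging
import time

import httpx

BACKOFF_FACTOR = 0.5
STATUS_FORCELIST = [429, 500, 502, 503, 504]

def request_with_retries(method: str, url: str, max_retries: int = 3, **kwargs) -> httpx.Response:
    retries = 0
    while retries <= max_retries:
        try:
            response = httpx.request(method=method, url=url, **kwargs)
            if response.status_code not in STATUS_FORCELIST:
                return response
            logging.warning("Received retryable status %s for URL %s", response.status_code, url)
        except httpx.RequestError as e:
            logging.warning("Request to URL %s failed on attempt %s: %s", url, retries + 1, e)
        retries += 1
        if retries <= max_retries:
            time.sleep(BACKOFF_FACTOR * (2 ** (retries - 1)))
    raise Exception(f"Reached maximum retries ({max_retries}) for URL {url}")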

View File

@ -73,8 +73,6 @@ class HostingConfiguration:
quota_limit=hosted_quota_limit,
restrict_models=[
RestrictModel(model="gpt-4", base_model_name="gpt-4", model_type=ModelType.LLM),
RestrictModel(model="gpt-4o", base_model_name="gpt-4o", model_type=ModelType.LLM),
RestrictModel(model="gpt-4o-mini", base_model_name="gpt-4o-mini", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-32k", base_model_name="gpt-4-32k", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-1106-preview", base_model_name="gpt-4-1106-preview", model_type=ModelType.LLM),
RestrictModel(model="gpt-4-vision-preview", base_model_name="gpt-4-vision-preview", model_type=ModelType.LLM),

View File

@ -12,7 +12,6 @@ from flask import Flask, current_app
from flask_login import current_user
from sqlalchemy.orm.exc import ObjectDeletedError
from configs import dify_config
from core.errors.error import ProviderTokenNotInitError
from core.llm_generator.llm_generator import LLMGenerator
from core.model_manager import ModelInstance, ModelManager
@ -225,7 +224,7 @@ class IndexingRunner:
features = FeatureService.get_features(tenant_id)
if features.billing.enabled:
count = len(extract_settings)
batch_upload_limit = dify_config.BATCH_UPLOAD_LIMIT
batch_upload_limit = int(current_app.config['BATCH_UPLOAD_LIMIT'])
if count > batch_upload_limit:
raise ValueError(f"You have reached the batch upload limit of {batch_upload_limit}.")
@ -428,7 +427,7 @@ class IndexingRunner:
# The user-defined segmentation rule
rules = json.loads(processing_rule.rules)
segmentation = rules["segmentation"]
max_segmentation_tokens_length = dify_config.INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH
max_segmentation_tokens_length = int(current_app.config['INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH'])
if segmentation["max_tokens"] < 50 or segmentation["max_tokens"] > max_segmentation_tokens_length:
raise ValueError(f"Custom segment length should be between 50 and {max_segmentation_tokens_length}.")

View File

@ -118,7 +118,7 @@ class LLMGenerator:
return questions
@classmethod
def generate_rule_config(cls, tenant_id: str, instruction: str, model_config: dict, no_variable: bool, rule_config_max_tokens: int = 512) -> dict:
def generate_rule_config(cls, tenant_id: str, instruction: str, model_config: dict, no_variable: bool) -> dict:
output_parser = RuleConfigGeneratorOutputParser()
error = ""
@ -130,7 +130,7 @@ class LLMGenerator:
"error": ""
}
model_parameters = {
"max_tokens": rule_config_max_tokens,
"max_tokens": 512,
"temperature": 0.01
}

View File

@ -86,9 +86,6 @@
- `agent-thought` Agent reasoning; generally, models over 70B have chain-of-thought capability.
- `vision` Vision, i.e., image understanding.
- `tool-call`
- `multi-tool-call`
- `stream-tool-call`
### FetchFrom

View File

@ -87,9 +87,6 @@
- `agent-thought` Agent 推理,一般超过 70B 有思维链能力。
- `vision` 视觉,即:图像理解。
- `tool-call` 工具调用
- `multi-tool-call` 多工具调用
- `stream-tool-call` 流式工具调用
### FetchFrom

View File

@ -162,7 +162,7 @@ class AIModel(ABC):
# traverse all model_schema_yaml_paths
for model_schema_yaml_path in model_schema_yaml_paths:
# read yaml data from yaml file
yaml_data = load_yaml_file(model_schema_yaml_path)
yaml_data = load_yaml_file(model_schema_yaml_path, ignore_error=True)
new_parameter_rules = []
for parameter_rule in yaml_data.get('parameter_rules', []):

View File

@ -44,7 +44,7 @@ class ModelProvider(ABC):
# read provider schema from yaml file
yaml_path = os.path.join(current_path, f'{provider_name}.yaml')
yaml_data = load_yaml_file(yaml_path)
yaml_data = load_yaml_file(yaml_path, ignore_error=True)
try:
# yaml_data to entity

View File

@ -1,16 +1,18 @@
import hashlib
import logging
import re
import subprocess
import uuid
from abc import abstractmethod
from typing import Optional
from pydantic import ConfigDict
from core.model_runtime.entities.model_entities import ModelPropertyKey, ModelType
from core.model_runtime.errors.invoke import InvokeBadRequestError
from core.model_runtime.model_providers.__base.ai_model import AIModel
logger = logging.getLogger(__name__)
class TTSModel(AIModel):
"""
Model class for text-to-speech (TTS) model.
@ -35,6 +37,8 @@ class TTSModel(AIModel):
:return: translated audio file
"""
try:
logger.info(f"Invoke TTS model: {model} , invoke content : {content_text}")
self._is_ffmpeg_installed()
return self._invoke(model=model, credentials=credentials, user=user,
content_text=content_text, voice=voice, tenant_id=tenant_id)
except Exception as e:
@ -71,8 +75,7 @@ class TTSModel(AIModel):
if model_schema and ModelPropertyKey.VOICES in model_schema.model_properties:
voices = model_schema.model_properties[ModelPropertyKey.VOICES]
if language:
return [{'name': d['name'], 'value': d['mode']} for d in voices if
language and language in d.get('language')]
return [{'name': d['name'], 'value': d['mode']} for d in voices if language and language in d.get('language')]
else:
return [{'name': d['name'], 'value': d['mode']} for d in voices]
@ -143,3 +146,28 @@ class TTSModel(AIModel):
if one_sentence != '':
result.append(one_sentence)
return result
@staticmethod
def _is_ffmpeg_installed():
try:
output = subprocess.check_output("ffmpeg -version", shell=True)
if "ffmpeg version" in output.decode("utf-8"):
return True
else:
raise InvokeBadRequestError("ffmpeg is not installed, "
"details: https://docs.dify.ai/getting-started/install-self-hosted"
"/install-faq#id-14.-what-to-do-if-this-error-occurs-in-text-to-speech")
except Exception:
raise InvokeBadRequestError("ffmpeg is not installed, "
"details: https://docs.dify.ai/getting-started/install-self-hosted"
"/install-faq#id-14.-what-to-do-if-this-error-occurs-in-text-to-speech")
# Todo: To improve the streaming function
@staticmethod
def _get_file_name(file_content: str) -> str:
hash_object = hashlib.sha256(file_content.encode())
hex_digest = hash_object.hexdigest()
namespace_uuid = uuid.UUID('a5da6ef9-b303-596f-8e88-bf8fa40f4b31')
unique_uuid = uuid.uuid5(namespace_uuid, hex_digest)
return str(unique_uuid)
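Self-contained sketch of the deterministic file naming shown above: hash the sentence text with SHA-256, then derive a stable UUIDv5 from the digest under a fixed namespace, so identical input always maps to the same file name. The namespace UUID is the one from the hunk.

import hashlib
import uuid

def get_file_name(file_content: str) -> str:
    hex_digest = hashlib.sha256(file_content.encode()).hexdigest()
    namespace_uuid = uuid.UUID('a5da6ef9-b303-596f-8e88-bf8fa40f4b31')
    return str(uuid.uuid5(namespace_uuid, hex_digest))

print(get_file_name('Hello Dify!') == get_file_name('Hello Dify!'))  # True: naming is stable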

View File

@ -6,7 +6,6 @@
- nvidia
- nvidia_nim
- cohere
- upstage
- bedrock
- togetherai
- openrouter

View File

@ -116,8 +116,7 @@ class AnthropicLargeLanguageModel(LargeLanguageModel):
# Add the new header for claude-3-5-sonnet-20240620 model
extra_headers = {}
if model == "claude-3-5-sonnet-20240620":
if model_parameters.get('max_tokens') > 4096:
extra_headers["anthropic-beta"] = "max-tokens-3-5-sonnet-2024-07-15"
extra_headers["anthropic-beta"] = "max-tokens-3-5-sonnet-2024-07-15"
if tools:
extra_model_kwargs['tools'] = [

View File

@ -496,158 +496,6 @@ LLM_BASE_MODELS = [
)
)
),
AzureBaseModel(
base_model_name='gpt-4o-mini',
entity=AIModelEntity(
model='fake-deployment-name',
label=I18nObject(
en_US='fake-deployment-name-label',
),
model_type=ModelType.LLM,
features=[
ModelFeature.AGENT_THOUGHT,
ModelFeature.VISION,
ModelFeature.MULTI_TOOL_CALL,
ModelFeature.STREAM_TOOL_CALL,
],
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_properties={
ModelPropertyKey.MODE: LLMMode.CHAT.value,
ModelPropertyKey.CONTEXT_SIZE: 128000,
},
parameter_rules=[
ParameterRule(
name='temperature',
**PARAMETER_RULE_TEMPLATE[DefaultParameterName.TEMPERATURE],
),
ParameterRule(
name='top_p',
**PARAMETER_RULE_TEMPLATE[DefaultParameterName.TOP_P],
),
ParameterRule(
name='presence_penalty',
**PARAMETER_RULE_TEMPLATE[DefaultParameterName.PRESENCE_PENALTY],
),
ParameterRule(
name='frequency_penalty',
**PARAMETER_RULE_TEMPLATE[DefaultParameterName.FREQUENCY_PENALTY],
),
_get_max_tokens(default=512, min_val=1, max_val=16384),
ParameterRule(
name='seed',
label=I18nObject(
zh_Hans='种子',
en_US='Seed'
),
type='int',
help=I18nObject(
zh_Hans='如果指定,模型将尽最大努力进行确定性采样,使得重复的具有相同种子和参数的请求应该返回相同的结果。不能保证确定性,您应该参考 system_fingerprint 响应参数来监视变化。',
en_US='If specified, model will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.'
),
required=False,
precision=2,
min=0,
max=1,
),
ParameterRule(
name='response_format',
label=I18nObject(
zh_Hans='回复格式',
en_US='response_format'
),
type='string',
help=I18nObject(
zh_Hans='指定模型必须输出的格式',
en_US='specifying the format that the model must output'
),
required=False,
options=['text', 'json_object']
),
],
pricing=PriceConfig(
input=0.150,
output=0.600,
unit=0.000001,
currency='USD',
)
)
),
AzureBaseModel(
base_model_name='gpt-4o-mini-2024-07-18',
entity=AIModelEntity(
model='fake-deployment-name',
label=I18nObject(
en_US='fake-deployment-name-label',
),
model_type=ModelType.LLM,
features=[
ModelFeature.AGENT_THOUGHT,
ModelFeature.VISION,
ModelFeature.MULTI_TOOL_CALL,
ModelFeature.STREAM_TOOL_CALL,
],
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_properties={
ModelPropertyKey.MODE: LLMMode.CHAT.value,
ModelPropertyKey.CONTEXT_SIZE: 128000,
},
parameter_rules=[
ParameterRule(
name='temperature',
**PARAMETER_RULE_TEMPLATE[DefaultParameterName.TEMPERATURE],
),
ParameterRule(
name='top_p',
**PARAMETER_RULE_TEMPLATE[DefaultParameterName.TOP_P],
),
ParameterRule(
name='presence_penalty',
**PARAMETER_RULE_TEMPLATE[DefaultParameterName.PRESENCE_PENALTY],
),
ParameterRule(
name='frequency_penalty',
**PARAMETER_RULE_TEMPLATE[DefaultParameterName.FREQUENCY_PENALTY],
),
_get_max_tokens(default=512, min_val=1, max_val=16384),
ParameterRule(
name='seed',
label=I18nObject(
zh_Hans='种子',
en_US='Seed'
),
type='int',
help=I18nObject(
zh_Hans='如果指定,模型将尽最大努力进行确定性采样,使得重复的具有相同种子和参数的请求应该返回相同的结果。不能保证确定性,您应该参考 system_fingerprint 响应参数来监视变化。',
en_US='If specified, model will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed, and you should refer to the system_fingerprint response parameter to monitor changes in the backend.'
),
required=False,
precision=2,
min=0,
max=1,
),
ParameterRule(
name='response_format',
label=I18nObject(
zh_Hans='回复格式',
en_US='response_format'
),
type='string',
help=I18nObject(
zh_Hans='指定模型必须输出的格式',
en_US='specifying the format that the model must output'
),
required=False,
options=['text', 'json_object']
),
],
pricing=PriceConfig(
input=0.150,
output=0.600,
unit=0.000001,
currency='USD',
)
)
),
AzureBaseModel(
base_model_name='gpt-4o',
entity=AIModelEntity(

View File

@ -114,18 +114,6 @@ model_credential_schema:
show_on:
- variable: __model_type
value: llm
- label:
en_US: gpt-4o-mini
value: gpt-4o-mini
show_on:
- variable: __model_type
value: llm
- label:
en_US: gpt-4o-mini-2024-07-18
value: gpt-4o-mini-2024-07-18
show_on:
- variable: __model_type
value: llm
- label:
en_US: gpt-4o
value: gpt-4o

View File

@ -375,10 +375,6 @@ class AzureOpenAILargeLanguageModel(_CommonAzureOpenAI, LargeLanguageModel):
continue
delta = chunk.choices[0]
# NOTE: Fixes https://github.com/langgenius/dify/issues/5790
if delta.delta is None:
continue
# extract tool calls from response
self._update_tool_calls(tool_calls=tool_calls, tool_calls_response=delta.delta.tool_calls)

View File

@ -1,8 +1,12 @@
import concurrent.futures
import copy
from functools import reduce
from io import BytesIO
from typing import Optional
from flask import Response
from openai import AzureOpenAI
from pydub import AudioSegment
from core.model_runtime.entities.model_entities import AIModelEntity
from core.model_runtime.errors.invoke import InvokeBadRequestError
@ -47,7 +51,7 @@ class AzureOpenAIText2SpeechModel(_CommonAzureOpenAI, TTSModel):
:return: text translated to audio file
"""
try:
self._tts_invoke_streaming(
self._tts_invoke(
model=model,
credentials=credentials,
content_text='Hello Dify!',
@ -56,6 +60,45 @@ class AzureOpenAIText2SpeechModel(_CommonAzureOpenAI, TTSModel):
except Exception as ex:
raise CredentialsValidateFailedError(str(ex))
def _tts_invoke(self, model: str, credentials: dict, content_text: str, voice: str) -> Response:
"""
_tts_invoke text2speech model
:param model: model name
:param credentials: model credentials
:param content_text: text content to be translated
:param voice: model timbre
:return: text translated to audio file
"""
audio_type = self._get_model_audio_type(model, credentials)
word_limit = self._get_model_word_limit(model, credentials)
max_workers = self._get_model_workers_limit(model, credentials)
try:
sentences = list(self._split_text_into_sentences(org_text=content_text, max_length=word_limit))
audio_bytes_list = []
# Create a thread pool and map the function to the list of sentences
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
futures = [executor.submit(self._process_sentence, sentence=sentence, model=model, voice=voice,
credentials=credentials) for sentence in sentences]
for future in futures:
try:
if future.result():
audio_bytes_list.append(future.result())
except Exception as ex:
raise InvokeBadRequestError(str(ex))
if len(audio_bytes_list) > 0:
audio_segments = [AudioSegment.from_file(BytesIO(audio_bytes), format=audio_type) for audio_bytes in
audio_bytes_list if audio_bytes]
combined_segment = reduce(lambda x, y: x + y, audio_segments)
buffer: BytesIO = BytesIO()
combined_segment.export(buffer, format=audio_type)
buffer.seek(0)
return Response(buffer.read(), status=200, mimetype=f"audio/{audio_type}")
except Exception as ex:
raise InvokeBadRequestError(str(ex))
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str,
voice: str) -> any:
"""
@ -101,6 +144,7 @@ class AzureOpenAIText2SpeechModel(_CommonAzureOpenAI, TTSModel):
:param sentence: text content to be translated
:return: text translated to audio file
"""
# transform credentials to kwargs for model instance
credentials_kwargs = self._to_credential_kwargs(credentials)
client = AzureOpenAI(**credentials_kwargs)
response = client.audio.speech.create(model=model, voice=voice, input=sentence.strip())
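Simplified sketch of the fan-out pattern in _tts_invoke above: split the text into sentences, synthesize each in a thread pool, then concatenate the results in order. synthesize() is a placeholder for the real speech call, and the pydub-based audio merging is omitted.

import concurrent.futures

def synthesize(sentence: str) -> bytes:
    return sentence.encode()  # placeholder for client.audio.speech.create(...)

sentences = ['Hello Dify!', 'This is a test.']
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(synthesize, sentence) for sentence in sentences]
    audio_chunks = [future.result() for future in futures if future.result()]

combined = b''.join(audio_chunks)  # the real code merges AudioSegment objects instead
print(len(combined))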

View File

@ -10,14 +10,10 @@
- cohere.command-text-v14
- cohere.command-r-plus-v1.0
- cohere.command-r-v1.0
- meta.llama3-1-8b-instruct-v1:0
- meta.llama3-1-70b-instruct-v1:0
- meta.llama3-1-405b-instruct-v1:0
- meta.llama3-8b-instruct-v1:0
- meta.llama3-70b-instruct-v1:0
- meta.llama2-13b-chat-v1
- meta.llama2-70b-chat-v1
- mistral.mistral-large-2407-v1:0
- mistral.mistral-small-2402-v1:0
- mistral.mistral-large-2402-v1:0
- mistral.mixtral-8x7b-instruct-v0:1

View File

@ -3,7 +3,8 @@ label:
en_US: Command R+
model_type: llm
features:
- tool-call
#- multi-tool-call
- agent-thought
#- stream-tool-call
model_properties:
mode: chat

View File

@ -3,7 +3,9 @@ label:
en_US: Command R
model_type: llm
features:
- tool-call
#- multi-tool-call
- agent-thought
#- stream-tool-call
model_properties:
mode: chat
context_size: 128000

View File

@ -17,6 +17,7 @@ from botocore.exceptions import (
ServiceNotInRegionError,
UnknownServiceError,
)
from cohere import ChatMessage
# local import
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta
@ -41,6 +42,7 @@ from core.model_runtime.errors.invoke import (
)
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from core.model_runtime.model_providers.cohere.llm.llm import CohereLargeLanguageModel
logger = logging.getLogger(__name__)
@ -57,7 +59,6 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
{'prefix': 'mistral.mixtral-8x7b-instruct', 'support_system_prompts': False, 'support_tool_use': False},
{'prefix': 'mistral.mistral-large', 'support_system_prompts': True, 'support_tool_use': True},
{'prefix': 'mistral.mistral-small', 'support_system_prompts': True, 'support_tool_use': True},
{'prefix': 'cohere.command-r', 'support_system_prompts': True, 'support_tool_use': True},
{'prefix': 'amazon.titan', 'support_system_prompts': False, 'support_tool_use': False}
]
@ -93,8 +94,86 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
model_info['model'] = model
# invoke models via boto3 converse API
return self._generate_with_converse(model_info, credentials, prompt_messages, model_parameters, stop, stream, user, tools)
# invoke Cohere models via boto3 client
if "cohere.command-r" in model:
return self._generate_cohere_chat(model, credentials, prompt_messages, model_parameters, stop, stream, user, tools)
# invoke other models via boto3 client
return self._generate(model, credentials, prompt_messages, model_parameters, stop, stream, user)
def _generate_cohere_chat(
self, model: str, credentials: dict, prompt_messages: list[PromptMessage], model_parameters: dict,
stop: Optional[list[str]] = None, stream: bool = True, user: Optional[str] = None,
tools: Optional[list[PromptMessageTool]] = None,) -> Union[LLMResult, Generator]:
cohere_llm = CohereLargeLanguageModel()
client_config = Config(
region_name=credentials["aws_region"]
)
runtime_client = boto3.client(
service_name='bedrock-runtime',
config=client_config,
aws_access_key_id=credentials["aws_access_key_id"],
aws_secret_access_key=credentials["aws_secret_access_key"]
)
extra_model_kwargs = {}
if stop:
extra_model_kwargs['stop_sequences'] = stop
if tools:
tools = cohere_llm._convert_tools(tools)
model_parameters['tools'] = tools
message, chat_histories, tool_results \
= cohere_llm._convert_prompt_messages_to_message_and_chat_histories(prompt_messages)
if tool_results:
model_parameters['tool_results'] = tool_results
payload = {
**model_parameters,
"message": message,
"chat_history": chat_histories,
}
# need a workaround for ai21 models, which don't support streaming
if stream:
invoke = runtime_client.invoke_model_with_response_stream
else:
invoke = runtime_client.invoke_model
def serialize(obj):
if isinstance(obj, ChatMessage):
return obj.__dict__
raise TypeError(f"Type {type(obj)} not serializable")
try:
body_jsonstr=json.dumps(payload, default=serialize)
response = invoke(
modelId=model,
contentType="application/json",
accept="*/*",
body=body_jsonstr
)
except ClientError as ex:
error_code = ex.response['Error']['Code']
full_error_msg = f"{error_code}: {ex.response['Error']['Message']}"
raise self._map_client_to_invoke_error(error_code, full_error_msg)
except (EndpointConnectionError, NoRegionError, ServiceNotInRegionError) as ex:
raise InvokeConnectionError(str(ex))
except UnknownServiceError as ex:
raise InvokeServerUnavailableError(str(ex))
except Exception as ex:
raise InvokeError(str(ex))
if stream:
return self._handle_generate_stream_response(model, credentials, response, prompt_messages)
return self._handle_generate_response(model, credentials, response, prompt_messages)
def _generate_with_converse(self, model_info: dict, credentials: dict, prompt_messages: list[PromptMessage], model_parameters: dict,
stop: Optional[list[str]] = None, stream: bool = True, user: Optional[str] = None, tools: Optional[list[PromptMessageTool]] = None,) -> Union[LLMResult, Generator]:
@ -129,25 +208,14 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
if model_info['support_tool_use'] and tools:
parameters['toolConfig'] = self._convert_converse_tool_config(tools=tools)
try:
if stream:
response = bedrock_client.converse_stream(**parameters)
return self._handle_converse_stream_response(model_info['model'], credentials, response, prompt_messages)
else:
response = bedrock_client.converse(**parameters)
return self._handle_converse_response(model_info['model'], credentials, response, prompt_messages)
except ClientError as ex:
error_code = ex.response['Error']['Code']
full_error_msg = f"{error_code}: {ex.response['Error']['Message']}"
raise self._map_client_to_invoke_error(error_code, full_error_msg)
except (EndpointConnectionError, NoRegionError, ServiceNotInRegionError) as ex:
raise InvokeConnectionError(str(ex))
except UnknownServiceError as ex:
raise InvokeServerUnavailableError(str(ex))
if stream:
response = bedrock_client.converse_stream(**parameters)
return self._handle_converse_stream_response(model_info['model'], credentials, response, prompt_messages)
else:
response = bedrock_client.converse(**parameters)
return self._handle_converse_response(model_info['model'], credentials, response, prompt_messages)
except Exception as ex:
raise InvokeError(str(ex))
def _handle_converse_response(self, model: str, credentials: dict, response: dict,
prompt_messages: list[PromptMessage]) -> LLMResult:
"""
@ -490,6 +558,7 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
except ClientError as ex:
error_code = ex.response['Error']['Code']
full_error_msg = f"{error_code}: {ex.response['Error']['Message']}"
raise CredentialsValidateFailedError(str(self._map_client_to_invoke_error(error_code, full_error_msg)))
except Exception as ex:
@ -502,9 +571,38 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
:param message: PromptMessage to convert.
:return: String representation of the message.
"""
human_prompt_prefix = ""
human_prompt_postfix = ""
ai_prompt = ""
if model_prefix == "anthropic":
human_prompt_prefix = "\n\nHuman:"
human_prompt_postfix = ""
ai_prompt = "\n\nAssistant:"
elif model_prefix == "meta":
# LLAMA3
if model_name.startswith("llama3"):
human_prompt_prefix = "<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
human_prompt_postfix = "<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
ai_prompt = "\n\nAssistant:"
else:
# LLAMA2
human_prompt_prefix = "\n[INST]"
human_prompt_postfix = "[\\INST]\n"
ai_prompt = ""
elif model_prefix == "mistral":
human_prompt_prefix = "<s>[INST]"
human_prompt_postfix = "[\\INST]\n"
ai_prompt = "\n\nAssistant:"
elif model_prefix == "amazon":
human_prompt_prefix = "\n\nUser:"
human_prompt_postfix = ""
ai_prompt = "\n\nBot:"
else:
human_prompt_prefix = ""
human_prompt_postfix = ""
ai_prompt = ""
content = message.content
@ -555,7 +653,13 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
model_prefix = model.split('.')[0]
model_name = model.split('.')[1]
if model_prefix == "ai21":
if model_prefix == "amazon":
payload["textGenerationConfig"] = { **model_parameters }
payload["textGenerationConfig"]["stopSequences"] = ["User:"]
payload["inputText"] = self._convert_messages_to_prompt(prompt_messages, model_prefix)
elif model_prefix == "ai21":
payload["temperature"] = model_parameters.get("temperature")
payload["topP"] = model_parameters.get("topP")
payload["maxTokens"] = model_parameters.get("maxTokens")
@ -567,12 +671,28 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
payload["frequencyPenalty"] = {model_parameters.get("frequencyPenalty")}
if model_parameters.get("countPenalty"):
payload["countPenalty"] = {model_parameters.get("countPenalty")}
elif model_prefix == "mistral":
payload["temperature"] = model_parameters.get("temperature")
payload["top_p"] = model_parameters.get("top_p")
payload["max_tokens"] = model_parameters.get("max_tokens")
payload["prompt"] = self._convert_messages_to_prompt(prompt_messages, model_prefix)
payload["stop"] = stop[:10] if stop else []
elif model_prefix == "anthropic":
payload = { **model_parameters }
payload["prompt"] = self._convert_messages_to_prompt(prompt_messages, model_prefix)
payload["stop_sequences"] = ["\n\nHuman:"] + (stop if stop else [])
elif model_prefix == "cohere":
payload = { **model_parameters }
payload["prompt"] = prompt_messages[0].content
payload["stream"] = stream
elif model_prefix == "meta":
payload = { **model_parameters }
payload["prompt"] = self._convert_messages_to_prompt(prompt_messages, model_prefix, model_name)
else:
raise ValueError(f"Got unknown model prefix {model_prefix}")
@ -663,16 +783,36 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
# get output text and calculate num tokens based on model / provider
model_prefix = model.split('.')[0]
if model_prefix == "ai21":
if model_prefix == "amazon":
output = response_body.get("results")[0].get("outputText").strip('\n')
prompt_tokens = response_body.get("inputTextTokenCount")
completion_tokens = response_body.get("results")[0].get("tokenCount")
elif model_prefix == "ai21":
output = response_body.get('completions')[0].get('data').get('text')
prompt_tokens = len(response_body.get("prompt").get("tokens"))
completion_tokens = len(response_body.get('completions')[0].get('data').get('tokens'))
elif model_prefix == "anthropic":
output = response_body.get("completion")
prompt_tokens = self.get_num_tokens(model, credentials, prompt_messages)
completion_tokens = self.get_num_tokens(model, credentials, output if output else '')
elif model_prefix == "cohere":
output = response_body.get("generations")[0].get("text")
prompt_tokens = self.get_num_tokens(model, credentials, prompt_messages)
completion_tokens = self.get_num_tokens(model, credentials, output if output else '')
elif model_prefix == "meta":
output = response_body.get("generation").strip('\n')
prompt_tokens = response_body.get("prompt_token_count")
completion_tokens = response_body.get("generation_token_count")
elif model_prefix == "mistral":
output = response_body.get("outputs")[0].get("text")
prompt_tokens = response.get('ResponseMetadata').get('HTTPHeaders').get('x-amzn-bedrock-input-token-count')
completion_tokens = response.get('ResponseMetadata').get('HTTPHeaders').get('x-amzn-bedrock-output-token-count')
else:
raise ValueError(f"Got unknown model prefix {model_prefix} when handling block response")
@ -743,10 +883,26 @@ class BedrockLargeLanguageModel(LargeLanguageModel):
payload = json.loads(chunk.get('bytes').decode())
model_prefix = model.split('.')[0]
if model_prefix == "cohere":
if model_prefix == "amazon":
content_delta = payload.get("outputText").strip('\n')
finish_reason = payload.get("completion_reason")
elif model_prefix == "anthropic":
content_delta = payload.get("completion")
finish_reason = payload.get("stop_reason")
elif model_prefix == "cohere":
content_delta = payload.get("text")
finish_reason = payload.get("finish_reason")
elif model_prefix == "mistral":
content_delta = payload.get('outputs')[0].get("text")
finish_reason = payload.get('outputs')[0].get("stop_reason")
elif model_prefix == "meta":
content_delta = payload.get("generation").strip('\n')
finish_reason = payload.get("stop_reason")
else:
raise ValueError(f"Got unknown model prefix {model_prefix} when handling stream response")

View File

@ -1,25 +0,0 @@
model: meta.llama3-1-405b-instruct-v1:0
label:
en_US: Llama 3.1 405B Instruct
model_type: llm
model_properties:
mode: completion
context_size: 128000
parameter_rules:
- name: temperature
use_template: temperature
default: 0.5
- name: top_p
use_template: top_p
default: 0.9
- name: max_gen_len
use_template: max_tokens
required: true
default: 512
min: 1
max: 2048
pricing:
input: '0.00532'
output: '0.016'
unit: '0.001'
currency: USD

View File

@ -1,25 +0,0 @@
model: meta.llama3-1-70b-instruct-v1:0
label:
en_US: Llama 3.1 Instruct 70B
model_type: llm
model_properties:
mode: completion
context_size: 128000
parameter_rules:
- name: temperature
use_template: temperature
default: 0.5
- name: top_p
use_template: top_p
default: 0.9
- name: max_gen_len
use_template: max_tokens
required: true
default: 512
min: 1
max: 2048
pricing:
input: '0.00265'
output: '0.0035'
unit: '0.001'
currency: USD

View File

@ -1,25 +0,0 @@
model: meta.llama3-1-8b-instruct-v1:0
label:
en_US: Llama 3.1 Instruct 8B
model_type: llm
model_properties:
mode: completion
context_size: 128000
parameter_rules:
- name: temperature
use_template: temperature
default: 0.5
- name: top_p
use_template: top_p
default: 0.9
- name: max_gen_len
use_template: max_tokens
required: true
default: 512
min: 1
max: 2048
pricing:
input: '0.0003'
output: '0.0006'
unit: '0.001'
currency: USD

View File

@ -1,29 +0,0 @@
model: mistral.mistral-large-2407-v1:0
label:
en_US: Mistral Large 2 (24.07)
model_type: llm
features:
- tool-call
model_properties:
mode: completion
context_size: 128000
parameter_rules:
- name: temperature
use_template: temperature
required: false
default: 0.7
- name: top_p
use_template: top_p
required: false
default: 1
- name: max_tokens
use_template: max_tokens
required: true
default: 512
min: 1
max: 8192
pricing:
input: '0.003'
output: '0.009'
unit: '0.001'
currency: USD

View File

@ -5,8 +5,6 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
- stream-tool-call
model_properties:
mode: chat
context_size: 128000
@ -25,7 +23,7 @@ parameter_rules:
type: int
default: 4096
min: 1
max: 8192
max: 4096
help:
zh_Hans: 指定生成结果长度的上限。如果生成结果截断,可以调大该参数。
en_US: Specifies the upper limit on the length of generated results. If the generated results are truncated, you can increase this parameter.
@ -59,18 +57,6 @@ parameter_rules:
help:
zh_Hans: 介于 -2.0 和 2.0 之间的数字。如果该值为正,那么新 token 会根据其在已有文本中的出现频率受到相应的惩罚,降低模型重复相同内容的可能性。
en_US: A number between -2.0 and 2.0. If the value is positive, new tokens are penalized based on their frequency of occurrence in existing text, reducing the likelihood that the model will repeat the same content.
- name: response_format
label:
zh_Hans: 回复格式
en_US: response_format
type: string
help:
zh_Hans: 指定模型必须输出的格式
en_US: specifying the format that the model must output
required: false
options:
- text
- json_object
pricing:
input: '1'
output: '2'

View File

@ -5,8 +5,6 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
- stream-tool-call
model_properties:
mode: chat
context_size: 128000

View File

@ -19,7 +19,7 @@ parameter_rules:
min: 1
max: 8192
pricing:
input: '0.59'
output: '0.79'
input: '0.05'
output: '0.1'
unit: '0.000001'
currency: USD

View File

@ -19,7 +19,7 @@ parameter_rules:
min: 1
max: 8192
pricing:
input: '0.05'
output: '0.08'
input: '0.59'
output: '0.79'
unit: '0.000001'
currency: USD

View File

@ -18,7 +18,6 @@ help:
en_US: https://console.cloud.tencent.com/cam/capi
supported_model_types:
- llm
- text-embedding
configurate_methods:
- predefined-model
provider_credential_schema:

View File

@ -21,16 +21,6 @@ parameter_rules:
default: 1024
min: 1
max: 32000
- name: enable_enhance
label:
zh_Hans: 功能增强
en_US: Enable Enhancement
type: boolean
help:
zh_Hans: 功能增强(如搜索)开关,关闭时将直接由主模型生成回复内容,可以降低响应时延(对于流式输出时的首字时延尤为明显)。但在少数场景里,回复效果可能会下降。
en_US: Allow the model to perform external search to enhance the generation results.
required: false
default: true
pricing:
input: '0.03'
output: '0.10'

View File

@ -21,16 +21,6 @@ parameter_rules:
default: 1024
min: 1
max: 256000
- name: enable_enhance
label:
zh_Hans: 功能增强
en_US: Enable Enhancement
type: boolean
help:
zh_Hans: 功能增强(如搜索)开关,关闭时将直接由主模型生成回复内容,可以降低响应时延(对于流式输出时的首字时延尤为明显)。但在少数场景里,回复效果可能会下降。
en_US: Allow the model to perform external search to enhance the generation results.
required: false
default: true
pricing:
input: '0.015'
output: '0.06'

View File

@ -21,16 +21,6 @@ parameter_rules:
default: 1024
min: 1
max: 32000
- name: enable_enhance
label:
zh_Hans: 功能增强
en_US: Enable Enhancement
type: boolean
help:
zh_Hans: 功能增强(如搜索)开关,关闭时将直接由主模型生成回复内容,可以降低响应时延(对于流式输出时的首字时延尤为明显)。但在少数场景里,回复效果可能会下降。
en_US: Allow the model to perform external search to enhance the generation results.
required: false
default: true
pricing:
input: '0.0045'
output: '0.0005'

View File

@ -14,7 +14,6 @@ from core.model_runtime.entities.message_entities import (
PromptMessage,
PromptMessageTool,
SystemPromptMessage,
ToolPromptMessage,
UserPromptMessage,
)
from core.model_runtime.errors.invoke import InvokeError
@ -36,8 +35,7 @@ class HunyuanLargeLanguageModel(LargeLanguageModel):
custom_parameters = {
'Temperature': model_parameters.get('temperature', 0.0),
'TopP': model_parameters.get('top_p', 1.0),
'EnableEnhancement': model_parameters.get('enable_enhance', True)
'TopP': model_parameters.get('top_p', 1.0)
}
params = {
@ -46,17 +44,6 @@ class HunyuanLargeLanguageModel(LargeLanguageModel):
"Stream": stream,
**custom_parameters,
}
# add Tools and ToolChoice
if (tools and len(tools) > 0):
params['ToolChoice'] = "auto"
params['Tools'] = [{
"Type": "function",
"Function": {
"Name": tool.name,
"Description": tool.description,
"Parameters": json.dumps(tool.parameters)
}
} for tool in tools]
request.from_json_string(json.dumps(params))
response = client.ChatCompletions(request)
@ -102,43 +89,9 @@ class HunyuanLargeLanguageModel(LargeLanguageModel):
def _convert_prompt_messages_to_dicts(self, prompt_messages: list[PromptMessage]) -> list[dict]:
"""Convert a list of PromptMessage objects to a list of dictionaries with 'Role' and 'Content' keys."""
dict_list = []
for message in prompt_messages:
if isinstance(message, AssistantPromptMessage):
tool_calls = message.tool_calls
if (tool_calls and len(tool_calls) > 0):
dict_tool_calls = [
{
"Id": tool_call.id,
"Type": tool_call.type,
"Function": {
"Name": tool_call.function.name,
"Arguments": tool_call.function.arguments if (tool_call.function.arguments == "") else "{}"
}
} for tool_call in tool_calls]
dict_list.append({
"Role": message.role.value,
# fix set content = "" while tool_call request
# fix [hunyuan] None, [TencentCloudSDKException] code:InvalidParameter message:Messages Content and Contents not allowed empty at the same time.
"Content": " ", # message.content if (message.content is not None) else "",
"ToolCalls": dict_tool_calls
})
else:
dict_list.append({ "Role": message.role.value, "Content": message.content })
elif isinstance(message, ToolPromptMessage):
tool_execute_result = { "result": message.content }
content =json.dumps(tool_execute_result, ensure_ascii=False)
dict_list.append({ "Role": message.role.value, "Content": content, "ToolCallId": message.tool_call_id })
else:
dict_list.append({ "Role": message.role.value, "Content": message.content })
return dict_list
return [{"Role": message.role.value, "Content": message.content} for message in prompt_messages]
def _handle_stream_chat_response(self, model, credentials, prompt_messages, resp):
tool_call = None
tool_calls = []
for index, event in enumerate(resp):
logging.debug("_handle_stream_chat_response, event: %s", event)
@ -156,54 +109,20 @@ class HunyuanLargeLanguageModel(LargeLanguageModel):
usage = data.get('Usage', {})
prompt_tokens = usage.get('PromptTokens', 0)
completion_tokens = usage.get('CompletionTokens', 0)
response_tool_calls = delta.get('ToolCalls')
if (response_tool_calls is not None):
new_tool_calls = self._extract_response_tool_calls(response_tool_calls)
if (len(new_tool_calls) > 0):
new_tool_call = new_tool_calls[0]
if (tool_call is None): tool_call = new_tool_call
elif (tool_call.id != new_tool_call.id):
tool_calls.append(tool_call)
tool_call = new_tool_call
else:
tool_call.function.name += new_tool_call.function.name
tool_call.function.arguments += new_tool_call.function.arguments
if (tool_call is not None and len(tool_call.function.name) > 0 and len(tool_call.function.arguments) > 0):
tool_calls.append(tool_call)
tool_call = None
usage = self._calc_response_usage(model, credentials, prompt_tokens, completion_tokens)
assistant_prompt_message = AssistantPromptMessage(
content=message_content,
tool_calls=[]
)
# rewrite content = "" while tool_call to avoid show content on web page
if (len(tool_calls) > 0): assistant_prompt_message.content = ""
# add tool_calls to assistant_prompt_message
if (finish_reason == 'tool_calls'):
assistant_prompt_message.tool_calls = tool_calls
tool_call = None
tool_calls = []
if (len(finish_reason) > 0):
usage = self._calc_response_usage(model, credentials, prompt_tokens, completion_tokens)
delta_chunk = LLMResultChunkDelta(
index=index,
role=delta.get('Role', 'assistant'),
message=assistant_prompt_message,
usage=usage,
finish_reason=finish_reason,
)
tool_call = None
tool_calls = []
else:
delta_chunk = LLMResultChunkDelta(
index=index,
message=assistant_prompt_message,
)
delta_chunk = LLMResultChunkDelta(
index=index,
role=delta.get('Role', 'assistant'),
message=assistant_prompt_message,
usage=usage,
finish_reason=finish_reason,
)
yield LLMResultChunk(
model=model,
@ -258,15 +177,12 @@ class HunyuanLargeLanguageModel(LargeLanguageModel):
"""
human_prompt = "\n\nHuman:"
ai_prompt = "\n\nAssistant:"
tool_prompt = "\n\nTool:"
content = message.content
if isinstance(message, UserPromptMessage):
message_text = f"{human_prompt} {content}"
elif isinstance(message, AssistantPromptMessage):
message_text = f"{ai_prompt} {content}"
elif isinstance(message, ToolPromptMessage):
message_text = f"{tool_prompt} {content}"
elif isinstance(message, SystemPromptMessage):
message_text = content
else:
@ -287,30 +203,3 @@ class HunyuanLargeLanguageModel(LargeLanguageModel):
return {
InvokeError: [TencentCloudSDKException],
}
def _extract_response_tool_calls(self,
response_tool_calls: list[dict]) \
-> list[AssistantPromptMessage.ToolCall]:
"""
Extract tool calls from response
:param response_tool_calls: response tool calls
:return: list of tool calls
"""
tool_calls = []
if response_tool_calls:
for response_tool_call in response_tool_calls:
response_function = response_tool_call.get('Function', {})
function = AssistantPromptMessage.ToolCall.ToolCallFunction(
name=response_function.get('Name', ''),
arguments=response_function.get('Arguments', '')
)
tool_call = AssistantPromptMessage.ToolCall(
id=response_tool_call.get('Id', 0),
type='function',
function=function
)
tool_calls.append(tool_call)
return tool_calls

View File

@ -1,5 +0,0 @@
model: hunyuan-embedding
model_type: text-embedding
model_properties:
context_size: 1024
max_chunks: 1

View File

@ -1,173 +0,0 @@
import json
import logging
import time
from typing import Optional
from tencentcloud.common import credential
from tencentcloud.common.exception import TencentCloudSDKException
from tencentcloud.common.profile.client_profile import ClientProfile
from tencentcloud.common.profile.http_profile import HttpProfile
from tencentcloud.hunyuan.v20230901 import hunyuan_client, models
from core.model_runtime.entities.model_entities import PriceType
from core.model_runtime.entities.text_embedding_entities import EmbeddingUsage, TextEmbeddingResult
from core.model_runtime.errors.invoke import (
InvokeError,
)
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel
logger = logging.getLogger(__name__)
class HunyuanTextEmbeddingModel(TextEmbeddingModel):
"""
Model class for Hunyuan text embedding model.
"""
def _invoke(self, model: str, credentials: dict,
texts: list[str], user: Optional[str] = None) \
-> TextEmbeddingResult:
"""
Invoke text embedding model
:param model: model name
:param credentials: model credentials
:param texts: texts to embed
:param user: unique user id
:return: embeddings result
"""
if model != 'hunyuan-embedding':
raise ValueError('Invalid model name')
client = self._setup_hunyuan_client(credentials)
embeddings = []
token_usage = 0
for input in texts:
request = models.GetEmbeddingRequest()
params = {
"Input": input
}
request.from_json_string(json.dumps(params))
response = client.GetEmbedding(request)
usage = response.Usage.TotalTokens
embeddings.extend([data.Embedding for data in response.Data])
token_usage += usage
result = TextEmbeddingResult(
model=model,
embeddings=embeddings,
usage=self._calc_response_usage(
model=model,
credentials=credentials,
tokens=token_usage
)
)
return result
def validate_credentials(self, model: str, credentials: dict) -> None:
"""
Validate credentials
"""
try:
client = self._setup_hunyuan_client(credentials)
req = models.ChatCompletionsRequest()
params = {
"Model": model,
"Messages": [{
"Role": "user",
"Content": "hello"
}],
"TopP": 1,
"Temperature": 0,
"Stream": False
}
req.from_json_string(json.dumps(params))
client.ChatCompletions(req)
except Exception as e:
raise CredentialsValidateFailedError(f'Credentials validation failed: {e}')
def _setup_hunyuan_client(self, credentials):
secret_id = credentials['secret_id']
secret_key = credentials['secret_key']
cred = credential.Credential(secret_id, secret_key)
httpProfile = HttpProfile()
httpProfile.endpoint = "hunyuan.tencentcloudapi.com"
clientProfile = ClientProfile()
clientProfile.httpProfile = httpProfile
client = hunyuan_client.HunyuanClient(cred, "", clientProfile)
return client
def _calc_response_usage(self, model: str, credentials: dict, tokens: int) -> EmbeddingUsage:
"""
Calculate response usage
:param model: model name
:param credentials: model credentials
:param tokens: input tokens
:return: usage
"""
# get input price info
input_price_info = self.get_price(
model=model,
credentials=credentials,
price_type=PriceType.INPUT,
tokens=tokens
)
# transform usage
usage = EmbeddingUsage(
tokens=tokens,
total_tokens=tokens,
unit_price=input_price_info.unit_price,
price_unit=input_price_info.unit,
total_price=input_price_info.total_amount,
currency=input_price_info.currency,
latency=time.perf_counter() - self.started_at
)
return usage
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
"""
Map model invoke error to unified error
The key is the error type thrown to the caller
The value is the error type thrown by the model,
which needs to be converted into a unified error type for the caller.
:return: Invoke error mapping
"""
return {
InvokeError: [TencentCloudSDKException],
}
def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int:
"""
Get number of tokens for given prompt messages
:param model: model name
:param credentials: model credentials
:param texts: texts to embed
:return:
"""
# client = self._setup_hunyuan_client(credentials)
num_tokens = 0
for text in texts:
num_tokens += self._get_num_tokens_by_gpt2(text)
# use client.GetTokenCount to get num tokens
# request = models.GetTokenCountRequest()
# params = {
# "Prompt": text
# }
# request.from_json_string(json.dumps(params))
# response = client.GetTokenCount(request)
# num_tokens += response.TokenCount
return num_tokens

View File

@ -34,8 +34,3 @@ parameter_rules:
min: -2
max: 2
default: 0
pricing:
input: '0.0027'
output: '0.0027'
unit: '0.0001'
currency: USD

View File

@ -1,41 +0,0 @@
model: jondurbin/airoboros-l2-70b
label:
zh_Hans: jondurbin/airoboros-l2-70b
en_US: jondurbin/airoboros-l2-70b
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 4096
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.005'
output: '0.005'
unit: '0.0001'
currency: USD

View File

@ -1,41 +0,0 @@
model: cognitivecomputations/dolphin-mixtral-8x22b
label:
zh_Hans: cognitivecomputations/dolphin-mixtral-8x22b
en_US: cognitivecomputations/dolphin-mixtral-8x22b
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 16000
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.009'
output: '0.009'
unit: '0.0001'
currency: USD

View File

@ -1,41 +0,0 @@
model: google/gemma-2-9b-it
label:
zh_Hans: google/gemma-2-9b-it
en_US: google/gemma-2-9b-it
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 8192
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0008'
output: '0.0008'
unit: '0.0001'
currency: USD

View File

@ -1,41 +0,0 @@
model: nousresearch/hermes-2-pro-llama-3-8b
label:
zh_Hans: nousresearch/hermes-2-pro-llama-3-8b
en_US: nousresearch/hermes-2-pro-llama-3-8b
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 8192
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 2
default: 1
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 2048
default: 512
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
pricing:
input: '0.0014'
output: '0.0014'
unit: '0.0001'
currency: USD

Some files were not shown because too many files have changed in this diff.