Compare commits


17 Commits

Author SHA1 Message Date
2ab04bb933 fix Reranking mode is null 2024-08-06 19:03:32 +08:00
a3c2ab9a6e fix Reranking mode is null 2024-08-06 18:34:07 +08:00
c53875ce8c fix #6902 .docx handles images within tables and handles cross-column tables (#6951) 2024-08-06 17:14:24 +08:00
7f18c06b0a fix: code-block-missing-checks (#7002) 2024-08-06 16:11:14 +08:00
96dcf0fe8a fix: code tool fails when null property exists in object (#6988) 2024-08-06 16:11:00 +08:00
0c22e4e3d1 Feat/new confirm (#6984) 2024-08-06 14:31:13 +08:00
bd3ed89516 feat: add function calling for deepseek models (#6990) 2024-08-06 13:37:27 +08:00
1c043b8426 Chores: fix name typo (#6987) 2024-08-06 13:33:21 +08:00
23ed15d19f feat:nvidia add nemotron4-340b and microsoft/phi-3 (#6973) 2024-08-06 10:16:41 +08:00
312d905c9b chore: update duckduckgo tool (#6983) 2024-08-06 10:16:04 +08:00
cba9319cc7 fix doc (#6974) 2024-08-06 10:10:55 +08:00
d839f1ada7 version to 0.6.16 (#6972) 2024-08-05 23:33:37 +08:00
6da14c2d48 security: fix api image security issues (#6971) 2024-08-05 20:21:08 +08:00
a34285196b Revise the wrong pricing of certain LLM models. (#6967) 2024-08-05 18:41:44 +08:00
e4587b2151 chore: MAX_TREE_DEPTH spelling mistake (#6965) 2024-08-05 18:41:08 +08:00
ea30174057 chore: optimize streaming tts of xinference (#6966) 2024-08-05 18:23:23 +08:00
dd676866aa chore: exclude .txt extenstion in create_by_text API (#6956) 2024-08-05 15:52:07 +08:00
81 changed files with 662 additions and 835 deletions

@@ -65,7 +65,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
-You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DELL·E, Stable Diffusion and WolframAlpha.
+You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You could continuously improve prompts, datasets, and models based on production data and annotations.

@@ -57,7 +57,7 @@
**4. خط أنابيب RAG**: قدرات RAG الواسعة التي تغطي كل شيء من استيعاب الوثائق إلى الاسترجاع، مع الدعم الفوري لاستخراج النص من ملفات PDF و PPT وتنسيقات الوثائق الشائعة الأخرى.
-**5. قدرات الوكيل**: يمكنك تعريف الوكلاء بناءً على أمر وظيفة LLM أو ReAct، وإضافة أدوات مدمجة أو مخصصة للوكيل. توفر Dify أكثر من 50 أداة مدمجة لوكلاء الذكاء الاصطناعي، مثل البحث في Google و DELL·E وStable Diffusion و WolframAlpha.
+**5. قدرات الوكيل**: يمكنك تعريف الوكلاء بناءً على أمر وظيفة LLM أو ReAct، وإضافة أدوات مدمجة أو مخصصة للوكيل. توفر Dify أكثر من 50 أداة مدمجة لوكلاء الذكاء الاصطناعي، مثل البحث في Google و DALL·E وStable Diffusion و WolframAlpha.
**6. الـ LLMOps**: راقب وتحلل سجلات التطبيق والأداء على مر الزمن. يمكنك تحسين الأوامر والبيانات والنماذج باستمرار استنادًا إلى البيانات الإنتاجية والتعليقات.

@@ -70,7 +70,7 @@ Dify 是一个开源的 LLM 应用开发平台。其直观的界面结合了 AI
广泛的 RAG 功能,涵盖从文档摄入到检索的所有内容,支持从 PDF、PPT 和其他常见文档格式中提取文本的开箱即用的支持。
**5. Agent 智能体**:
-您可以基于 LLM 函数调用或 ReAct 定义 Agent并为 Agent 添加预构建或自定义工具。Dify 为 AI Agent 提供了50多种内置工具如谷歌搜索、DELL·E、Stable Diffusion 和 WolframAlpha 等。
+您可以基于 LLM 函数调用或 ReAct 定义 Agent并为 Agent 添加预构建或自定义工具。Dify 为 AI Agent 提供了50多种内置工具如谷歌搜索、DALL·E、Stable Diffusion 和 WolframAlpha 等。
**6. LLMOps**:
随时间监视和分析应用程序日志和性能。您可以根据生产数据和标注持续改进提示、数据集和模型。

@@ -70,7 +70,7 @@ Dify es una plataforma de desarrollo de aplicaciones de LLM de código abierto.
**5. Capacidades de agente**:
-Puedes definir agentes basados en LLM Function Calling o ReAct, y agregar herramientas preconstruidas o personalizadas para el agente. Dify proporciona más de 50 herramientas integradas para agentes de IA, como Búsqueda de Google, DELL·E, Difusión Estable y WolframAlpha.
+Puedes definir agentes basados en LLM Function Calling o ReAct, y agregar herramientas preconstruidas o personalizadas para el agente. Dify proporciona más de 50 herramientas integradas para agentes de IA, como Búsqueda de Google, DALL·E, Difusión Estable y WolframAlpha.
**6. LLMOps**:
Supervisa y analiza registros de aplicaciones y rendimiento a lo largo del tiempo. Podrías mejorar continuamente prompts, conjuntos de datos y modelos basados en datos de producción y anotaciones.
@@ -256,4 +256,4 @@ Para proteger tu privacidad, evita publicar problemas de seguridad en GitHub. En
## Licencia
-Este repositorio está disponible bajo la [Licencia de Código Abierto de Dify](LICENSE), que es esencialmente Apache 2.0 con algunas restricciones adicionales.
+Este repositorio está disponible bajo la [Licencia de Código Abierto de Dify](LICENSE), que es esencialmente Apache 2.0 con algunas restricciones adicionales.

@@ -70,7 +70,7 @@ Dify est une plateforme de développement d'applications LLM open source. Son in
**5. Capacités d'agent**:
-Vous pouvez définir des agents basés sur l'appel de fonction LLM ou ReAct, et ajouter des outils pré-construits ou personnalisés pour l'agent. Dify fournit plus de 50 outils intégrés pour les agents d'IA, tels que la recherche Google, DELL·E, Stable Diffusion et WolframAlpha.
+Vous pouvez définir des agents basés sur l'appel de fonction LLM ou ReAct, et ajouter des outils pré-construits ou personnalisés pour l'agent. Dify fournit plus de 50 outils intégrés pour les agents d'IA, tels que la recherche Google, DALL·E, Stable Diffusion et WolframAlpha.
**6. LLMOps**:
Surveillez et analysez les journaux d'application et les performances au fil du temps. Vous pouvez continuellement améliorer les prompts, les ensembles de données et les modèles en fonction des données de production et des annotations.

@@ -69,7 +69,7 @@ DifyはオープンソースのLLMアプリケーション開発プラットフ
ドキュメントの取り込みから検索までをカバーする広範なRAG機能ができます。ほかにもPDF、PPT、その他の一般的なドキュメントフォーマットからのテキスト抽出のサーポイントも提供します。
**5. エージェント機能**:
-LLM Function CallingやReActに基づくエージェントの定義が可能で、AIエージェント用のプリビルトまたはカスタムツールを追加できます。Difyには、Google検索、DELL·E、Stable Diffusion、WolframAlphaなどのAIエージェント用の50以上の組み込みツールが提供します。
+LLM Function CallingやReActに基づくエージェントの定義が可能で、AIエージェント用のプリビルトまたはカスタムツールを追加できます。Difyには、Google検索、DALL·E、Stable Diffusion、WolframAlphaなどのAIエージェント用の50以上の組み込みツールが提供します。
**6. LLMOps**:
アプリケーションのログやパフォーマンスを監視と分析し、生産のデータと注釈に基づいて、プロンプト、データセット、モデルを継続的に改善できます。

@@ -68,7 +68,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
-You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DELL·E, Stable Diffusion and WolframAlpha.
+You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You could continuously improve prompts, datasets, and models based on production data and annotations.
@@ -256,4 +256,4 @@ To protect your privacy, please avoid posting security issues on GitHub. Instead
## License
-This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.
+This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.

@@ -64,7 +64,7 @@
문서 수집부터 검색까지 모든 것을 다루며, PDF, PPT 및 기타 일반적인 문서 형식에서 텍스트 추출을 위한 기본 지원이 포함되어 있는 광범위한 RAG 기능을 제공합니다.
**5. 에이전트 기능**:
-LLM 함수 호출 또는 ReAct를 기반으로 에이전트를 정의하고 에이전트에 대해 사전 구축된 도구나 사용자 정의 도구를 추가할 수 있습니다. Dify는 Google Search, DELL·E, Stable Diffusion, WolframAlpha 등 AI 에이전트를 위한 50개 이상의 내장 도구를 제공합니다.
+LLM 함수 호출 또는 ReAct를 기반으로 에이전트를 정의하고 에이전트에 대해 사전 구축된 도구나 사용자 정의 도구를 추가할 수 있습니다. Dify는 Google Search, DALL·E, Stable Diffusion, WolframAlpha 등 AI 에이전트를 위한 50개 이상의 내장 도구를 제공합니다.
**6. LLMOps**:
시간 경과에 따른 애플리케이션 로그와 성능을 모니터링하고 분석합니다. 생산 데이터와 주석을 기반으로 프롬프트, 데이터세트, 모델을 지속적으로 개선할 수 있습니다.

@@ -41,8 +41,12 @@ ENV TZ=UTC
WORKDIR /app/api
RUN apt-get update \
&& apt-get install -y --no-install-recommends curl wget vim nodejs ffmpeg libgmp-dev libmpfr-dev libmpc-dev \
&& apt-get autoremove \
&& apt-get install -y --no-install-recommends curl nodejs libgmp-dev libmpfr-dev libmpc-dev \
&& echo "deb http://deb.debian.org/debian testing main" > /etc/apt/sources.list \
&& apt-get update \
# For Security
&& apt-get install -y --no-install-recommends zlib1g=1:1.3.dfsg+really1.3.1-1 expat=2.6.2-1 libldap-2.5-0=2.5.18+dfsg-2 perl=5.38.2-5 libsqlite3-0=3.46.0-1 \
&& apt-get autoremove -y \
&& rm -rf /var/lib/apt/lists/*
# Copy Python environment and packages

@@ -9,7 +9,7 @@ class PackagingInfo(BaseSettings):
CURRENT_VERSION: str = Field(
description='Dify version',
-default='0.6.15',
+default='0.6.16',
)
COMMIT_SHA: str = Field(

@@ -17,7 +17,6 @@ from fields.app_fields import (
from libs.login import login_required
from services.app_dsl_service import AppDslService
from services.app_service import AppService
from services.feature_service import FeatureService
ALLOW_CREATE_APP_MODES = ['chat', 'agent-chat', 'advanced-chat', 'workflow', 'completion']
@@ -363,32 +362,6 @@ class AppTraceApi(Resource):
return {"result": "success"}
class AppSSOApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self):
return FeatureService.get_system_features().model_dump()
@setup_required
@login_required
@account_initialization_required
def patch(self):
parser = reqparse.RequestParser()
parser.add_argument('exclude_app_id_list', type=list, location='json')
if not current_user.is_editor:
raise Forbidden()
args = parser.parse_args()
current_user_id = current_user.id
FeatureService.update_web_sso_exclude_apps(args['exclude_app_id_list'], current_user_id)
return {"result": "success"}
api.add_resource(AppListApi, '/apps')
api.add_resource(AppImportApi, '/apps/import')
api.add_resource(AppImportFromUrlApi, '/apps/import/url')
@@ -400,4 +373,3 @@ api.add_resource(AppIconApi, '/apps/<uuid:app_id>/icon')
api.add_resource(AppSiteStatus, '/apps/<uuid:app_id>/site-enable')
api.add_resource(AppApiStatus, '/apps/<uuid:app_id>/api-enable')
api.add_resource(AppTraceApi, '/apps/<uuid:app_id>/trace')
api.add_resource(AppSSOApi, '/apps/web-sso')

@@ -14,12 +14,10 @@ from services.feature_service import FeatureService
class PassportResource(Resource):
"""Base resource for passport."""
def get(self):
def get(self, app_id):
system_features = FeatureService.get_system_features()
web_sso_exclude_apps = system_features.sso_exclude_apps
if system_features.sso_enforced_for_web and app_id not in web_sso_exclude_apps:
if system_features.sso_enforced_for_web:
raise WebSSOAuthRequiredError()
app_code = request.headers.get('X-App-Code')

@@ -1,18 +1,16 @@
import hashlib
import logging
import re
import subprocess
import uuid
from abc import abstractmethod
from typing import Optional
from pydantic import ConfigDict
from core.model_runtime.entities.model_entities import ModelPropertyKey, ModelType
from core.model_runtime.errors.invoke import InvokeBadRequestError
from core.model_runtime.model_providers.__base.ai_model import AIModel
logger = logging.getLogger(__name__)
class TTSModel(AIModel):
"""
Model class for ttstext model.
@@ -37,8 +35,6 @@ class TTSModel(AIModel):
:return: translated audio file
"""
try:
logger.info(f"Invoke TTS model: {model} , invoke content : {content_text}")
self._is_ffmpeg_installed()
return self._invoke(model=model, credentials=credentials, user=user,
content_text=content_text, voice=voice, tenant_id=tenant_id)
except Exception as e:
@@ -75,7 +71,8 @@
if model_schema and ModelPropertyKey.VOICES in model_schema.model_properties:
voices = model_schema.model_properties[ModelPropertyKey.VOICES]
if language:
-return [{'name': d['name'], 'value': d['mode']} for d in voices if language and language in d.get('language')]
+return [{'name': d['name'], 'value': d['mode']} for d in voices if
+language and language in d.get('language')]
else:
return [{'name': d['name'], 'value': d['mode']} for d in voices]
@@ -146,28 +143,3 @@
if one_sentence != '':
result.append(one_sentence)
return result
@staticmethod
def _is_ffmpeg_installed():
try:
output = subprocess.check_output("ffmpeg -version", shell=True)
if "ffmpeg version" in output.decode("utf-8"):
return True
else:
raise InvokeBadRequestError("ffmpeg is not installed, "
"details: https://docs.dify.ai/getting-started/install-self-hosted"
"/install-faq#id-14.-what-to-do-if-this-error-occurs-in-text-to-speech")
except Exception:
raise InvokeBadRequestError("ffmpeg is not installed, "
"details: https://docs.dify.ai/getting-started/install-self-hosted"
"/install-faq#id-14.-what-to-do-if-this-error-occurs-in-text-to-speech")
# Todo: To improve the streaming function
@staticmethod
def _get_file_name(file_content: str) -> str:
hash_object = hashlib.sha256(file_content.encode())
hex_digest = hash_object.hexdigest()
namespace_uuid = uuid.UUID('a5da6ef9-b303-596f-8e88-bf8fa40f4b31')
unique_uuid = uuid.uuid5(namespace_uuid, hex_digest)
return str(unique_uuid)
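The removed `_get_file_name` helper derived a stable filename by hashing the text and mapping the digest into a UUIDv5. A minimal standalone sketch of that technique (the namespace UUID is the one from the removed code; the function name here is descriptive, not the original):

```python
import hashlib
import uuid

# Namespace taken from the removed helper; any fixed UUID would work.
NAMESPACE = uuid.UUID('a5da6ef9-b303-596f-8e88-bf8fa40f4b31')

def deterministic_file_name(file_content: str) -> str:
    """Map text content to a stable UUIDv5-based name."""
    hex_digest = hashlib.sha256(file_content.encode()).hexdigest()
    return str(uuid.uuid5(NAMESPACE, hex_digest))

# The same input always yields the same name, so repeated syntheses
# of identical text can share one cached audio file.
assert deterministic_file_name("hello") == deterministic_file_name("hello")
```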

@@ -5,6 +5,8 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
- stream-tool-call
model_properties:
mode: chat
context_size: 128000

@@ -5,6 +5,8 @@ label:
model_type: llm
features:
- agent-thought
- multi-tool-call
- stream-tool-call
model_properties:
mode: chat
context_size: 128000

@@ -19,7 +19,7 @@ parameter_rules:
min: 1
max: 8192
pricing:
-input: '0.05'
-output: '0.1'
+input: '0.59'
+output: '0.79'
unit: '0.000001'
currency: USD
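If I read the pricing schema correctly (an assumption on my part), the listed `input`/`output` prices are multiplied by `unit` to get a per-token rate, so `unit: '0.000001'` makes them per-million-token prices. A hedged sketch of that arithmetic:

```python
def llm_cost(input_tokens: int, output_tokens: int,
             input_price: str, output_price: str, unit: str) -> float:
    """Cost in USD, assuming prices are quoted per (1 / unit) tokens."""
    u = float(unit)
    return input_tokens * float(input_price) * u + output_tokens * float(output_price) * u

# 1M input + 1M output tokens at the revised rates above:
cost = llm_cost(1_000_000, 1_000_000, '0.59', '0.79', '0.000001')
```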

@@ -19,7 +19,7 @@ parameter_rules:
min: 1
max: 8192
pricing:
-input: '0.59'
-output: '0.79'
+input: '0.05'
+output: '0.08'
unit: '0.000001'
currency: USD

@@ -10,5 +10,8 @@
- mistralai/mistral-large
- mistralai/mixtral-8x7b-instruct-v0.1
- mistralai/mixtral-8x22b-instruct-v0.1
- nvidia/nemotron-4-340b-instruct
- microsoft/phi-3-medium-128k-instruct
- microsoft/phi-3-mini-128k-instruct
- fuyu-8b
- snowflake/arctic

@@ -34,8 +34,10 @@ class NVIDIALargeLanguageModel(OAIAPICompatLargeLanguageModel):
'meta/llama-3.1-8b-instruct': '',
'meta/llama-3.1-70b-instruct': '',
'meta/llama-3.1-405b-instruct': '',
-'google/recurrentgemma-2b': ''
+'google/recurrentgemma-2b': '',
+'nvidia/nemotron-4-340b-instruct': '',
+'microsoft/phi-3-medium-128k-instruct':'',
+'microsoft/phi-3-mini-128k-instruct':''
}
def _invoke(self, model: str, credentials: dict,

@@ -0,0 +1,36 @@
model: nvidia/nemotron-4-340b-instruct
label:
zh_Hans: nvidia/nemotron-4-340b-instruct
en_US: nvidia/nemotron-4-340b-instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 1
default: 0.5
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 4096
default: 1024
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0
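The `parameter_rules` above declare min/max/default bounds for each sampling parameter. A hypothetical sketch of how a caller might validate a requested value against such a rule (the dict shape mirrors the YAML; the helper name is mine, not Dify's API):

```python
def clamp_param(value, rule: dict):
    """Clamp a sampling parameter into the [min, max] range from a rule,
    falling back to the rule's default when no value is given."""
    if value is None:
        return rule.get('default')
    return max(rule['min'], min(rule['max'], value))

temperature_rule = {'min': 0, 'max': 1, 'default': 0.5}
clamp_param(1.7, temperature_rule)   # -> 1 (clamped to max)
clamp_param(None, temperature_rule)  # -> 0.5 (default)
```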

@@ -0,0 +1,36 @@
model: microsoft/phi-3-medium-128k-instruct
label:
zh_Hans: microsoft/phi-3-medium-128k-instruct
en_US: microsoft/phi-3-medium-128k-instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 1
default: 0.5
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 4096
default: 1024
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0

@@ -0,0 +1,36 @@
model: microsoft/phi-3-mini-128k-instruct
label:
zh_Hans: microsoft/phi-3-mini-128k-instruct
en_US: microsoft/phi-3-mini-128k-instruct
model_type: llm
features:
- agent-thought
model_properties:
mode: chat
context_size: 131072
parameter_rules:
- name: temperature
use_template: temperature
min: 0
max: 1
default: 0.5
- name: top_p
use_template: top_p
min: 0
max: 1
default: 1
- name: max_tokens
use_template: max_tokens
min: 1
max: 4096
default: 1024
- name: frequency_penalty
use_template: frequency_penalty
min: -2
max: 2
default: 0
- name: presence_penalty
use_template: presence_penalty
min: -2
max: 2
default: 0

@@ -37,7 +37,7 @@ parameter_rules:
- text
- json_object
pricing:
-input: '0.001'
-output: '0.002'
+input: '0.0005'
+output: '0.0015'
unit: '0.001'
currency: USD

@@ -1,11 +1,7 @@
import concurrent.futures
from functools import reduce
from io import BytesIO
from typing import Optional
from flask import Response
from pydub import AudioSegment
from xinference_client.client.restful.restful_client import Client, RESTfulAudioModelHandle
from xinference_client.client.restful.restful_client import RESTfulAudioModelHandle
from core.model_runtime.entities.common_entities import I18nObject
from core.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType
@@ -19,6 +15,7 @@ from core.model_runtime.errors.invoke import (
)
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.model_providers.__base.tts_model import TTSModel
from core.model_runtime.model_providers.xinference.xinference_helper import XinferenceHelper
class XinferenceText2SpeechModel(TTSModel):
@@ -26,7 +23,12 @@ class XinferenceText2SpeechModel(TTSModel):
def __init__(self):
# preset voices, need support custom voice
self.model_voices = {
'chattts': {
'__default': {
'all': [
{'name': 'Default', 'value': 'default'},
]
},
'ChatTTS': {
'all': [
{'name': 'Alloy', 'value': 'alloy'},
{'name': 'Echo', 'value': 'echo'},
@@ -36,7 +38,7 @@ class XinferenceText2SpeechModel(TTSModel):
{'name': 'Shimmer', 'value': 'shimmer'},
]
},
'cosyvoice': {
'CosyVoice': {
'zh-Hans': [
{'name': '中文男', 'value': '中文男'},
{'name': '中文女', 'value': '中文女'},
@@ -77,18 +79,21 @@
if credentials['server_url'].endswith('/'):
credentials['server_url'] = credentials['server_url'][:-1]
# initialize client
client = Client(
base_url=credentials['server_url']
extra_param = XinferenceHelper.get_xinference_extra_parameter(
server_url=credentials['server_url'],
model_uid=credentials['model_uid']
)
xinference_client = client.get_model(model_uid=credentials['model_uid'])
if not isinstance(xinference_client, RESTfulAudioModelHandle):
if 'text-to-audio' not in extra_param.model_ability:
raise InvokeBadRequestError(
-'please check model type, the model you want to invoke is not a audio model')
+'please check model type, the model you want to invoke is not a text-to-audio model')
self._tts_invoke(
if extra_param.model_family and extra_param.model_family in self.model_voices:
credentials['audio_model_name'] = extra_param.model_family
else:
credentials['audio_model_name'] = '__default'
self._tts_invoke_streaming(
model=model,
credentials=credentials,
content_text='Hello Dify!',
@@ -110,7 +115,7 @@
:param user: unique user id
:return: text translated to audio file
"""
-return self._tts_invoke(model, credentials, content_text, voice)
+return self._tts_invoke_streaming(model, credentials, content_text, voice)
def get_customizable_model_schema(self, model: str, credentials: dict) -> AIModelEntity | None:
"""
@@ -161,13 +166,15 @@
}
def get_tts_model_voices(self, model: str, credentials: dict, language: Optional[str] = None) -> list:
audio_model_name = credentials.get('audio_model_name', '__default')
for key, voices in self.model_voices.items():
if key in model.lower():
if language in voices:
if key in audio_model_name:
if language and language in voices:
return voices[language]
elif 'all' in voices:
return voices['all']
return []
return self.model_voices['__default']['all']
def _get_model_default_voice(self, model: str, credentials: dict) -> any:
return ""
@@ -181,60 +188,55 @@
def _get_model_workers_limit(self, model: str, credentials: dict) -> int:
return 5
def _tts_invoke(self, model: str, credentials: dict, content_text: str, voice: str) -> any:
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str,
voice: str) -> any:
"""
-_tts_invoke text2speech model
+_tts_invoke_streaming text2speech model
:param model: model name
:param credentials: model credentials
:param voice: model timbre
:param content_text: text content to be translated
:param voice: model timbre
:return: text translated to audio file
"""
if credentials['server_url'].endswith('/'):
credentials['server_url'] = credentials['server_url'][:-1]
word_limit = self._get_model_word_limit(model, credentials)
audio_type = self._get_model_audio_type(model, credentials)
handle = RESTfulAudioModelHandle(credentials['model_uid'], credentials['server_url'], auth_headers={})
try:
sentences = list(self._split_text_into_sentences(org_text=content_text, max_length=word_limit))
audio_bytes_list = []
handle = RESTfulAudioModelHandle(credentials['model_uid'], credentials['server_url'], auth_headers={})
with concurrent.futures.ThreadPoolExecutor(max_workers=min((3, len(sentences)))) as executor:
model_support_voice = [x.get("value") for x in
self.get_tts_model_voices(model=model, credentials=credentials)]
if not voice or voice not in model_support_voice:
voice = self._get_model_default_voice(model, credentials)
word_limit = self._get_model_word_limit(model, credentials)
if len(content_text) > word_limit:
sentences = self._split_text_into_sentences(content_text, max_length=word_limit)
executor = concurrent.futures.ThreadPoolExecutor(max_workers=min(3, len(sentences)))
futures = [executor.submit(
handle.speech, input=sentence, voice=voice, response_format="mp3", speed=1.0, stream=False)
for sentence in sentences]
for future in futures:
try:
if future.result():
audio_bytes_list.append(future.result())
except Exception as ex:
raise InvokeBadRequestError(str(ex))
handle.speech,
input=sentences[i],
voice=voice,
response_format="mp3",
speed=1.0,
stream=False
)
for i in range(len(sentences))]
if len(audio_bytes_list) > 0:
audio_segments = [AudioSegment.from_file(
BytesIO(audio_bytes), format=audio_type) for audio_bytes in
audio_bytes_list if audio_bytes]
combined_segment = reduce(lambda x, y: x + y, audio_segments)
buffer: BytesIO = BytesIO()
combined_segment.export(buffer, format=audio_type)
buffer.seek(0)
return Response(buffer.read(), status=200, mimetype=f"audio/{audio_type}")
for index, future in enumerate(futures):
response = future.result()
for i in range(0, len(response), 1024):
yield response[i:i + 1024]
else:
response = handle.speech(
input=content_text.strip(),
voice=voice,
response_format="mp3",
speed=1.0,
stream=False
)
for i in range(0, len(response), 1024):
yield response[i:i + 1024]
except Exception as ex:
raise InvokeBadRequestError(str(ex))
def _tts_invoke_streaming(self, model: str, credentials: dict, content_text: str, voice: str) -> any:
"""
_tts_invoke_streaming text2speech model
Attention: stream api may return error [Parallel generation is not supported by ggml]
:param model: model name
:param credentials: model credentials
:param voice: model timbre
:param content_text: text content to be translated
:return: text translated to audio file
"""
pass
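The rewritten `_tts_invoke_streaming` above yields the synthesized audio in fixed 1024-byte slices instead of buffering and concatenating whole files with `pydub`. The core pattern, reduced to a standalone generator sketch:

```python
from collections.abc import Iterator

def stream_in_chunks(payload: bytes, chunk_size: int = 1024) -> Iterator[bytes]:
    """Yield `payload` in fixed-size slices, like the TTS response loop above."""
    for i in range(0, len(payload), chunk_size):
        yield payload[i:i + chunk_size]

# 2500 bytes stream as two full chunks and one 452-byte tail.
chunks = list(stream_in_chunks(b'x' * 2500))
```

Yielding slices lets the HTTP layer start sending audio before synthesis of later sentences finishes, which is the point of the streaming rewrite.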

@@ -1,5 +1,6 @@
from threading import Lock
from time import time
from typing import Optional
from requests.adapters import HTTPAdapter
from requests.exceptions import ConnectionError, MissingSchema, Timeout
@@ -15,9 +16,11 @@ class XinferenceModelExtraParameter:
context_length: int = 2048
support_function_call: bool = False
support_vision: bool = False
model_family: Optional[str]
def __init__(self, model_format: str, model_handle_type: str, model_ability: list[str],
support_function_call: bool, support_vision: bool, max_tokens: int, context_length: int) -> None:
support_function_call: bool, support_vision: bool, max_tokens: int, context_length: int,
model_family: Optional[str]) -> None:
self.model_format = model_format
self.model_handle_type = model_handle_type
self.model_ability = model_ability
@@ -25,6 +28,7 @@ class XinferenceModelExtraParameter:
self.support_vision = support_vision
self.max_tokens = max_tokens
self.context_length = context_length
self.model_family = model_family
cache = {}
cache_lock = Lock()
@@ -78,9 +82,16 @@ class XinferenceHelper:
model_format = response_json.get('model_format', 'ggmlv3')
model_ability = response_json.get('model_ability', [])
model_family = response_json.get('model_family', None)
if response_json.get('model_type') == 'embedding':
model_handle_type = 'embedding'
elif response_json.get('model_type') == 'audio':
model_handle_type = 'audio'
if model_family and model_family in ['ChatTTS', 'CosyVoice']:
model_ability.append('text-to-audio')
else:
model_ability.append('audio-to-text')
elif model_format == 'ggmlv3' and 'chatglm' in response_json['model_name']:
model_handle_type = 'chatglm'
elif 'generate' in model_ability:
@@ -88,7 +99,7 @@
elif 'chat' in model_ability:
model_handle_type = 'chat'
else:
-raise NotImplementedError(f'xinference model handle type {model_handle_type} is not supported')
+raise NotImplementedError('xinference model handle type is not supported')
support_function_call = 'tools' in model_ability
support_vision = 'vision' in model_ability
@@ -103,5 +114,6 @@ class XinferenceHelper:
support_function_call=support_function_call,
support_vision=support_vision,
max_tokens=max_tokens,
context_length=context_length
)
context_length=context_length,
model_family=model_family
)

@@ -28,7 +28,7 @@ class RetrievalService:
@classmethod
def retrieve(cls, retrival_method: str, dataset_id: str, query: str,
top_k: int, score_threshold: Optional[float] = .0,
-reranking_model: Optional[dict] = None, reranking_mode: Optional[str] = None,
+reranking_model: Optional[dict] = None, reranking_mode: Optional[str] = 'reranking_model',
weights: Optional[dict] = None):
dataset = db.session.query(Dataset).filter(
Dataset.id == dataset_id
@@ -36,10 +36,6 @@
if not dataset or dataset.available_document_count == 0 or dataset.available_segment_count == 0:
return []
all_documents = []
keyword_search_documents = []
embedding_search_documents = []
full_text_search_documents = []
hybrid_search_documents = []
threads = []
exceptions = []
# retrieval_model source with keyword
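The "Reranking mode is null" fix in this hunk changes the `reranking_mode` default from `None` to `'reranking_model'` so downstream code never sees a null mode; the call-site diffs elsewhere in this compare apply the same guard via `dict.get` with a fallback. The pattern, isolated (the helper name is mine):

```python
def resolve_reranking_mode(retrieval_model: dict) -> str:
    """Fall back to 'reranking_model' when the stored config omits or nulls the mode."""
    return retrieval_model.get('reranking_mode') or 'reranking_model'

resolve_reranking_mode({})                                     # -> 'reranking_model'
resolve_reranking_mode({'reranking_mode': None})               # -> 'reranking_model'
resolve_reranking_mode({'reranking_mode': 'weighted_score'})   # -> 'weighted_score'
```

Using `or` rather than `.get`'s default also covers configs where the key exists but holds `None`, which is exactly the null case the commits address.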

@@ -117,19 +117,63 @@ class WordExtractor(BaseExtractor):
return image_map
def _table_to_markdown(self, table):
markdown = ""
# deal with table headers
header_row = table.rows[0]
headers = [cell.text for cell in header_row.cells]
markdown += "| " + " | ".join(headers) + " |\n"
markdown += "| " + " | ".join(["---"] * len(headers)) + " |\n"
# deal with table rows
for row in table.rows[1:]:
row_cells = [cell.text for cell in row.cells]
markdown += "| " + " | ".join(row_cells) + " |\n"
def _table_to_markdown(self, table, image_map):
markdown = []
# calculate the total number of columns
total_cols = max(len(row.cells) for row in table.rows)
return markdown
header_row = table.rows[0]
headers = self._parse_row(header_row, image_map, total_cols)
markdown.append("| " + " | ".join(headers) + " |")
markdown.append("| " + " | ".join(["---"] * total_cols) + " |")
for row in table.rows[1:]:
row_cells = self._parse_row(row, image_map, total_cols)
markdown.append("| " + " | ".join(row_cells) + " |")
return "\n".join(markdown)
def _parse_row(self, row, image_map, total_cols):
# Initialize a row, all of which are empty by default
row_cells = [""] * total_cols
col_index = 0
for cell in row.cells:
# make sure the col_index is not out of range
while col_index < total_cols and row_cells[col_index] != "":
col_index += 1
# if col_index is out of range the loop is jumped
if col_index >= total_cols:
break
cell_content = self._parse_cell(cell, image_map).strip()
cell_colspan = cell.grid_span if cell.grid_span else 1
for i in range(cell_colspan):
if col_index + i < total_cols:
row_cells[col_index + i] = cell_content if i == 0 else ""
col_index += cell_colspan
return row_cells
def _parse_cell(self, cell, image_map):
cell_content = []
for paragraph in cell.paragraphs:
parsed_paragraph = self._parse_cell_paragraph(paragraph, image_map)
if parsed_paragraph:
cell_content.append(parsed_paragraph)
unique_content = list(dict.fromkeys(cell_content))
return " ".join(unique_content)
def _parse_cell_paragraph(self, paragraph, image_map):
paragraph_content = []
for run in paragraph.runs:
if run.element.xpath('.//a:blip'):
for blip in run.element.xpath('.//a:blip'):
image_id = blip.get("{http://schemas.openxmlformats.org/officeDocument/2006/relationships}embed")
image_part = paragraph.part.rels[image_id].target_part
if image_part in image_map:
image_link = image_map[image_part]
paragraph_content.append(image_link)
else:
paragraph_content.append(run.text)
return "".join(paragraph_content).strip()
def _parse_paragraph(self, paragraph, image_map):
paragraph_content = []
@@ -183,6 +227,6 @@ class WordExtractor(BaseExtractor):
content.append(parsed_paragraph)
elif element.tag.endswith('tbl'): # table
table = tables.pop(0)
content.append(self._table_to_markdown(table))
content.append(self._table_to_markdown(table,image_map))
return '\n'.join(content)
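The new `_parse_row` above pads merged cells so every markdown row ends up with exactly `total_cols` columns. A simplified sketch of that placement logic, with cells modeled as `(text, colspan)` tuples as a stand-in for the `python-docx` cell objects:

```python
def fill_row(cells: list[tuple[str, int]], total_cols: int) -> list[str]:
    """Place each cell's text at its column; merged cells occupy extra empty slots."""
    row = [""] * total_cols
    col = 0
    for text, colspan in cells:
        # skip columns already occupied (e.g. by a vertical merge)
        while col < total_cols and row[col] != "":
            col += 1
        if col >= total_cols:
            break
        row[col] = text  # only the first spanned column carries the text
        col += colspan
    return row

# A header cell spanning two of three columns:
fill_row([("Name", 2), ("Age", 1)], 3)  # -> ['Name', '', 'Age']
```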

@@ -278,6 +278,7 @@ class DatasetRetrieval:
query=query,
top_k=top_k, score_threshold=score_threshold,
reranking_model=reranking_model,
reranking_mode=retrieval_model_config.get('reranking_mode', 'reranking_model'),
weights=retrieval_model_config.get('weights', None),
)
self._on_query(query, [dataset_id], app_id, user_from, user_id)
@@ -431,10 +432,12 @@ class DatasetRetrieval:
dataset_id=dataset.id,
query=query,
top_k=top_k,
score_threshold=retrieval_model['score_threshold']
score_threshold=retrieval_model.get('score_threshold', .0)
if retrieval_model['score_threshold_enabled'] else None,
reranking_model=retrieval_model['reranking_model']
reranking_model=retrieval_model.get('reranking_model', None)
if retrieval_model['reranking_enable'] else None,
reranking_mode=retrieval_model.get('reranking_mode')
if retrieval_model.get('reranking_mode') else 'reranking_model',
weights=retrieval_model.get('weights', None),
)

@@ -25,9 +25,9 @@ parameters:
type: select
required: true
options:
-- value: gpt-3.5
+- value: gpt-4o-mini
label:
-en_US: GPT-3.5
+en_US: GPT-4o-mini
- value: claude-3-haiku
label:
en_US: Claude 3

@@ -21,23 +21,16 @@ class DuckDuckGoSearchTool(BuiltinTool):
"""
Tool for performing a search using DuckDuckGo search engine.
"""
def _invoke(self, user_id: str, tool_parameters: dict[str, Any]) -> ToolInvokeMessage:
query = tool_parameters.get('query', '')
result_type = tool_parameters.get('result_type', 'text')
max_results = tool_parameters.get('max_results', 10)
def _invoke(self, user_id: str, tool_parameters: dict[str, Any]) -> ToolInvokeMessage | list[ToolInvokeMessage]:
query = tool_parameters.get('query')
max_results = tool_parameters.get('max_results', 5)
require_summary = tool_parameters.get('require_summary', False)
response = DDGS().text(query, max_results=max_results)
if result_type == 'link':
results = [f"[{res.get('title')}]({res.get('href')})" for res in response]
results = "\n".join(results)
return self.create_link_message(link=results)
results = [res.get("body") for res in response]
results = "\n".join(results)
if require_summary:
results = "\n".join([res.get("body") for res in response])
results = self.summary_results(user_id=user_id, content=results, query=query)
return self.create_text_message(text=results)
return self.create_text_message(text=results)
return [self.create_json_message(res) for res in response]
def summary_results(self, user_id: str, content: str, query: str) -> str:
prompt = SUMMARY_PROMPT.format(query=query, content=content)

View File
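The DuckDuckGo tool update above drops the `result_type` text/link branch and instead returns one structured JSON message per search result, keeping the joined-text path only when a summary is requested. A rough sketch of that branching with the `DDGS().text()` call stubbed out (so no network access is needed); the stub's field names mirror the `title`/`href`/`body` keys duckduckgo_search returns, but the values here are invented:

```python
def invoke(query: str, require_summary: bool = False, max_results: int = 5):
    # A real tool would call: response = DDGS().text(query, max_results=max_results)
    response = [
        {'title': 'Dify', 'href': 'https://example.com', 'body': 'LLM app platform'},
        {'title': 'Docs', 'href': 'https://example.com/docs', 'body': 'Documentation'},
    ][:max_results]
    if require_summary:
        # Summary path: join result bodies into one text payload
        # (the LLM summarization step is omitted in this sketch).
        return '\n'.join(res.get('body', '') for res in response)
    # Default path: one JSON message per result, not a joined string.
    return [res for res in response]
```

Returning structured results lets downstream agent steps pick fields instead of re-parsing a text blob.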

@ -28,29 +28,6 @@ parameters:
label:
en_US: Max results
zh_Hans: 最大结果数量
human_description:
en_US: The max results.
zh_Hans: 最大结果数量
form: form
- name: result_type
type: select
required: true
options:
- value: text
label:
en_US: text
zh_Hans: 文本
- value: link
label:
en_US: link
zh_Hans: 链接
default: text
label:
en_US: Result type
zh_Hans: 结果类型
human_description:
en_US: used for selecting the result type, text or link
zh_Hans: 用于选择结果类型,使用文本还是链接进行展示
form: form
- name: require_summary
type: boolean

View File

@ -177,10 +177,12 @@ class DatasetMultiRetrieverTool(DatasetRetrieverBaseTool):
dataset_id=dataset.id,
query=query,
top_k=self.top_k,
score_threshold=retrieval_model['score_threshold']
score_threshold=retrieval_model.get('score_threshold', .0)
if retrieval_model['score_threshold_enabled'] else None,
reranking_model=retrieval_model['reranking_model']
reranking_model=retrieval_model.get('reranking_model', None)
if retrieval_model['reranking_enable'] else None,
reranking_mode=retrieval_model.get('reranking_mode')
if retrieval_model.get('reranking_mode') else 'reranking_model',
weights=retrieval_model.get('weights', None),
)

View File

@ -14,6 +14,7 @@ default_retrieval_model = {
'reranking_provider_name': '',
'reranking_model_name': ''
},
'reranking_mode': 'reranking_model',
'top_k': 2,
'score_threshold_enabled': False
}
@ -71,14 +72,15 @@ class DatasetRetrieverTool(DatasetRetrieverBaseTool):
else:
if self.top_k > 0:
# retrieval source
documents = RetrievalService.retrieve(retrival_method=retrieval_model['search_method'],
documents = RetrievalService.retrieve(retrival_method=retrieval_model.get('search_method', 'semantic_search'),
dataset_id=dataset.id,
query=query,
top_k=self.top_k,
score_threshold=retrieval_model['score_threshold']
score_threshold=retrieval_model.get('score_threshold', .0)
if retrieval_model['score_threshold_enabled'] else None,
reranking_model=retrieval_model['reranking_model']
if retrieval_model['reranking_enable'] else None,
reranking_model=retrieval_model.get('reranking_model', None),
reranking_mode=retrieval_model.get('reranking_mode')
if retrieval_model.get('reranking_mode') else 'reranking_model',
weights=retrieval_model.get('weights', None),
)
else:

View File

@ -94,8 +94,11 @@ class CodeNode(BaseNode):
:return:
"""
if not isinstance(value, str):
raise ValueError(f"Output variable `{variable}` must be a string")
if isinstance(value, type(None)):
return None
else:
raise ValueError(f"Output variable `{variable}` must be a string")
if len(value) > MAX_STRING_LENGTH:
raise ValueError(f'The length of output variable `{variable}` must be less than {MAX_STRING_LENGTH} characters')
@ -109,7 +112,10 @@ class CodeNode(BaseNode):
:return:
"""
if not isinstance(value, int | float):
raise ValueError(f"Output variable `{variable}` must be a number")
if isinstance(value, type(None)):
return None
else:
raise ValueError(f"Output variable `{variable}` must be a number")
if value > MAX_NUMBER or value < MIN_NUMBER:
raise ValueError(f'Output variable `{variable}` is out of range, it must be between {MIN_NUMBER} and {MAX_NUMBER}.')
@ -157,28 +163,31 @@ class CodeNode(BaseNode):
elif isinstance(output_value, list):
first_element = output_value[0] if len(output_value) > 0 else None
if first_element is not None:
if isinstance(first_element, int | float) and all(isinstance(value, int | float) for value in output_value):
if isinstance(first_element, int | float) and all(value is None or isinstance(value, int | float) for value in output_value):
for i, value in enumerate(output_value):
self._check_number(
value=value,
variable=f'{prefix}.{output_name}[{i}]' if prefix else f'{output_name}[{i}]'
)
elif isinstance(first_element, str) and all(isinstance(value, str) for value in output_value):
elif isinstance(first_element, str) and all(value is None or isinstance(value, str) for value in output_value):
for i, value in enumerate(output_value):
self._check_string(
value=value,
variable=f'{prefix}.{output_name}[{i}]' if prefix else f'{output_name}[{i}]'
)
elif isinstance(first_element, dict) and all(isinstance(value, dict) for value in output_value):
elif isinstance(first_element, dict) and all(value is None or isinstance(value, dict) for value in output_value):
for i, value in enumerate(output_value):
self._transform_result(
result=value,
output_schema=None,
prefix=f'{prefix}.{output_name}[{i}]' if prefix else f'{output_name}[{i}]',
depth=depth + 1
)
if value is not None:
self._transform_result(
result=value,
output_schema=None,
prefix=f'{prefix}.{output_name}[{i}]' if prefix else f'{output_name}[{i}]',
depth=depth + 1
)
else:
raise ValueError(f'Output {prefix}.{output_name} is not a valid array. make sure all elements are of the same type.')
elif isinstance(output_value, type(None)):
pass
else:
raise ValueError(f'Output {prefix}.{output_name} is not a valid type.')
@ -193,16 +202,19 @@ class CodeNode(BaseNode):
if output_config.type == 'object':
# check if output is object
if not isinstance(result.get(output_name), dict):
raise ValueError(
f'Output {prefix}{dot}{output_name} is not an object, got {type(result.get(output_name))} instead.'
if isinstance(result.get(output_name), type(None)):
transformed_result[output_name] = None
else:
raise ValueError(
f'Output {prefix}{dot}{output_name} is not an object, got {type(result.get(output_name))} instead.'
)
else:
transformed_result[output_name] = self._transform_result(
result=result[output_name],
output_schema=output_config.children,
prefix=f'{prefix}.{output_name}',
depth=depth + 1
)
transformed_result[output_name] = self._transform_result(
result=result[output_name],
output_schema=output_config.children,
prefix=f'{prefix}.{output_name}',
depth=depth + 1
)
elif output_config.type == 'number':
# check if number available
transformed_result[output_name] = self._check_number(
@ -218,68 +230,80 @@ class CodeNode(BaseNode):
elif output_config.type == 'array[number]':
# check if array of number available
if not isinstance(result[output_name], list):
raise ValueError(
f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
)
if isinstance(result[output_name], type(None)):
transformed_result[output_name] = None
else:
raise ValueError(
f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
)
else:
if len(result[output_name]) > MAX_NUMBER_ARRAY_LENGTH:
raise ValueError(
f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_NUMBER_ARRAY_LENGTH} elements.'
)
if len(result[output_name]) > MAX_NUMBER_ARRAY_LENGTH:
raise ValueError(
f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_NUMBER_ARRAY_LENGTH} elements.'
)
transformed_result[output_name] = [
self._check_number(
value=value,
variable=f'{prefix}{dot}{output_name}[{i}]'
)
for i, value in enumerate(result[output_name])
]
transformed_result[output_name] = [
self._check_number(
value=value,
variable=f'{prefix}{dot}{output_name}[{i}]'
)
for i, value in enumerate(result[output_name])
]
elif output_config.type == 'array[string]':
# check if array of string available
if not isinstance(result[output_name], list):
raise ValueError(
f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
)
if isinstance(result[output_name], type(None)):
transformed_result[output_name] = None
else:
raise ValueError(
f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
)
else:
if len(result[output_name]) > MAX_STRING_ARRAY_LENGTH:
raise ValueError(
f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_STRING_ARRAY_LENGTH} elements.'
)
if len(result[output_name]) > MAX_STRING_ARRAY_LENGTH:
raise ValueError(
f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_STRING_ARRAY_LENGTH} elements.'
)
transformed_result[output_name] = [
self._check_string(
value=value,
variable=f'{prefix}{dot}{output_name}[{i}]'
)
for i, value in enumerate(result[output_name])
]
transformed_result[output_name] = [
self._check_string(
value=value,
variable=f'{prefix}{dot}{output_name}[{i}]'
)
for i, value in enumerate(result[output_name])
]
elif output_config.type == 'array[object]':
# check if array of object available
if not isinstance(result[output_name], list):
raise ValueError(
f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
)
if len(result[output_name]) > MAX_OBJECT_ARRAY_LENGTH:
raise ValueError(
f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_OBJECT_ARRAY_LENGTH} elements.'
)
for i, value in enumerate(result[output_name]):
if not isinstance(value, dict):
if isinstance(result[output_name], type(None)):
transformed_result[output_name] = None
else:
raise ValueError(
f'Output {prefix}{dot}{output_name}[{i}] is not an object, got {type(value)} instead at index {i}.'
f'Output {prefix}{dot}{output_name} is not an array, got {type(result.get(output_name))} instead.'
)
else:
if len(result[output_name]) > MAX_OBJECT_ARRAY_LENGTH:
raise ValueError(
f'The length of output variable `{prefix}{dot}{output_name}` must be less than {MAX_OBJECT_ARRAY_LENGTH} elements.'
)
for i, value in enumerate(result[output_name]):
if not isinstance(value, dict):
if isinstance(value, type(None)):
pass
else:
raise ValueError(
f'Output {prefix}{dot}{output_name}[{i}] is not an object, got {type(value)} instead at index {i}.'
)
transformed_result[output_name] = [
self._transform_result(
result=value,
output_schema=output_config.children,
prefix=f'{prefix}{dot}{output_name}[{i}]',
depth=depth + 1
)
for i, value in enumerate(result[output_name])
]
transformed_result[output_name] = [
None if value is None else self._transform_result(
result=value,
output_schema=output_config.children,
prefix=f'{prefix}{dot}{output_name}[{i}]',
depth=depth + 1
)
for i, value in enumerate(result[output_name])
]
else:
raise ValueError(f'Output type {output_config.type} is not supported.')
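The CodeNode hunks above make every type check tolerate `None`: a null output (or a null element inside an array output) now passes through as `None` instead of raising. A standalone sketch of that check for the string case, with an illustrative length limit (the real constant lives in the CodeNode module):

```python
MAX_STRING_LENGTH = 80_000  # illustrative; the actual limit is configured elsewhere

def check_string(value, variable: str):
    if not isinstance(value, str):
        if value is None:  # equivalent to isinstance(value, type(None))
            return None    # null outputs are now allowed through
        raise ValueError(f"Output variable `{variable}` must be a string")
    if len(value) > MAX_STRING_LENGTH:
        raise ValueError(
            f"The length of output variable `{variable}` must be less than "
            f"{MAX_STRING_LENGTH} characters"
        )
    return value
```

The same shape applies to the number, object, and array checks: only a value of the wrong non-null type raises.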

api/poetry.lock generated
View File

@ -2076,21 +2076,21 @@ files = [
[[package]]
name = "duckduckgo-search"
version = "6.2.1"
version = "6.2.6"
description = "Search for words, documents, images, news, maps and text translation using the DuckDuckGo.com search engine."
optional = false
python-versions = ">=3.8"
files = [
{file = "duckduckgo_search-6.2.1-py3-none-any.whl", hash = "sha256:1a03f799b85fdfa08d5e6478624683f373b9dc35e6f145544b9cab72a4f575fa"},
{file = "duckduckgo_search-6.2.1.tar.gz", hash = "sha256:d664ec096193e3fb43bdfae4b0ad9c04e44094b58f41998adcdd20a86ee1ed74"},
{file = "duckduckgo_search-6.2.6-py3-none-any.whl", hash = "sha256:c8171bcd6ff4d051f78c70ea23bd34c0d8e779d72973829d3a6b40ccc05cd7c2"},
{file = "duckduckgo_search-6.2.6.tar.gz", hash = "sha256:96529ecfbd55afa28705b38413003cb3cfc620e55762d33184887545de27dc96"},
]
[package.dependencies]
click = ">=8.1.7"
pyreqwest-impersonate = ">=0.5.0"
primp = ">=0.5.5"
[package.extras]
dev = ["mypy (>=1.10.1)", "pytest (>=8.2.2)", "pytest-asyncio (>=0.23.7)", "ruff (>=0.5.2)"]
dev = ["mypy (>=1.11.0)", "pytest (>=8.3.1)", "pytest-asyncio (>=0.23.8)", "ruff (>=0.5.5)"]
lxml = ["lxml (>=5.2.2)"]
[[package]]
@ -5868,6 +5868,26 @@ dev = ["black", "flake8", "flake8-print", "isort", "pre-commit"]
sentry = ["django", "sentry-sdk"]
test = ["coverage", "flake8", "freezegun (==0.3.15)", "mock (>=2.0.0)", "pylint", "pytest", "pytest-timeout"]
[[package]]
name = "primp"
version = "0.5.5"
description = "HTTP client that can impersonate web browsers, mimicking their headers and `TLS/JA3/JA4/HTTP2` fingerprints"
optional = false
python-versions = ">=3.8"
files = [
{file = "primp-0.5.5-cp38-abi3-macosx_10_12_x86_64.whl", hash = "sha256:cff9792e8422424528c23574b5364882d68134ee2743f4a2ae6a765746fb3028"},
{file = "primp-0.5.5-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:78e13fc5d4d90d44a005dbd5dda116981828c803c86cf85816b3bb5363b045c8"},
{file = "primp-0.5.5-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3714abfda79d3f5c90a5363db58994afbdbacc4b94fe14e9e5f8ab97e7b82577"},
{file = "primp-0.5.5-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e54765900ee40eceb6bde43676d7e0b2e16ca1f77c0753981fe5e40afc0c2010"},
{file = "primp-0.5.5-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:66c7eecc5a55225c42cfb99af857df04f994f3dd0d327c016d3af5414c1a2242"},
{file = "primp-0.5.5-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:df262271cc1a41f4bf80d68396e967a27d7d3d3de355a3d016f953130e7a20be"},
{file = "primp-0.5.5-cp38-abi3-win_amd64.whl", hash = "sha256:8b424118d6bab6f9d4980d0f35d5ccc1213ab9f1042497c6ee11730f2f94a876"},
{file = "primp-0.5.5.tar.gz", hash = "sha256:8623e8a25fd686785296b12175f4173250a08db1de9ee4063282e262b94bf3f2"},
]
[package.extras]
dev = ["pytest (>=8.1.1)"]
[[package]]
name = "prompt-toolkit"
version = "3.0.47"
@ -6300,17 +6320,6 @@ python-dotenv = ">=0.21.0"
toml = ["tomli (>=2.0.1)"]
yaml = ["pyyaml (>=6.0.1)"]
[[package]]
name = "pydub"
version = "0.25.1"
description = "Manipulate audio with an simple and easy high level interface"
optional = false
python-versions = "*"
files = [
{file = "pydub-0.25.1-py2.py3-none-any.whl", hash = "sha256:65617e33033874b59d87db603aa1ed450633288aefead953b30bded59cb599a6"},
{file = "pydub-0.25.1.tar.gz", hash = "sha256:980a33ce9949cab2a569606b65674d748ecbca4f0796887fd6f46173a7b0d30f"},
]
[[package]]
name = "pygments"
version = "2.18.0"
@ -6474,26 +6483,6 @@ files = [
{file = "pyreadline3-3.4.1.tar.gz", hash = "sha256:6f3d1f7b8a31ba32b73917cefc1f28cc660562f39aea8646d30bd6eff21f7bae"},
]
[[package]]
name = "pyreqwest-impersonate"
version = "0.5.3"
description = "HTTP client that can impersonate web browsers, mimicking their headers and `TLS/JA3/JA4/HTTP2` fingerprints"
optional = false
python-versions = ">=3.8"
files = [
{file = "pyreqwest_impersonate-0.5.3-cp38-abi3-macosx_10_12_x86_64.whl", hash = "sha256:f15922496f728769fb9e1b116d5d9d7ba5525d0f2f7a76a41a1daef8b2e0c6c3"},
{file = "pyreqwest_impersonate-0.5.3-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:77533133ae73020e59bc56d776eea3fe3af4ac41d763a89f39c495436da0f4cf"},
{file = "pyreqwest_impersonate-0.5.3-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:436055fa3eeb3e01e2e8efd42a9f6c4ab62fd643eddc7c66d0e671b71605f273"},
{file = "pyreqwest_impersonate-0.5.3-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e9d2e981a525fb72c1521f454e5581d2c7a3b1fcf1c97c0acfcb7a923d8cf3e"},
{file = "pyreqwest_impersonate-0.5.3-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:a6bf986d4a165f6976b3e862111e2a46091883cb55e9e6325150f5aea2644229"},
{file = "pyreqwest_impersonate-0.5.3-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:b7397f6dad3d5ae158e0b272cb3eafe8382e71775d829b286ae9c21cb5a879ff"},
{file = "pyreqwest_impersonate-0.5.3-cp38-abi3-win_amd64.whl", hash = "sha256:6026e4751b5912aec1e45238c07daf1e2c9126b3b32b33396b72885021e8990c"},
{file = "pyreqwest_impersonate-0.5.3.tar.gz", hash = "sha256:f21c10609958ff5be18df0c329eed42d2b3ba8a339b65dc5f96ab74537231692"},
]
[package.extras]
dev = ["pytest (>=8.1.1)"]
[[package]]
name = "pytest"
version = "8.1.2"
@ -9521,4 +9510,4 @@ cffi = ["cffi (>=1.11)"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.10,<3.13"
content-hash = "6eb1649ed473ab7916683beb3a9a09c1fc97f99845ee77adb811ea95b93b32e4"
content-hash = "d40cddaa8cd9c7ee7f8bbca06c8dd844facf9b2b618131dd85a41da5e0d47125"

View File

@ -152,7 +152,6 @@ pycryptodome = "3.19.1"
pydantic = "~2.8.2"
pydantic-settings = "~2.3.4"
pydantic_extra_types = "~2.9.0"
pydub = "~0.25.1"
pyjwt = "~2.8.0"
pypdfium2 = "~4.17.0"
python = ">=3.10,<3.13"
@ -179,6 +178,7 @@ yarl = "~1.9.4"
zhipuai = "1.0.7"
rank-bm25 = "~0.2.2"
openpyxl = "^3.1.5"
kaleido = "0.2.1"
############################################################
# Tool dependencies required by tool implementations
@ -188,7 +188,7 @@ openpyxl = "^3.1.5"
arxiv = "2.1.0"
matplotlib = "~3.8.2"
newspaper3k = "0.2.8"
duckduckgo-search = "^6.1.8"
duckduckgo-search = "^6.2.6"
jsonpath-ng = "1.6.1"
numexpr = "~2.9.0"
opensearch-py = "2.4.0"

View File

@ -5,10 +5,4 @@ class EnterpriseService:
@classmethod
def get_info(cls):
return EnterpriseRequest.send_request("GET", "/inner/api/info")
@classmethod
def update_web_sso_exclude_apps(cls, app_id_list, user_id):
return EnterpriseRequest.send_request(
"PATCH", "/inner/api/web-sso-exclude-apps", json={"app_id_list": app_id_list, "user_id": user_id}
)
return EnterpriseRequest.send_request('GET', '/info')

View File

@ -41,7 +41,6 @@ class SystemFeatureModel(BaseModel):
sso_enforced_for_signin_protocol: str = ''
sso_enforced_for_web: bool = False
sso_enforced_for_web_protocol: str = ''
sso_exclude_apps: list = []
class FeatureService:
@ -117,9 +116,3 @@ class FeatureService:
features.sso_enforced_for_signin_protocol = enterprise_info['sso_enforced_for_signin_protocol']
features.sso_enforced_for_web = enterprise_info['sso_enforced_for_web']
features.sso_enforced_for_web_protocol = enterprise_info['sso_enforced_for_web_protocol']
features.sso_exclude_apps = enterprise_info['sso_exclude_apps']
@classmethod
def update_web_sso_exclude_apps(cls, app_id_list, user_id):
EnterpriseService.update_web_sso_exclude_apps(app_id_list, user_id)
return True

View File

@ -109,7 +109,7 @@ class FileService:
tenant_id=current_user.current_tenant_id,
storage_type=dify_config.STORAGE_TYPE,
key=file_key,
name=text_name + '.txt',
name=text_name,
size=len(text),
extension='txt',
mime_type='text/plain',

View File

@ -42,11 +42,11 @@ class HitTestingService:
dataset_id=dataset.id,
query=cls.escape_query_for_search(query),
top_k=retrieval_model.get('top_k', 2),
score_threshold=retrieval_model['score_threshold']
score_threshold=retrieval_model.get('score_threshold', .0)
if retrieval_model['score_threshold_enabled'] else None,
reranking_model=retrieval_model['reranking_model']
if retrieval_model['reranking_enable'] else None,
reranking_mode=retrieval_model.get('reranking_mode', None),
reranking_model=retrieval_model.get('reranking_model', None),
reranking_mode=retrieval_model.get('reranking_mode')
if retrieval_model.get('reranking_mode') else 'reranking_model',
weights=retrieval_model.get('weights', None),
)

View File

@ -2,7 +2,7 @@ version: '3'
services:
# API service
api:
image: langgenius/dify-api:0.6.15
image: langgenius/dify-api:0.6.16
restart: always
environment:
# Startup mode, 'api' starts the API server.
@ -224,7 +224,7 @@ services:
# worker service
# The Celery worker for processing the queue.
worker:
image: langgenius/dify-api:0.6.15
image: langgenius/dify-api:0.6.16
restart: always
environment:
CONSOLE_WEB_URL: ''
@ -390,7 +390,7 @@ services:
# Frontend web application.
web:
image: langgenius/dify-web:0.6.15
image: langgenius/dify-web:0.6.16
restart: always
environment:
# The base URL of console application api server, refers to the Console base URL of WEB service if console domain is

View File

@ -182,7 +182,7 @@ x-shared-env: &shared-api-worker-env
services:
# API service
api:
image: langgenius/dify-api:0.6.15
image: langgenius/dify-api:0.6.16
restart: always
environment:
# Use the shared environment variables.
@ -202,7 +202,7 @@ services:
# worker service
# The Celery worker for processing the queue.
worker:
image: langgenius/dify-api:0.6.15
image: langgenius/dify-api:0.6.16
restart: always
environment:
# Use the shared environment variables.
@ -221,7 +221,7 @@ services:
# Frontend web application.
web:
image: langgenius/dify-web:0.6.15
image: langgenius/dify-web:0.6.16
restart: always
environment:
CONSOLE_API_URL: ${CONSOLE_API_URL:-}

View File

@ -14,7 +14,7 @@ import {
import { Lock01 } from '@/app/components/base/icons/src/vender/solid/security'
import Button from '@/app/components/base/button'
import { LinkExternal02 } from '@/app/components/base/icons/src/vender/line/general'
import ConfirmUi from '@/app/components/base/confirm'
import Confirm from '@/app/components/base/confirm'
import { addTracingConfig, removeTracingConfig, updateTracingConfig } from '@/service/apps'
import Toast from '@/app/components/base/toast'
@ -276,9 +276,8 @@ const ProviderConfigModal: FC<Props> = ({
</PortalToFollowElem>
)
: (
<ConfirmUi
<Confirm
isShow
onClose={hideRemoveConfirm}
type='warning'
title={t(`${I18N_PREFIX}.removeConfirmTitle`, { key: t(`app.tracing.${type}.title`) })!}
content={t(`${I18N_PREFIX}.removeConfirmContent`)}

View File

@ -392,7 +392,6 @@ const AppCard = ({ app, onRefresh }: AppCardProps) => {
title={t('app.deleteAppConfirmTitle')}
content={t('app.deleteAppConfirmContent')}
isShow={showConfirmDelete}
onClose={() => setShowConfirmDelete(false)}
onConfirm={onConfirmDelete}
onCancel={() => setShowConfirmDelete(false)}
/>

View File

@ -219,7 +219,6 @@ const DatasetCard = ({
title={t('dataset.deleteDatasetConfirmTitle')}
content={confirmMessage}
isShow={showConfirmDelete}
onClose={() => setShowConfirmDelete(false)}
onConfirm={onConfirmDelete}
onCancel={() => setShowConfirmDelete(false)}
/>

View File

@ -426,7 +426,6 @@ const AppInfo = ({ expand }: IAppInfoProps) => {
title={t('app.deleteAppConfirmTitle')}
content={t('app.deleteAppConfirmContent')}
isShow={showConfirmDelete}
onClose={() => setShowConfirmDelete(false)}
onConfirm={onConfirmDelete}
onCancel={() => setShowConfirmDelete(false)}
/>

View File

@ -5,7 +5,7 @@ import { useTranslation } from 'react-i18next'
import EditItem, { EditItemType } from './edit-item'
import Drawer from '@/app/components/base/drawer-plus'
import { MessageCheckRemove } from '@/app/components/base/icons/src/vender/line/communication'
import DeleteConfirmModal from '@/app/components/base/modal/delete-confirm-modal'
import Confirm from '@/app/components/base/confirm'
import { addAnnotation, editAnnotation } from '@/service/annotation'
import Toast from '@/app/components/base/toast'
import { useProviderContext } from '@/context/provider-context'
@ -85,19 +85,31 @@ const EditAnnotationModal: FC<Props> = ({
maxWidthClassName='!max-w-[480px]'
title={t('appAnnotation.editModal.title') as string}
body={(
<div className='p-6 pb-4 space-y-6'>
<EditItem
type={EditItemType.Query}
content={query}
readonly={(isAdd && isAnnotationFull) || onlyEditResponse}
onSave={editedContent => handleSave(EditItemType.Query, editedContent)}
/>
<EditItem
type={EditItemType.Answer}
content={answer}
readonly={isAdd && isAnnotationFull}
onSave={editedContent => handleSave(EditItemType.Answer, editedContent)}
/>
<div>
<div className='p-6 pb-4 space-y-6'>
<EditItem
type={EditItemType.Query}
content={query}
readonly={(isAdd && isAnnotationFull) || onlyEditResponse}
onSave={editedContent => handleSave(EditItemType.Query, editedContent)}
/>
<EditItem
type={EditItemType.Answer}
content={answer}
readonly={isAdd && isAnnotationFull}
onSave={editedContent => handleSave(EditItemType.Answer, editedContent)}
/>
<Confirm
isShow={showModal}
onCancel={() => setShowModal(false)}
onConfirm={() => {
onRemove()
setShowModal(false)
onHide()
}}
title={t('appDebug.feature.annotation.removeConfirm')}
/>
</div>
</div>
)}
foot={
@ -127,16 +139,6 @@ const EditAnnotationModal: FC<Props> = ({
</div>
}
/>
<DeleteConfirmModal
isShow={showModal}
onHide={() => setShowModal(false)}
onRemove={() => {
onRemove()
setShowModal(false)
onHide()
}}
text={t('appDebug.feature.annotation.removeConfirm') as string}
/>
</div>
)

View File

@ -2,7 +2,7 @@
import type { FC } from 'react'
import React from 'react'
import { useTranslation } from 'react-i18next'
import DeleteConfirmModal from '@/app/components/base/modal/delete-confirm-modal'
import Confirm from '@/app/components/base/confirm'
type Props = {
isShow: boolean
@ -18,11 +18,11 @@ const RemoveAnnotationConfirmModal: FC<Props> = ({
const { t } = useTranslation()
return (
<DeleteConfirmModal
<Confirm
isShow={isShow}
onHide={onHide}
onRemove={onRemove}
text={t('appDebug.feature.annotation.removeConfirm') as string}
onCancel={onHide}
onConfirm={onRemove}
title={t('appDebug.feature.annotation.removeConfirm')}
/>
)
}

View File

@ -11,7 +11,7 @@ import HitHistoryNoData from './hit-history-no-data'
import cn from '@/utils/classnames'
import Drawer from '@/app/components/base/drawer-plus'
import { MessageCheckRemove } from '@/app/components/base/icons/src/vender/line/communication'
import DeleteConfirmModal from '@/app/components/base/modal/delete-confirm-modal'
import Confirm from '@/app/components/base/confirm'
import TabSlider from '@/app/components/base/tab-slider-plain'
import { fetchHitHistoryList } from '@/service/annotation'
import { APP_PAGE_LIMIT } from '@/config'
@ -201,8 +201,20 @@ const ViewAnnotationModal: FC<Props> = ({
/>
}
body={(
<div className='p-6 pb-4 space-y-6'>
{activeTab === TabType.annotation ? annotationTab : hitHistoryTab}
<div>
<div className='p-6 pb-4 space-y-6'>
{activeTab === TabType.annotation ? annotationTab : hitHistoryTab}
</div>
<Confirm
isShow={showModal}
onCancel={() => setShowModal(false)}
onConfirm={async () => {
await onRemove()
setShowModal(false)
onHide()
}}
title={t('appDebug.feature.annotation.removeConfirm')}
/>
</div>
)}
foot={id
@ -220,16 +232,6 @@ const ViewAnnotationModal: FC<Props> = ({
)
: undefined}
/>
<DeleteConfirmModal
isShow={showModal}
onHide={() => setShowModal(false)}
onRemove={async () => {
await onRemove()
setShowModal(false)
onHide()
}}
text={t('appDebug.feature.annotation.removeConfirm') as string}
/>
</div>
)

View File

@ -24,7 +24,7 @@ import { checkKeys, getNewVar } from '@/utils/var'
import Switch from '@/app/components/base/switch'
import Toast from '@/app/components/base/toast'
import { Settings01 } from '@/app/components/base/icons/src/vender/line/general'
import ConfirmModal from '@/app/components/base/confirm/common'
import Confirm from '@/app/components/base/confirm'
import ConfigContext from '@/context/debug-configuration'
import { AppType } from '@/types/app'
import type { ExternalDataTool } from '@/models/common'
@ -389,11 +389,10 @@ const ConfigVar: FC<IConfigVarProps> = ({ promptVariables, readonly, onPromptVar
)}
{isShowDeleteContextVarModal && (
<ConfirmModal
<Confirm
isShow={isShowDeleteContextVarModal}
title={t('appDebug.feature.dataSet.queryVariable.deleteContextVarTitle', { varName: promptVariables[removeIndex as number]?.name })}
desc={t('appDebug.feature.dataSet.queryVariable.deleteContextVarTip') as string}
confirmBtnClassName='bg-[#B42318] hover:bg-[#B42318]'
content={t('appDebug.feature.dataSet.queryVariable.deleteContextVarTip')}
onConfirm={() => {
didRemoveVar(removeIndex as number)
hideDeleteContextVarModal()

View File

@ -282,7 +282,6 @@ const GetAutomaticRes: FC<IGetAutomaticResProps> = ({
title={t('appDebug.generate.overwriteTitle')}
content={t('appDebug.generate.overwriteMessage')}
isShow={showConfirmOverwrite}
onClose={() => setShowConfirmOverwrite(false)}
onConfirm={() => {
setShowConfirmOverwrite(false)
onFinished(res!)

View File

@ -880,7 +880,6 @@ const Configuration: FC = () => {
title={t('appDebug.resetConfig.title')}
content={t('appDebug.resetConfig.message')}
isShow={restoreConfirmOpen}
onClose={() => setRestoreConfirmOpen(false)}
onConfirm={resetAppConfig}
onCancel={() => setRestoreConfirmOpen(false)}
/>
@ -890,7 +889,6 @@ const Configuration: FC = () => {
title={t('appDebug.trailUseGPT4Info.title')}
content={t('appDebug.trailUseGPT4Info.description')}
isShow={showUseGPT4Confirm}
onClose={() => setShowUseGPT4Confirm(false)}
onConfirm={() => {
setShowAccountSettingModal({ payload: 'provider' })
setShowUseGPT4Confirm(false)

View File

@ -185,7 +185,6 @@ function AppCard({
title={t('appOverview.overview.appInfo.regenerate')}
content={t('appOverview.overview.appInfo.regenerateNotice')}
isShow={showConfirmDelete}
onClose={() => setShowConfirmDelete(false)}
onConfirm={() => {
onGenCode()
setShowConfirmDelete(false)

View File

@ -147,10 +147,6 @@ const SwitchAppModal = ({ show, appDetail, inAppDetail = false, onSuccess, onClo
setShowConfirmDelete(false)
setRemoveOriginal(false)
}}
onClose={() => {
setShowConfirmDelete(false)
setRemoveOriginal(false)
}}
/>
)}
</>

View File

@ -121,7 +121,6 @@ const Sidebar = () => {
title={t('share.chat.deleteConversation.title')}
content={t('share.chat.deleteConversation.content') || ''}
isShow
onClose={handleCancelConfirm}
onCancel={handleCancelConfirm}
onConfirm={handleDelete}
/>

View File

@ -83,7 +83,7 @@ export const useChat = (
const { t } = useTranslation()
const { formatTime } = useTimestamp()
const { notify } = useToastContext()
const connversationId = useRef('')
const conversationId = useRef('')
const hasStopResponded = useRef(false)
const [isResponding, setIsResponding] = useState(false)
const isRespondingRef = useRef(false)
@ -152,7 +152,7 @@ export const useChat = (
}, [stopChat, handleResponding])
const handleRestart = useCallback(() => {
connversationId.current = ''
conversationId.current = ''
taskIdRef.current = ''
handleStop()
const newChatList = config?.opening_statement
@ -248,7 +248,7 @@ export const useChat = (
const bodyParams = {
response_mode: 'streaming',
conversation_id: connversationId.current,
conversation_id: conversationId.current,
...data,
}
if (bodyParams?.files?.length) {
@ -302,7 +302,7 @@ export const useChat = (
}
if (isFirstMessage && newConversationId)
connversationId.current = newConversationId
conversationId.current = newConversationId
taskIdRef.current = taskId
if (messageId)
@ -322,11 +322,11 @@ export const useChat = (
return
if (onConversationComplete)
onConversationComplete(connversationId.current)
onConversationComplete(conversationId.current)
if (connversationId.current && !hasStopResponded.current && onGetConvesationMessages) {
if (conversationId.current && !hasStopResponded.current && onGetConvesationMessages) {
const { data }: any = await onGetConvesationMessages(
connversationId.current,
conversationId.current,
newAbortController => conversationMessagesAbortControllerRef.current = newAbortController,
)
const newResponseItem = data.find((item: any) => item.id === responseItem.id)
@ -361,7 +361,7 @@ export const useChat = (
latency: newResponseItem.provider_response_latency.toFixed(2),
},
// for agent log
conversationId: connversationId.current,
conversationId: conversationId.current,
input: {
inputs: newResponseItem.inputs,
query: newResponseItem.query,
@ -640,7 +640,7 @@ export const useChat = (
return {
chatList,
setChatList,
conversationId: connversationId.current,
conversationId: conversationId.current,
isResponding,
setIsResponding,
handleSend,

View File

@ -1,52 +0,0 @@
'use client'
import type { FC } from 'react'
import React from 'react'
import { useTranslation } from 'react-i18next'
import Button from '../button'
export type IConfirmUIProps = {
type: 'info' | 'warning'
title: string
content: string
confirmText?: string
onConfirm: () => void
cancelText?: string
onCancel: () => void
}
const ConfirmUI: FC<IConfirmUIProps> = ({
type,
title,
content,
confirmText,
cancelText,
onConfirm,
onCancel,
}) => {
const { t } = useTranslation()
return (
<div className="w-[420px] max-w-full rounded-lg p-7 bg-white">
<div className='flex items-center'>
{type === 'info' && (<svg width="32" height="32" viewBox="0 0 32 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M17.3333 21.3333H16V16H14.6667M16 10.6667H16.0133M28 16C28 17.5759 27.6896 19.1363 27.0866 20.5922C26.4835 22.0481 25.5996 23.371 24.4853 24.4853C23.371 25.5996 22.0481 26.4835 20.5922 27.0866C19.1363 27.6896 17.5759 28 16 28C14.4241 28 12.8637 27.6896 11.4078 27.0866C9.95189 26.4835 8.62902 25.5996 7.51472 24.4853C6.40042 23.371 5.5165 22.0481 4.91345 20.5922C4.31039 19.1363 4 17.5759 4 16C4 12.8174 5.26428 9.76516 7.51472 7.51472C9.76516 5.26428 12.8174 4 16 4C19.1826 4 22.2348 5.26428 24.4853 7.51472C26.7357 9.76516 28 12.8174 28 16Z" stroke="#9CA3AF" strokeWidth="2" strokeLinecap="round" strokeLinejoin="round" />
</svg>)}
{type === 'warning' && (<svg width="32" height="32" viewBox="0 0 32 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M16 10.6667V16M16 21.3333H16.0133M28 16C28 17.5759 27.6896 19.1363 27.0866 20.5922C26.4835 22.0481 25.5996 23.371 24.4853 24.4853C23.371 25.5996 22.0481 26.4835 20.5922 27.0866C19.1363 27.6896 17.5759 28 16 28C14.4241 28 12.8637 27.6896 11.4078 27.0866C9.95189 26.4835 8.62902 25.5996 7.51472 24.4853C6.40042 23.371 5.5165 22.0481 4.91345 20.5922C4.31039 19.1363 4 17.5759 4 16C4 12.8174 5.26428 9.76516 7.51472 7.51472C9.76516 5.26428 12.8174 4 16 4C19.1826 4 22.2348 5.26428 24.4853 7.51472C26.7357 9.76516 28 12.8174 28 16Z" stroke="#FACA15" strokeWidth="2" strokeLinecap="round" strokeLinejoin="round" />
</svg>
)}
<div className='ml-4 text-lg text-gray-900'>{title}</div>
</div>
<div className='mt-1 ml-12'>
<div className='text-sm leading-normal text-gray-500'>{content}</div>
</div>
<div className='flex gap-3 mt-4 ml-12'>
<Button variant='primary' onClick={onConfirm}>{confirmText || t('common.operation.confirm')}</Button>
<Button onClick={onCancel}>{cancelText || t('common.operation.cancel')}</Button>
</div>
</div>
)
}
export default React.memo(ConfirmUI)


@@ -1,7 +0,0 @@
.wrapper-danger {
background: linear-gradient(180deg, rgba(217, 45, 32, 0.05) 0%, rgba(217, 45, 32, 0.00) 24.02%), #F9FAFB;
}
.wrapper-success {
background: linear-gradient(180deg, rgba(3, 152, 85, 0.05) 0%, rgba(3, 152, 85, 0.00) 22.44%), #F9FAFB;
}


@@ -1,97 +0,0 @@
import type { FC, ReactElement } from 'react'
import { useTranslation } from 'react-i18next'
import {
RiCloseLine,
RiErrorWarningFill,
} from '@remixicon/react'
import s from './common.module.css'
import cn from '@/utils/classnames'
import Modal from '@/app/components/base/modal'
import { CheckCircle } from '@/app/components/base/icons/src/vender/solid/general'
import Button from '@/app/components/base/button'
export type ConfirmCommonProps = {
type?: string
isShow: boolean
onCancel: () => void
title: string
desc?: string
onConfirm?: () => void
showOperate?: boolean
showOperateCancel?: boolean
confirmBtnClassName?: string
confirmText?: string
confirmWrapperClassName?: string
confirmDisabled?: boolean
}
const ConfirmCommon: FC<ConfirmCommonProps> = ({
type = 'danger',
isShow,
onCancel,
title,
desc,
onConfirm,
showOperate = true,
showOperateCancel = true,
confirmBtnClassName,
confirmText,
confirmWrapperClassName,
confirmDisabled,
}) => {
const { t } = useTranslation()
const CONFIRM_MAP: Record<string, { icon: ReactElement; confirmText: string }> = {
danger: {
icon: <RiErrorWarningFill className='w-6 h-6 text-[#D92D20]' />,
confirmText: t('common.operation.remove'),
},
success: {
icon: <CheckCircle className='w-6 h-6 text-[#039855]' />,
confirmText: t('common.operation.ok'),
},
}
return (
<Modal isShow={isShow} onClose={() => { }} className='!w-[480px] !max-w-[480px] !p-0 !rounded-2xl' wrapperClassName={confirmWrapperClassName}>
<div className={cn(s[`wrapper-${type}`], 'relative p-8')}>
<div className='flex items-center justify-center absolute top-4 right-4 w-8 h-8 cursor-pointer' onClick={onCancel}>
<RiCloseLine className='w-4 h-4 text-gray-500' />
</div>
<div className='flex items-center justify-center mb-3 w-12 h-12 bg-white shadow-xl rounded-xl'>
{CONFIRM_MAP[type].icon}
</div>
<div className='text-xl font-semibold text-gray-900'>{title}</div>
{
desc && <div className='mt-1 text-sm text-gray-500'>{desc}</div>
}
{
showOperate && (
<div className='flex items-center justify-end mt-10'>
{
showOperateCancel && (
<Button
className='mr-2'
onClick={onCancel}
>
{t('common.operation.cancel')}
</Button>
)
}
<Button
variant='primary'
className={confirmBtnClassName || ''}
onClick={onConfirm}
disabled={confirmDisabled}
>
{confirmText || CONFIRM_MAP[type].confirmText}
</Button>
</div>
)
}
</div>
</Modal>
)
}
export default ConfirmCommon


@@ -1,26 +1,27 @@
import { Dialog, Transition } from '@headlessui/react'
import { Fragment } from 'react'
import React, { useEffect, useRef, useState } from 'react'
import { createPortal } from 'react-dom'
import { useTranslation } from 'react-i18next'
import ConfirmUI from '../confirm-ui'
import Button from '../button'
// https://headlessui.com/react/dialog
type IConfirm = {
export type IConfirm = {
className?: string
isShow: boolean
onClose: () => void
type?: 'info' | 'warning'
title: string
content: string
confirmText?: string
content?: React.ReactNode
confirmText?: string | null
onConfirm: () => void
cancelText?: string
onCancel: () => void
isLoading?: boolean
isDisabled?: boolean
showConfirm?: boolean
showCancel?: boolean
maskClosable?: boolean
}
export default function Confirm({
function Confirm({
isShow,
onClose,
type = 'warning',
title,
content,
@@ -28,52 +29,76 @@ export default function Confirm({
cancelText,
onConfirm,
onCancel,
showConfirm = true,
showCancel = true,
isLoading = false,
isDisabled = false,
maskClosable = true,
}: IConfirm) {
const { t } = useTranslation()
const dialogRef = useRef<HTMLDivElement>(null)
const [isVisible, setIsVisible] = useState(isShow)
const confirmTxt = confirmText || `${t('common.operation.confirm')}`
const cancelTxt = cancelText || `${t('common.operation.cancel')}`
return (
<Transition appear show={isShow} as={Fragment}>
<Dialog as="div" className="relative z-[100]" onClose={onClose} onClick={e => e.preventDefault()}>
<Transition.Child
as={Fragment}
enter="ease-out duration-300"
enterFrom="opacity-0"
enterTo="opacity-100"
leave="ease-in duration-200"
leaveFrom="opacity-100"
leaveTo="opacity-0"
>
<div className="fixed inset-0 bg-black bg-opacity-25" />
</Transition.Child>
<div className="fixed inset-0 overflow-y-auto">
<div className="flex items-center justify-center min-h-full p-4 text-center">
<Transition.Child
as={Fragment}
enter="ease-out duration-300"
enterFrom="opacity-0 scale-95"
enterTo="opacity-100 scale-100"
leave="ease-in duration-200"
leaveFrom="opacity-100 scale-100"
leaveTo="opacity-0 scale-95"
>
<Dialog.Panel className={'w-full max-w-md transform overflow-hidden rounded-2xl bg-white text-left align-middle shadow-xl transition-all'}>
<ConfirmUI
type={type}
title={title}
content={content}
confirmText={confirmTxt}
cancelText={cancelTxt}
onConfirm={onConfirm}
onCancel={onCancel}
/>
</Dialog.Panel>
</Transition.Child>
useEffect(() => {
const handleKeyDown = (event: KeyboardEvent) => {
if (event.key === 'Escape')
onCancel()
}
document.addEventListener('keydown', handleKeyDown)
return () => {
document.removeEventListener('keydown', handleKeyDown)
}
}, [onCancel])
const handleClickOutside = (event: MouseEvent) => {
if (maskClosable && dialogRef.current && !dialogRef.current.contains(event.target as Node))
onCancel()
}
useEffect(() => {
document.addEventListener('mousedown', handleClickOutside)
return () => {
document.removeEventListener('mousedown', handleClickOutside)
}
}, [maskClosable])
useEffect(() => {
if (isShow) {
setIsVisible(true)
}
else {
const timer = setTimeout(() => setIsVisible(false), 200)
return () => clearTimeout(timer)
}
}, [isShow])
if (!isVisible)
return null
return createPortal(
<div className={'fixed inset-0 flex items-center justify-center z-[10000000] bg-background-overlay'}
onClick={(e) => {
e.preventDefault()
e.stopPropagation()
}}>
<div ref={dialogRef} className={'relative w-full max-w-[480px] overflow-hidden'}>
<div className='flex flex-col items-start max-w-full rounded-2xl border-[0.5px] border-solid border-components-panel-border shadows-shadow-lg bg-components-panel-bg'>
<div className='flex pt-6 pl-6 pr-6 pb-4 flex-col items-start gap-2 self-stretch'>
<div className='title-2xl-semi-bold text-text-primary'>{title}</div>
<div className='system-md-regular text-text-tertiary'>{content}</div>
</div>
<div className='flex p-6 gap-2 justify-end items-start self-stretch'>
{showCancel && <Button onClick={onCancel}>{cancelTxt}</Button>}
{showConfirm && <Button variant={'primary'} destructive={type !== 'info'} loading={isLoading} disabled={isDisabled} onClick={onConfirm}>{confirmTxt}</Button>}
</div>
</div>
</Dialog>
</Transition>
</div>
</div>, document.body,
)
}
export default React.memo(Confirm)


@@ -1,66 +0,0 @@
'use client'
import type { FC } from 'react'
import React from 'react'
import { useTranslation } from 'react-i18next'
import {
RiErrorWarningFill,
} from '@remixicon/react'
import s from './style.module.css'
import Modal from '@/app/components/base/modal'
import Button from '@/app/components/base/button'
type Props = {
isShow: boolean
onHide: () => void
onRemove: () => void
text?: string
children?: JSX.Element
}
const DeleteConfirmModal: FC<Props> = ({
isShow,
onHide,
onRemove,
children,
text,
}) => {
const { t } = useTranslation()
if (!isShow)
return null
return (
<Modal
isShow={isShow}
onClose={onHide}
className={s.delModal}
closable
>
<div onClick={(e) => {
e.stopPropagation()
e.stopPropagation()
e.nativeEvent.stopImmediatePropagation()
}}>
<div className={s.warningWrapper}>
<RiErrorWarningFill className='w-6 h-6 text-red-600' />
</div>
{text
? (
<div className='text-xl font-semibold text-gray-900 mb-3'>{text}</div>
)
: children}
<div className='flex gap-2 justify-end'>
<Button onClick={onHide}>{t('common.operation.cancel')}</Button>
<Button
variant='warning'
onClick={onRemove}
className='border-red-700'
>
{t('common.operation.sure')}
</Button>
</div>
</div>
</Modal>
)
}
export default React.memo(DeleteConfirmModal)


@@ -1,16 +0,0 @@
.delModal {
background: linear-gradient(180deg,
rgba(217, 45, 32, 0.05) 0%,
rgba(217, 45, 32, 0) 24.02%),
#f9fafb;
box-shadow: 0px 20px 24px -4px rgba(16, 24, 40, 0.08),
0px 8px 8px -4px rgba(16, 24, 40, 0.03);
@apply rounded-2xl p-8;
}
.warningWrapper {
box-shadow: 0px 20px 24px -4px rgba(16, 24, 40, 0.08),
0px 8px 8px -4px rgba(16, 24, 40, 0.03);
background: rgba(255, 255, 255, 0.9);
@apply h-12 w-12 border-[0.5px] border-gray-100 rounded-xl mb-3 flex items-center justify-center;
}


@@ -4,15 +4,13 @@ import { ArrowUpRightIcon } from '@heroicons/react/24/outline'
import { useTranslation } from 'react-i18next'
import {
RiDeleteBinLine,
RiErrorWarningFill,
} from '@remixicon/react'
import { StatusItem } from '../../list'
import { DocumentTitle } from '../index'
import s from './style.module.css'
import { SegmentIndexTag } from './index'
import cn from '@/utils/classnames'
import Modal from '@/app/components/base/modal'
import Button from '@/app/components/base/button'
import Confirm from '@/app/components/base/confirm'
import Switch from '@/app/components/base/switch'
import Divider from '@/app/components/base/divider'
import Indicator from '@/app/components/header/indicator'
@@ -217,26 +215,15 @@ const SegmentCard: FC<ISegmentCardProps> = ({
</div>
</>
)}
{showModal && <Modal isShow={showModal} onClose={() => setShowModal(false)} className={s.delModal} closable>
<div>
<div className={s.warningWrapper}>
<RiErrorWarningFill className='w-6 h-6 text-red-600' />
</div>
<div className='text-xl font-semibold text-gray-900 mb-1'>{t('datasetDocuments.segment.delete')}</div>
<div className='flex gap-2 justify-end'>
<Button onClick={() => setShowModal(false)}>{t('common.operation.cancel')}</Button>
<Button
variant='warning'
onClick={async () => {
await onDelete?.(id)
}}
className='border-red-700'
>
{t('common.operation.sure')}
</Button>
</div>
</div>
</Modal>}
{showModal
&& <Confirm
isShow={showModal}
title={t('datasetDocuments.segment.delete')}
confirmText={t('common.operation.sure')}
onConfirm={async () => { await onDelete?.(id) }}
onCancel={() => setShowModal(false)}
/>
}
</div>
)
}


@@ -4,7 +4,6 @@ import type { FC, SVGProps } from 'react'
import React, { useCallback, useEffect, useState } from 'react'
import { useBoolean, useDebounceFn } from 'ahooks'
import { ArrowDownIcon, TrashIcon } from '@heroicons/react/24/outline'
import { ExclamationCircleIcon } from '@heroicons/react/24/solid'
import { pick } from 'lodash-es'
import {
RiMoreFill,
@@ -23,8 +22,7 @@ import cn from '@/utils/classnames'
import Switch from '@/app/components/base/switch'
import Divider from '@/app/components/base/divider'
import Popover from '@/app/components/base/popover'
import Modal from '@/app/components/base/modal'
import Button from '@/app/components/base/button'
import Confirm from '@/app/components/base/confirm'
import Tooltip from '@/app/components/base/tooltip'
import { ToastContext } from '@/app/components/base/toast'
import type { IndicatorProps } from '@/app/components/header/indicator'
@@ -294,25 +292,16 @@ export const OperationAction: FC<{
className={`flex justify-end !w-[200px] h-fit !z-20 ${className}`}
/>
)}
{showModal && <Modal isShow={showModal} onClose={() => setShowModal(false)} className={s.delModal} closable>
<div>
<div className={s.warningWrapper}>
<ExclamationCircleIcon className={s.warningIcon} />
</div>
<div className='text-xl font-semibold text-gray-900 mb-1'>{t('datasetDocuments.list.delete.title')}</div>
<div className='text-sm text-gray-500 mb-10'>{t('datasetDocuments.list.delete.content')}</div>
<div className='flex gap-2 justify-end'>
<Button onClick={() => setShowModal(false)}>{t('common.operation.cancel')}</Button>
<Button
variant='warning'
onClick={() => onOperate('delete')}
className='border-red-700'
>
{t('common.operation.sure')}
</Button>
</div>
</div>
</Modal>}
{showModal
&& <Confirm
isShow={showModal}
title={t('datasetDocuments.list.delete.title')}
content={t('datasetDocuments.list.delete.content')}
confirmText={t('common.operation.sure')}
onConfirm={() => onOperate('delete')}
onCancel={() => setShowModal(false)}
/>
}
{isShowRenameModal && currDocument && (
<RenameModal


@@ -154,10 +154,6 @@ const SecretKeyModal = ({
title={`${t('appApi.actionMsg.deleteConfirmTitle')}`}
content={`${t('appApi.actionMsg.deleteConfirmTips')}`}
isShow={showConfirmDelete}
onClose={() => {
setDelKeyId('')
setShowConfirmDelete(false)
}}
onConfirm={onDel}
onCancel={() => {
setDelKeyId('')


@@ -137,7 +137,6 @@ const SideBar: FC<IExploreSideBarProps> = ({
title={t('explore.sidebar.delete.title')}
content={t('explore.sidebar.delete.content')}
isShow={showConfirm}
onClose={() => setShowConfirm(false)}
onConfirm={handleDelete}
onCancel={() => setShowConfirm(false)}
/>


@@ -1,16 +1,14 @@
'use client'
import { useState } from 'react'
import { useTranslation } from 'react-i18next'
import {
RiCloseLine,
RiErrorWarningFill,
} from '@remixicon/react'
import { useContext } from 'use-context-selector'
import Collapse from '../collapse'
import type { IItem } from '../collapse'
import s from './index.module.css'
import classNames from '@/utils/classnames'
import Modal from '@/app/components/base/modal'
import Confirm from '@/app/components/base/confirm'
import Button from '@/app/components/base/button'
import { updateUserProfile } from '@/service/common'
import { useAppContext } from '@/context/app-context'
@@ -245,30 +243,24 @@ export default function AccountPage() {
</Modal>
)}
{showDeleteAccountModal && (
<Modal
className={classNames('p-8 max-w-[480px] w-[480px]', s.bg)}
isShow={showDeleteAccountModal}
onClose={() => { }}
>
<div className='absolute right-4 top-4 p-2 cursor-pointer' onClick={() => setShowDeleteAccountModal(false)}>
<RiCloseLine className='w-4 h-4 text-gray-500' />
</div>
<div className='w-12 h-12 p-3 bg-white rounded-xl border-[0.5px] border-gray-100 shadow-xl'>
<RiErrorWarningFill className='w-6 h-6 text-[#D92D20]' />
</div>
<div className='relative mt-3 text-xl font-semibold leading-[30px] text-gray-900'>{t('common.account.delete')}</div>
<div className='my-1 text-[#D92D20] text-sm leading-5'>
{t('common.account.deleteTip')}
</div>
<div className='mt-3 text-sm leading-5'>
<span>{t('common.account.deleteConfirmTip')}</span>
<a className='text-primary-600 cursor' href={`mailto:support@dify.ai?subject=Delete Account Request&body=Delete Account: ${userProfile.email}`} target='_blank'>support@dify.ai</a>
</div>
<div className='my-2 px-3 py-2 rounded-lg bg-gray-100 text-sm font-medium leading-5 text-gray-800'>{`Delete Account: ${userProfile.email}`}</div>
<div className='pt-6 flex justify-end items-center'>
<Button className='w-24' onClick={() => setShowDeleteAccountModal(false)}>{t('common.operation.ok')}</Button>
</div>
</Modal>
<Confirm
isShow
onCancel={() => setShowDeleteAccountModal(false)}
onConfirm={() => setShowDeleteAccountModal(false)}
showCancel={false}
type='warning'
title={t('common.account.delete')}
content={<>
<div className='my-1 text-[#D92D20] text-sm leading-5'>
{t('common.account.deleteTip')}
</div>
<div className='mt-3 text-sm leading-5'>
<span>{t('common.account.deleteConfirmTip')}</span>
<a className='text-primary-600 cursor' href={`mailto:support@dify.ai?subject=Delete Account Request&body=Delete Account: ${userProfile.email}`} target='_blank'>support@dify.ai</a>
</div>
</>}
confirmText={t('common.operation.ok') as string}
/>
)}
</>
)


@@ -8,7 +8,7 @@ import { Edit02 } from '@/app/components/base/icons/src/vender/line/general'
import type { ApiBasedExtension } from '@/models/common'
import { useModalContext } from '@/context/modal-context'
import { deleteApiBasedExtension } from '@/service/common'
import ConfirmCommon from '@/app/components/base/confirm/common'
import Confirm from '@/app/components/base/confirm'
type ItemProps = {
data: ApiBasedExtension
@@ -57,18 +57,14 @@ const Item: FC<ItemProps> = ({
</div>
</div>
{
showDeleteConfirm && (
<ConfirmCommon
type='danger'
showDeleteConfirm
&& <Confirm
isShow={showDeleteConfirm}
onCancel={() => setShowDeleteConfirm(false)}
title={`${t('common.operation.delete')}“${data.name}”?`}
onConfirm={handleDeleteApiBasedExtension}
confirmWrapperClassName='!z-30'
confirmText={t('common.operation.delete') || ''}
confirmBtnClassName='!bg-[#D92D20]'
/>
)
}
</div>
)


@@ -48,7 +48,7 @@ import {
PortalToFollowElemContent,
} from '@/app/components/base/portal-to-follow-elem'
import { useToastContext } from '@/app/components/base/toast'
import ConfirmCommon from '@/app/components/base/confirm/common'
import Confirm from '@/app/components/base/confirm'
import { useAppContext } from '@/context/app-context'
type ModelModalProps = {
@@ -385,12 +385,11 @@ const ModelModal: FC<ModelModalProps> = ({
</div>
{
showConfirm && (
<ConfirmCommon
<Confirm
title={t('common.modelProvider.confirmDelete')}
isShow={showConfirm}
onCancel={() => setShowConfirm(false)}
onConfirm={handleRemove}
confirmWrapperClassName='z-[70]'
/>
)
}


@@ -40,7 +40,7 @@ import {
PortalToFollowElemContent,
} from '@/app/components/base/portal-to-follow-elem'
import { useToastContext } from '@/app/components/base/toast'
import ConfirmCommon from '@/app/components/base/confirm/common'
import Confirm from '@/app/components/base/confirm'
type ModelModalProps = {
provider: ModelProvider
@@ -330,12 +330,11 @@ const ModelLoadBalancingEntryModal: FC<ModelModalProps> = ({
</div>
{
showConfirm && (
<ConfirmCommon
<Confirm
title={t('common.modelProvider.confirmDelete')}
isShow={showConfirm}
onCancel={() => setShowConfirm(false)}
onConfirm={handleRemove}
confirmWrapperClassName='z-[70]'
/>
)
}


@@ -366,7 +366,6 @@ const ProviderDetail = ({
title={t('tools.createTool.deleteToolConfirmTitle')}
content={t('tools.createTool.deleteToolConfirmContent')}
isShow={showConfirmDelete}
onClose={() => setShowConfirmDelete(false)}
onConfirm={handleConfirmDelete}
onCancel={() => setShowConfirmDelete(false)}
/>


@@ -280,7 +280,7 @@ export const NODE_WIDTH = 240
export const X_OFFSET = 60
export const NODE_WIDTH_X_OFFSET = NODE_WIDTH + X_OFFSET
export const Y_OFFSET = 39
export const MAX_TREE_DEEPTH = 50
export const MAX_TREE_DEPTH = 50
export const START_INITIAL_POSITION = { x: 80, y: 282 }
export const AUTO_LAYOUT_OFFSET = {
x: -42,


@@ -12,7 +12,7 @@ const RestoringTitle = () => {
return (
<div className='flex items-center h-[18px] text-xs text-gray-500'>
<ClockRefresh className='mr-1 w-3 h-3 text-gray-500' />
{t('workflow.common.latestPublished')}
{t('workflow.common.latestPublished')}<span> </span>
{formatTimeFromNow(publishedAt)}
</div>
)


@@ -16,7 +16,7 @@ import {
} from '../utils'
import {
CUSTOM_NODE,
MAX_TREE_DEEPTH,
MAX_TREE_DEPTH,
} from '../constants'
import type { ToolNodeType } from '../nodes/tool/types'
import { useIsChatMode } from './use-workflow'
@@ -119,8 +119,8 @@ export const useChecklistBeforePublish = () => {
maxDepth,
} = getValidTreeNodes(nodes.filter(node => node.type === CUSTOM_NODE), edges)
if (maxDepth > MAX_TREE_DEEPTH) {
notify({ type: 'error', message: t('workflow.common.maxTreeDepth', { depth: MAX_TREE_DEEPTH }) })
if (maxDepth > MAX_TREE_DEPTH) {
notify({ type: 'error', message: t('workflow.common.maxTreeDepth', { depth: MAX_TREE_DEPTH }) })
return false
}


@@ -87,7 +87,7 @@ import { FeaturesProvider } from '@/app/components/base/features'
import type { Features as FeaturesData } from '@/app/components/base/features/types'
import { useFeaturesStore } from '@/app/components/base/features/hooks'
import { useEventEmitterContextContext } from '@/context/event-emitter'
import Confirm from '@/app/components/base/confirm/common'
import Confirm from '@/app/components/base/confirm'
const nodeTypes = {
[CUSTOM_NODE]: CustomNode,
@@ -330,8 +330,7 @@ const Workflow: FC<WorkflowProps> = memo(({
onCancel={() => setShowConfirm(undefined)}
onConfirm={showConfirm.onConfirm}
title={showConfirm.title}
desc={showConfirm.desc}
confirmWrapperClassName='!z-[11]'
content={showConfirm.desc}
/>
)
}


@@ -25,7 +25,6 @@ const RemoveVarConfirm: FC<Props> = ({
content={t(`${i18nPrefix}.content`)}
onConfirm={onConfirm}
onCancel={onCancel}
onClose={onCancel}
/>
)
}


@@ -125,12 +125,14 @@ const formatItem = (
const {
outputs,
} = data as CodeNodeType
res.vars = Object.keys(outputs).map((key) => {
return {
variable: key,
type: outputs[key].type,
}
})
res.vars = outputs
? Object.keys(outputs).map((key) => {
return {
variable: key,
type: outputs[key].type,
}
})
: []
break
}


@@ -35,7 +35,7 @@ export const useChat = (
const { notify } = useToastContext()
const { handleRun } = useWorkflowRun()
const hasStopResponded = useRef(false)
const connversationId = useRef('')
const conversationId = useRef('')
const taskIdRef = useRef('')
const [chatList, setChatList] = useState<ChatItem[]>(prevChatList || [])
const chatListRef = useRef<ChatItem[]>(prevChatList || [])
@@ -100,7 +100,7 @@
}, [handleResponding, stopChat])
const handleRestart = useCallback(() => {
connversationId.current = ''
conversationId.current = ''
taskIdRef.current = ''
handleStop()
const newChatList = config?.opening_statement
@@ -185,7 +185,7 @@
handleResponding(true)
const bodyParams = {
conversation_id: connversationId.current,
conversation_id: conversationId.current,
...params,
}
if (bodyParams?.files?.length) {
@@ -214,7 +214,7 @@
}
if (isFirstMessage && newConversationId)
connversationId.current = newConversationId
conversationId.current = newConversationId
taskIdRef.current = taskId
if (messageId)
@@ -403,7 +403,7 @@
}, [handleRun, handleResponding, handleUpdateChatList, notify, t, updateCurrentQA, config.suggested_questions_after_answer?.enabled])
return {
conversationId: connversationId.current,
conversationId: conversationId.current,
chatList,
handleSend,
handleStop,


@@ -10,10 +10,10 @@ import {
fetchDataSourceNotionBinding,
fetchFreeQuotaVerify,
} from '@/service/common'
import type { ConfirmCommonProps } from '@/app/components/base/confirm/common'
import Confirm from '@/app/components/base/confirm/common'
import type { IConfirm } from '@/app/components/base/confirm'
import Confirm from '@/app/components/base/confirm'
export type ConfirmType = Pick<ConfirmCommonProps, 'type' | 'title' | 'desc'>
export type ConfirmType = Pick<IConfirm, 'type' | 'title' | 'content'>
export const useAnthropicCheckPay = () => {
const { t } = useTranslation()
@@ -25,7 +25,7 @@ export const useAnthropicCheckPay = () => {
useEffect(() => {
if (providerName === 'anthropic' && (paymentResult === 'succeeded' || paymentResult === 'cancelled')) {
setConfirm({
type: paymentResult === 'succeeded' ? 'success' : 'danger',
type: paymentResult === 'succeeded' ? 'info' : 'warning',
title: paymentResult === 'succeeded' ? t('common.actionMsg.paySucceeded') : t('common.actionMsg.payCancelled'),
})
}
@@ -44,7 +44,7 @@ export const useBillingPay = () => {
useEffect(() => {
if (paymentType === 'billing' && (paymentResult === 'succeeded' || paymentResult === 'cancelled')) {
setConfirm({
type: paymentResult === 'succeeded' ? 'success' : 'danger',
type: paymentResult === 'succeeded' ? 'info' : 'warning',
title: paymentResult === 'succeeded' ? t('common.actionMsg.paySucceeded') : t('common.actionMsg.payCancelled'),
})
}
@@ -96,7 +96,7 @@ export const useCheckFreeQuota = () => {
useEffect(() => {
if (error)
router.replace('/', { forceOptimisticNavigation: false })
router.replace('/')
}, [error, router])
useEffect(() => {
@@ -106,7 +106,7 @@
return (data && provider)
? {
type: data.flag ? 'success' : 'danger',
type: data.flag ? 'info' : 'warning',
title: data.flag ? QUOTA_RECEIVE_STATUS[provider as string].success[locale] : QUOTA_RECEIVE_STATUS[provider].fail[locale],
desc: !data.flag ? data.reason : undefined,
}
@@ -130,13 +130,13 @@ export const useCheckNotion = () => {
useEffect(() => {
if (data)
router.replace('/', { forceOptimisticNavigation: false })
router.replace('/')
}, [data, router])
useEffect(() => {
if (type === 'notion') {
if (notionError) {
setConfirm({
type: 'danger',
type: 'warning',
title: notionError,
})
}
@@ -160,7 +160,7 @@ const CheckModal = () => {
const handleCancelShowPayStatusModal = useCallback(() => {
setShowPayStatusModal(false)
router.replace('/', { forceOptimisticNavigation: false })
router.replace('/')
}, [router])
const confirmInfo = anthropicConfirmInfo || freeQuotaConfirmInfo || notionConfirmInfo || billingConfirmInfo
@@ -173,11 +173,11 @@
isShow
onCancel={handleCancelShowPayStatusModal}
onConfirm={handleCancelShowPayStatusModal}
type={confirmInfo.type}
showCancel={false}
type={confirmInfo.type === 'info' ? 'info' : 'warning' }
title={confirmInfo.title}
desc={confirmInfo.desc}
showOperateCancel={false}
confirmText={(confirmInfo.type === 'danger' && t('common.operation.ok')) || ''}
content={(confirmInfo as { desc: string }).desc || ''}
confirmText={(confirmInfo.type === 'info' && t('common.operation.ok')) || ''}
/>
)
}


@@ -348,7 +348,7 @@ const translation = {
getFreeTokens: 'Get free Tokens',
priorityUsing: 'Prioritize using',
deprecated: 'Deprecated',
confirmDelete: 'confirm deletion?',
confirmDelete: 'Confirm deletion?',
quotaTip: 'Remaining available free tokens',
loadPresets: 'Load Presents',
parameters: 'PARAMETERS',


@@ -1,6 +1,6 @@
{
"name": "dify-web",
"version": "0.6.15",
"version": "0.6.16",
"private": true,
"engines": {
"node": ">=18.17.0"