Compare commits

432 Commits

Author SHA1 Message Date
c304f8e702 Fix:webapp UI issues (#15601) 2025-03-13 16:43:50 +08:00
b0e28f58aa chore: github action 2025-03-12 16:18:15 +08:00
b4a07d9e96 fix scroll in mobile
suggest questions display

tooltip for start a new chat

reset chat input position

fix webapp UI issues
2025-03-12 16:07:34 +08:00
46036e6ce6 fix: update version to 1.0.1 in configuration and Docker files (#15478)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-11 18:50:42 +08:00
1ffda0dd34 fix notion page display (#15508) 2025-03-11 18:40:02 +08:00
da01b460fe support workspace billing info (#15510) 2025-03-11 18:38:23 +08:00
90a1508b87 fix: update placeholders in version info modal to indicate optional field (#15499) 2025-03-11 18:30:47 +08:00
b07016113c fix: add animation to workflow process loader icon (#15497) 2025-03-11 18:04:58 +08:00
d8317fcf81 fix: remove unnecessary modal (#15493) 2025-03-11 17:18:23 +08:00
a6bc642721 refactor: optimize provider configuration queries with provider name … (#15491) 2025-03-11 17:09:51 +08:00
b730f243dc fix: displan badge based on workspace plan (#15489) 2025-03-11 17:01:17 +08:00
71a57275ab fix: improve selection of variable in workflow (#15484)
Signed-off-by: Yuichiro Utsumi <utsumi.yuichiro@fujitsu.com>
2025-03-11 16:57:45 +08:00
41bf8d925f fix:To fix the issue of missing reference to body parameter (#15443)
Co-authored-by: crazywoola <427733928@qq.com>
2025-03-11 16:16:53 +08:00
6d172498d1 Update the provider_id validation to fix the error message displayed … (#15466)
Co-authored-by: Kyle Chang <kylechang@91app.com>
2025-03-11 16:11:24 +08:00
cad58658c2 fix: simplify S3 client configuration by removing redundant checksum settings (#15474)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-11 14:50:03 +08:00
a58b990855 fix agent_execution_metadata (#15444) 2025-03-11 14:35:08 +08:00
b6b1903a37 fix: fix chatbot publish and restore handling (#15462) 2025-03-11 13:36:45 +08:00
ed5596a8f4 fix: avoid llm node result var not init issue while do retry. (#14286) 2025-03-11 12:43:24 +08:00
49d0acd188 fix: replace old-style <br> tags to fix Mermaid rendering issues (#13792) 2025-03-11 12:40:55 +08:00
58a74fe1fb chore: add comment to the PLUGIN_DIFY_INNER_API_KEY key (#15381) 2025-03-11 00:25:11 +08:00
a1ab4aec3d fix db migration (#15422) 2025-03-11 00:24:57 +08:00
f77f7e1437 fix text split (#15426) 2025-03-11 00:24:27 +08:00
adda049265 fix kb permission (#15199)
Signed-off-by: kenwoodjw <blackxin55@gmail.com>
Signed-off-by: kenwoodjw <blackxin55+@gmail.com>
2025-03-10 23:47:45 +08:00
9b2a9260ef Feat/new saas billing (#14996) 2025-03-10 19:50:11 +08:00
Joe c8cc31af88 fix: app trace permission (#15397) 2025-03-10 18:45:25 +08:00
d333de274f chore(.github): add a new tracker template (#15391)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-10 18:39:35 +08:00
9e220d5d30 Feat: configure dark mode legacy (#15394) 2025-03-10 16:41:06 +08:00
2cf0cb471f fix: fix document list overlap and optimize document list fetching (#15377) 2025-03-10 15:34:40 +08:00
269ba6add9 fix: remove port expose on db (#15286) 2025-03-10 15:01:34 +08:00
78d460a6d1 Feat: time period filter for workflow logs (#14271)
Signed-off-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
2025-03-10 14:02:58 +08:00
3254018ddb feat(workflow_service): workflow version control api. (#14860)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-03-10 13:34:31 +08:00
f2b7df94d7 fix: return absolute path as the icon url if CONSOLE_API_URL is empty (#15279) 2025-03-10 13:15:06 +08:00
59fd3aad31 feat: add PIP_MIRROR_URL environment variable support (#15353) 2025-03-10 12:59:31 +08:00
a3d18d43ed fix: tool name in agent (#15344) 2025-03-10 11:21:46 +08:00
20cbebeef1 modify OCI_ENDPOINT example value Fixes #15336 (#15337)
Co-authored-by: engchina <atjapan2015@gmail.com>
2025-03-10 10:47:39 +08:00
2968482199 downgrade boto3 to use s3 compatible storage. Fixes #15225 (#15261)
Co-authored-by: engchina <atjapan2015@gmail.com>
2025-03-10 09:56:38 +08:00
znn f8ac382072 raising error if plugin not initialized (#15319) 2025-03-10 09:54:51 +08:00
aef43910b1 fix: octet/stream => application/octet-stream (#15329)
Co-authored-by: crazywoola <427733928@qq.com>
2025-03-10 09:49:27 +08:00
87efd4ab84 Update login.py (#15320) 2025-03-10 09:49:14 +08:00
a8b600845e fix Unicode Escape Characters (#15318) 2025-03-10 09:22:41 +08:00
fcd9fd8513 fix: update image gallery styles (#15301) 2025-03-09 15:32:03 +08:00
ffe73f0124 feat: add docker-compose.override.yaml to .gitignore (#15289) 2025-03-09 10:51:55 +08:00
0c57250d87 feat: expose PYTHON_ENV_INIT_TIMEOUT and PLUGIN_MAX_EXECUTION_TIMEOUT (#15283) 2025-03-09 10:45:19 +08:00
f7e012d216 Fix: reranker OFF logic to preserve user setting (#15235)
Co-authored-by: crazywoola <427733928@qq.com>
2025-03-08 19:08:48 +08:00
c9e3c8b38d fix: dataset pagination state keeps resetting when filters changed (#15268) 2025-03-08 17:38:07 +08:00
908a7b6c3d fix: tool icons are missing (#15241) 2025-03-08 11:04:53 +08:00
cfd7e8a829 fix: missing action value to tools.includeToolNum lang for custom t… (#15239) 2025-03-08 10:55:13 +08:00
804b818c6b docs(readme): add a Traditional Chinese badge for README (#15258)
Signed-off-by: appleboy <appleboy.tw@gmail.com>
2025-03-08 10:48:16 +08:00
9b9d14c2c4 Feat/compliance (#14982) 2025-03-07 14:56:38 -05:00
38fc8eeaba typo チュンク to チャンク (#15240) 2025-03-07 20:55:07 +08:00
e70221a9f1 fix: website remote url display error (#15217)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-03-07 20:32:29 +08:00
126202648f fix message sort (#15231) 2025-03-07 19:36:44 +08:00
dc8475995f fix: web style check task throw error (#15226) 2025-03-07 19:23:06 +08:00
3ca1373274 feat: version tag (#14949) 2025-03-07 18:10:40 +08:00
4aaf07d62a fix: update the link of contact sales in billing page (#15219) 2025-03-07 16:53:01 +08:00
ff10a4603f bugfix:cant correct display latex (#14910) 2025-03-07 14:06:14 +08:00
53eb56bb1e Fix: psycopg2.errors.StringDataRightTruncation value too long for type character varying(40) (#15179) 2025-03-07 12:15:52 +08:00
c6209d76eb chore: update langfuse description (#15136) 2025-03-07 12:15:38 +08:00
99dc8c7871 fix: http node request detect text/xml as file (#15174) 2025-03-07 12:12:06 +08:00
f588ccff72 Feat: settings dark mode (#15184) 2025-03-07 11:56:20 +08:00
69746f2f0b add: allowed_domains marketplace.dify.ai (#15139) 2025-03-07 10:55:08 +08:00
65da9425df Fix: only retrieval plugin-compatible providers when provider_name starts with langgenius (#15133) 2025-03-07 00:41:56 +08:00
b7583e95a5 fix: adjust scroll detection threshold in chat component (#14640) 2025-03-06 22:36:59 +08:00
9437a1a844 docs: add comprehensive Traditional Chinese contribution guide (#14816)
Signed-off-by: Bo-Yi Wu <appleboy.tw@gmail.com>
2025-03-06 22:36:05 +08:00
435564f0f2 fix parent-child retrival count (#15119) 2025-03-06 22:32:38 +08:00
2a6e522a87 Fixed incorrect use of key in the page /plugins?category=discover #15126 (#15127) 2025-03-06 20:14:39 +08:00
9c1db7dca7 modify oracle lexer name Fixes #15106 (#15108)
Co-authored-by: engchina <atjapan2015@gmail.com>
2025-03-06 18:58:51 +08:00
cd7cb19aee hotfix: Fixed tags not updating in real time in the label management of apps #15113 (#15110) 2025-03-06 18:55:25 +08:00
d84fa4d154 fix: with file conversation second chat raise error (#15097) 2025-03-06 18:21:40 +08:00
d574706600 Update ko-KR/plugin.ts (#15103) 2025-03-06 17:40:37 +08:00
8369e59b4d Feat/14573 support more types in form (#15093) 2025-03-06 17:38:50 +08:00
5be8fbab56 feat: refactor date-and-time-picker to use custom dayjs utility and add timezone support (#15101) 2025-03-06 16:24:03 +08:00
6101733232 fix(docker): plugin daemon lacks database dependency (#15038)
Co-authored-by: crazywoola <427733928@qq.com>
2025-03-06 13:18:59 +08:00
778861f461 fix: agent node can't use in parallel (#15047) 2025-03-06 13:13:24 +08:00
6c9d6a4d57 fix: add custom disallowed elements to Markdown component and restore the default disallowed elements (#15057) 2025-03-06 10:57:49 +08:00
9962118dbd Fix: new upgrade page (#12417) 2025-03-06 10:27:13 +08:00
a4b2c10fb8 Feat/compliance report download (#14477) 2025-03-06 10:25:18 +08:00
2c17bb2c36 Feature/newnew workflow loop node (#14863)
Co-authored-by: arkunzz <4873204@qq.com>
2025-03-05 17:41:15 +08:00
da91217bc9 fix docker-compose.yaml and docker-compose.middleware.yaml plugin_daemon environment parameter values (#14992) 2025-03-05 16:27:55 +08:00
2e467cbc74 [FIX]Ruff: lint errors for E731 (#13018) 2025-03-05 15:54:04 +08:00
561a3bf7a0 chore: remove the unused config INNER_API_KEY (#14780) 2025-03-05 15:39:48 +08:00
83cad07fb3 fix endpoint help link 404 (#14981) 2025-03-05 14:59:04 +08:00
4f6a4f244c fix(llm/nodes.py): Ensure that the output returns without any exceptions (#14880) 2025-03-05 14:35:08 +08:00
cef49da576 feat: Add caching mechanism for latest plugin version retrieval (#14968) 2025-03-05 13:47:06 +08:00
e16453591e add bangla (bengali) readme translation and link to all other readme (#14970) 2025-03-05 13:28:08 +08:00
450319790e fix: fixed to AWS Marketplace link (#14955) 2025-03-05 12:33:45 +08:00
4b783037d3 Fix typo: Window -> Windows in .gitattributes comment (#14961) 2025-03-05 12:33:30 +08:00
d04f40c274 Fix empty results issue in full-text search with Milvus vector database (#14885)
Co-authored-by: liusurong.lsr <liusurong.lsr@alibaba-inc.com>
2025-03-05 12:27:01 +08:00
9ab4f35b84 fix: workflow one step run form validate (#14934) 2025-03-05 10:28:46 +08:00
4668c4996a feat: Add caching mechanism for plugin model schemas (#14898) 2025-03-04 18:02:06 +08:00
330dc2fd44 fix: iteration log index error (#14855) 2025-03-04 14:48:14 +08:00
96eed571d9 chore: fix contaminated db migration commit title for add_retry_index_field_to_node_execution (#14787) 2025-03-04 13:28:17 +08:00
24d80000ac chore: Restore the parts that were overwritten during conflict resolution. (#14141) 2025-03-04 11:21:16 +08:00
7ff527293a Fix can not modify required filelist input before starting (#14825) 2025-03-04 10:39:13 +08:00
d88b2dd198 fix: eslint extension not working in vscode (#14725) 2025-03-04 10:29:48 +08:00
66654faef3 fix: fixed incorrect operation of publishing as tool (#14561) 2025-03-04 10:20:18 +08:00
c8de30f3d9 feat: support oracle oci autonomouse database. Fixes #14792 and Fixes #14628. (#14804)
Co-authored-by: engchina <atjapan2015@gmail.com>
2025-03-04 09:22:04 +08:00
Qun f0fb38fed4 unify moderation and annotation's response behavior in message log of chatflow app with other types of app (#14800) 2025-03-04 09:09:32 +08:00
43ab7c22a7 fix EXPOSE_PLUGIN_DEBUGGING_HOST not working (#14742) 2025-03-03 18:32:33 +08:00
829cd70889 fix: When chatflow's file uploads changed from not being supported to… (#14341)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-03-03 17:41:28 +08:00
972421efe2 Fix: update embed.min.js (#14772) 2025-03-03 16:28:38 +08:00
98ada40532 fix: improve layout in dataset selection component (#14756) 2025-03-03 16:06:41 +08:00
931d704612 Fix/explore darkmode (#14751) 2025-03-03 16:06:28 +08:00
bb4e7da720 Fix: web app theme intialization (#14761) 2025-03-03 16:05:35 +08:00
64e122c5f6 chore: Bump ruff to 0.9.9 (#13356) 2025-03-03 15:16:08 +08:00
d0d0bf570e Feat: web app dark mode (#14732) 2025-03-03 14:44:51 +08:00
e53052ab7a fix: one step run (#14724) 2025-03-03 13:29:59 +08:00
cd46ebbb34 fix: (psycopg2.errors.StringDataRightTruncation) value too long for type character varying(40) Fixes #14593 (#14597)
Co-authored-by: engchina <atjapan2015@gmail.com>
2025-03-03 13:16:51 +08:00
8d4136d864 fix: document extractor can't parse excel (#14695) 2025-03-03 10:35:47 +08:00
4125e575af fix: save site setting not work (#14700) 2025-03-03 10:32:20 +08:00
7259c0d69f fix: fix a typo of get_customizable_model_schema method name (#14449) 2025-03-01 22:56:13 +08:00
ce2dd22bd7 fix: typo doc_metadat (#14569) 2025-02-28 22:51:38 +08:00
1eb072fd43 fix: the edges between the nodes inside the copied iteration node are… (#12692) 2025-02-28 19:33:47 +08:00
f49b0822aa add german translation of README & CONTRIBUTING (#14498) 2025-02-28 19:28:26 +08:00
de824d3713 fix: add collapse icon for fullscreen toggle in segment detail compon… (#14530) 2025-02-28 18:57:59 +08:00
c0358d8d0c release/1.0.0 (#14478) 2025-02-28 15:09:23 +08:00
a9e4f345e9 fix: ensure correct provider ID comparison in tool provider query (#14527) 2025-02-28 15:05:16 +08:00
be18b103b7 fix: set method to POST when body exists (#14523) (#14524) 2025-02-28 14:58:01 +08:00
55405c1a26 fix: update default.conf.template avoid embed.min.js 404 (#13980) 2025-02-28 10:49:19 +08:00
779770dae5 fix: workflow tool file error (#14483) 2025-02-27 20:45:39 +08:00
002b16e1c6 fix: properly escape collectionName in query string parameters (#14476) 2025-02-27 18:59:07 +08:00
ddf9eb1f9a fix: page broke down while rendering node contained img and other elements (#14467) 2025-02-27 16:54:54 +08:00
bb4fecf3d1 fix(agent node): tool setting and workflow tool. (#14461) 2025-02-27 16:09:13 +08:00
4fbe52da40 fix: update dependencies and improve app detail handling (#14444) 2025-02-27 15:11:42 +08:00
1e3197a1ea Fixes 14217: database retrieve api and chat-messages api response doc_metadata (#14219) 2025-02-27 14:56:46 +08:00
5f692dfce2 fix: update trial tip time to 3/17 (#14452) 2025-02-27 14:02:04 +08:00
78a7d7fa21 fix: handle gitee old name (#14451) 2025-02-27 13:51:27 +08:00
a9dda1554e Fix/custom model credentials (#14450) 2025-02-27 13:35:41 +08:00
9a417bfc5e fix: update database query and model definitions (#14415) 2025-02-26 18:45:12 +08:00
90bc51ed2e fix: refine wording in LICENSE (#14412) 2025-02-26 02:26:25 -08:00
02dc835721 [chore] upgrade bedrock (#14387) 2025-02-26 16:51:20 +08:00
a05e8f0e37 Update ko-KR/workflow.ts (#14377) 2025-02-26 16:45:48 +08:00
b10cbb9b20 fix: update uniqueIdentifier assignment for marketplace plugins (#14385) 2025-02-26 15:12:40 +08:00
1aaab741a0 fix: workflow note editor placeholder (#14380) 2025-02-26 13:55:25 +08:00
bafa46393c chore: change use pnpm doc (#14373) 2025-02-26 11:37:23 +08:00
45d43c41bc chore: remove useless yarn lock file (#14370) 2025-02-26 11:18:39 +08:00
e944646541 fix(docker-compose.yaml): Fix the issue where Docker Compose doesn't … (#14361) 2025-02-26 10:27:11 +08:00
21e1443ed5 chore: cleanup unchanged overridden method in subclasses of BaseNode (#14281) 2025-02-26 09:41:38 +08:00
93a5ffb037 bugfix: db insert error when notion page_name too long (#14316) 2025-02-26 09:38:18 +08:00
d5711589cd fix(html-parser): sanitize unclosed tags in markdown rendering (#14309) 2025-02-26 09:31:29 +08:00
375a359c97 Fix: style of model card info (#14292) 2025-02-26 09:27:52 +08:00
3228bac56d chore: fix Japanese translation for updating tool (#14332) 2025-02-26 09:27:32 +08:00
c66b4e32db fix(i18n): update verification tips for clarity (#14342) 2025-02-25 21:05:46 +08:00
57b60dd51f fix(provider_manager): fix custom provider (#14340)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-25 20:48:47 +08:00
ff911d0dc5 fix: prioritize fixed model providers in sorting logic (#14338) 2025-02-25 20:28:59 +08:00
7a71498a3e chore(quota): Update deduct quota (#14337)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-25 20:10:08 +08:00
76bcdc2581 chore(provider_manager): Update hosted model's name (#14334)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-25 18:47:33 +08:00
91a218b29d fix: adjust width of file uploader component (#14328) 2025-02-25 17:47:48 +08:00
4a6cbda1b4 chore(provider_manager): Update hosted model's name (#14324)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-25 16:58:57 +08:00
8c08153e33 Fix/plugin badge i18n (#14319) 2025-02-25 16:35:10 +08:00
b44b3866a1 Fix: compatibel with legacy DSL (#14317) 2025-02-25 16:19:34 +08:00
c242bb372b fix: change default theme to light (#14304) 2025-02-25 16:03:33 +08:00
8c9e34133c Fix/disable autoflush in list tools (#14301) 2025-02-25 14:52:08 +08:00
3403ac361a fix: change default theme to light in layout (#14295) 2025-02-25 13:09:23 +08:00
07d6cb3f4a fix: incorrect variable type selection (#13906) 2025-02-25 13:06:58 +08:00
545aa61cf4 fix: tool info (#14293) 2025-02-25 12:56:27 +08:00
9fb78ce827 refactor(tool-engine): Improve tool provider handling with session ma… (#14291) 2025-02-25 12:33:29 +08:00
490b6d092e Fix/plugin race condition (#14253) 2025-02-25 12:20:47 +08:00
42b13bd312 feat: partner badge in marketplace (#14258) 2025-02-25 12:09:37 +08:00
28add22f20 fix: gemini model info (#14282) 2025-02-25 10:48:07 +08:00
ce545274a6 refactor(tool-engine): Optimize tool engine response handling (#14216)
Co-authored-by: xudong2.zhang <xudong2.zhang@zkh.com>
Co-authored-by: crazywoola <427733928@qq.com>
2025-02-25 09:58:43 +08:00
aa6c951e8c chore: compatible with es5 (#14268) 2025-02-25 09:46:54 +08:00
c4f4dfc3fb Fixed: The use of default parameters in API interfaces (#14138) 2025-02-25 09:43:36 +08:00
548f6ef2b6 fix: incorrect score in the chroma vector (#14273) 2025-02-25 09:40:22 +08:00
b15ff4eb8c fix: datasets documents update-by-file api missing assign field 'indexing_technique' internally (#14243) 2025-02-24 20:14:36 +08:00
7790214620 chore: Fix typos in simplified and traditional Chinese in i18n files (#14228) 2025-02-24 11:32:09 +08:00
3942e45cab fix: update refresh logic for plugin list to avoid redundant request and fix model provider list update issue in settings (#14152) 2025-02-24 10:49:43 +08:00
2ace9ae4e4 fix: add responsive layout for file uploader in datasets (#14159) 2025-02-23 19:35:10 +08:00
5ac0ef6253 Retain previous page's search params (#14176) 2025-02-23 13:47:03 +08:00
f552667312 chore: Simplify the unchanged overrided method in VariableAggregatorNode (#14175) 2025-02-23 13:46:48 +08:00
5669a18bd8 fix: typo in README_FR (#14179) (#14180) 2025-02-22 09:28:03 +08:00
a97d73ab05 Update iteration_node.py to fix #14098: Token count increases exponen… (#14100) 2025-02-21 09:15:46 +08:00
252d2c425b feat(docker): add PM2_INSTANCES variable and update entrypoint script… (#14083) 2025-02-20 17:30:19 +08:00
09fc4bba61 fix: agent tools setting (#14097) 2025-02-20 17:20:23 +08:00
79d4db8541 fix: support selecting yaml extension when importing dsl file (#14088) 2025-02-20 15:40:43 +08:00
9c42626772 fix: fix chunk and segment detail components (#14002) 2025-02-20 15:13:43 +08:00
bbfe83c86b Fix/upgrade btn show logic (#14072) 2025-02-20 14:31:56 +08:00
55aa4e424a fix: quota less than zero show error (#14080) 2025-02-20 14:04:13 +08:00
8015f5c0c5 Fix ko-KR/workflow.ts (#14075) 2025-02-20 13:49:04 +08:00
f3fe14863d fix: marketplace search url (#14061) 2025-02-20 10:00:04 +08:00
d96c368660 fix: frontend for <think> tags conflicting with original <details> tags (#14039) 2025-02-19 22:04:11 +08:00
3f34b8b0d1 fix: remove duplicated code (#14047)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-02-19 22:03:24 +08:00
6a58ea9e56 chore(docker): Back to version 0.15.3 (#14042)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-19 20:09:14 +08:00
23888398d1 refactor: Simplify plugin and provider ID generation logic and deduplicate plugin_ids (#14041) 2025-02-19 20:05:01 +08:00
bfbc5eb91e fix: Prevent plugin installation with non-existent plugin IDs (#14038) 2025-02-19 19:37:09 +08:00
98b0d4169e fix workspace model list (#14033) 2025-02-19 18:18:25 +08:00
356cd271b2 fix: MessageLogModal closed unexpectedly (#13617) 2025-02-19 17:32:54 +08:00
baf7561cf8 fix: There is an error in the Japanese in the Japanese documentation (#14026) 2025-02-19 17:29:11 +08:00
b09f22961c fix #13573 (#13843)
Signed-off-by: kenwoodjw <blackxin55+@gmail.com>
2025-02-19 17:28:32 +08:00
f3ad3a5dfd fix: "today" statistics are not displayed in chart view (#13854) 2025-02-19 17:28:10 +08:00
ee49d321c5 fix(workflow): correct edge type mapping typo (#13988) 2025-02-19 17:27:26 +08:00
3467ad3d02 fix: build error 2 (#14003) 2025-02-19 14:00:33 +08:00
6741604027 fix: build web image error (#14000) 2025-02-19 13:42:56 +08:00
35312cf96c fix: remove run unit test ci to ensure build successfully (#13999) 2025-02-19 12:51:35 +08:00
15f028fe59 fix: build web image fail (#13996) 2025-02-19 12:35:16 +08:00
8a2301af56 fix: fix build web image install problem (#13994) 2025-02-19 11:52:36 +08:00
66747a8eef fix: some GitHub run action problem (#13991) 2025-02-19 11:35:19 +08:00
19d413ac1e feat: date and time picker (#13985) 2025-02-19 10:56:18 +08:00
eux 4a332ff1af fix: update the en-US i18n for logAndAnn (#13902) 2025-02-19 09:15:11 +08:00
dc942db52f chore: remove duplicate import statements (#13959) 2025-02-19 09:14:32 +08:00
f535a2aa71 chore: prompt_message is actually assistant_message which is a bit am… (#13839)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-02-19 09:14:10 +08:00
dfdd6dfa20 fix: change the config name and fix typo in description of the number of retrieval executors (#13856) 2025-02-19 09:13:36 +08:00
2af81d1ee3 chore: translate i18n files (#13795)
Co-authored-by: douxc <7553076+douxc@users.noreply.github.com>
2025-02-19 09:13:26 +08:00
ece25bce1a fix: LLM may not return <think> tag, cause thinking time keep increase (#13962) 2025-02-18 21:39:01 +08:00
6fc234183a chore: replace marketplace search api (#13963) 2025-02-18 21:37:11 +08:00
15a56f705f fix: get tool provider (#13958) 2025-02-18 20:18:36 +08:00
899f7e125f Fix/add or update model credentials (#13952) 2025-02-18 20:00:39 +08:00
aa19bb3f30 fix session close issue (#13946) 2025-02-18 19:29:57 +08:00
562852a0ae fix: web test not install pnpm before use (#13931) 2025-02-18 18:18:37 +08:00
a4b992c1ab fix: web style workflow checkout error (#13929) 2025-02-18 18:18:20 +08:00
3460c1dfbd fix: tool id (#13932) 2025-02-18 18:17:41 +08:00
653f6c2d46 fix: fetch configured model providers (#13924) 2025-02-18 17:43:39 +08:00
ed7851a4b3 fix: plugin tool icon (#13918) 2025-02-18 16:54:14 +08:00
cb841e5cde fix: plugins install task (#13899) 2025-02-18 15:20:26 +08:00
4dae0e514e fix: ignore plugin already exists (#13888) 2025-02-18 13:22:39 +08:00
363c46ace8 fix: add missing package xinference_client to pass vdb CI tests (#13865) 2025-02-17 23:37:49 +08:00
abe5aca3e2 Retrieval service optimization (#13849) 2025-02-17 18:22:36 +08:00
bea10b4356 fix: undefined attribute 'query' on MessageAnnotation (#13852) 2025-02-17 18:16:45 +08:00
f5f83f1924 fix: markdown merge error (#13853) 2025-02-17 18:11:07 +08:00
403e2d58b9 Introduce Plugins (#13836)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Signed-off-by: -LAN- <laipz8200@outlook.com>
Signed-off-by: xhe <xw897002528@gmail.com>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: takatost <takatost@gmail.com>
Co-authored-by: kurokobo <kuro664@gmail.com>
Co-authored-by: Novice Lee <novicelee@NoviPro.local>
Co-authored-by: zxhlyh <jasonapring2015@outlook.com>
Co-authored-by: AkaraChen <akarachen@outlook.com>
Co-authored-by: Yi <yxiaoisme@gmail.com>
Co-authored-by: Joel <iamjoel007@gmail.com>
Co-authored-by: JzoNg <jzongcode@gmail.com>
Co-authored-by: twwu <twwu@dify.ai>
Co-authored-by: Hiroshi Fujita <fujita-h@users.noreply.github.com>
Co-authored-by: AkaraChen <85140972+AkaraChen@users.noreply.github.com>
Co-authored-by: NFish <douxc512@gmail.com>
Co-authored-by: Wu Tianwei <30284043+WTW0313@users.noreply.github.com>
Co-authored-by: 非法操作 <hjlarry@163.com>
Co-authored-by: Novice <857526207@qq.com>
Co-authored-by: Hiroki Nagai <82458324+nagaihiroki-git@users.noreply.github.com>
Co-authored-by: Gen Sato <52241300+halogen22@users.noreply.github.com>
Co-authored-by: eux <euxuuu@gmail.com>
Co-authored-by: huangzhuo1949 <167434202+huangzhuo1949@users.noreply.github.com>
Co-authored-by: huangzhuo <huangzhuo1@xiaomi.com>
Co-authored-by: lotsik <lotsik@mail.ru>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: nite-knite <nkCoding@gmail.com>
Co-authored-by: Jyong <76649700+JohnJyong@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: gakkiyomi <gakkiyomi@aliyun.com>
Co-authored-by: CN-P5 <heibai2006@gmail.com>
Co-authored-by: CN-P5 <heibai2006@qq.com>
Co-authored-by: Chuehnone <1897025+chuehnone@users.noreply.github.com>
Co-authored-by: yihong <zouzou0208@gmail.com>
Co-authored-by: Kevin9703 <51311316+Kevin9703@users.noreply.github.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: Boris Feld <lothiraldan@gmail.com>
Co-authored-by: mbo <himabo@gmail.com>
Co-authored-by: mabo <mabo@aeyes.ai>
Co-authored-by: Warren Chen <warren.chen830@gmail.com>
Co-authored-by: JzoNgKVO <27049666+JzoNgKVO@users.noreply.github.com>
Co-authored-by: jiandanfeng <chenjh3@wangsu.com>
Co-authored-by: zhu-an <70234959+xhdd123321@users.noreply.github.com>
Co-authored-by: zhaoqingyu.1075 <zhaoqingyu.1075@bytedance.com>
Co-authored-by: 海狸大師 <86974027+yenslife@users.noreply.github.com>
Co-authored-by: Xu Song <xusong.vip@gmail.com>
Co-authored-by: rayshaw001 <396301947@163.com>
Co-authored-by: Ding Jiatong <dingjiatong@gmail.com>
Co-authored-by: Bowen Liang <liangbowen@gf.com.cn>
Co-authored-by: JasonVV <jasonwangiii@outlook.com>
Co-authored-by: le0zh <newlight@qq.com>
Co-authored-by: zhuxinliang <zhuxinliang@didiglobal.com>
Co-authored-by: k-zaku <zaku99@outlook.jp>
Co-authored-by: luckylhb90 <luckylhb90@gmail.com>
Co-authored-by: hobo.l <hobo.l@binance.com>
Co-authored-by: jiangbo721 <365065261@qq.com>
Co-authored-by: 刘江波 <jiangbo721@163.com>
Co-authored-by: Shun Miyazawa <34241526+miya@users.noreply.github.com>
Co-authored-by: EricPan <30651140+Egfly@users.noreply.github.com>
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: sino <sino2322@gmail.com>
Co-authored-by: Jhvcc <37662342+Jhvcc@users.noreply.github.com>
Co-authored-by: lowell <lowell.hu@zkteco.in>
Co-authored-by: Boris Polonsky <BorisPolonsky@users.noreply.github.com>
Co-authored-by: Ademílson Tonato <ademilsonft@outlook.com>
Co-authored-by: Ademílson Tonato <ademilson.tonato@refurbed.com>
Co-authored-by: IWAI, Masaharu <iwaim.sub@gmail.com>
Co-authored-by: Yueh-Po Peng (Yabi) <94939112+y10ab1@users.noreply.github.com>
Co-authored-by: Jason <ggbbddjm@gmail.com>
Co-authored-by: Xin Zhang <sjhpzx@gmail.com>
Co-authored-by: yjc980121 <3898524+yjc980121@users.noreply.github.com>
Co-authored-by: heyszt <36215648+hieheihei@users.noreply.github.com>
Co-authored-by: Abdullah AlOsaimi <osaimiacc@gmail.com>
Co-authored-by: Abdullah AlOsaimi <189027247+osaimi@users.noreply.github.com>
Co-authored-by: Yingchun Lai <laiyingchun@apache.org>
Co-authored-by: Hash Brown <hi@xzd.me>
Co-authored-by: zuodongxu <192560071+zuodongxu@users.noreply.github.com>
Co-authored-by: Masashi Tomooka <tmokmss@users.noreply.github.com>
Co-authored-by: aplio <ryo.091219@gmail.com>
Co-authored-by: Obada Khalili <54270856+obadakhalili@users.noreply.github.com>
Co-authored-by: Nam Vu <zuzoovn@gmail.com>
Co-authored-by: Kei YAMAZAKI <1715090+kei-yamazaki@users.noreply.github.com>
Co-authored-by: TechnoHouse <13776377+deephbz@users.noreply.github.com>
Co-authored-by: Riddhimaan-Senapati <114703025+Riddhimaan-Senapati@users.noreply.github.com>
Co-authored-by: MaFee921 <31881301+2284730142@users.noreply.github.com>
Co-authored-by: te-chan <t-nakanome@sakura-is.co.jp>
Co-authored-by: HQidea <HQidea@users.noreply.github.com>
Co-authored-by: Joshbly <36315710+Joshbly@users.noreply.github.com>
Co-authored-by: xhe <xw897002528@gmail.com>
Co-authored-by: weiwenyan-dev <154779315+weiwenyan-dev@users.noreply.github.com>
Co-authored-by: ex_wenyan.wei <ex_wenyan.wei@tcl.com>
Co-authored-by: engchina <12236799+engchina@users.noreply.github.com>
Co-authored-by: engchina <atjapan2015@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: 呆萌闷油瓶 <253605712@qq.com>
Co-authored-by: Kemal <kemalmeler@outlook.com>
Co-authored-by: Lazy_Frog <4590648+lazyFrogLOL@users.noreply.github.com>
Co-authored-by: Yi Xiao <54782454+YIXIAO0@users.noreply.github.com>
Co-authored-by: Steven sun <98230804+Tuyohai@users.noreply.github.com>
Co-authored-by: steven <sunzwj@digitalchina.com>
Co-authored-by: Kalo Chin <91766386+fdb02983rhy@users.noreply.github.com>
Co-authored-by: Katy Tao <34019945+KatyTao@users.noreply.github.com>
Co-authored-by: depy <42985524+h4ckdepy@users.noreply.github.com>
Co-authored-by: 胡春东 <gycm520@gmail.com>
Co-authored-by: Junjie.M <118170653@qq.com>
Co-authored-by: MuYu <mr.muzea@gmail.com>
Co-authored-by: Naoki Takashima <39912547+takatea@users.noreply.github.com>
Co-authored-by: Summer-Gu <37869445+gubinjie@users.noreply.github.com>
Co-authored-by: Fei He <droxer.he@gmail.com>
Co-authored-by: ybalbert001 <120714773+ybalbert001@users.noreply.github.com>
Co-authored-by: Yuanbo Li <ybalbert@amazon.com>
Co-authored-by: douxc <7553076+douxc@users.noreply.github.com>
Co-authored-by: liuzhenghua <1090179900@qq.com>
Co-authored-by: Wu Jiayang <62842862+Wu-Jiayang@users.noreply.github.com>
Co-authored-by: Your Name <you@example.com>
Co-authored-by: kimjion <45935338+kimjion@users.noreply.github.com>
Co-authored-by: AugNSo <song.tiankai@icloud.com>
Co-authored-by: llinvokerl <38915183+llinvokerl@users.noreply.github.com>
Co-authored-by: liusurong.lsr <liusurong.lsr@alibaba-inc.com>
Co-authored-by: Vasu Negi <vasu-negi@users.noreply.github.com>
Co-authored-by: Hundredwz <1808096180@qq.com>
Co-authored-by: Xiyuan Chen <52963600+GareArc@users.noreply.github.com>
2025-02-17 17:05:13 +08:00
222df44d21 Retrieval Service efficiency optimization (#13543) 2025-02-17 14:09:57 +08:00
566e548713 fix: update trial expire time to 3/17 (#13796) 2025-02-17 10:02:21 +08:00
1434d54e7a Revert "Feat: compliance report download" (#13799) 2025-02-17 09:58:56 +08:00
4229d0f9a7 Revert "Feat/compliance" (#13798) 2025-02-16 20:58:25 -05:00
7f9eb35e1f Feat: compliance report download (#13282) 2025-02-17 09:43:41 +08:00
ed7d7a74ea Feat/compliance (#13548) 2025-02-16 20:31:52 -05:00
035e54ba4d fix: add install a package to improve the accuracy of guessing mime type and file extension (main) (#13752) 2025-02-16 21:39:40 +08:00
284707c3a8 perf(message): optimize message loading and reduce SQL queries (#13720) 2025-02-15 12:19:01 +08:00
1f63028a83 fix: reranking_enable setting failed #13668 (#13721) 2025-02-14 17:42:09 +08:00
8a0aa91ed7 Non-Streaming Models Do Not Return Results Properly in _handle_invoke_result (#13571)
Co-authored-by: crazywoola <427733928@qq.com>
2025-02-14 17:02:04 +08:00
62079991b7 fix:Knowledge Base with Parent-Child segment mode not support in Agent (#13663) 2025-02-14 14:34:59 +08:00
4e7e172ff3 Chore/format code (#13691)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-02-14 13:38:17 +08:00
33a565a719 perf: Implemented short-circuit evaluation for logical conditions (#13674)
Co-authored-by: liusurong.lsr <liusurong.lsr@alibaba-inc.com>
2025-02-13 19:35:03 +08:00
f0b9257387 fix: error in obtaining end_to_node_id during conditional parallel execution (#13673) 2025-02-13 18:00:28 +08:00
c398c9cb6a chore:Remove duplicate code, lines 8 to 27, same as lines 29 & 45 to 62. (#13659)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-02-13 14:51:38 +08:00
a3d3e30e3a fix: fix tongyi models blocking mode with incremental_output=stream (#13620) 2025-02-13 10:24:05 +08:00
2b86465d4c fix document extractor node incorrectly processing doc and ppt files (#12902) 2025-02-12 18:04:28 +08:00
6529240da6 fix: no longer using old app detail cover when switch pathname (#13585) 2025-02-12 15:02:11 +08:00
0751ad1eeb feat(vdb): add HNSW vector index for TiDB vector store with TiFlash (#12043) 2025-02-12 13:53:51 +08:00
786550bdc9 fix: changed topics/keywords to topic/keywords (#13544) 2025-02-12 09:15:15 +08:00
bde756a1ab chore:Remove useless brackets and format code (#13479)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-02-11 22:05:29 +08:00
423fb2d7bc Ensure the 'inputs' field in /chat-messages takes effect every time (#7955)
Co-authored-by: Your Name <you@example.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
2025-02-11 18:44:56 +08:00
f96b4f287a fix: iteration node log time error (#13511) 2025-02-11 16:35:21 +08:00
c00e7d3f65 fix: retry log running error (#13472)
Co-authored-by: Novice Lee <novicelee@NoviPro.local>
2025-02-11 15:48:55 +08:00
1f38d4846b fix: issue #13483 and #13434 (#13518)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-02-11 12:45:49 +08:00
47a64610ca Fix the issue of repeated escaping of quotes in hit test (#13477) 2025-02-11 09:58:31 +08:00
f0a845f0f9 fix: removed LLM output from the main README (#13504) 2025-02-11 09:09:07 +08:00
abec23118d feat: add support for X-Forwarded-Port in ProxyFix middleware (#13102) 2025-02-10 22:28:29 +08:00
0957119550 fix: update UTC time format for consistency (#13471) 2025-02-10 19:37:50 +08:00
f48fa3e4e8 chore: translate i18n files (#13452)
Co-authored-by: douxc <7553076+douxc@users.noreply.github.com>
2025-02-10 14:14:15 +08:00
5ffc58d6ca feat: improve think content display (#13431) 2025-02-10 14:08:17 +08:00
7d958635f0 Fix/add trial expire tip time (#13464) 2025-02-10 12:53:59 +08:00
33990426c1 fix: add ids in FetchDatasetsParams (#13459) 2025-02-10 12:28:36 +08:00
9f3fc7ebf8 ci: make ci safe using zizmor (#13397)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-02-10 12:26:08 +08:00
c8357da13b [Fix] Sagemaker LLM Provider can't adjust context size, it'a always 2… (#13462)
Co-authored-by: Yuanbo Li <ybalbert@amazon.com>
2025-02-10 12:25:04 +08:00
2290f14fb1 feat: add tooltip if user's anthropic trial quota still available (#13418) 2025-02-10 10:44:20 +08:00
7796984444 Fix: Removed model params except max_token for deepseek r1 in volcengine (#13446) 2025-02-10 10:26:26 +08:00
75113c26c6 Feat : add deepseek support for tongyi (#13445) 2025-02-10 10:26:03 +08:00
xhe 939a9ecd21 chore: use the wrap thinking api for volcengine (#13432)
Signed-off-by: xhe <xw897002528@gmail.com>
2025-02-10 10:25:07 +08:00
f307c7cd88 feat: Docker adds SSRF-related timeout settings (#13395) 2025-02-10 10:21:31 +08:00
33ecceb90c Feat: add comparison table to main readme (#13435) 2025-02-10 10:13:46 +08:00
e0d1cab079 fix: add missed background color to iteration node (#13448) 2025-02-10 10:04:56 +08:00
811d72a727 feat: added a _position.yaml for vertex ai provider (#13367) 2025-02-09 10:29:07 +08:00
c3c575c2e1 Fix: model selector UI hover issue (#13396) 2025-02-09 10:24:57 +08:00
c189629eca Fix(i18n): Refine zh-Hant workflow translations (#13421) 2025-02-09 10:24:45 +08:00
37117c22d4 feat(model): support Gemini 2.0 Flash Lite Preview model (02-05) in Google's model provider (#13399) 2025-02-09 10:22:33 +08:00
b05e9d2ab4 feat: update backend documentation (#13374) 2025-02-08 20:36:33 +08:00
0451333990 fix(settings): add notClearable prop to language selection (#13406) 2025-02-08 20:36:23 +08:00
ab2e6c19a4 Fixes #13415 reset model-provider-page form value use schema.default (#13416) 2025-02-08 20:34:52 +08:00
f7959bc887 fix(chatbot): update button class to include text color for better visibility (#13411) 2025-02-08 20:34:37 +08:00
45874c699d Nitpick/fix typos in document (#13413) 2025-02-08 20:33:45 +08:00
286cdc41ab reasoning model unified think tag is <think></think> (#13392)
Co-authored-by: crazywoola <427733928@qq.com>
2025-02-08 16:19:41 +08:00
78708eb5d5 fix: merge conflict between #11301 and #11885 (#13391) 2025-02-08 14:38:10 +08:00
cf36745770 fix(workflow_tool): enable File parameter support after workflow is published as a tool (#13175) 2025-02-08 12:30:00 +08:00
6622c7f98d fix: Fix HTTP request node non 443 port SSL site inaccessible (#13376) 2025-02-08 12:00:45 +08:00
3112b74527 fix: build failed due to getPrevChatList no longer exists (#13383) 2025-02-08 11:59:02 +08:00
b3ae6b634f feat: add pan and zoom support for MiniMap (#13382) 2025-02-08 11:57:41 +08:00
982bca5d40 fix: add rate limiting to prevent brute force on password reset (#13292) 2025-02-08 10:28:31 +08:00
c8dcde6cd0 fix: Gemini 2.0 Flash 001 model yaml file naming (#13372) 2025-02-08 09:12:42 +08:00
8f9db61688 feat: added new silicon flow models (#13369) 2025-02-08 09:12:22 +08:00
ebdbaf34e6 chore: translate i18n files (#13349)
Co-authored-by: JzoNgKVO <27049666+JzoNgKVO@users.noreply.github.com>
2025-02-07 22:41:25 +08:00
a081b1e79e fix: add compatibility config for third-party S3-compatible providers (#13354)
Co-authored-by: zhaoqingyu.1075 <zhaoqingyu.1075@bytedance.com>
2025-02-07 22:35:24 +08:00
38c31e64db add enable_search parameter to qwen_max, plus, turbo (#13335)
Co-authored-by: steven <sunzwj@digitalchina.com>
2025-02-07 22:16:26 +08:00
ae6f67420c Chore: update app detail panel (#13337) 2025-02-07 18:56:43 +08:00
ca19bd31d4 chore(*): Bump version to 0.15.3 (#13308)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 15:20:05 +08:00
413dfd5628 feat: add completion mode and context size options for LLM configuration (#13325)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 15:08:53 +08:00
f9515901cc fix: Azure AI Foundry model cannot be used in the workflow (#13323)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 14:52:57 +08:00
3f42fabff8 chore:improve thinking display for llm from xinference and ollama pro… (#13318) 2025-02-07 14:29:29 +08:00
1caa578771 chore(*): Update style of thinking (#13319)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 14:06:35 +08:00
b7c11c1818 Fix the problem of Workflow terminates after parallel tasks execution, merge node not triggered (#12498)
Co-authored-by: Novice Lee <novicelee@NoviPro.local>
2025-02-07 13:56:08 +08:00
3eb3db0663 chore: refactor the OpenAICompatible and improve thinking display (#13299) 2025-02-07 13:28:46 +08:00
be46f32056 fix(credits): require model name equals (#13314)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 13:28:17 +08:00
6e5c915f96 feat(model): add deepseek-r1 for openrouter (#13312) 2025-02-07 12:39:13 +08:00
04d13a8116 feat(credits): Allow to configure model-credit mapping (#13274)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-07 11:01:31 +08:00
e638ede3f2 Update README_TR.md (#13294) 2025-02-07 09:11:39 +08:00
2348abe4bf feat: added a couple of models not defined in vertex ai, that were already … (#13296) 2025-02-07 09:11:25 +08:00
f7e7a399d9 feat:add think tag display for xinference deepseek r1 (#13291) 2025-02-06 22:04:58 +08:00
ba91f34636 fix: incorrect transferMethod assignment for remote file (#13286) 2025-02-06 19:32:21 +08:00
16865d43a8 feat: add deepseek models for volcengine provider (#13283)
Co-authored-by: zhaoqingyu.1075 <zhaoqingyu.1075@bytedance.com>
2025-02-06 18:20:03 +08:00
0d13aee15c feat:add deepseek r1 think display for ollama provider (#13272) 2025-02-06 15:32:10 +08:00
49b4144ffd fix: add dataset edit permissions (#13223) 2025-02-06 14:26:16 +08:00
186e2d972e chore(deps): bump katex from 0.16.10 to 0.16.21 in /web (#13270)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-06 13:27:07 +08:00
40dd63ecef Upgrade oracle models (#13174)
Co-authored-by: engchina <atjapan2015@gmail.com>
2025-02-06 13:24:27 +08:00
6d66d6da15 feat(model_providers): Support deepseek-r1 for Nvidia Catalog (#13269)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-06 13:03:19 +08:00
03ec3513f3 Fix bug large data no render (#12683)
Co-authored-by: ex_wenyan.wei <ex_wenyan.wei@tcl.com>
2025-02-06 13:00:04 +08:00
87763fc234 feat(model_providers): Support deepseek for Azure AI Foundry (#13267)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-06 12:45:48 +08:00
f6c44cae2e feat(model): add gemini-2.0 model (#13266) 2025-02-06 12:28:59 +08:00
xhe da2ee04fce fix: correct linewrap think display in generic openai api (#13260)
Signed-off-by: xhe <xw897002528@gmail.com>
2025-02-06 10:53:08 +08:00
7673c36af3 feat(model): add gemini-2.0-flash-thinking-exp-01-21 (#13230) 2025-02-06 10:01:00 +08:00
9457b2af2f feat: added models :gemini 2.0 flash 001 and gemini 2.0 pro exp 02-05 (#13247) 2025-02-06 09:58:39 +08:00
7203991032 feat: add parameter "reasoning_effort" and Openai o3-mini (#13243) 2025-02-06 09:29:48 +08:00
xhe 5a685f7156 feat: add think display for volcengine and generic openapi (#13234)
Signed-off-by: xhe <xw897002528@gmail.com>
2025-02-06 09:24:40 +08:00
a6a25030ad fix: updated _position.yaml to include the latest model already integ… (#13245) 2025-02-06 09:21:51 +08:00
00458a31d5 feat: added deepseek r1 and v3 to siliconflow (#13238) 2025-02-05 21:59:18 +08:00
c6ddf6d6cc feat(model_providers): Add Groq DeepSeek-R1-Distill-Llama-70b (#13229)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-05 19:15:29 +08:00
34b21b3065 feat: Add o3-mini and o3-mini-2025-01-31 model variants (#13129)
Co-authored-by: crazywoola <427733928@qq.com>
2025-02-05 17:04:45 +08:00
8fbb355cd2 chore: squash system dependencies installation steps (#13206) 2025-02-05 16:42:53 +08:00
e8b3b7e578 Fix new variables in the conversation opener would override prompt_variables (#13191) 2025-02-05 16:16:00 +08:00
59ca44f493 chore(model_runtime): Move deepseek ahead in the providers list. (#13197)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-05 16:08:28 +08:00
9e1457c2c3 fix: mypy checks violation in AzureBlobStorage (#13215) 2025-02-05 15:56:23 +08:00
fac83e14bc Use DefaultAzureCredential for managed identity in azure blob extention (#11559) 2025-02-05 13:43:43 +08:00
a97cec57e4 fix: SSRF proxy file descriptor leak in concurrent requests (#13108) 2025-02-05 13:10:27 +08:00
38c10b47d3 Feat: add linkedin to readme (#13203) 2025-02-05 12:27:58 +08:00
1a2523fd15 feat: bedrock_endpoint_url (#12838) 2025-02-05 12:24:24 +08:00
03243cb422 Modify params for bedrock retrieve generate (#13182) 2025-02-05 12:17:42 +08:00
2ad7ee0344 chore: add tests for build docker image when dockerfile changed (#10732) 2025-02-05 11:40:22 +08:00
55ce3618ce fix: Dollar Sign Handling in Markdown (#13178)
Co-authored-by: crazywoola <427733928@qq.com>
2025-02-05 11:00:56 +08:00
e9e34c1ab2 Install apt dependencies using bookworm source, consistent with base image. Remove unnecessary, error-prone pins (#13176) 2025-02-05 10:07:22 +08:00
d4c916b496 chore(pyproject): Add type stubs into pyproject.toml (#13145)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-04 12:01:28 +08:00
8fbc9c9342 Solve circular dependency issue between workflow/constants.ts file and default.ts file (#13165) 2025-02-04 09:26:01 +08:00
1b6fd9dfe8 fix: set indexing technique from dataset during update-by-text (#13155) 2025-02-03 11:06:03 +08:00
304467e3f5 fix: not install libmagic raise error (#13146) 2025-02-03 11:05:20 +08:00
7452032d81 add azure openai api version 2024-12-01-preview (#13135) 2025-02-03 11:04:20 +08:00
87e2048f1b nitpick: fix small typos in template.en.mdx (#13156) 2025-02-03 11:03:11 +08:00
d876084392 chore: upgrade libldap2 (#13158) 2025-02-03 11:02:14 +08:00
840729afa5 feat: the think tag display of siliconflow's deepseek r1 (#13153) 2025-02-02 21:55:13 +08:00
941ad03f3c pass model and cost so that langfuse can show cost (#13117) 2025-02-02 15:27:27 +08:00
d73d191f99 feature. add feat to modify metadata via dataset api (#13116) 2025-02-02 15:27:12 +08:00
c2664e0283 chore: fix wrong VectorType match case (#13123) 2025-02-02 15:26:59 +08:00
ee61cede4e test(huggingface_hub): Skip the failed test temporarily. (#13142)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-02 14:47:26 +08:00
b47669b80b fix: deduct LLM quota after processing invoke result (#13075)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-02-02 12:05:11 +08:00
c0d0c63592 feat: switch to chat messages before regenerated (#11301)
Co-authored-by: zuodongxu <192560071+zuodongxu@users.noreply.github.com>
2025-01-31 13:05:10 +08:00
b09c39c8dc refactor: avoid to use extra space when finding model by name (#13043) 2025-01-30 15:08:29 +08:00
b4b09ddc3c add tongyi qwen2.5-14b/7b-instruct-1m model (#13089) 2025-01-29 11:58:01 +08:00
d0a21086bd refactor: Update Firecrawl API parameters and default settings (#13082) 2025-01-29 11:21:05 +08:00
d44882c1b5 refactor: reduce duplciate code by inheritance (#13073) 2025-01-28 10:52:01 +08:00
23c68efa2d fix: fix the formatter is not applied on log file (#12704) 2025-01-28 10:49:58 +08:00
560c5de1b7 Fixed Novita AI color and added DeepSeek R1 model (#13074) 2025-01-28 10:38:54 +08:00
5d91dbd000 Set default LOG_LEVEL to INFO for celery workers and beat (#13066)
Co-authored-by: Abdullah AlOsaimi <189027247+osaimi@users.noreply.github.com>
2025-01-27 17:09:41 +08:00
6c31ee36cd fix qwen-vl blocking mode (#13052) 2025-01-27 11:35:23 +08:00
edc29780ed fix: "Model schema not found" error only in agents (#12655) (#12760) 2025-01-27 11:33:13 +08:00
aad7e4dd1c fix:Improve MIME type detection for remote URL uploads using python-magic (#12693) 2025-01-27 11:33:03 +08:00
a6a727e8a4 feat: add inner API to create workspace without requiring email (#13021) 2025-01-26 15:36:56 +08:00
d1fc65fabc fix: adjust iteration node dark style (#13051) 2025-01-26 11:19:41 +08:00
d4be5ef9de Update Novita AI predefined models (#13045) 2025-01-26 09:25:29 +08:00
1374be5a31 fix: Unexpected tag creation when pressing enter during tag conversion (#13041) 2025-01-25 19:30:26 +08:00
b2bbc28580 support bedrock kb: retrieve and generate (#13027) 2025-01-25 17:28:06 +08:00
59b3e672aa feat: add agent thinking content display of deepseek R1 (#12949) 2025-01-24 20:13:42 +08:00
a2f8bce8f5 chore: add Japanese translation: model_providers/bedrock (#13016) 2025-01-24 18:43:33 +08:00
a2b9adb3a2 Change typo in translation (#13004) 2025-01-24 13:48:21 +08:00
28067640b5 fix: wrong zh_Hans translation: Ohio (#13006) 2025-01-24 13:41:20 +08:00
da67916843 feat: add glm-4-air-0111 (#12997)
Co-authored-by: lowell <lowell.hu@zkteco.in>
2025-01-24 10:04:46 +08:00
e54ce479ad Feat/prompt editor dark theme (#12976) 2025-01-23 16:20:00 +08:00
6024d8a42d refactor: Update Firecrawl to use v1 API (#12574)
Co-authored-by: Ademílson Tonato <ademilson.tonato@refurbed.com>
2025-01-23 11:14:48 +08:00
f565f08aa0 fix: get property of string type variable caused page crash (#12969) 2025-01-23 11:02:29 +08:00
fd4afe09f8 fix: tools translate search (#12950)
Co-authored-by: lowell <lowell.hu@zkteco.in>
2025-01-22 19:27:02 +08:00
dd0904f95c feat: add giteeAI risk control identification. (#12946) 2025-01-22 19:26:25 +08:00
4c3076f2a4 feat: add pg vector index (#12338)
Co-authored-by: huangzhuo <huangzhuo1@xiaomi.com>
2025-01-22 17:07:18 +08:00
1e73f63ff8 chore: update version to 0.15.2 in packaging and docker configurations (#12940)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-01-22 16:40:44 +08:00
d167d5b1be feat(ark): support doubao 1.5 series of models (#12935) 2025-01-22 15:25:57 +08:00
71fa14f791 fix: resolve clipboard.writeText failure under HTTP protocol (#12936) 2025-01-22 15:18:23 +08:00
8dd1873e76 feat: workflow note dark theme (#12932) 2025-01-22 14:22:33 +08:00
f91f5c7401 fix(batch_create_segment_to_index_task): count max_position in memory. (#12929) 2025-01-22 13:39:02 +08:00
c62b7cc679 chore(build): bump poetry from 1.x to 2.x (#12369) 2025-01-22 13:38:24 +08:00
3ee213ddca add milvus full text search setting (#12930) 2025-01-22 13:36:39 +08:00
8429877b02 fix: Agent is configured for ReAct inference mode, an error is reported when viewing the agent log (#12920)
Co-authored-by: crazywoola <427733928@qq.com>
2025-01-22 13:20:32 +08:00
05a0faff6a fix: app token's last_used_at can't be updated when last_used_at is null (#12770) 2025-01-22 11:01:45 +08:00
e09f6e4987 feat: support config chunk length by env (#12925) 2025-01-22 10:43:40 +08:00
e23f4b0265 feat: add gemini-2.0-flash-thinking-exp-01-21 (#12924) 2025-01-22 10:14:37 +08:00
f582d4a13e feat: Add ability to change profile avatar (#12642) 2025-01-22 10:11:31 +08:00
2f41bd495d fix:Fix a bug that returns null when the passed path is a file. (#12775)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-01-22 10:10:03 +08:00
162a8c4393 fix update segment keyword with same content (#12908) 2025-01-21 19:19:32 +08:00
3d1ce4c53f bug: fixed bedrock rerank bug (#12774)
Co-authored-by: hobo.l <hobo.l@binance.com>
2025-01-21 19:09:36 +08:00
6db3ae9b8e chore: remove webapp ga (#12909) 2025-01-21 18:38:33 +08:00
6d0cb9dc33 fix: variable panel scrollable (#12769)
Co-authored-by: zhaoqingyu.1075 <zhaoqingyu.1075@bytedance.com>
2025-01-21 17:50:42 +08:00
46e95e8309 fix: OpenAI o1 Bad Request Error (#12839) 2025-01-21 15:29:13 +08:00
a7b9375877 Update deepseek model configuration (#12899) 2025-01-21 15:28:11 +08:00
0c6a8a130e fix: external dataset hit test display issue(#12564) (#12612)
Co-authored-by: zhuxinliang <zhuxinliang@didiglobal.com>
2025-01-21 14:31:45 +08:00
9903f1e703 add deepseek-reasoner (#12898) 2025-01-21 12:40:58 +08:00
6fad719e42 chore(fix): Invalid quotes for using Array[String] in HTTP request node as JSON body (#12761) 2025-01-21 10:38:44 +08:00
9aaee8ee47 fix: Issues related to the deletion of conversation_id (#12488) (#12665) 2025-01-21 10:25:35 +08:00
166221d784 chore(lint): fix quotes for f-string formatting by bumping ruff to 0.9.x (#12702) 2025-01-21 10:12:29 +08:00
925d69a2ee feat:Support Minimax-Text-01 (#12763) 2025-01-21 10:08:53 +08:00
5ff08e241a fix: serply credential check query might return empty records (#12784) 2025-01-21 09:38:56 +08:00
3defd24087 feat: allow updating chunk settings for the existing documents (#12833) 2025-01-21 09:25:40 +08:00
9d86147d20 fix: SparkLite API Auth error (#12781) (#12790) 2025-01-20 22:21:21 +08:00
80801ac4ab fix: "parmas" spelling mistake. (#12875) 2025-01-20 22:18:30 +08:00
210926cd91 Fix suggested_question_prompt (#12738) 2025-01-20 22:16:30 +08:00
677a69deed fix(i18n): correct typo in zh-Hant translation (#12852) 2025-01-20 22:15:41 +08:00
8dfdee21ce chore: fix chinese translation for 'recall' (#12772)
Co-authored-by: zhaoqingyu.1075 <zhaoqingyu.1075@bytedance.com>
2025-01-20 22:15:26 +08:00
6ea77ab4cd fix: DeepSeek API Error with response format active (text and json_object) (#12747) 2025-01-20 22:04:18 +08:00
e3c996688d feat: enhance credential extraction logic based on configurate method (#12853) 2025-01-20 21:59:22 +08:00
bc3a570dda fix: Fix rerank model switching issue (#12721)
ok
2025-01-14 15:42:45 +08:00
0800021a2d chore: translate i18n files (#12708)
Co-authored-by: JzoNgKVO <27049666+JzoNgKVO@users.noreply.github.com>
2025-01-14 13:35:23 +08:00
435eddd867 Feat: copyright modification (#12707) 2025-01-14 10:00:57 +08:00
6e0fb055d1 chore: bump version to 0.15.1 (#12690)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-01-13 19:21:06 +08:00
eux 1e9ac7ffeb feat: add table of contents to Knowledge API doc (#12688) 2025-01-13 18:31:43 +08:00
b4873ecb43 [fix] support feature restore (#12563) 2025-01-13 18:29:06 +08:00
mbo 1859d57784 api tool support multiple env url (#12249)
Co-authored-by: mabo <mabo@aeyes.ai>
2025-01-13 17:49:30 +08:00
69d58fbb50 Add new integration with Opik Tracking tool (#11501) 2025-01-13 17:41:44 +08:00
cb34991663 fix: add type hints for App model and improve error handling in audio services (#12677)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-01-13 15:55:16 +08:00
c700364e1c fix: Update variable handling in VariableAssignerNode and clean up app_dsl_service (#12672)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-01-13 15:54:26 +08:00
9a6b1dc3a1 Revert "Feat/new saas billing" (#12673) 2025-01-13 15:17:43 +08:00
54b5b80a07 fix(workflow): fix answer node stream processing in conditional branches (#12510) 2025-01-13 14:54:21 +08:00
831459b895 fix: ruff with statements (#12578)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2025-01-13 09:55:55 +08:00
4e101604c3 fix: ruff check for True if ... else (#12576)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-01-13 09:38:48 +08:00
a6455269f0 chore: Adjust translations to align with Taiwanese Mandarin conventions (#12633) 2025-01-13 09:12:43 +08:00
cd257b91c5 Fix pandas indexing method for knowledge base imports (#12637) (#12638)
Co-authored-by: CN-P5 <heibai2006@qq.com>
2025-01-13 09:06:59 +08:00
d8f57bf899 Feat/new saas billing (#12591) 2025-01-12 14:50:46 +08:00
989fb11fd7 improve the readability of the function generate_api_key (#12552) 2025-01-09 21:30:17 +08:00
140965b738 chore: translate i18n files (#12543)
Co-authored-by: WTW0313 <30284043+WTW0313@users.noreply.github.com>
2025-01-09 20:30:06 +08:00
14ee51aead Feat/add knowledge include all filter (#12537) 2025-01-09 20:21:25 +08:00
2e97ba5700 fix: Add datasets list access control and fix datasets config display issue (#12533)
Co-authored-by: nite-knite <nkCoding@gmail.com>
2025-01-09 17:44:11 +08:00
f549d53b68 fix: sum costs return error value on overview page (#12534) 2025-01-09 16:04:14 +08:00
a085ad4719 feat: show workflow running status (#12531) 2025-01-09 15:36:13 +08:00
f230a9232e fix: Parsing OpenAPI spec for external tools (#12518) (#12530) 2025-01-09 15:30:43 +08:00
e84bf35e2a fix: same chunk insert deadlock (#12502)
Co-authored-by: huangzhuo <huangzhuo1@xiaomi.com>
2025-01-09 15:16:41 +08:00
eux 20f090537f feat: add GET upload file API endpoint to dataset service api (#11899) 2025-01-09 14:52:09 +08:00
dbe7a7c4fd Fix: Add a INFO-level log when fallback to gpt2tokenizer (#12508) 2025-01-09 14:37:46 +08:00
b7a4e3903e fix: add last_refresh_time to track the validity of is_other_tab_refreshing (#12517) 2025-01-09 10:40:45 +08:00
b4c1c2f731 fix: Reverse sync docker-compose-template.yaml (#12509) 2025-01-09 10:21:22 +08:00
1b940e7daa feat: add ci job to test template for docker compose (#12514) 2025-01-09 00:04:58 +08:00
893 changed files with 34494 additions and 22539 deletions

.gitattributes (vendored, 2 changed lines)

@@ -1,5 +1,5 @@
 # Ensure that .sh scripts use LF as line separator, even if they are checked out
-# to Windows(NTFS) file-system, by a user of Docker for Window.
+# to Windows(NTFS) file-system, by a user of Docker for Windows.
 # These .sh scripts will be run from the Container after `docker compose up -d`.
 # If they appear to be CRLF style, Dash from the Container will fail to execute
 # them.

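The comment fixed above documents an LF-only policy for shell scripts; the attribute pattern itself lives elsewhere in the file and is not part of this hunk. As a sketch, a rule of this shape (the exact pattern in the repository may differ) and a quick way to check which attributes Git resolves:

# Hypothetical .gitattributes rule of the kind the comment describes:
#   *.sh text eol=lf
# Verify the attributes applied to a script (example path, run from the repo root):
git check-attr text eol -- docker/entrypoint.sh
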
.github/ISSUE_TEMPLATE/tracker.yml (new file, 13 lines)

@@ -0,0 +1,13 @@
+name: "👾 Tracker"
+description: For inner usages, please donot use this template.
+title: "[Tracker] "
+labels:
+  - tracker
+body:
+  - type: textarea
+    id: content
+    attributes:
+      label: Blockers
+      placeholder: "- [ ] ..."
+    validations:
+      required: true

(GitHub Actions workflow; file name not shown)

@@ -4,7 +4,6 @@ on:
   pull_request:
     branches:
       - main
-      - plugins/beta
     paths:
       - api/**
       - docker/**
@@ -27,6 +26,9 @@ jobs:
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+          persist-credentials: false
 
       - name: Setup Poetry and Python ${{ matrix.python-version }}
         uses: ./.github/actions/setup-poetry

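The `fetch-depth: 0` / `persist-credentials: false` pair added after this checkout step repeats across the workflows below, likely part of the same hardening as commit 9f3fc7ebf8 ("ci: make ci safe using zizmor"). A brief sketch of what `persist-credentials: false` avoids:

# By default, actions/checkout writes the job's GITHUB_TOKEN into the clone's
# .git/config, where any later step in the job could read it back out:
git config --get http.https://github.com/.extraheader
# With persist-credentials: false, the token is never written to disk.
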
(GitHub Actions workflow; file name not shown)

@@ -5,7 +5,7 @@ on:
     branches:
       - "main"
       - "deploy/dev"
-      - "plugins/beta"
+      - "release/1.0.1-fix1"
 
   release:
     types: [published]
@@ -80,10 +80,12 @@ jobs:
           cache-to: type=gha,mode=max,scope=${{ matrix.service_name }}
 
       - name: Export digest
+        env:
+          DIGEST: ${{ steps.build.outputs.digest }}
         run: |
           mkdir -p /tmp/digests
-          digest="${{ steps.build.outputs.digest }}"
-          touch "/tmp/digests/${digest#sha256:}"
+          sanitized_digest=${DIGEST#sha256:}
+          touch "/tmp/digests/${sanitized_digest}"
 
       - name: Upload digest
         uses: actions/upload-artifact@v4
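This change moves the untrusted `${{ steps.build.outputs.digest }}` expression out of the inline script and into an `env:` entry, then trims the prefix with POSIX parameter expansion instead of template interpolation, so the value is never spliced into shell source. A minimal sketch of the expansion itself:

# ${VAR#pattern} strips the shortest match of pattern from the front of $VAR.
DIGEST="sha256:0123abcd"            # hypothetical digest value
sanitized_digest=${DIGEST#sha256:}
echo "$sanitized_digest"            # prints: 0123abcd
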
@@ -133,10 +135,15 @@ jobs:
       - name: Create manifest list and push
         working-directory: /tmp/digests
+        env:
+          IMAGE_NAME: ${{ env[matrix.image_name_env] }}
         run: |
           docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
-            $(printf '${{ env[matrix.image_name_env] }}@sha256:%s ' *)
+            $(printf "$IMAGE_NAME@sha256:%s " *)
 
       - name: Inspect image
+        env:
+          IMAGE_NAME: ${{ env[matrix.image_name_env] }}
+          IMAGE_VERSION: ${{ steps.meta.outputs.version }}
         run: |
-          docker buildx imagetools inspect ${{ env[matrix.image_name_env] }}:${{ steps.meta.outputs.version }}
+          docker buildx imagetools inspect "$IMAGE_NAME:$IMAGE_VERSION"

(GitHub Actions workflow; file name not shown)

@@ -20,6 +20,9 @@ jobs:
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+          persist-credentials: false
 
       - name: Setup Poetry and Python
         uses: ./.github/actions/setup-poetry

.github/workflows/expose_service_ports.sh

@@ -9,6 +9,6 @@ yq eval '.services["pgvecto-rs"].ports += ["5431:5432"]' -i docker/docker-compose.yaml
 yq eval '.services["elasticsearch"].ports += ["9200:9200"]' -i docker/docker-compose.yaml
 yq eval '.services.couchbase-server.ports += ["8091-8096:8091-8096"]' -i docker/docker-compose.yaml
 yq eval '.services.couchbase-server.ports += ["11210:11210"]' -i docker/docker-compose.yaml
-yq eval '.services.tidb.ports += ["4000:4000"]' -i docker/docker-compose.yaml
+yq eval '.services.tidb.ports += ["4000:4000"]' -i docker/tidb/docker-compose.yaml
 echo "Ports exposed for sandbox, weaviate, tidb, qdrant, chroma, milvus, pgvector, pgvecto-rs, elasticsearch, couchbase"

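Each `yq eval '... += [...]' -i` line above appends a host-port mapping to a compose file in place; the change retargets the TiDB edit at the compose file that now lives under docker/tidb/. A hypothetical spot-check after running the script:

# The tidb service should now expose port 4000 on the host:
yq eval '.services.tidb.ports' docker/tidb/docker-compose.yaml
# expected output includes: - "4000:4000"
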
(GitHub Actions workflow; file name not shown)

@@ -4,7 +4,6 @@ on:
   pull_request:
     branches:
       - main
-      - plugins/beta
 
 concurrency:
   group: style-${{ github.head_ref || github.run_id }}
@@ -18,6 +17,9 @@ jobs:
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+          persist-credentials: false
 
       - name: Check changed files
         id: changed-files
@ -60,6 +62,9 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: false
- name: Check changed files
id: changed-files
@ -87,7 +92,7 @@ jobs:
- name: Web style check
if: steps.changed-files.outputs.any_changed == 'true'
run: yarn run lint
run: pnpm run lint
docker-compose-template:
name: Docker Compose Template
@ -96,6 +101,9 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: false
- name: Check changed files
id: changed-files
@ -124,6 +132,9 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: false
- name: Check changed files
id: changed-files
@ -141,7 +152,7 @@ jobs:
if: steps.changed-files.outputs.any_changed == 'true'
env:
BASH_SEVERITY: warning
DEFAULT_BRANCH: plugins/beta
DEFAULT_BRANCH: main
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
IGNORE_GENERATED_FILES: true
IGNORE_GITIGNORED_FILES: true

View File

@ -26,6 +26,9 @@ jobs:
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: false
- name: Use Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v4
@ -35,7 +38,7 @@ jobs:
cache-dependency-path: 'pnpm-lock.yaml'
- name: Install Dependencies
run: pnpm install
run: pnpm install --frozen-lockfile
- name: Test
run: pnpm test
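The `--frozen-lockfile` flag added above changes failure behavior rather than dependency resolution: CI aborts when `pnpm-lock.yaml` is out of sync with `package.json` instead of quietly regenerating the lockfile. A quick sketch of the contrast:
```bash
# CI-safe install: exits non-zero if the lockfile no longer matches package.json.
pnpm install --frozen-lockfile

# Local development install: resolves and rewrites pnpm-lock.yaml as needed.
pnpm install
```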

View File

@ -16,6 +16,7 @@ jobs:
- uses: actions/checkout@v4
with:
fetch-depth: 2 # last 2 commits
persist-credentials: false
- name: Check for file changes in i18n/en-US
id: check_files

View File

@ -28,6 +28,9 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: false
- name: Setup Poetry and Python ${{ matrix.python-version }}
uses: ./.github/actions/setup-poetry
@ -51,7 +54,15 @@ jobs:
- name: Expose Service Ports
run: sh .github/workflows/expose_service_ports.sh
- name: Set up Vector Stores (TiDB, Weaviate, Qdrant, PGVector, Milvus, PgVecto-RS, Chroma, MyScale, ElasticSearch, Couchbase)
- name: Set up Vector Store (TiDB)
uses: hoverkraft-tech/compose-action@v2.0.2
with:
compose-file: docker/tidb/docker-compose.yaml
services: |
tidb
tiflash
- name: Set up Vector Stores (Weaviate, Qdrant, PGVector, Milvus, PgVecto-RS, Chroma, MyScale, ElasticSearch, Couchbase)
uses: hoverkraft-tech/compose-action@v2.0.2
with:
compose-file: |
@ -67,7 +78,9 @@ jobs:
pgvector
chroma
elasticsearch
tidb
- name: Check TiDB Ready
run: poetry run -P api python api/tests/integration_tests/vdb/tidb_vector/check_tiflash_ready.py
- name: Test Vector Stores
run: poetry run -P api bash dev/pytest/pytest_vdb.sh
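The readiness gate exists because the TiFlash replica in the TiDB compose stack comes up asynchronously, so the vector-store tests could otherwise race it. A hypothetical polling wrapper around the same check script (the retry count and interval are assumptions, not values from the workflow):
```bash
# Re-run the readiness probe until it succeeds or the retry budget is spent.
for _ in $(seq 1 30); do
  if poetry run -P api python api/tests/integration_tests/vdb/tidb_vector/check_tiflash_ready.py; then
    break
  fi
  sleep 10
done
```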

View File

@ -22,25 +22,34 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: false
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@v45
with:
files: web/**
# to run pnpm, the canvas package must be installed, but it always fails to install on amd64 under ubuntu-latest
# - name: Install pnpm
# uses: pnpm/action-setup@v4
# with:
# version: 10
# run_install: false
- name: Setup Node.js
uses: actions/setup-node@v4
if: steps.changed-files.outputs.any_changed == 'true'
with:
node-version: 20
cache: pnpm
cache-dependency-path: ./web/package.json
# - name: Setup Node.js
# uses: actions/setup-node@v4
# if: steps.changed-files.outputs.any_changed == 'true'
# with:
# node-version: 20
# cache: pnpm
# cache-dependency-path: ./web/package.json
- name: Install dependencies
if: steps.changed-files.outputs.any_changed == 'true'
run: pnpm install --frozen-lockfile
# - name: Install dependencies
# if: steps.changed-files.outputs.any_changed == 'true'
# run: pnpm install --frozen-lockfile
- name: Run tests
if: steps.changed-files.outputs.any_changed == 'true'
run: pnpm test
# - name: Run tests
# if: steps.changed-files.outputs.any_changed == 'true'
# run: pnpm test

2
.gitignore vendored
View File

@ -163,6 +163,7 @@ docker/volumes/db/data/*
docker/volumes/redis/data/*
docker/volumes/weaviate/*
docker/volumes/qdrant/*
docker/tidb/volumes/*
docker/volumes/etcd/*
docker/volumes/minio/*
docker/volumes/milvus/*
@ -182,6 +183,7 @@ docker/nginx/conf.d/default.conf
docker/nginx/ssl/*
!docker/nginx/ssl/.gitkeep
docker/middleware.env
docker/docker-compose.override.yaml
sdks/python-client/build
sdks/python-client/dist
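The newly ignored `docker/docker-compose.override.yaml` is Compose's built-in mechanism for local-only tweaks: `docker compose` merges it with `docker-compose.yaml` from the same directory automatically. A minimal sketch, with a hypothetical override that publishes an extra port:
```bash
# Local overrides live in a file git now ignores, so they never touch the
# tracked compose file. The service and port below are illustrative only.
cat > docker/docker-compose.override.yaml <<'EOF'
services:
  db:
    ports:
      - "15432:5432"   # publish the database on an extra local port
EOF
cd docker && docker compose up -d   # the override is picked up automatically
```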

View File

@ -73,7 +73,7 @@ Dify requires the following dependencies to build, make sure they're installed o
* [Docker](https://www.docker.com/)
* [Docker Compose](https://docs.docker.com/compose/install/)
* [Node.js v18.x (LTS)](http://nodejs.org)
* [npm](https://www.npmjs.com/) version 8.x.x or [Yarn](https://yarnpkg.com/)
* [pnpm](https://pnpm.io/)
* [Python](https://www.python.org/) version 3.11.x or 3.12.x
### 4. Installations

View File

@ -70,7 +70,7 @@ Dify depends on the following tools and libraries:
- [Docker](https://www.docker.com/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Node.js v18.x (LTS)](http://nodejs.org)
- [npm](https://www.npmjs.com/) version 8.x.x or [Yarn](https://yarnpkg.com/)
- [pnpm](https://pnpm.io/)
- [Python](https://www.python.org/) version 3.11.x or 3.12.x
### 4. Installation

155
CONTRIBUTING_DE.md Normal file
View File

@ -0,0 +1,155 @@
# CONTRIBUTING
So you want to contribute to Dify - that is great, we can't wait to see what you contribute. As a startup with limited headcount and funding, we have grand ambitions to design the most intuitive workflow for building and managing LLM applications. Any help from the community truly counts.
This guide, like Dify itself, is a constant work in progress. We appreciate your understanding if at times it lags behind the actual project, and we welcome any feedback that helps us improve it.
In terms of licensing, please take a minute to read our short [License and Contributor Agreement](./LICENSE). The community also adheres to the [Code of Conduct](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md).
## Before you jump in
[Find](https://github.com/langgenius/dify/issues?q=is:issue+is:open) an existing issue, or [open](https://github.com/langgenius/dify/issues/new/choose) a new one. We categorize issues into two types:
### Feature requests
* If you're opening a new feature request, we'd like you to explain what the proposed feature achieves and include as much context as possible. [@perzeusss](https://github.com/perzeuss) has built a solid [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) that helps you draft out your needs. Feel free to give it a try.
* If you want to take on one of the existing issues, simply drop a comment below it saying so.
A team member working in the related direction will be looped in. If all looks good, they will give the go-ahead for you to start coding. We ask that you hold off working on the feature until then, so none of your work goes to waste should we propose changes.
Depending on which area the proposed feature falls under, you might talk to different team members. Here's an overview of the areas each of our team members is currently working on:
| Member | Scope |
| ------------------------------------------------------------ | ---------------------------------------------------- |
| [@yeuoly](https://github.com/Yeuoly) | Architecting Agents |
| [@jyong](https://github.com/JohnJyong) | RAG pipeline design |
| [@GarfieldDai](https://github.com/GarfieldDai) | Building workflow orchestrations |
| [@iamjoel](https://github.com/iamjoel) & [@zxhlyh](https://github.com/zxhlyh) | Making our frontend a breeze to use |
| [@guchenhe](https://github.com/guchenhe) & [@crazywoola](https://github.com/crazywoola) | Developer experience, points of contact for anything |
| [@takatost](https://github.com/takatost) | Overall product direction and architecture |
How we prioritize:
| Feature Type | Priority |
| ------------------------------------------------------------ | --------------- |
| High-priority features as flagged by a team member | High Priority |
| Popular feature requests from our [community feedback board](https://github.com/langgenius/dify/discussions/categories/feedbacks) | Medium Priority |
| Non-core features and minor enhancements | Low Priority |
| Valuable but not immediate | Future-Feature |
### Anything else (e.g. bug report, performance optimization, typo correction)
* Start coding right away.
How we prioritize:
| Issue Type | Priority |
| ------------------------------------------------------------ | --------------- |
| Bugs in core functions (cannot sign in, applications not working, security loopholes) | Critical |
| Non-critical bugs, performance boosts | Medium Priority |
| Minor fixes (typos, confusing but working UI) | Low Priority |
## Installing
Here are the steps to set up Dify for development:
### 1. Fork this repository
### 2. Clone the repo
Clone the forked repository from your terminal:
```shell
git clone git@github.com:<github_username>/dify.git
```
### 3. Verify dependencies
Dify requires the following dependencies to build; make sure they're installed on your system:
* [Docker](https://www.docker.com/)
* [Docker Compose](https://docs.docker.com/compose/install/)
* [Node.js v18.x (LTS)](http://nodejs.org)
* [pnpm](https://pnpm.io/)
* [Python](https://www.python.org/) version 3.11.x or 3.12.x
### 4. Installations
Dify is composed of a backend and a frontend. Navigate to the backend directory by `cd api/`, then follow the [Backend README](api/README.md) to install it. In a separate terminal, navigate to the frontend directory by `cd web/`, then follow the [Frontend README](web/README.md) to install.
Check the [installation FAQ](https://docs.dify.ai/learn-more/faq/install-faq) for a list of common issues and steps to troubleshoot.
### 5. Visit Dify in your browser
To validate your set-up, open your browser and head over to [http://localhost:3000](http://localhost:3000) (the default, or your self-configured URL and port). You should now see Dify up and running.
## Developing
If you are adding a model provider, [this guide](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/README.md) is for you.
If you are adding a tool provider to Agent or Workflow, [this guide](./api/core/tools/README.md) is for you.
To help you quickly navigate where your contribution fits, a brief, annotated outline of Dify's backend and frontend follows:
### Backend
Dify's backend is written in Python and uses [Flask](https://flask.palletsprojects.com/en/3.0.x/) as the web framework. It uses [SQLAlchemy](https://www.sqlalchemy.org/) for ORM and [Celery](https://docs.celeryq.dev/en/stable/getting-started/introduction.html) for task queueing. Authorization logic goes via Flask-login.
```text
[api/]
├── constants // Constant settings used throughout the codebase.
├── controllers // API route definitions and request-handling logic.
├── core // Core application orchestration, model integrations, and tools.
├── docker // Docker- and containerization-related configurations.
├── events // Event handling and processing.
├── extensions // Extensions with third-party frameworks/platforms.
├── fields // Field definitions for serialization/marshalling.
├── libs // Reusable libraries and helpers.
├── migrations // Database migration scripts.
├── models // Database models and schema definitions.
├── services // Specifies business logic.
├── storage // Private key storage.
├── tasks // Handling of async tasks and background jobs.
└── tests
```
### Frontend
The website is bootstrapped on a [Next.js](https://nextjs.org/) boilerplate in TypeScript and uses [Tailwind CSS](https://tailwindcss.com/) for styling. [React-i18next](https://react.i18next.com/) is used for internationalization.
```text
[web/]
├── app // layouts, pages, and components
│ ├── (commonLayout) // common layout shared across the whole app
│ ├── (shareLayout) // layouts specifically shared for token-specific sessions
│ ├── activate // activate page
│ ├── components // shared by pages and layouts
│ ├── install // install page
│ ├── signin // sign-in page
│ └── styles // globally shared styles
├── assets // static assets
├── bin // scripts run at the build step
├── config // adjustable settings and options
├── context // shared contexts used by different parts of the app
├── dictionaries // language-specific translation files
├── docker // container configurations
├── hooks // reusable hooks
├── i18n // internationalization configuration
├── models // describes data models and shapes of API responses
├── public // meta assets like favicon
├── service // specifies shapes of API actions
├── test
├── types // descriptions of function parameters and return values
└── utils // shared utility functions
```
## Submitting your PR
At last, it's time to open a pull request (PR) to our repository. For major features, we first merge them into the `deploy/dev` branch for testing before they go into the `main` branch. In case you run into issues like merge conflicts or don't know how to open a pull request, check out [GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests).
And that's it! Once your PR is merged, you will be featured as a contributor in our [README](https://github.com/langgenius/dify/blob/main/README.md).
## Getting help
If you ever get stuck or have a burning question while contributing, simply shoot your queries our way via the related GitHub issue, or hop onto our [Discord](https://discord.gg/8Tpq4AcN9c) for a quick chat.

View File

@ -73,7 +73,7 @@ Dify requires the following dependencies to build; make sure they are installed on your sys
- [Docker](https://www.docker.com/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Node.js v18.x (LTS)](http://nodejs.org)
- [npm](https://www.npmjs.com/) version 8.x.x or [Yarn](https://yarnpkg.com/)
- [pnpm](https://pnpm.io/)
- [Python](https://www.python.org/) version 3.11.x or 3.12.x
### 4. Installation

153
CONTRIBUTING_TW.md Normal file
View File

@ -0,0 +1,153 @@
# CONTRIBUTING
So you want to contribute to Dify - that's great, and we can't wait to see what you do. As a startup with limited headcount and funding, we have grand ambitions to design the most intuitive workflow for building and managing LLM applications. Any help from the community truly counts.
Given where we are, we need to be nimble and ship fast, but we also want to make sure that contributors like you get as smooth an experience at contributing as possible. We've assembled this contribution guide for that purpose, aiming at getting you familiarized with the codebase and how we work with contributors, so you could quickly jump to the fun part.
This guide, like Dify itself, is a constant work in progress. We highly appreciate your understanding if at times it lags behind the actual project, and welcome any feedback for us to improve.
In terms of licensing, please take a minute to read our short [License and Contributor Agreement](./LICENSE). The community also adheres to the [Code of Conduct](https://github.com/langgenius/.github/blob/main/CODE_OF_CONDUCT.md).
## Before you jump in
[Find](https://github.com/langgenius/dify/issues?q=is:issue+is:open) an existing issue, or [open](https://github.com/langgenius/dify/issues/new/choose) a new one. We categorize issues into 2 types:
### Feature requests
- If you're opening a new feature request, we'd like you to explain what the proposed feature achieves and include as much context as possible. [@perzeusss](https://github.com/perzeuss) has made a handy [Feature Request Copilot](https://udify.app/chat/MK2kVSnw1gakVwMX) that helps you draft out your needs. Feel free to give it a try.
- If you want to pick one up from the existing issues, simply drop a comment below it saying so.
A team member working in the related direction will be looped in. If all looks good, they will give the go-ahead for you to start coding. We ask that you hold off working on the feature until then, so none of your work goes to waste should we propose changes.
Depending on which area the proposed feature falls under, you might talk to different team members. Here's an overview of the areas each of our team members is currently working on:
| Member | Scope |
| --------------------------------------------------------------------------------------- | ------------------------------ |
| [@yeuoly](https://github.com/Yeuoly) | Architecting Agents |
| [@jyong](https://github.com/JohnJyong) | RAG pipeline design |
| [@GarfieldDai](https://github.com/GarfieldDai) | Building workflow orchestrations |
| [@iamjoel](https://github.com/iamjoel) & [@zxhlyh](https://github.com/zxhlyh) | Making our frontend a breeze to use |
| [@guchenhe](https://github.com/guchenhe) & [@crazywoola](https://github.com/crazywoola) | Developer experience, points of contact for anything |
| [@takatost](https://github.com/takatost) | Overall product direction and architecture |
How we prioritize:
| Feature Type | Priority |
| ------------------------------------------------------------------------------------------------------- | -------- |
| High-priority features as flagged by a team member | High Priority |
| Popular feature requests from our [community feedback board](https://github.com/langgenius/dify/discussions/categories/feedbacks) | Medium Priority |
| Non-core features and minor enhancements | Low Priority |
| Valuable but not immediate | Future-Feature |
### Anything else (e.g. bug report, performance optimization, typo correction)
- Start coding right away.
How we prioritize:
| Issue Type | Priority |
| ----------------------------------------------------- | -------- |
| Bugs in core functions (cannot sign in, applications not working, security loopholes) | Critical |
| Non-critical bugs, performance boosts | Medium Priority |
| Minor fixes (typos, confusing but working UI) | Low Priority |
## Installing
Here are the steps to set up Dify for development:
### 1. Fork this repository
### 2. Clone the repo
Clone the forked repository from your terminal:
```shell
git clone git@github.com:<github_username>/dify.git
```
### 3. Verify dependencies
Dify requires the following dependencies to build; make sure they're installed on your system:
- [Docker](https://www.docker.com/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Node.js v18.x (LTS)](http://nodejs.org)
- [pnpm](https://pnpm.io/)
- [Python](https://www.python.org/) version 3.11.x or 3.12.x
### 4. Installations
Dify is composed of a backend and a frontend. Navigate to the backend directory by `cd api/`, then follow the [Backend README](api/README.md) to install it. In a separate terminal, navigate to the frontend directory by `cd web/`, then follow the [Frontend README](web/README.md) to install.
Check the [installation FAQ](https://docs.dify.ai/learn-more/faq/install-faq) for a list of common issues and steps to troubleshoot.
### 5. Visit Dify in your browser
To validate your set-up, visit [http://localhost:3000](http://localhost:3000) in your browser (the default, or your self-configured URL and port). You should now see Dify up and running.
## Developing
If you are adding a model provider, see [this guide](https://github.com/langgenius/dify/blob/main/api/core/model_runtime/README.md).
If you are adding a tool provider to Agent or Workflow, see [this guide](./api/core/tools/README.md).
To help you quickly find where your contribution fits, a brief, annotated outline of Dify's backend and frontend follows:
### Backend
Dify's backend is written in Python using the [Flask](https://flask.palletsprojects.com/en/3.0.x/) framework. It uses [SQLAlchemy](https://www.sqlalchemy.org/) as the ORM tool and [Celery](https://docs.celeryq.dev/en/stable/getting-started/introduction.html) for task queueing. Authorization logic is implemented via Flask-login.
```text
[api/]
├── constants // Constants and settings used throughout the project.
├── controllers // API route definitions and request-handling logic.
├── core // Core application services, model integrations, and tool implementations.
├── docker // Docker- and containerization-related configuration files.
├── events // Event handling and process management.
├── extensions // Integrations with third-party frameworks or platforms.
├── fields // Field definitions for data serialization and structures.
├── libs // Reusable shared libraries and helpers.
├── migrations // Database schema change and migration scripts.
├── models // Database models and data structure definitions.
├── services // Core business logic and feature implementations.
├── storage // Storage for private keys and sensitive data.
├── tasks // Async task and background job handlers.
└── tests
```
### Frontend
The website is based on a [Next.js](https://nextjs.org/) TypeScript boilerplate and uses [Tailwind CSS](https://tailwindcss.com/) for styling. [React-i18next](https://react.i18next.com/) is used for internationalization.
```text
[web/]
├── app // page layouts and UI components
│ ├── (commonLayout) // common layout shared across the whole app
│ ├── (shareLayout) // shared layouts dedicated to token-specific sessions
│ ├── activate // account activation page
│ ├── components // shared by pages and layouts
│ ├── install // system install page
│ ├── signin // user sign-in page
│ └── styles // globally shared style definitions
├── assets // static asset files
├── bin // scripts run at the build step
├── config // adjustable settings and options
├── context // shared app-state contexts
├── dictionaries // multilingual translation files
├── docker // Docker container configuration files
├── hooks // reusable React hooks
├── i18n // internationalization and localization configuration
├── models // data structures and API response models
├── public // static assets and site icons
├── service // API action interface definitions
├── test // test cases and test framework
├── types // TypeScript type definitions
└── utils // shared helper functions
```
## Submitting your PR
At last, it's time to open a pull request (PR) to our repository. For major features, we first merge them into the `deploy/dev` branch for testing before they go into the `main` branch. If you run into issues such as merge conflicts, or don't know how to open a pull request, check out [GitHub's pull request tutorial](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests).
And that's it! Once your PR is merged, you will be featured as a contributor in our [README](https://github.com/langgenius/dify/blob/main/README.md).
## Getting help
If you ever get stuck or have a burning question while contributing, simply shoot your queries our way via the related GitHub issue, or hop onto our [Discord](https://discord.gg/8Tpq4AcN9c) for a quick chat.

View File

@ -72,7 +72,7 @@ Dify requires the following dependencies to build, make sure they
- [Docker](https://www.docker.com/)
- [Docker Compose](https://docs.docker.com/compose/install/)
- [Node.js v18.x (LTS)](http://nodejs.org)
- [npm](https://www.npmjs.com/) version 8.x.x or [Yarn](https://yarnpkg.com/)
- [pnpm](https://pnpm.io/)
- [Python](https://www.python.org/) version 3.11.x or 3.12.x
### 4. Installation

23
LICENSE
View File

@ -1,12 +1,12 @@
# Open Source License
Dify is licensed under the Apache License 2.0, with the following additional conditions:
Dify is licensed under a modified version of the Apache License 2.0, with the following additional conditions:
1. Dify may be utilized commercially, including as a backend service for other applications or as an application development platform for enterprises. Should the conditions below be met, a commercial license must be obtained from the producer:
a. Multi-tenant service: Unless explicitly authorized by Dify in writing, you may not use the Dify source code to operate a multi-tenant environment.
- Tenant Definition: Within the context of Dify, one tenant corresponds to one workspace. The workspace provides a separated area for each tenant's data and configurations.
b. LOGO and copyright information: In the process of using Dify's frontend, you may not remove or modify the LOGO or copyright information in the Dify console or applications. This restriction is inapplicable to uses of Dify that do not involve its frontend.
- Frontend Definition: For the purposes of this license, the "frontend" of Dify includes all components located in the `web/` directory when running Dify from the raw source code, or the "web" image when running Dify with Docker.
@ -21,19 +21,4 @@ Apart from the specific conditions mentioned above, all other rights and restric
The interactive design of this product is protected by appearance patent.
© 2024 LangGenius, Inc.
----------
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
© 2025 LangGenius, Inc.

140
README.md
View File

@ -40,6 +40,7 @@
<p align="center">
<a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-d9d9d9"></a>
<a href="./README_TW.md"><img alt="繁體中文文件" src="https://img.shields.io/badge/繁體中文-d9d9d9"></a>
<a href="./README_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
<a href="./README_JA.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
<a href="./README_ES.md"><img alt="README en Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
@ -49,16 +50,18 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_DE.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
Dify is an open-source LLM app development platform. Its intuitive interface combines agentic AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production.
## Quick start
> Before installing Dify, make sure your machine meets the following minimum system requirements:
>
>- CPU >= 2 Core
>- RAM >= 4 GiB
>
> - CPU >= 2 Core
> - RAM >= 4 GiB
</br>
@ -74,62 +77,125 @@ docker compose up -d
After running, you can access the Dify dashboard in your browser at [http://localhost/install](http://localhost/install) and start the initialization process.
#### Seeking help
Please refer to our [FAQ](https://docs.dify.ai/getting-started/install-self-hosted/faqs) if you encounter problems setting up Dify. Reach out to [the community and us](#community--contact) if you are still having issues.
> If you'd like to contribute to Dify or do additional development, refer to our [guide to deploying from source code](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code)
## Key features
**1. Workflow**:
Build and test powerful AI workflows on a visual canvas, leveraging all the following features and beyond.
https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
**2. Comprehensive model support**:
Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers can be found [here](https://docs.dify.ai/getting-started/readme/model-providers).
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. Prompt IDE**:
Intuitive interface for crafting prompts, comparing model performance, and adding additional features such as text-to-speech to a chat-based app.
**4. RAG Pipeline**:
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You could continuously improve prompts, datasets, and models based on production data and annotations.
**7. Backend-as-a-Service**:
All of Dify's offerings come with corresponding APIs, so you could effortlessly integrate Dify into your own business logic.
## Feature Comparison
<table style="width: 100%;">
<tr>
<th align="center">Feature</th>
<th align="center">Dify.AI</th>
<th align="center">LangChain</th>
<th align="center">Flowise</th>
<th align="center">OpenAI Assistants API</th>
</tr>
<tr>
<td align="center">Programming Approach</td>
<td align="center">API + App-oriented</td>
<td align="center">Python Code</td>
<td align="center">App-oriented</td>
<td align="center">API-oriented</td>
</tr>
<tr>
<td align="center">Supported LLMs</td>
<td align="center">Rich Variety</td>
<td align="center">Rich Variety</td>
<td align="center">Rich Variety</td>
<td align="center">OpenAI-only</td>
</tr>
<tr>
<td align="center">RAG Engine</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Agent</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Workflow</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Observability</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Enterprise Feature (SSO/Access control)</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Local Deployment</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
</table>
## Using Dify
- **Cloud </br>**
We host a [Dify Cloud](https://dify.ai) service for anyone to try with zero setup. It provides all the capabilities of the self-deployed version, and includes 200 free GPT-4 calls in the sandbox plan.
- **Self-hosting Dify Community Edition</br>**
Quickly get Dify running in your environment with this [starter guide](#quick-start).
Use our [documentation](https://docs.dify.ai) for further references and more in-depth instructions.
- **Dify for enterprise / organizations</br>**
We provide additional enterprise-centric features. [Log your questions for us through this chatbot](https://udify.app/chat/22L1zSxg6yW1cWQg) or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs. </br>
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one click. It's an affordable AMI offering with the option to create apps with custom logo and branding.
## Staying ahead
Star Dify on GitHub and be instantly notified of new releases.
![star-us](https://github.com/langgenius/dify/assets/13230914/b823edc1-6388-4e25-ad45-2f6b187adbb4)
## Advanced Setup
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Additionally, you might need to make adjustments to the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, please re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
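A short sketch of that flow; `EXPOSE_NGINX_PORT` is assumed to be one of the variables defined in `.env.example`, so check that file for the authoritative list:
```bash
# Copy the template, change one assumed setting, and re-create the containers.
cd docker
cp .env.example .env
sed -i 's/^EXPOSE_NGINX_PORT=.*/EXPOSE_NGINX_PORT=8080/' .env  # assumed variable
docker compose up -d
```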
@ -145,32 +211,34 @@ If you'd like to configure a highly-available setup, there are community-contrib
Deploy Dify to Cloud Platform with a single click using [terraform](https://www.terraform.io/)
##### Azure Global
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
#### Using AWS CDK for Deployment
Deploy Dify to AWS with [CDK](https://aws.amazon.com/cdk/)
##### AWS
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
> We are looking for contributors to help with translating Dify to languages other than Mandarin or English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/8Tpq4AcN9c).
## Community & contact
* [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
- [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
- [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
- [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
- [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
**Contributors**
@ -182,7 +250,6 @@ At the same time, please consider supporting Dify by sharing it on social media
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## Security disclosure
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to security@dify.ai and we will provide you with a more detailed answer.
@ -190,4 +257,3 @@ To protect your privacy, please avoid posting security issues on GitHub. Instead
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.

View File

@ -45,6 +45,7 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
<div style="text-align: right;">
@ -53,8 +54,7 @@
**1. Workflow**: Build and test powerful AI workflows on a visual canvas, leveraging all the following features and beyond.
https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
<https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa>
**2. Comprehensive model support**: Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers can be found [here](https://docs.dify.ai/getting-started/readme/model-providers).
@ -69,7 +69,9 @@
**6. LLMOps**: Monitor and analyze application logs and performance over time. You could continuously improve prompts, datasets, and models based on production data and annotations.
**7. Backend-as-a-Service**: All of Dify's offerings come with corresponding APIs, so you could effortlessly integrate Dify into your own business logic.
## Feature Comparison
<table style="width: 100%;">
<tr>
<th align="center">Feature</th>
@ -136,8 +138,8 @@
</tr>
</table>
## Using Dify
- **Cloud </br>**
We host a [Dify Cloud](https://dify.ai) service for anyone to try with zero setup. It provides all the capabilities of the self-deployed version, and includes 200 free GPT-4 calls in the sandbox plan.
@ -147,15 +149,19 @@
- **Dify for enterprise / organizations</br>**
We provide additional enterprise-centric features. [Schedule a meeting with us](https://cal.com/guchenhe/30min) or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs. </br>
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one click. It's an affordable AMI offering with the option to create apps with custom logo and branding.
>
## Staying ahead
Star Dify on GitHub and be instantly notified of new releases.
![star-us](https://github.com/langgenius/dify/assets/13230914/b823edc1-6388-4e25-ad45-2f6b187adbb4)
## Quick start
>
> Before installing Dify, make sure your machine meets the following minimum system requirements:
>
>- CPU >= 2 cores
>- RAM >= 4 GiB
@ -188,24 +194,26 @@ docker compose up -d
Deploy Dify to a cloud platform with a single click using [terraform](https://www.terraform.io/)
##### Azure Global
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
#### Using AWS CDK for Deployment
Deploy Dify to AWS with [CDK](https://aws.amazon.com/cdk/)
##### AWS
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
## Contributing
For those who'd like to contribute, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
> We are looking for contributors to help with translating Dify to languages other than Mandarin or English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/8Tpq4AcN9c).
**Contributors**
@ -215,26 +223,26 @@ docker compose up -d
</a>
## Community & contact
* [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
- [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
- [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
- [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
- [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## Security disclosure
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to security@dify.ai and we will provide you with a more detailed answer.
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to <security@dify.ai> and we will provide you with a more detailed answer.
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.
## Security disclosure
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to security@dify.ai and we will provide you with a more detailed answer.
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to <security@dify.ai> and we will provide you with a more detailed answer.
## License

258
README_BN.md Normal file
View File

@ -0,0 +1,258 @@
![cover-v5-optimized](https://github.com/langgenius/dify/assets/13230914/f9e19af5-61ba-4119-b926-d10c4c06ebab)
<p align="center">
📌 <a href="https://dify.ai/blog/introducing-dify-workflow-file-upload-a-demo-on-ai-podcast">Introducing Dify Workflow File Upload: Recreating the Google NotebookLM Podcast</a>
</p>
<p align="center">
<a href="https://cloud.dify.ai">ডিফাই ক্লাউড</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">সেল্ফ-হোস্টিং</a> ·
<a href="https://docs.dify.ai">ডকুমেন্টেশন</a> ·
<a href="https://udify.app/chat/22L1zSxg6yW1cWQg">ব্যাবসায়িক অনুসন্ধান</a>
</p>
<p align="center">
<a href="https://dify.ai" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Product-F04438"></a>
<a href="https://dify.ai/pricing" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/free-pricing?logo=free&color=%20%23155EEF&label=pricing&labelColor=%20%23528bff"></a>
<a href="https://discord.gg/FngNHpbcY7" target="_blank">
<img src="https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb"
alt="chat on Discord"></a>
<a href="https://reddit.com/r/difyai" target="_blank">
<img src="https://img.shields.io/reddit/subreddit-subscribers/difyai?style=plastic&logo=reddit&label=r%2Fdifyai&labelColor=white"
alt="join Reddit"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
<img alt="Commits last month" src="https://img.shields.io/github/commit-activity/m/langgenius/dify?labelColor=%20%2332b583&color=%20%2312b76a"></a>
<a href="https://github.com/langgenius/dify/" target="_blank">
<img alt="Issues closed" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=issues%20closed&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
<a href="https://github.com/langgenius/dify/discussions/" target="_blank">
<img alt="Discussion posts" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
</p>
<p align="center">
<a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-d9d9d9"></a>
<a href="./README_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
<a href="./README_JA.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
<a href="./README_ES.md"><img alt="README en Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_DE.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
Dify is an open-source LLM app development platform. It combines an intuitive interface, agentic AI workflows, RAG pipelines, agent capabilities, model management, observability features, and more, helping you move quickly from prototype to production.
## Quick start
>
> Before installing Dify, make sure your machine meets the following minimum configuration requirements:
>
>- CPU >= 2 cores
>- RAM >= 4 GB
</br>
The easiest way to start the Dify server is via [docker compose](docker/docker-compose.yaml). Before running Dify with the following commands, make sure [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your machine:
```bash
cd dify
cd docker
cp .env.example .env
docker compose up -d
```
After running, you can access the Dify dashboard in your browser at [http://localhost/install](http://localhost/install) and start the initialization process.
#### Seeking help
Please refer to our [FAQ](https://docs.dify.ai/getting-started/install-self-hosted/faqs) if you encounter problems setting up Dify. If issues persist, reach out to [the community and us](#community--contact).
> If you'd like to contribute to Dify or do additional development, see our [guide to deploying from source code](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code).
## Key features
**1. Workflow**:
Build and test AI workflows on a visual canvas, leveraging all the following features and beyond.
<https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa>
**2. Model support**:
Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any OpenAI API-compatible models. A full list of supported model providers can be found [here](https://docs.dify.ai/getting-started/readme/model-providers).
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. Prompt IDE**:
Intuitive interface for crafting prompts, comparing model performance, and adding features such as text-to-speech to a chat-based app.
**4. RAG Pipeline**:
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-the-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
You can define agents based on LLM Function Calling or ReAct, and add pre-built or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
**7. Backend-as-a-Service**:
All of Dify's offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic.
## Feature Comparison
<table style="width: 100%;">
<tr>
<th align="center">বৈশিষ্ট্য</th>
<th align="center">Dify.AI</th>
<th align="center">LangChain</th>
<th align="center">Flowise</th>
<th align="center">OpenAI Assistants API</th>
</tr>
<tr>
<td align="center">প্রোগ্রামিং পদ্ধতি</td>
<td align="center">API + App-oriented</td>
<td align="center">Python Code</td>
<td align="center">App-oriented</td>
<td align="center">API-oriented</td>
</tr>
<tr>
<td align="center">সাপোর্টেড LLMs</td>
<td align="center">Rich Variety</td>
<td align="center">Rich Variety</td>
<td align="center">Rich Variety</td>
<td align="center">OpenAI-only</td>
</tr>
<tr>
<td align="center">RAG ইঞ্জিন</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">এজেন্ট</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">ওয়ার্কফ্লো</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">অবজার্ভেবল</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">এন্টারপ্রাইজ ফিচার (SSO/Access control)</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">লোকাল ডেপ্লয়মেন্ট</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
</table>
## Using Dify
- **Cloud </br>**
You can try our [Dify Cloud](https://dify.ai) service with zero setup. It provides all the capabilities of the self-hosted version, and includes 200 free GPT-4 calls in the sandbox plan.
- **Self-hosting Dify Community Edition</br>**
Quickly get Dify running in your environment with this [starter guide](#quick-start).
See our [documentation](https://docs.dify.ai) for further references and more in-depth instructions.
- **Dify for enterprise / organizations</br>**
We provide enterprise/organization-centric services. [Log your questions for us through this chatbot](https://udify.app/chat/22L1zSxg6yW1cWQg) or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss your needs. </br>
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one click. It's an affordable AMI offering with the option to create apps with custom logo and branding.
## Staying ahead
Star Dify on GitHub and be instantly notified of new releases.
![star-us](https://github.com/langgenius/dify/assets/13230914/b823edc1-6388-4e25-ad45-2f6b187adbb4)
## Advanced Setup
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Depending on your specific environment and requirements, you may also need to adjust the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts.
After making any changes, please re-run `docker-compose up -d`. You can find the full list of variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you want to configure a highly available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files that describe deploying Dify to Kubernetes.
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
#### Deploying with Terraform
Deploy Dify to a cloud platform with a single click using [terraform](https://www.terraform.io/).
##### Azure Global
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
#### Deploying with AWS CDK
Deploy Dify to AWS with [CDK](https://aws.amazon.com/cdk/)
##### AWS
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
> We are looking for contributors to help translate Dify into languages other than Mandarin or English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/8Tpq4AcN9c).
## Community & contact
- [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
- [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
- [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
- [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
**Contributors**
<a href="https://github.com/langgenius/dify/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langgenius/dify" />
</a>
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## Security disclosure
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to <security@dify.ai> and we will provide you with a more detailed answer.
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.

View File

@ -45,6 +45,7 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</div>

259
README_DE.md Normal file
View File

@ -0,0 +1,259 @@
![cover-v5-optimized](https://github.com/langgenius/dify/assets/13230914/f9e19af5-61ba-4119-b926-d10c4c06ebab)
<p align="center">
📌 <a href="https://dify.ai/blog/introducing-dify-workflow-file-upload-a-demo-on-ai-podcast">Introducing Dify Workflow File Upload: Recreating the Google NotebookLM Podcast</a>
</p>
<p align="center">
<a href="https://cloud.dify.ai">Dify Cloud</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">Selbstgehostetes</a> ·
<a href="https://docs.dify.ai">Dokumentation</a> ·
<a href="https://udify.app/chat/22L1zSxg6yW1cWQg">Anfrage an Unternehmen</a>
</p>
<p align="center">
<a href="https://dify.ai" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Product-F04438"></a>
<a href="https://dify.ai/pricing" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/free-pricing?logo=free&color=%20%23155EEF&label=pricing&labelColor=%20%23528bff"></a>
<a href="https://discord.gg/FngNHpbcY7" target="_blank">
<img src="https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb"
alt="chat on Discord"></a>
<a href="https://reddit.com/r/difyai" target="_blank">
<img src="https://img.shields.io/reddit/subreddit-subscribers/difyai?style=plastic&logo=reddit&label=r%2Fdifyai&labelColor=white"
alt="join Reddit"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
<img alt="Commits last month" src="https://img.shields.io/github/commit-activity/m/langgenius/dify?labelColor=%20%2332b583&color=%20%2312b76a"></a>
<a href="https://github.com/langgenius/dify/" target="_blank">
<img alt="Issues closed" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=issues%20closed&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
<a href="https://github.com/langgenius/dify/discussions/" target="_blank">
<img alt="Discussion posts" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
</p>
<p align="center">
<a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-d9d9d9"></a>
<a href="./README_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
<a href="./README_JA.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
<a href="./README_ES.md"><img alt="README en Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_DE.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
Dify is an open-source platform for developing LLM applications. Its intuitive interface combines agentic AI workflows, RAG pipelines, agent capabilities, model management, observability features, and more, letting you quickly move from prototype to production.
## Quick start
> Before installing Dify, make sure your system meets the following minimum requirements:
>
>- CPU >= 2 cores
>- RAM >= 4 GiB
</br>
The easiest way to start the Dify server is via [docker compose](docker/docker-compose.yaml). Before running Dify with the following commands, make sure [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your system:
```bash
cd dify
cd docker
cp .env.example .env
docker compose up -d
```
After starting the server, you can access the Dify dashboard in your browser at [http://localhost/install](http://localhost/install) and begin the initialization process.
#### Seeking help
Please refer to our [FAQ](https://docs.dify.ai/getting-started/install-self-hosted/faqs) if you run into problems setting up Dify. Reach out to [the community and us](#community--contact) if you are still having trouble.
> If you would like to contribute to Dify or do additional development, see our [guide to deploying from source code](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code).
## Key features
**1. Workflow**:
Build and test powerful AI workflows on a visual canvas, leveraging all of the following features and more.
https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
**2. Comprehensive model support**:
Seamless integration with hundreds of proprietary and open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any model compatible with the OpenAI API. A full list of supported model providers can be found [here](https://docs.dify.ai/getting-started/readme/model-providers).
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. Prompt IDE**:
Intuitive interface for crafting prompts, comparing model performance, and adding additional features such as text-to-speech to a chat-based app.
**4. RAG Pipeline**:
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-the-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
You can define agents based on LLM Function Calling or ReAct, and add prebuilt or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
**7. Backend-as-a-Service**:
All of Dify's offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic.
## Feature comparison
<table style="width: 100%;">
<tr>
<th align="center">Feature</th>
<th align="center">Dify.AI</th>
<th align="center">LangChain</th>
<th align="center">Flowise</th>
<th align="center">OpenAI Assistants API</th>
</tr>
<tr>
<td align="center">Programming Approach</td>
<td align="center">API + App-oriented</td>
<td align="center">Python Code</td>
<td align="center">App-oriented</td>
<td align="center">API-oriented</td>
</tr>
<tr>
<td align="center">Supported LLMs</td>
<td align="center">Rich Variety</td>
<td align="center">Rich Variety</td>
<td align="center">Rich Variety</td>
<td align="center">OpenAI-only</td>
</tr>
<tr>
<td align="center">RAG Engine</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Agent</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Workflow</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Observability</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Enterprise Feature (SSO/Access control)</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Local Deployment</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
</table>
## Using Dify
- **Cloud </br>**
We host a [Dify Cloud](https://dify.ai) service that anyone can try with zero setup. It provides all the capabilities of the self-deployed version, and includes 200 free GPT-4 calls in the sandbox plan.
- **Self-hosting Dify Community Edition</br>**
Quickly get Dify running in your environment with this [quick start guide](#quick-start). Use our [documentation](https://docs.dify.ai) for further references and more in-depth instructions.
- **Dify for enterprises / organizations</br>**
We provide additional enterprise-centric features. [Log your questions for us through this chatbot](https://udify.app/chat/22L1zSxg6yW1cWQg) or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs. </br>
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one click. It is an affordable AMI offering with the option to create apps with custom logo and branding.
## Staying ahead
Star Dify on GitHub and be instantly notified of new releases.
![star-us](https://github.com/langgenius/dify/assets/13230914/b823edc1-6388-4e25-ad45-2f6b187adbb4)
## Advanced setup
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Additionally, you may need to adjust the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files that allow Dify to be deployed on Kubernetes.
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
#### Using Terraform for deployment
Deploy Dify to a cloud platform with a single click using [terraform](https://www.terraform.io/).
##### Azure Global
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
##### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
#### Using AWS CDK for deployment
Deploy Dify to AWS with [CDK](https://aws.amazon.com/cdk/).
##### AWS
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md). At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
> We are looking for contributors to help translate Dify into languages other than Mandarin and English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/8Tpq4AcN9c).
## Community & contact
* [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
**Contributors**
<a href="https://github.com/langgenius/dify/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langgenius/dify" />
</a>
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## Security disclosure
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to security@dify.ai and we will provide you with a more detailed answer.
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.


@ -45,6 +45,7 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
#


@ -45,6 +45,7 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
#
@ -55,7 +56,7 @@
Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features, and more, letting you quickly go from prototype to production. Here is a list of the core features:
</br> </br>
**1. Workflow**:
Build and test powerful AI workflows on a visual canvas, leveraging all of the following features and more.
@ -63,27 +64,25 @@ Dify est une plateforme de développement d'applications LLM open source. Son in
**2. Comprehensive model support**:
Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any model compatible with the OpenAI API. A full list of supported model providers can be found [here](https://docs.dify.ai/getting-started/readme/model-providers).
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. Prompt IDE**:
Intuitive interface for crafting prompts, comparing model performance, and adding additional features such as text-to-speech to a chat-based app.
**4. RAG Pipeline**:
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with out-of-the-box support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
You can define agents based on LLM Function Calling or ReAct, and add prebuilt or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance over time. You can continuously improve prompts, datasets, and models based on production data and annotations.
**7. Backend-as-a-Service**:
All of Dify's offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic.


@ -15,7 +15,7 @@
<a href="https://discord.gg/FngNHpbcY7" target="_blank">
<img src="https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb"
alt="Discordでチャット"></a>
<a href="https://reddit.com/r/difyai" target="_blank">
<a href="https://reddit.com/r/difyai" target="_blank">
<img src="https://img.shields.io/reddit/subreddit-subscribers/difyai?style=plastic&logo=reddit&label=r%2Fdifyai&labelColor=white"
alt="Reddit"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
@ -45,6 +45,7 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
#
@ -56,7 +57,7 @@
Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features, and more, letting you quickly go from prototype to production. It includes the following features:
</br> </br>
**1. Workflow**:
Build and test powerful AI workflows on a visual canvas, leveraging all of the following features and more.
@ -64,25 +65,25 @@ DifyはオープンソースのLLMアプリケーション開発プラットフ
**2. Comprehensive model support**:
Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any model compatible with the OpenAI API. A full list of supported model providers can be found [here](https://docs.dify.ai/getting-started/readme/model-providers).
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. Prompt IDE**:
Craft prompts, compare model performance, and add features such as text-to-speech to chat-based apps.
**4. RAG Pipeline**:
Extensive RAG capabilities covering everything from document ingestion to retrieval, with support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**:
Define agents based on LLM Function Calling or ReAct, and add prebuilt or custom tools for AI agents. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha.
**6. LLMOps**:
Monitor and analyze application logs and performance, and continuously improve prompts, datasets, and models based on production data and annotations.
**7. Backend-as-a-Service**:
All features come with corresponding APIs, so you can easily integrate Dify into your own business logic.
@ -164,7 +165,7 @@ DifyはオープンソースのLLMアプリケーション開発プラットフ
- **Dify for enterprises / organizations</br>**
We provide additional enterprise-centric features. [Send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs. </br>
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one click. It is an affordable AMI offering with the option to create apps with custom logo and branding.
## Stay ahead
@ -177,7 +178,7 @@ GitHub上でDifyにスターを付けることで、Difyに関する新しいニ
## Quick start
> Before installing Dify, make sure your machine meets the following minimum system requirements:
>
>- CPU >= 2 cores
>- RAM >= 4 GB
@ -219,7 +220,7 @@ docker compose up -d
Deploy Dify to AWS using [CDK](https://aws.amazon.com/cdk/).
##### AWS
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
## Contributing


@ -45,6 +45,7 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
#
@ -87,9 +88,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
## Feature Comparison
<table style="width: 100%;">
<tr
>
<tr>
<th align="center">Feature</th>
<th align="center">Dify.AI</th>
<th align="center">LangChain</th>


@ -45,6 +45,7 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>


@ -50,6 +50,7 @@
<a href="./README_TR.md"><img alt="README em Turco" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README em Vietnamita" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_PT.md"><img alt="README em Português - BR" src="https://img.shields.io/badge/Portugu%C3%AAs-BR?style=flat&label=BR&color=d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflow, RAG pipeline, agent capabilities, model management, observability features, and more, letting you quickly go from prototype to production. Here is a list of the core features:


@ -47,6 +47,7 @@
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_SI.md"><img alt="README Slovenščina" src="https://img.shields.io/badge/Sloven%C5%A1%C4%8Dina-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
@ -106,6 +107,73 @@ Prosimo, glejte naša pogosta vprašanja [FAQ](https://docs.dify.ai/getting-star
**7. Backend-as-a-Service**:
All of Dify's offerings come with corresponding APIs, so you can effortlessly integrate Dify into your own business logic.
## Feature comparison
<table style="width: 100%;">
<tr>
<th align="center">Feature</th>
<th align="center">Dify.AI</th>
<th align="center">LangChain</th>
<th align="center">Flowise</th>
<th align="center">OpenAI Assistants API</th>
</tr>
<tr>
<td align="center">Programming Approach</td>
<td align="center">API + App-oriented</td>
<td align="center">Python Code</td>
<td align="center">App-oriented</td>
<td align="center">API-oriented</td>
</tr>
<tr>
<td align="center">Supported LLMs</td>
<td align="center">Rich Variety</td>
<td align="center">Rich Variety</td>
<td align="center">Rich Variety</td>
<td align="center">OpenAI-only</td>
</tr>
<tr>
<td align="center">RAG pogon</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Agent</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Potek dela</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Spremljanje</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Funkcija za podjetja (SSO/nadzor dostopa)</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">Lokalna namestitev</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
</table>
## Using Dify
@ -187,4 +255,4 @@ Zaradi zaščite vaše zasebnosti se izogibajte objavljanju varnostnih vprašanj
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.


@ -45,6 +45,7 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>

README_TW.md (new file, 258 lines)

@ -0,0 +1,258 @@
![cover-v5-optimized](https://github.com/langgenius/dify/assets/13230914/f9e19af5-61ba-4119-b926-d10c4c06ebab)
<p align="center">
📌 <a href="https://dify.ai/blog/introducing-dify-workflow-file-upload-a-demo-on-ai-podcast">Introducing Dify Workflow File Upload: Recreate Google NotebookLM Podcast</a>
</p>
<p align="center">
<a href="https://cloud.dify.ai">Dify 雲端服務</a> ·
<a href="https://docs.dify.ai/getting-started/install-self-hosted">自行託管</a> ·
<a href="https://docs.dify.ai">說明文件</a> ·
<a href="https://udify.app/chat/22L1zSxg6yW1cWQg">企業諮詢</a>
</p>
<p align="center">
<a href="https://dify.ai" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/Product-F04438"></a>
<a href="https://dify.ai/pricing" target="_blank">
<img alt="Static Badge" src="https://img.shields.io/badge/free-pricing?logo=free&color=%20%23155EEF&label=pricing&labelColor=%20%23528bff"></a>
<a href="https://discord.gg/FngNHpbcY7" target="_blank">
<img src="https://img.shields.io/discord/1082486657678311454?logo=discord&labelColor=%20%235462eb&logoColor=%20%23f5f5f5&color=%20%235462eb"
alt="chat on Discord"></a>
<a href="https://reddit.com/r/difyai" target="_blank">
<img src="https://img.shields.io/reddit/subreddit-subscribers/difyai?style=plastic&logo=reddit&label=r%2Fdifyai&labelColor=white"
alt="join Reddit"></a>
<a href="https://twitter.com/intent/follow?screen_name=dify_ai" target="_blank">
<img src="https://img.shields.io/twitter/follow/dify_ai?logo=X&color=%20%23f5f5f5"
alt="follow on X(Twitter)"></a>
<a href="https://www.linkedin.com/company/langgenius/" target="_blank">
<img src="https://custom-icon-badges.demolab.com/badge/LinkedIn-0A66C2?logo=linkedin-white&logoColor=fff"
alt="follow on LinkedIn"></a>
<a href="https://hub.docker.com/u/langgenius" target="_blank">
<img alt="Docker Pulls" src="https://img.shields.io/docker/pulls/langgenius/dify-web?labelColor=%20%23FDB062&color=%20%23f79009"></a>
<a href="https://github.com/langgenius/dify/graphs/commit-activity" target="_blank">
<img alt="Commits last month" src="https://img.shields.io/github/commit-activity/m/langgenius/dify?labelColor=%20%2332b583&color=%20%2312b76a"></a>
<a href="https://github.com/langgenius/dify/" target="_blank">
<img alt="Issues closed" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=issues%20closed&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
<a href="https://github.com/langgenius/dify/discussions/" target="_blank">
<img alt="Discussion posts" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
</p>
<p align="center">
<a href="./README.md"><img alt="README in English" src="https://img.shields.io/badge/English-d9d9d9"></a>
<a href="./README_TW.md"><img alt="繁體中文文件" src="https://img.shields.io/badge/繁體中文-d9d9d9"></a>
<a href="./README_CN.md"><img alt="简体中文版自述文件" src="https://img.shields.io/badge/简体中文-d9d9d9"></a>
<a href="./README_JA.md"><img alt="日本語のREADME" src="https://img.shields.io/badge/日本語-d9d9d9"></a>
<a href="./README_ES.md"><img alt="README en Español" src="https://img.shields.io/badge/Español-d9d9d9"></a>
<a href="./README_FR.md"><img alt="README en Français" src="https://img.shields.io/badge/Français-d9d9d9"></a>
<a href="./README_KL.md"><img alt="README tlhIngan Hol" src="https://img.shields.io/badge/Klingon-d9d9d9"></a>
<a href="./README_KR.md"><img alt="README in Korean" src="https://img.shields.io/badge/한국어-d9d9d9"></a>
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_DE.md"><img alt="README in Deutsch" src="https://img.shields.io/badge/German-d9d9d9"></a>
</p>
Dify is an open-source LLM app development platform. Its intuitive interface combines agentic AI workflows, RAG pipelines, agent capabilities, model management, observability features, and more, letting you quickly move from prototype to production.
## Quick start
> Before installing Dify, make sure your machine meets the following minimum system requirements:
>
> - CPU >= 2 cores
> - RAM >= 4 GiB
</br>
The easiest way to start the Dify server is via [docker compose](docker/docker-compose.yaml). Before running Dify with the following commands, make sure [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your machine:
```bash
cd dify
cd docker
cp .env.example .env
docker compose up -d
```
After running, you can access the Dify dashboard in your browser at [http://localhost/install](http://localhost/install) and begin the initialization process.
### Seeking help
Please refer to our [FAQ](https://docs.dify.ai/getting-started/install-self-hosted/faqs) if you run into problems setting up Dify. Reach out to [the community and us](#community--contact) if you are still having trouble.
> If you would like to contribute to Dify or do additional development, see our [guide to deploying from source code](https://docs.dify.ai/getting-started/install-self-hosted/local-source-code).
## Core features
**1. Workflow**
Build and test powerful AI workflows on a visual canvas, leveraging all of the following features and more.
https://github.com/langgenius/dify/assets/13230914/356df23e-1604-483d-80a6-9517ece318aa
**2. Comprehensive model support**
Seamless integration with hundreds of proprietary / open-source LLMs from dozens of inference providers and self-hosted solutions, covering GPT, Mistral, Llama3, and any model compatible with the OpenAI API. A full list of supported model providers can be found [here](https://docs.dify.ai/getting-started/readme/model-providers).
![providers-v5](https://github.com/langgenius/dify/assets/13230914/5a17bdbe-097a-4100-8363-40255b70f6e3)
**3. Prompt IDE**
Intuitive interface for crafting prompts, comparing model performance, and adding additional features such as text-to-speech to a chat-based app.
**4. RAG Pipeline**
Extensive RAG capabilities that cover everything from document ingestion to retrieval, with built-in support for text extraction from PDFs, PPTs, and other common document formats.
**5. Agent capabilities**
You can define agents based on LLM Function Calling or ReAct, and add prebuilt or custom tools for the agent. Dify provides 50+ built-in tools for AI agents, such as Google Search, DALL·E, Stable Diffusion, and WolframAlpha.
**6. LLMOps**
Monitor and analyze application logs and long-term performance. You can continuously improve prompts, datasets, and models based on production data and annotations.
**7. Backend-as-a-Service**
All of Dify's features come with corresponding APIs, so you can easily integrate Dify into your own business logic.
## Feature comparison
<table style="width: 100%;">
<tr>
<th align="center">Feature</th>
<th align="center">Dify.AI</th>
<th align="center">LangChain</th>
<th align="center">Flowise</th>
<th align="center">OpenAI Assistants API</th>
</tr>
<tr>
<td align="center">Programming Approach</td>
<td align="center">API + App-oriented</td>
<td align="center">Python Code</td>
<td align="center">App-oriented</td>
<td align="center">API-oriented</td>
</tr>
<tr>
<td align="center">Supported LLMs</td>
<td align="center">Rich Variety</td>
<td align="center">Rich Variety</td>
<td align="center">Rich Variety</td>
<td align="center">OpenAI-only</td>
</tr>
<tr>
<td align="center">RAG 引擎</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">代理功能</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">工作流程</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">可觀察性</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">企業級功能 (SSO/存取控制)</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
<tr>
<td align="center">本地部署</td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
<td align="center"></td>
</tr>
</table>
## Using Dify
- **Cloud </br>**
We host a [Dify Cloud](https://dify.ai) service that anyone can try with zero setup. It provides all the capabilities of the self-deployed version, and includes 200 free GPT-4 calls in the sandbox plan.
- **Self-hosting Dify Community Edition</br>**
Quickly get Dify running in your environment with this [quick start guide](#quick-start).
Use our [documentation](https://docs.dify.ai) for further references and more in-depth instructions.
- **Dify for enterprises / organizations</br>**
We provide additional enterprise-centric features. [Log your questions for us through this chatbot](https://udify.app/chat/22L1zSxg6yW1cWQg) or [send us an email](mailto:business@dify.ai?subject=[GitHub]Business%20License%20Inquiry) to discuss enterprise needs.</br>
> For startups and small businesses using AWS, check out [Dify Premium on AWS Marketplace](https://aws.amazon.com/marketplace/pp/prodview-t22mebxzwjhu6) and deploy it to your own AWS VPC with one click. It is an affordable AMI offering with the option to create apps with custom logo and branding.
## Staying ahead
Star Dify on GitHub and be instantly notified of new releases.
![star-us](https://github.com/langgenius/dify/assets/13230914/b823edc1-6388-4e25-ad45-2f6b187adbb4)
## Advanced setup
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Additionally, you may need to adjust the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files that allow Dify to be deployed on Kubernetes.
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
### Using Terraform for deployment
Deploy Dify to a cloud platform with a single click using [terraform](https://www.terraform.io/).
### Azure Global
- [Azure Terraform by @nikawang](https://github.com/nikawang/dify-azure-terraform)
### Google Cloud
- [Google Cloud Terraform by @sotazum](https://github.com/DeNA/dify-google-cloud-terraform)
### Using AWS CDK for deployment
Deploy Dify to AWS with [CDK](https://aws.amazon.com/cdk/).
### AWS
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
At the same time, please consider supporting Dify by sharing it on social media and at events and conferences.
> We are looking for contributors to help translate Dify into languages other than Chinese and English. If you are interested in helping, please see the [i18n README](https://github.com/langgenius/dify/blob/main/web/i18n/README.md) for more information, and leave us a comment in the `global-users` channel of our [Discord Community Server](https://discord.gg/8Tpq4AcN9c).
## Community & contact
- [Github Discussion](https://github.com/langgenius/dify/discussions): Best for sharing feedback and asking questions.
- [GitHub Issues](https://github.com/langgenius/dify/issues): Best for reporting bugs you encounter using Dify.AI and proposing features. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
- [Discord](https://discord.gg/FngNHpbcY7): Best for sharing your applications and engaging with the community.
- [X(Twitter)](https://twitter.com/dify_ai): Best for sharing your applications and engaging with the community.
**Contributors**
<a href="https://github.com/langgenius/dify/graphs/contributors">
<img src="https://contrib.rocks/image?repo=langgenius/dify" />
</a>
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
## Security disclosure
To protect your privacy, please avoid posting security issues on GitHub. Send your questions to security@dify.ai and we will provide you with a more detailed answer.
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially the Apache 2.0 license with a few additional restrictions.


@ -45,6 +45,7 @@
<a href="./README_AR.md"><img alt="README بالعربية" src="https://img.shields.io/badge/العربية-d9d9d9"></a>
<a href="./README_TR.md"><img alt="Türkçe README" src="https://img.shields.io/badge/Türkçe-d9d9d9"></a>
<a href="./README_VI.md"><img alt="README Tiếng Việt" src="https://img.shields.io/badge/Ti%E1%BA%BFng%20Vi%E1%BB%87t-d9d9d9"></a>
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>


@ -427,7 +427,6 @@ PLUGIN_DAEMON_URL=http://127.0.0.1:5002
PLUGIN_REMOTE_INSTALL_PORT=5003
PLUGIN_REMOTE_INSTALL_HOST=localhost
PLUGIN_MAX_PACKAGE_SIZE=15728640
INNER_API_KEY=QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1
INNER_API_KEY_FOR_PLUGIN=QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y1
# Marketplace configuration


@ -7,7 +7,7 @@ line-length = 120
quote-style = "double"
[lint]
preview = true
preview = false
select = [
"B", # flake8-bugbear rules
"C4", # flake8-comprehensions
@ -18,7 +18,6 @@ select = [
"N", # pep8-naming
"PT", # flake8-pytest-style rules
"PLC0208", # iteration-over-set
"PLC2801", # unnecessary-dunder-call
"PLC0414", # useless-import-alias
"PLE0604", # invalid-all-object
"PLE0605", # invalid-all-format
@ -46,7 +45,6 @@ ignore = [
"E712", # true-false-comparison
"E721", # type-comparison
"E722", # bare-except
"E731", # lambda-assignment
"F821", # undefined-name
"F841", # unused-variable
"FURB113", # repeated-append


@ -55,9 +55,11 @@ RUN \
# basic environment
curl nodejs libgmp-dev libmpfr-dev libmpc-dev \
# For Security
# expat libldap-2.5-0 perl libsqlite3-0 zlib1g \
expat libldap-2.5-0 perl libsqlite3-0 zlib1g \
# install a chinese font to support the use of tools like matplotlib
fonts-noto-cjk \
# install a package to improve the accuracy of guessing mime type and file extension
media-types \
# install libmagic to support the use of python-magic guess MIMETYPE
libmagic1 \
&& apt-get autoremove -y \


@ -37,7 +37,13 @@
4. Create environment.
Dify API service uses [Poetry](https://python-poetry.org/docs/) to manage dependencies. You can execute `poetry shell` to activate the environment.
Dify API service uses [Poetry](https://python-poetry.org/docs/) to manage dependencies. Because `poetry shell` is no longer a built-in Poetry command, first install the shell plugin if you don't already have it, in order to run in a virtual environment:
```bash
poetry self add poetry-plugin-shell
```
Then you can execute `poetry shell` to activate the environment.
5. Install dependencies


@ -2,6 +2,7 @@ import logging
import time
from configs import dify_config
from contexts.wrapper import RecyclableContextVar
from dify_app import DifyApp
@ -16,6 +17,12 @@ def create_flask_app_with_configs() -> DifyApp:
dify_app = DifyApp(__name__)
dify_app.config.from_mapping(dify_config.model_dump())
# add before request hook
@dify_app.before_request
def before_request():
    # bump the per-thread recycle counter so RecyclableContextVar can detect recycled threads
RecyclableContextVar.increment_thread_recycles()
return dify_app


@ -707,12 +707,13 @@ def extract_unique_plugins(output_file: str, input_file: str):
@click.option(
"--output_file", prompt=True, help="The file to store the installed plugins.", default="installed_plugins.jsonl"
)
def install_plugins(input_file: str, output_file: str):
@click.option("--workers", prompt=True, help="The number of workers to install plugins.", default=100)
def install_plugins(input_file: str, output_file: str, workers: int):
"""
Install plugins.
"""
click.echo(click.style("Starting install plugins.", fg="white"))
PluginMigration.install_plugins(input_file, output_file)
PluginMigration.install_plugins(input_file, output_file, workers)
click.echo(click.style("Install plugins completed.", fg="green"))


@ -373,8 +373,8 @@ class HttpConfig(BaseSettings):
)
RESPECT_XFORWARD_HEADERS_ENABLED: bool = Field(
description="Enable or disable the X-Forwarded-For Proxy Fix middleware from Werkzeug"
" to respect X-* headers to redirect clients",
description="Enable handling of X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port headers"
" when the app is behind a single trusted reverse proxy.",
default=False,
)
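For context, a flag like this conventionally toggles Werkzeug's ProxyFix middleware named in the old description; a minimal sketch of that wiring, with the exact hook-up inside Dify's app factory assumed rather than shown in this diff:
```python
# hypothetical sketch of how RESPECT_XFORWARD_HEADERS_ENABLED is typically honored
from werkzeug.middleware.proxy_fix import ProxyFix

if dify_config.RESPECT_XFORWARD_HEADERS_ENABLED:
    # trust exactly one reverse proxy for X-Forwarded-For / -Proto / -Port
    app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_port=1)
```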
@ -389,11 +389,6 @@ class InnerAPIConfig(BaseSettings):
default=False,
)
INNER_API_KEY: Optional[str] = Field(
description="API key for accessing the internal API",
default=None,
)
class LoggingConfig(BaseSettings):
"""


@ -1,3 +1,4 @@
import os
from typing import Any, Literal, Optional
from urllib.parse import quote_plus
@ -166,6 +167,11 @@ class DatabaseConfig(BaseSettings):
default=False,
)
RETRIEVAL_SERVICE_EXECUTORS: NonNegativeInt = Field(
description="Number of processes for the retrieval service, default to CPU cores.",
default=os.cpu_count(),
)
@computed_field
def SQLALCHEMY_ENGINE_OPTIONS(self) -> dict[str, Any]:
return {


@ -1,6 +1,6 @@
from typing import Optional
from pydantic import Field, PositiveInt
from pydantic import Field
from pydantic_settings import BaseSettings
@ -9,16 +9,6 @@ class OracleConfig(BaseSettings):
Configuration settings for Oracle database
"""
ORACLE_HOST: Optional[str] = Field(
description="Hostname or IP address of the Oracle database server (e.g., 'localhost' or 'oracle.example.com')",
default=None,
)
ORACLE_PORT: PositiveInt = Field(
description="Port number on which the Oracle database server is listening (default is 1521)",
default=1521,
)
ORACLE_USER: Optional[str] = Field(
description="Username for authenticating with the Oracle database",
default=None,
@ -29,7 +19,28 @@ class OracleConfig(BaseSettings):
default=None,
)
ORACLE_DATABASE: Optional[str] = Field(
description="Name of the Oracle database or service to connect to (e.g., 'ORCL' or 'pdborcl')",
ORACLE_DSN: Optional[str] = Field(
description="Oracle database connection string. For traditional database, use format 'host:port/service_name'. "
"For autonomous database, use the service name from tnsnames.ora in the wallet",
default=None,
)
ORACLE_CONFIG_DIR: Optional[str] = Field(
description="Directory containing the tnsnames.ora configuration file. Only used in thin mode connection",
default=None,
)
ORACLE_WALLET_LOCATION: Optional[str] = Field(
description="Oracle wallet directory path containing the wallet files for secure connection",
default=None,
)
ORACLE_WALLET_PASSWORD: Optional[str] = Field(
description="Password to decrypt the Oracle wallet, if it is encrypted",
default=None,
)
ORACLE_IS_AUTONOMOUS: bool = Field(
description="Flag indicating whether connecting to Oracle Autonomous Database",
default=False,
)
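To make the two connection styles concrete, here is a hypothetical configuration sketch based purely on the field descriptions above; the import path and every value are illustrative assumptions:
```python
from configs.middleware.vdb.oracle_config import OracleConfig  # import path assumed

# Traditional database: DSN uses the 'host:port/service_name' format
cfg = OracleConfig(
    ORACLE_USER="dify",
    ORACLE_DSN="db.example.com:1521/ORCLPDB1",
)

# Autonomous database: DSN is a tnsnames.ora alias resolved from the wallet
adb = OracleConfig(
    ORACLE_USER="dify",
    ORACLE_DSN="mydb_low",
    ORACLE_CONFIG_DIR="/opt/oracle/wallet",  # directory containing tnsnames.ora (thin mode)
    ORACLE_WALLET_LOCATION="/opt/oracle/wallet",
    ORACLE_WALLET_PASSWORD="wallet-secret",
    ORACLE_IS_AUTONOMOUS=True,
)
```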


@ -9,7 +9,7 @@ class PackagingInfo(BaseSettings):
CURRENT_VERSION: str = Field(
description="Dify version",
default="1.0.0",
default="1.0.1",
)
COMMIT_SHA: str = Field(


@ -15,7 +15,7 @@ AUDIO_EXTENSIONS.extend([ext.upper() for ext in AUDIO_EXTENSIONS])
if dify_config.ETL_TYPE == "Unstructured":
DOCUMENT_EXTENSIONS = ["txt", "markdown", "md", "mdx", "pdf", "html", "htm", "xlsx", "xls"]
DOCUMENT_EXTENSIONS.extend(("docx", "csv", "eml", "msg", "pptx", "xml", "epub"))
DOCUMENT_EXTENSIONS.extend(("doc", "docx", "csv", "eml", "msg", "pptx", "xml", "epub"))
if dify_config.UNSTRUCTURED_API_URL:
DOCUMENT_EXTENSIONS.append("ppt")
DOCUMENT_EXTENSIONS.extend([ext.upper() for ext in DOCUMENT_EXTENSIONS])


@ -2,7 +2,10 @@ from contextvars import ContextVar
from threading import Lock
from typing import TYPE_CHECKING
from contexts.wrapper import RecyclableContextVar
if TYPE_CHECKING:
from core.model_runtime.entities.model_entities import AIModelEntity
from core.plugin.entities.plugin_daemon import PluginModelProviderEntity
from core.tools.plugin_tool.provider import PluginToolProviderController
from core.workflow.entities.variable_pool import VariablePool
@ -12,8 +15,25 @@ tenant_id: ContextVar[str] = ContextVar("tenant_id")
workflow_variable_pool: ContextVar["VariablePool"] = ContextVar("workflow_variable_pool")
plugin_tool_providers: ContextVar[dict[str, "PluginToolProviderController"]] = ContextVar("plugin_tool_providers")
plugin_tool_providers_lock: ContextVar[Lock] = ContextVar("plugin_tool_providers_lock")
"""
To avoid race-conditions caused by gunicorn thread recycling, using RecyclableContextVar to replace with
"""
plugin_tool_providers: RecyclableContextVar[dict[str, "PluginToolProviderController"]] = RecyclableContextVar(
ContextVar("plugin_tool_providers")
)
plugin_model_providers: ContextVar[list["PluginModelProviderEntity"] | None] = ContextVar("plugin_model_providers")
plugin_model_providers_lock: ContextVar[Lock] = ContextVar("plugin_model_providers_lock")
plugin_tool_providers_lock: RecyclableContextVar[Lock] = RecyclableContextVar(ContextVar("plugin_tool_providers_lock"))
plugin_model_providers: RecyclableContextVar[list["PluginModelProviderEntity"] | None] = RecyclableContextVar(
ContextVar("plugin_model_providers")
)
plugin_model_providers_lock: RecyclableContextVar[Lock] = RecyclableContextVar(
ContextVar("plugin_model_providers_lock")
)
plugin_model_schema_lock: RecyclableContextVar[Lock] = RecyclableContextVar(ContextVar("plugin_model_schema_lock"))
plugin_model_schemas: RecyclableContextVar[dict[str, "AIModelEntity"]] = RecyclableContextVar(
ContextVar("plugin_model_schemas")
)

api/contexts/wrapper.py (new file, 65 lines)

@ -0,0 +1,65 @@
from contextvars import ContextVar
from typing import Generic, TypeVar

T = TypeVar("T")


class HiddenValue:
    pass


_default = HiddenValue()


class RecyclableContextVar(Generic[T]):
    """
    RecyclableContextVar is a wrapper around ContextVar.
    It's safe to use in gunicorn with thread recycling, but features like `reset` are not available for now.

    NOTE: you need to call `increment_thread_recycles` before requests.
    """

    _thread_recycles: ContextVar[int] = ContextVar("thread_recycles")

    @classmethod
    def increment_thread_recycles(cls):
        try:
            recycles = cls._thread_recycles.get()
            cls._thread_recycles.set(recycles + 1)
        except LookupError:
            cls._thread_recycles.set(0)

    def __init__(self, context_var: ContextVar[T]):
        self._context_var = context_var
        self._updates = ContextVar[int](context_var.name + "_updates", default=0)

    def get(self, default: T | HiddenValue = _default) -> T:
        thread_recycles = self._thread_recycles.get(0)
        self_updates = self._updates.get()
        # if the thread was recycled since this var was last updated, catch the counter up
        if thread_recycles > self_updates:
            self._updates.set(thread_recycles)

        # the stored value is only valid if it was written after the last recycle
        if thread_recycles < self_updates:
            return self._context_var.get()
        else:
            # thread_recycles >= self_updates means the current context is stale
            if isinstance(default, HiddenValue) or default is _default:
                raise LookupError
            else:
                return default

    def set(self, value: T):
        # if `set` was never called before, self._updates may lag behind
        # cls._thread_recycles; increase it manually first
        thread_recycles = self._thread_recycles.get(0)
        self_updates = self._updates.get()
        if thread_recycles > self_updates:
            self._updates.set(thread_recycles)

        # mark this write as newer than the latest recycle
        if self._updates.get() == self._thread_recycles.get(0):
            self._updates.set(self._updates.get() + 1)

        # set the underlying context variable
        self._context_var.set(value)
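To illustrate the intended behavior, a small hypothetical usage sketch; it mirrors the `before_request` hook added in the app factory above, and the import path is the one this change introduces:
```python
from contextvars import ContextVar

from contexts.wrapper import RecyclableContextVar

counter: RecyclableContextVar[int] = RecyclableContextVar(ContextVar("counter"))

RecyclableContextVar.increment_thread_recycles()  # request 1 starts on this worker thread
counter.set(1)
assert counter.get() == 1  # a value written during this request is visible

RecyclableContextVar.increment_thread_recycles()  # thread recycled, request 2 starts
assert counter.get(default=0) == 0  # the stale value from request 1 is hidden
try:
    counter.get()  # without a default, a stale read raises LookupError
except LookupError:
    print("context was recycled")
```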


@ -71,7 +71,7 @@ from .app import (
from .auth import activate, data_source_bearer_auth, data_source_oauth, forgot_password, login, oauth
# Import billing controllers
from .billing import billing
from .billing import billing, compliance
# Import datasets controllers
from .datasets import (


@ -316,7 +316,7 @@ class AppTraceApi(Resource):
@account_initialization_required
def post(self, app_id):
# add app trace
if not current_user.is_admin_or_owner:
if not current_user.is_editing_role:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument("enabled", type=bool, required=True, location="json")


@ -2,7 +2,6 @@ from datetime import UTC, datetime
from flask_login import current_user # type: ignore
from flask_restful import Resource, marshal_with, reqparse # type: ignore
from sqlalchemy.orm import Session
from werkzeug.exceptions import Forbidden, NotFound
from constants.languages import supported_language
@ -51,37 +50,35 @@ class AppSite(Resource):
if not current_user.is_editor:
raise Forbidden()
with Session(db.engine) as session:
site = session.query(Site).filter(Site.app_id == app_model.id).first()
site = db.session.query(Site).filter(Site.app_id == app_model.id).first()
if not site:
raise NotFound
if not site:
raise NotFound
for attr_name in [
"title",
"icon_type",
"icon",
"icon_background",
"description",
"default_language",
"chat_color_theme",
"chat_color_theme_inverted",
"customize_domain",
"copyright",
"privacy_policy",
"custom_disclaimer",
"customize_token_strategy",
"prompt_public",
"show_workflow_steps",
"use_icon_as_answer_icon",
]:
value = args.get(attr_name)
if value is not None:
setattr(site, attr_name, value)
for attr_name in [
"title",
"icon_type",
"icon",
"icon_background",
"description",
"default_language",
"chat_color_theme",
"chat_color_theme_inverted",
"customize_domain",
"copyright",
"privacy_policy",
"custom_disclaimer",
"customize_token_strategy",
"prompt_public",
"show_workflow_steps",
"use_icon_as_answer_icon",
]:
value = args.get(attr_name)
if value is not None:
setattr(site, attr_name, value)
site.updated_by = current_user.id
site.updated_at = datetime.now(UTC).replace(tzinfo=None)
session.commit()
site.updated_by = current_user.id
site.updated_at = datetime.now(UTC).replace(tzinfo=None)
db.session.commit()
return site


@ -1,8 +1,10 @@
import json
import logging
from typing import cast
from flask import abort, request
from flask_restful import Resource, inputs, marshal_with, reqparse # type: ignore
from sqlalchemy.orm import Session
from werkzeug.exceptions import Forbidden, InternalServerError, NotFound
import services
@ -13,6 +15,7 @@ from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from core.app.apps.base_app_queue_manager import AppQueueManager
from core.app.entities.app_invoke_entities import InvokeFrom
from extensions.ext_database import db
from factories import variable_factory
from fields.workflow_fields import workflow_fields, workflow_pagination_fields
from fields.workflow_run_fields import workflow_run_node_execution_fields
@ -24,7 +27,7 @@ from models.account import Account
from models.model import AppMode
from services.app_generate_service import AppGenerateService
from services.errors.app import WorkflowHashNotEqualError
from services.workflow_service import WorkflowService
from services.workflow_service import DraftWorkflowDeletionError, WorkflowInUseError, WorkflowService
logger = logging.getLogger(__name__)
@ -246,6 +249,80 @@ class WorkflowDraftRunIterationNodeApi(Resource):
raise InternalServerError()
class AdvancedChatDraftRunLoopNodeApi(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT])
def post(self, app_model: App, node_id: str):
"""
Run draft workflow loop node
"""
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
if not isinstance(current_user, Account):
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument("inputs", type=dict, location="json")
args = parser.parse_args()
try:
response = AppGenerateService.generate_single_loop(
app_model=app_model, user=current_user, node_id=node_id, args=args, streaming=True
)
return helper.compact_generate_response(response)
except services.errors.conversation.ConversationNotExistsError:
raise NotFound("Conversation Not Exists.")
except services.errors.conversation.ConversationCompletedError:
raise ConversationCompletedError()
except ValueError as e:
raise e
except Exception:
logging.exception("internal server error.")
raise InternalServerError()
class WorkflowDraftRunLoopNodeApi(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.WORKFLOW])
def post(self, app_model: App, node_id: str):
"""
Run draft workflow loop node
"""
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
raise Forbidden()
if not isinstance(current_user, Account):
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument("inputs", type=dict, location="json")
args = parser.parse_args()
try:
response = AppGenerateService.generate_single_loop(
app_model=app_model, user=current_user, node_id=node_id, args=args, streaming=True
)
return helper.compact_generate_response(response)
except services.errors.conversation.ConversationNotExistsError:
raise NotFound("Conversation Not Exists.")
except services.errors.conversation.ConversationCompletedError:
raise ConversationCompletedError()
except ValueError as e:
raise e
except Exception:
logging.exception("internal server error.")
raise InternalServerError()
class DraftWorkflowRunApi(Resource):
@setup_required
@login_required
@ -365,10 +442,38 @@ class PublishedWorkflowApi(Resource):
if not isinstance(current_user, Account):
raise Forbidden()
workflow_service = WorkflowService()
workflow = workflow_service.publish_workflow(app_model=app_model, account=current_user)
parser = reqparse.RequestParser()
parser.add_argument("marked_name", type=str, required=False, default="", location="json")
parser.add_argument("marked_comment", type=str, required=False, default="", location="json")
args = parser.parse_args()
return {"result": "success", "created_at": TimestampField().format(workflow.created_at)}
# Validate name and comment length
if args.marked_name and len(args.marked_name) > 20:
raise ValueError("Marked name cannot exceed 20 characters")
if args.marked_comment and len(args.marked_comment) > 100:
raise ValueError("Marked comment cannot exceed 100 characters")
workflow_service = WorkflowService()
with Session(db.engine) as session:
workflow = workflow_service.publish_workflow(
session=session,
app_model=app_model,
account=current_user,
marked_name=args.marked_name or "",
marked_comment=args.marked_comment or "",
)
app_model.workflow_id = workflow.id
db.session.commit()
workflow_created_at = TimestampField().format(workflow.created_at)
session.commit()
return {
"result": "success",
"created_at": workflow_created_at,
}
class DefaultBlockConfigsApi(Resource):
@ -490,32 +595,193 @@ class PublishedAllWorkflowApi(Resource):
parser = reqparse.RequestParser()
parser.add_argument("page", type=inputs.int_range(1, 99999), required=False, default=1, location="args")
parser.add_argument("limit", type=inputs.int_range(1, 100), required=False, default=20, location="args")
parser.add_argument("user_id", type=str, required=False, location="args")
parser.add_argument("named_only", type=inputs.boolean, required=False, default=False, location="args")
args = parser.parse_args()
page = args.get("page")
limit = args.get("limit")
page = int(args.get("page", 1))
limit = int(args.get("limit", 10))
user_id = args.get("user_id")
named_only = args.get("named_only", False)
if user_id:
if user_id != current_user.id:
raise Forbidden()
user_id = cast(str, user_id)
workflow_service = WorkflowService()
workflows, has_more = workflow_service.get_all_published_workflow(app_model=app_model, page=page, limit=limit)
with Session(db.engine) as session:
workflows, has_more = workflow_service.get_all_published_workflow(
session=session,
app_model=app_model,
page=page,
limit=limit,
user_id=user_id,
named_only=named_only,
)
return {"items": workflows, "page": page, "limit": limit, "has_more": has_more}
return {
"items": workflows,
"page": page,
"limit": limit,
"has_more": has_more,
}
api.add_resource(DraftWorkflowApi, "/apps/<uuid:app_id>/workflows/draft")
api.add_resource(WorkflowConfigApi, "/apps/<uuid:app_id>/workflows/draft/config")
api.add_resource(AdvancedChatDraftWorkflowRunApi, "/apps/<uuid:app_id>/advanced-chat/workflows/draft/run")
api.add_resource(DraftWorkflowRunApi, "/apps/<uuid:app_id>/workflows/draft/run")
api.add_resource(WorkflowTaskStopApi, "/apps/<uuid:app_id>/workflow-runs/tasks/<string:task_id>/stop")
api.add_resource(DraftWorkflowNodeRunApi, "/apps/<uuid:app_id>/workflows/draft/nodes/<string:node_id>/run")
class WorkflowByIdApi(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
@marshal_with(workflow_fields)
def patch(self, app_model: App, workflow_id: str):
"""
Update workflow attributes
"""
# Check permission
if not current_user.is_editor:
raise Forbidden()
if not isinstance(current_user, Account):
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument("marked_name", type=str, required=False, location="json")
parser.add_argument("marked_comment", type=str, required=False, location="json")
args = parser.parse_args()
# Validate name and comment length
if args.marked_name and len(args.marked_name) > 20:
raise ValueError("Marked name cannot exceed 20 characters")
if args.marked_comment and len(args.marked_comment) > 100:
raise ValueError("Marked comment cannot exceed 100 characters")
# Prepare update data
update_data = {}
if args.get("marked_name") is not None:
update_data["marked_name"] = args["marked_name"]
if args.get("marked_comment") is not None:
update_data["marked_comment"] = args["marked_comment"]
if not update_data:
return {"message": "No valid fields to update"}, 400
workflow_service = WorkflowService()
# Create a session and manage the transaction
with Session(db.engine, expire_on_commit=False) as session:
workflow = workflow_service.update_workflow(
session=session,
workflow_id=workflow_id,
tenant_id=app_model.tenant_id,
account_id=current_user.id,
data=update_data,
)
if not workflow:
raise NotFound("Workflow not found")
# Commit the transaction in the controller
session.commit()
return workflow
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def delete(self, app_model: App, workflow_id: str):
"""
Delete workflow
"""
# Check permission
if not current_user.is_editor:
raise Forbidden()
if not isinstance(current_user, Account):
raise Forbidden()
workflow_service = WorkflowService()
# Create a session and manage the transaction
with Session(db.engine) as session:
try:
workflow_service.delete_workflow(
session=session, workflow_id=workflow_id, tenant_id=app_model.tenant_id
)
# Commit the transaction in the controller
session.commit()
except WorkflowInUseError as e:
abort(400, description=str(e))
except DraftWorkflowDeletionError as e:
abort(400, description=str(e))
except ValueError as e:
raise NotFound(str(e))
return None, 204
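For reference, a hypothetical console client call against the new PATCH route registered below; the host, app/workflow ids, and session cookie are placeholders, and the 20/100-character limits mirror the validation above:

import requests

# Hypothetical request; URL pieces and cookie value are assumptions.
resp = requests.patch(
    "https://cloud.example.com/console/api/apps/<app_id>/workflows/<workflow_id>",
    json={"marked_name": "baseline", "marked_comment": "pre-release snapshot"},
    cookies={"session": "<console-session-cookie>"},
)
print(resp.status_code)  # 200 with the workflow body, 400 if no fields were given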
api.add_resource(
DraftWorkflowApi,
"/apps/<uuid:app_id>/workflows/draft",
)
api.add_resource(
WorkflowConfigApi,
"/apps/<uuid:app_id>/workflows/draft/config",
)
api.add_resource(
AdvancedChatDraftWorkflowRunApi,
"/apps/<uuid:app_id>/advanced-chat/workflows/draft/run",
)
api.add_resource(
DraftWorkflowRunApi,
"/apps/<uuid:app_id>/workflows/draft/run",
)
api.add_resource(
WorkflowTaskStopApi,
"/apps/<uuid:app_id>/workflow-runs/tasks/<string:task_id>/stop",
)
api.add_resource(
DraftWorkflowNodeRunApi,
"/apps/<uuid:app_id>/workflows/draft/nodes/<string:node_id>/run",
)
api.add_resource(
AdvancedChatDraftRunIterationNodeApi,
"/apps/<uuid:app_id>/advanced-chat/workflows/draft/iteration/nodes/<string:node_id>/run",
)
api.add_resource(
WorkflowDraftRunIterationNodeApi,
"/apps/<uuid:app_id>/workflows/draft/iteration/nodes/<string:node_id>/run",
)
api.add_resource(PublishedWorkflowApi, "/apps/<uuid:app_id>/workflows/publish")
api.add_resource(PublishedAllWorkflowApi, "/apps/<uuid:app_id>/workflows")
api.add_resource(DefaultBlockConfigsApi, "/apps/<uuid:app_id>/workflows/default-workflow-block-configs")
api.add_resource(
AdvancedChatDraftRunLoopNodeApi,
"/apps/<uuid:app_id>/advanced-chat/workflows/draft/loop/nodes/<string:node_id>/run",
)
api.add_resource(
WorkflowDraftRunLoopNodeApi,
"/apps/<uuid:app_id>/workflows/draft/loop/nodes/<string:node_id>/run",
)
api.add_resource(
PublishedWorkflowApi,
"/apps/<uuid:app_id>/workflows/publish",
)
api.add_resource(
PublishedAllWorkflowApi,
"/apps/<uuid:app_id>/workflows",
)
api.add_resource(
DefaultBlockConfigsApi,
"/apps/<uuid:app_id>/workflows/default-workflow-block-configs",
)
api.add_resource(
DefaultBlockConfigApi,
"/apps/<uuid:app_id>/workflows/default-workflow-block-configs/<string:block_type>",
)
api.add_resource(
ConvertToWorkflowApi,
"/apps/<uuid:app_id>/convert-to-workflow",
)
api.add_resource(
WorkflowByIdApi,
"/apps/<uuid:app_id>/workflows/<string:workflow_id>",
)
api.add_resource(ConvertToWorkflowApi, "/apps/<uuid:app_id>/convert-to-workflow")

View File

@ -1,13 +1,18 @@
from datetime import datetime
from flask_restful import Resource, marshal_with, reqparse # type: ignore
from flask_restful.inputs import int_range # type: ignore
from sqlalchemy.orm import Session
from controllers.console import api
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from extensions.ext_database import db
from fields.workflow_app_log_fields import workflow_app_log_pagination_fields
from libs.login import login_required
from models import App
from models.model import AppMode
from models.workflow import WorkflowRunStatus
from services.workflow_app_service import WorkflowAppService
@ -24,17 +29,38 @@ class WorkflowAppLogApi(Resource):
parser = reqparse.RequestParser()
parser.add_argument("keyword", type=str, location="args")
parser.add_argument("status", type=str, choices=["succeeded", "failed", "stopped"], location="args")
parser.add_argument(
"created_at__before", type=str, location="args", help="Filter logs created before this timestamp"
)
parser.add_argument(
"created_at__after", type=str, location="args", help="Filter logs created after this timestamp"
)
parser.add_argument("page", type=int_range(1, 99999), default=1, location="args")
parser.add_argument("limit", type=int_range(1, 100), default=20, location="args")
args = parser.parse_args()
args.status = WorkflowRunStatus(args.status) if args.status else None
if args.created_at__before:
args.created_at__before = datetime.fromisoformat(args.created_at__before.replace("Z", "+00:00"))
if args.created_at__after:
args.created_at__after = datetime.fromisoformat(args.created_at__after.replace("Z", "+00:00"))
# get paginated workflow app logs
workflow_app_service = WorkflowAppService()
workflow_app_log_pagination = workflow_app_service.get_paginate_workflow_app_logs(
app_model=app_model, args=args
)
with Session(db.engine) as session:
workflow_app_log_pagination = workflow_app_service.get_paginate_workflow_app_logs(
session=session,
app_model=app_model,
keyword=args.keyword,
status=args.status,
created_at_before=args.created_at__before,
created_at_after=args.created_at__after,
page=args.page,
limit=args.limit,
)
return workflow_app_log_pagination
return workflow_app_log_pagination
api.add_resource(WorkflowAppLogApi, "/apps/<uuid:app_id>/workflow-app-logs")
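One note on the timestamp handling above: datetime.fromisoformat only accepts a trailing "Z" from Python 3.11 onward, so the controller normalizes it to an explicit UTC offset before parsing. A minimal illustration:

from datetime import datetime

raw = "2025-03-10T12:00:00Z"
# On Python < 3.11, fromisoformat() rejects the trailing "Z", so the
# suffix is rewritten to the equivalent "+00:00" offset first.
parsed = datetime.fromisoformat(raw.replace("Z", "+00:00"))
print(parsed.isoformat())  # 2025-03-10T12:00:00+00:00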

View File

@ -0,0 +1,35 @@
from flask import request
from flask_login import current_user # type: ignore
from flask_restful import Resource, reqparse # type: ignore
from libs.helper import extract_remote_ip
from libs.login import login_required
from services.billing_service import BillingService
from .. import api
from ..wraps import account_initialization_required, only_edition_cloud, setup_required
class ComplianceApi(Resource):
@setup_required
@login_required
@account_initialization_required
@only_edition_cloud
def get(self):
parser = reqparse.RequestParser()
parser.add_argument("doc_name", type=str, required=True, location="args")
args = parser.parse_args()
ip_address = extract_remote_ip(request)
device_info = request.headers.get("User-Agent", "Unknown device")
return BillingService.get_compliance_download_link(
doc_name=args.doc_name,
account_id=current_user.id,
tenant_id=current_user.current_tenant_id,
ip=ip_address,
device_info=device_info,
)
api.add_resource(ComplianceApi, "/compliance/download")
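For illustration, a hypothetical client call against the new endpoint; only the route and the required doc_name query argument come from the resource above, while the base URL, cookie, and document name are assumptions:

import requests

# Hypothetical request; URL, cookie, and doc_name value are placeholders.
resp = requests.get(
    "https://cloud.example.com/console/api/compliance/download",
    params={"doc_name": "soc2_type_ii"},
    cookies={"session": "<console-session-cookie>"},
)
print(resp.status_code, resp.json())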

View File

@ -122,7 +122,7 @@ class DataSourceNotionListApi(Resource):
if dataset.data_source_type != "notion_import":
raise ValueError("Dataset is not notion type.")
documents = session.execute(
documents = session.scalars(
select(Document).filter_by(
dataset_id=dataset_id,
tenant_id=current_user.current_tenant_id,
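Context for the one-word change above: execute() returns Row tuples that each wrap the mapped object, while scalars() yields the objects themselves. A small sketch, assuming a mapped Document model and an open session as in the surrounding code:

from sqlalchemy import select

stmt = select(Document).filter_by(dataset_id=dataset_id)
rows = session.execute(stmt).all()   # [(Document,), (Document,), ...]
docs = session.scalars(stmt).all()   # [Document, Document, ...]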

View File

@ -10,7 +10,12 @@ from controllers.console import api
from controllers.console.apikey import api_key_fields, api_key_list
from controllers.console.app.error import ProviderNotInitializeError
from controllers.console.datasets.error import DatasetInUseError, DatasetNameDuplicateError, IndexingEstimateError
from controllers.console.wraps import account_initialization_required, enterprise_license_required, setup_required
from controllers.console.wraps import (
account_initialization_required,
cloud_edition_billing_rate_limit_check,
enterprise_license_required,
setup_required,
)
from core.errors.error import LLMBadRequestError, ProviderTokenNotInitError
from core.indexing_runner import IndexingRunner
from core.model_runtime.entities.model_entities import ModelType
@ -96,6 +101,7 @@ class DatasetListApi(Resource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def post(self):
parser = reqparse.RequestParser()
parser.add_argument(
@ -210,6 +216,7 @@ class DatasetApi(Resource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def patch(self, dataset_id):
dataset_id_str = str(dataset_id)
dataset = DatasetService.get_dataset(dataset_id_str)
@ -276,7 +283,11 @@ class DatasetApi(Resource):
data = request.get_json()
# check embedding model setting
if data.get("indexing_technique") == "high_quality":
if (
data.get("indexing_technique") == "high_quality"
and data.get("embedding_model_provider") is not None
and data.get("embedding_model") is not None
):
DatasetService.check_embedding_model_setting(
dataset.tenant_id, data.get("embedding_model_provider"), data.get("embedding_model")
)
@ -313,6 +324,7 @@ class DatasetApi(Resource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def delete(self, dataset_id):
dataset_id_str = str(dataset_id)
@ -623,7 +635,6 @@ class DatasetRetrievalSettingApi(Resource):
match vector_type:
case (
VectorType.RELYT
| VectorType.PGVECTOR
| VectorType.TIDB_VECTOR
| VectorType.CHROMA
| VectorType.TENCENT

View File

@ -26,6 +26,7 @@ from controllers.console.datasets.error import (
)
from controllers.console.wraps import (
account_initialization_required,
cloud_edition_billing_rate_limit_check,
cloud_edition_billing_resource_check,
setup_required,
)
@ -242,6 +243,7 @@ class DatasetDocumentListApi(Resource):
@account_initialization_required
@marshal_with(documents_and_batch_fields)
@cloud_edition_billing_resource_check("vector_space")
@cloud_edition_billing_rate_limit_check("knowledge")
def post(self, dataset_id):
dataset_id = str(dataset_id)
@ -297,6 +299,7 @@ class DatasetDocumentListApi(Resource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def delete(self, dataset_id):
dataset_id = str(dataset_id)
dataset = DatasetService.get_dataset(dataset_id)
@ -320,6 +323,7 @@ class DatasetInitApi(Resource):
@account_initialization_required
@marshal_with(dataset_and_document_fields)
@cloud_edition_billing_resource_check("vector_space")
@cloud_edition_billing_rate_limit_check("knowledge")
def post(self):
# The role of the current user in the ta table must be admin, owner, or editor
if not current_user.is_editor:
@ -694,6 +698,7 @@ class DocumentProcessingApi(DocumentResource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def patch(self, dataset_id, document_id, action):
dataset_id = str(dataset_id)
document_id = str(document_id)
@ -730,6 +735,7 @@ class DocumentDeleteApi(DocumentResource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def delete(self, dataset_id, document_id):
dataset_id = str(dataset_id)
document_id = str(document_id)
@ -798,6 +804,7 @@ class DocumentStatusApi(DocumentResource):
@login_required
@account_initialization_required
@cloud_edition_billing_resource_check("vector_space")
@cloud_edition_billing_rate_limit_check("knowledge")
def patch(self, dataset_id, action):
dataset_id = str(dataset_id)
dataset = DatasetService.get_dataset(dataset_id)
@ -893,6 +900,7 @@ class DocumentPauseApi(DocumentResource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def patch(self, dataset_id, document_id):
"""pause document."""
dataset_id = str(dataset_id)
@ -925,6 +933,7 @@ class DocumentRecoverApi(DocumentResource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def patch(self, dataset_id, document_id):
"""recover document."""
dataset_id = str(dataset_id)
@ -954,6 +963,7 @@ class DocumentRetryApi(DocumentResource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def post(self, dataset_id):
"""retry document."""

View File

@ -19,6 +19,7 @@ from controllers.console.datasets.error import (
from controllers.console.wraps import (
account_initialization_required,
cloud_edition_billing_knowledge_limit_check,
cloud_edition_billing_rate_limit_check,
cloud_edition_billing_resource_check,
setup_required,
)
@ -106,6 +107,7 @@ class DatasetDocumentSegmentListApi(Resource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def delete(self, dataset_id, document_id):
# check dataset
dataset_id = str(dataset_id)
@ -137,6 +139,7 @@ class DatasetDocumentSegmentApi(Resource):
@login_required
@account_initialization_required
@cloud_edition_billing_resource_check("vector_space")
@cloud_edition_billing_rate_limit_check("knowledge")
def patch(self, dataset_id, document_id, action):
dataset_id = str(dataset_id)
dataset = DatasetService.get_dataset(dataset_id)
@ -191,6 +194,7 @@ class DatasetDocumentSegmentAddApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check("vector_space")
@cloud_edition_billing_knowledge_limit_check("add_segment")
@cloud_edition_billing_rate_limit_check("knowledge")
def post(self, dataset_id, document_id):
# check dataset
dataset_id = str(dataset_id)
@ -240,6 +244,7 @@ class DatasetDocumentSegmentUpdateApi(Resource):
@login_required
@account_initialization_required
@cloud_edition_billing_resource_check("vector_space")
@cloud_edition_billing_rate_limit_check("knowledge")
def patch(self, dataset_id, document_id, segment_id):
# check dataset
dataset_id = str(dataset_id)
@ -299,6 +304,7 @@ class DatasetDocumentSegmentUpdateApi(Resource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def delete(self, dataset_id, document_id, segment_id):
# check dataset
dataset_id = str(dataset_id)
@ -336,6 +342,7 @@ class DatasetDocumentSegmentBatchImportApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check("vector_space")
@cloud_edition_billing_knowledge_limit_check("add_segment")
@cloud_edition_billing_rate_limit_check("knowledge")
def post(self, dataset_id, document_id):
# check dataset
dataset_id = str(dataset_id)
@ -402,6 +409,7 @@ class ChildChunkAddApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check("vector_space")
@cloud_edition_billing_knowledge_limit_check("add_segment")
@cloud_edition_billing_rate_limit_check("knowledge")
def post(self, dataset_id, document_id, segment_id):
# check dataset
dataset_id = str(dataset_id)
@ -499,6 +507,7 @@ class ChildChunkAddApi(Resource):
@login_required
@account_initialization_required
@cloud_edition_billing_resource_check("vector_space")
@cloud_edition_billing_rate_limit_check("knowledge")
def patch(self, dataset_id, document_id, segment_id):
# check dataset
dataset_id = str(dataset_id)
@ -542,6 +551,7 @@ class ChildChunkUpdateApi(Resource):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def delete(self, dataset_id, document_id, segment_id, child_chunk_id):
# check dataset
dataset_id = str(dataset_id)
@ -586,6 +596,7 @@ class ChildChunkUpdateApi(Resource):
@login_required
@account_initialization_required
@cloud_edition_billing_resource_check("vector_space")
@cloud_edition_billing_rate_limit_check("knowledge")
def patch(self, dataset_id, document_id, segment_id, child_chunk_id):
# check dataset
dataset_id = str(dataset_id)

View File

@ -2,7 +2,11 @@ from flask_restful import Resource # type: ignore
from controllers.console import api
from controllers.console.datasets.hit_testing_base import DatasetsHitTestingBase
from controllers.console.wraps import account_initialization_required, setup_required
from controllers.console.wraps import (
account_initialization_required,
cloud_edition_billing_rate_limit_check,
setup_required,
)
from libs.login import login_required
@ -10,6 +14,7 @@ class HitTestingApi(Resource, DatasetsHitTestingBase):
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_rate_limit_check("knowledge")
def post(self, dataset_id):
dataset_id_str = str(dataset_id)

View File

@ -101,3 +101,9 @@ class AccountInFreezeError(BaseHTTPException):
"This email account has been deleted within the past 30 days"
"and is temporarily unavailable for new account registration."
)
class ComplianceRateLimitError(BaseHTTPException):
error_code = "compliance_rate_limit"
description = "Rate limit exceeded for downloading compliance report."
code = 429

View File

@ -26,6 +26,7 @@ from libs.helper import TimestampField
from libs.login import login_required
from models.account import Tenant, TenantStatus
from services.account_service import TenantService
from services.feature_service import FeatureService
from services.file_service import FileService
from services.workspace_service import WorkspaceService
@ -68,6 +69,11 @@ class TenantListApi(Resource):
tenants = TenantService.get_join_tenants(current_user)
for tenant in tenants:
features = FeatureService.get_features(tenant.id)
if features.billing.enabled:
tenant.plan = features.billing.subscription.plan
else:
tenant.plan = "sandbox"
if tenant.id == current_user.current_tenant_id:
tenant.current = True # Set current=True for current tenant
return {"workspaces": marshal(tenants, tenants_fields)}, 200

View File

@ -1,5 +1,6 @@
import json
import os
import time
from functools import wraps
from flask import abort, request
@ -8,6 +9,8 @@ from flask_login import current_user # type: ignore
from configs import dify_config
from controllers.console.workspace.error import AccountNotInitializedError
from extensions.ext_database import db
from extensions.ext_redis import redis_client
from models.dataset import RateLimitLog
from models.model import DifySetup
from services.feature_service import FeatureService, LicenseStatus
from services.operation_service import OperationService
@ -67,7 +70,9 @@ def cloud_edition_billing_resource_check(resource: str):
elif resource == "apps" and 0 < apps.limit <= apps.size:
abort(403, "The number of apps has reached the limit of your subscription.")
elif resource == "vector_space" and 0 < vector_space.limit <= vector_space.size:
abort(403, "The capacity of the vector space has reached the limit of your subscription.")
abort(
403, "The capacity of the knowledge storage space has reached the limit of your subscription."
)
elif resource == "documents" and 0 < documents_upload_quota.limit <= documents_upload_quota.size:
# The file upload API is used in multiple places,
# so we need to check whether the request originates from datasets
@ -112,6 +117,41 @@ def cloud_edition_billing_knowledge_limit_check(resource: str):
return interceptor
def cloud_edition_billing_rate_limit_check(resource: str):
def interceptor(view):
@wraps(view)
def decorated(*args, **kwargs):
if resource == "knowledge":
knowledge_rate_limit = FeatureService.get_knowledge_rate_limit(current_user.current_tenant_id)
if knowledge_rate_limit.enabled:
current_time = int(time.time() * 1000)
key = f"rate_limit_{current_user.current_tenant_id}"
redis_client.zadd(key, {current_time: current_time})
redis_client.zremrangebyscore(key, 0, current_time - 60000)
request_count = redis_client.zcard(key)
if request_count > knowledge_rate_limit.limit:
# add ratelimit record
rate_limit_log = RateLimitLog(
tenant_id=current_user.current_tenant_id,
subscription_plan=knowledge_rate_limit.subscription_plan,
operation="knowledge",
)
db.session.add(rate_limit_log)
db.session.commit()
abort(
403, "Sorry, you have reached the knowledge base request rate limit of your subscription."
)
return view(*args, **kwargs)
return decorated
return interceptor
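The check above implements a sliding one-minute window on a Redis sorted set: each request is added with its millisecond timestamp as both member and score, entries older than 60 s are pruned, and the remaining cardinality is the request count. A self-contained sketch of the same idea, assuming a local Redis; the function and key names are illustrative:

import time

import redis

client = redis.Redis()

def within_sliding_window(tenant_id: str, limit: int, window_ms: int = 60_000) -> bool:
    now_ms = int(time.time() * 1000)
    key = f"rate_limit_{tenant_id}"
    # Record this request; two requests in the same millisecond share a member,
    # which slightly undercounts, exactly as in the decorator above.
    client.zadd(key, {now_ms: now_ms})
    # Drop entries that fell out of the window, then count what remains.
    client.zremrangebyscore(key, 0, now_ms - window_ms)
    return client.zcard(key) <= limit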
def cloud_utm_record(view):
@wraps(view)
def decorated(*args, **kwargs):

View File

@ -1,3 +1,5 @@
from urllib.parse import quote
from flask import Response, request
from flask_restful import Resource, reqparse # type: ignore
from werkzeug.exceptions import NotFound
@ -71,7 +73,8 @@ class FilePreviewApi(Resource):
if upload_file.size > 0:
response.headers["Content-Length"] = str(upload_file.size)
if args["as_attachment"]:
response.headers["Content-Disposition"] = f"attachment; filename={upload_file.name}"
encoded_filename = quote(upload_file.name)
response.headers["Content-Disposition"] = f"attachment; filename*=UTF-8''{encoded_filename}"
return response
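The filename*=UTF-8''… form is the RFC 5987/6266 way to carry non-ASCII filenames in Content-Disposition; a bare filename= with raw UTF-8 bytes can be mangled or truncated by some clients. A quick illustration:

from urllib.parse import quote

name = "財務報告 2025.pdf"
header = f"attachment; filename*=UTF-8''{quote(name)}"
# attachment; filename*=UTF-8''%E8%B2%A1%E5%8B%99%E5%A0%B1%E5%91%8A%202025.pdf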

View File

@ -50,8 +50,8 @@ class EnterpriseWorkspaceNoOwnerEmail(Resource):
"plan": tenant.plan,
"status": tenant.status,
"custom_config": json.loads(tenant.custom_config) if tenant.custom_config else {},
"created_at": tenant.created_at.isoformat() if tenant.created_at else None,
"updated_at": tenant.updated_at.isoformat() if tenant.updated_at else None,
"created_at": tenant.created_at.isoformat() + "Z" if tenant.created_at else None,
"updated_at": tenant.updated_at.isoformat() + "Z" if tenant.updated_at else None,
}
return {

View File

@ -10,6 +10,7 @@ from controllers.service_api.app.error import NotChatAppError
from controllers.service_api.wraps import FetchUserArg, WhereisUserArg, validate_app_token
from core.app.entities.app_invoke_entities import InvokeFrom
from fields.conversation_fields import message_file_fields
from fields.message_fields import feedback_fields, retriever_resource_fields
from fields.raws import FilesContainedField
from libs.helper import TimestampField, uuid_value
from models.model import App, AppMode, EndUser
@ -18,26 +19,6 @@ from services.message_service import MessageService
class MessageListApi(Resource):
feedback_fields = {"rating": fields.String}
retriever_resource_fields = {
"id": fields.String,
"message_id": fields.String,
"position": fields.Integer,
"dataset_id": fields.String,
"dataset_name": fields.String,
"document_id": fields.String,
"document_name": fields.String,
"data_source_type": fields.String,
"segment_id": fields.String,
"score": fields.Float,
"hit_count": fields.Integer,
"word_count": fields.Integer,
"segment_position": fields.Integer,
"index_node_hash": fields.String,
"content": fields.String,
"created_at": TimestampField,
}
agent_thought_fields = {
"id": fields.String,
"chain_id": fields.String,

View File

@ -1,7 +1,9 @@
import logging
from datetime import datetime
from flask_restful import Resource, fields, marshal_with, reqparse # type: ignore
from flask_restful.inputs import int_range # type: ignore
from sqlalchemy.orm import Session
from werkzeug.exceptions import InternalServerError
from controllers.service_api import api
@ -25,7 +27,7 @@ from extensions.ext_database import db
from fields.workflow_app_log_fields import workflow_app_log_pagination_fields
from libs import helper
from models.model import App, AppMode, EndUser
from models.workflow import WorkflowRun
from models.workflow import WorkflowRun, WorkflowRunStatus
from services.app_generate_service import AppGenerateService
from services.workflow_app_service import WorkflowAppService
@ -125,17 +127,34 @@ class WorkflowAppLogApi(Resource):
parser = reqparse.RequestParser()
parser.add_argument("keyword", type=str, location="args")
parser.add_argument("status", type=str, choices=["succeeded", "failed", "stopped"], location="args")
parser.add_argument("created_at__before", type=str, location="args")
parser.add_argument("created_at__after", type=str, location="args")
parser.add_argument("page", type=int_range(1, 99999), default=1, location="args")
parser.add_argument("limit", type=int_range(1, 100), default=20, location="args")
args = parser.parse_args()
args.status = WorkflowRunStatus(args.status) if args.status else None
if args.created_at__before:
args.created_at__before = datetime.fromisoformat(args.created_at__before.replace("Z", "+00:00"))
if args.created_at__after:
args.created_at__after = datetime.fromisoformat(args.created_at__after.replace("Z", "+00:00"))
# get paginated workflow app logs
workflow_app_service = WorkflowAppService()
workflow_app_log_pagination = workflow_app_service.get_paginate_workflow_app_logs(
app_model=app_model, args=args
)
with Session(db.engine) as session:
workflow_app_log_pagination = workflow_app_service.get_paginate_workflow_app_logs(
session=session,
app_model=app_model,
keyword=args.keyword,
status=args.status,
created_at_before=args.created_at__before,
created_at_after=args.created_at__after,
page=args.page,
limit=args.limit,
)
return workflow_app_log_pagination
return workflow_app_log_pagination
api.add_resource(WorkflowRunApi, "/workflows/run")

View File

@ -336,6 +336,10 @@ class DocumentUpdateByFileApi(DatasetApiResource):
if not dataset:
raise ValueError("Dataset is not exist.")
# indexing_technique is already set in dataset since this is an update
args["indexing_technique"] = dataset.indexing_technique
if "file" in request.files:
# save file info
file = request.files["file"]

View File

@ -1,3 +1,4 @@
import time
from collections.abc import Callable
from datetime import UTC, datetime, timedelta
from enum import Enum
@ -13,8 +14,10 @@ from sqlalchemy.orm import Session
from werkzeug.exceptions import Forbidden, Unauthorized
from extensions.ext_database import db
from extensions.ext_redis import redis_client
from libs.login import _get_user
from models.account import Account, Tenant, TenantAccountJoin, TenantStatus
from models.dataset import RateLimitLog
from models.model import ApiToken, App, EndUser
from services.feature_service import FeatureService
@ -139,6 +142,43 @@ def cloud_edition_billing_knowledge_limit_check(resource: str, api_token_type: s
return interceptor
def cloud_edition_billing_rate_limit_check(resource: str, api_token_type: str):
def interceptor(view):
@wraps(view)
def decorated(*args, **kwargs):
api_token = validate_and_get_api_token(api_token_type)
if resource == "knowledge":
knowledge_rate_limit = FeatureService.get_knowledge_rate_limit(api_token.tenant_id)
if knowledge_rate_limit.enabled:
current_time = int(time.time() * 1000)
key = f"rate_limit_{api_token.tenant_id}"
redis_client.zadd(key, {current_time: current_time})
redis_client.zremrangebyscore(key, 0, current_time - 60000)
request_count = redis_client.zcard(key)
if request_count > knowledge_rate_limit.limit:
# add ratelimit record
rate_limit_log = RateLimitLog(
tenant_id=api_token.tenant_id,
subscription_plan=knowledge_rate_limit.subscription_plan,
operation="knowledge",
)
db.session.add(rate_limit_log)
db.session.commit()
raise Forbidden(
"Sorry, you have reached the knowledge base request rate limit of your subscription."
)
return view(*args, **kwargs)
return decorated
return interceptor
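As with the console variant, this decorator wraps individual endpoint methods; a hypothetical application, where the "dataset" api_token_type value is an assumption for illustration:

class ExampleDocumentApi(DatasetApiResource):
    # Hypothetical endpoint; "dataset" as the api_token_type is assumed.
    @cloud_edition_billing_rate_limit_check("knowledge", "dataset")
    def post(self, tenant_id, dataset_id):
        ...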
def validate_dataset_token(view=None):
def decorator(view):
@wraps(view)
@ -154,7 +194,7 @@ def validate_dataset_token(view=None):
) # TODO: only owner information is required, so only one is returned.
if tenant_account_join:
tenant, ta = tenant_account_join
account = Account.query.filter_by(id=ta.account_id).first()
account = db.session.query(Account).filter(Account.id == ta.account_id).first()
# Login admin
if account:
account.current_tenant = tenant

View File

@ -21,7 +21,7 @@ from core.app.entities.app_invoke_entities import InvokeFrom
from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError
from core.model_runtime.errors.invoke import InvokeError
from fields.conversation_fields import message_file_fields
from fields.message_fields import agent_thought_fields
from fields.message_fields import agent_thought_fields, feedback_fields, retriever_resource_fields
from fields.raws import FilesContainedField
from libs import helper
from libs.helper import TimestampField, uuid_value
@ -34,27 +34,6 @@ from services.message_service import MessageService
class MessageListApi(WebApiResource):
feedback_fields = {"rating": fields.String}
retriever_resource_fields = {
"id": fields.String,
"message_id": fields.String,
"position": fields.Integer,
"dataset_id": fields.String,
"dataset_name": fields.String,
"document_id": fields.String,
"document_name": fields.String,
"data_source_type": fields.String,
"segment_id": fields.String,
"score": fields.Float,
"hit_count": fields.Integer,
"word_count": fields.Integer,
"segment_position": fields.Integer,
"index_node_hash": fields.String,
"content": fields.String,
"created_at": TimestampField,
}
message_fields = {
"id": fields.String,
"conversation_id": fields.String,

View File

@ -1,7 +1,7 @@
from enum import StrEnum
from typing import Any, Optional, Union
from pydantic import BaseModel
from pydantic import BaseModel, Field
from core.tools.entities.tool_entities import ToolInvokeMessage, ToolProviderType
@ -14,7 +14,7 @@ class AgentToolEntity(BaseModel):
provider_type: ToolProviderType
provider_id: str
tool_name: str
tool_parameters: dict[str, Any] = {}
tool_parameters: dict[str, Any] = Field(default_factory=dict)
plugin_unique_identifier: str | None = None
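The Field(default_factory=dict) changes in this and the following files are the idiomatic pydantic way to declare mutable defaults: pydantic already copies a literal {} per instance (unlike plain Python class attributes), so behavior is unchanged, but the factory form avoids the copy and reads unambiguously. A small sketch:

from pydantic import BaseModel, Field

class ToolEntity(BaseModel):
    # A fresh dict is built for every instance; nothing is shared.
    tool_parameters: dict = Field(default_factory=dict)

a, b = ToolEntity(), ToolEntity()
a.tool_parameters["k"] = "v"
assert b.tool_parameters == {}  # instances do not share the default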

View File

@ -2,9 +2,9 @@ from collections.abc import Mapping
from typing import Any
from core.app.app_config.entities import ModelConfigEntity
from core.entities import DEFAULT_PLUGIN_ID
from core.model_runtime.entities.model_entities import ModelPropertyKey, ModelType
from core.model_runtime.model_providers.model_provider_factory import ModelProviderFactory
from core.plugin.entities.plugin import ModelProviderID
from core.provider_manager import ProviderManager
@ -61,9 +61,7 @@ class ModelConfigManager:
raise ValueError(f"model.provider is required and must be in {str(model_provider_names)}")
if "/" not in config["model"]["provider"]:
config["model"]["provider"] = (
f"{DEFAULT_PLUGIN_ID}/{config['model']['provider']}/{config['model']['provider']}"
)
config["model"]["provider"] = str(ModelProviderID(config["model"]["provider"]))
if config["model"]["provider"] not in model_provider_names:
raise ValueError(f"model.provider is required and must be in {str(model_provider_names)}")

View File

@ -17,8 +17,8 @@ class ModelConfigEntity(BaseModel):
provider: str
model: str
mode: Optional[str] = None
parameters: dict[str, Any] = {}
stop: list[str] = []
parameters: dict[str, Any] = Field(default_factory=dict)
stop: list[str] = Field(default_factory=list)
class AdvancedChatMessageEntity(BaseModel):
@ -132,7 +132,7 @@ class ExternalDataVariableEntity(BaseModel):
variable: str
type: str
config: dict[str, Any] = {}
config: dict[str, Any] = Field(default_factory=dict)
class DatasetRetrieveConfigEntity(BaseModel):
@ -188,7 +188,7 @@ class SensitiveWordAvoidanceEntity(BaseModel):
"""
type: str
config: dict[str, Any] = {}
config: dict[str, Any] = Field(default_factory=dict)
class TextToSpeechEntity(BaseModel):

View File

@ -140,9 +140,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
app_config=app_config,
file_upload_config=file_extra_config,
conversation_id=conversation.id if conversation else None,
inputs=conversation.inputs
if conversation
else self._prepare_user_inputs(
inputs=self._prepare_user_inputs(
user_inputs=inputs, variables=app_config.variables, tenant_id=app_model.tenant_id
),
query=query,
@ -225,6 +223,61 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
stream=streaming,
)
def single_loop_generate(
self,
app_model: App,
workflow: Workflow,
node_id: str,
user: Account | EndUser,
args: Mapping,
streaming: bool = True,
) -> Mapping[str, Any] | Generator[str | Mapping[str, Any], Any, None]:
"""
Generate App response.
:param app_model: App
:param workflow: Workflow
:param user: account or end user
:param args: request args
:param invoke_from: invoke from source
:param stream: is stream
"""
if not node_id:
raise ValueError("node_id is required")
if args.get("inputs") is None:
raise ValueError("inputs is required")
# convert to app config
app_config = AdvancedChatAppConfigManager.get_app_config(app_model=app_model, workflow=workflow)
# init application generate entity
application_generate_entity = AdvancedChatAppGenerateEntity(
task_id=str(uuid.uuid4()),
app_config=app_config,
conversation_id=None,
inputs={},
query="",
files=[],
user_id=user.id,
stream=streaming,
invoke_from=InvokeFrom.DEBUGGER,
extras={"auto_generate_conversation_name": False},
single_loop_run=AdvancedChatAppGenerateEntity.SingleLoopRunEntity(node_id=node_id, inputs=args["inputs"]),
)
contexts.tenant_id.set(application_generate_entity.app_config.tenant_id)
contexts.plugin_tool_providers.set({})
contexts.plugin_tool_providers_lock.set(threading.Lock())
return self._generate(
workflow=workflow,
user=user,
invoke_from=InvokeFrom.DEBUGGER,
application_generate_entity=application_generate_entity,
conversation=None,
stream=streaming,
)
def _generate(
self,
*,

View File

@ -79,6 +79,13 @@ class AdvancedChatAppRunner(WorkflowBasedAppRunner):
node_id=self.application_generate_entity.single_iteration_run.node_id,
user_inputs=dict(self.application_generate_entity.single_iteration_run.inputs),
)
elif self.application_generate_entity.single_loop_run:
# if only single loop run is requested
graph, variable_pool = self._get_graph_and_variable_pool_of_single_loop(
workflow=workflow,
node_id=self.application_generate_entity.single_loop_run.node_id,
user_inputs=dict(self.application_generate_entity.single_loop_run.inputs),
)
else:
inputs = self.application_generate_entity.inputs
query = self.application_generate_entity.query

View File

@ -23,10 +23,14 @@ from core.app.entities.queue_entities import (
QueueIterationCompletedEvent,
QueueIterationNextEvent,
QueueIterationStartEvent,
QueueLoopCompletedEvent,
QueueLoopNextEvent,
QueueLoopStartEvent,
QueueMessageReplaceEvent,
QueueNodeExceptionEvent,
QueueNodeFailedEvent,
QueueNodeInIterationFailedEvent,
QueueNodeInLoopFailedEvent,
QueueNodeRetryEvent,
QueueNodeStartedEvent,
QueueNodeSucceededEvent,
@ -372,7 +376,13 @@ class AdvancedChatAppGenerateTaskPipeline:
if node_finish_resp:
yield node_finish_resp
elif isinstance(event, QueueNodeFailedEvent | QueueNodeInIterationFailedEvent | QueueNodeExceptionEvent):
elif isinstance(
event,
QueueNodeFailedEvent
| QueueNodeInIterationFailedEvent
| QueueNodeInLoopFailedEvent
| QueueNodeExceptionEvent,
):
with Session(db.engine, expire_on_commit=False) as session:
workflow_node_execution = self._workflow_cycle_manager._handle_workflow_node_execution_failed(
session=session, event=event
@ -472,6 +482,54 @@ class AdvancedChatAppGenerateTaskPipeline:
)
yield iter_finish_resp
elif isinstance(event, QueueLoopStartEvent):
if not self._workflow_run_id:
raise ValueError("workflow run not initialized.")
with Session(db.engine, expire_on_commit=False) as session:
workflow_run = self._workflow_cycle_manager._get_workflow_run(
session=session, workflow_run_id=self._workflow_run_id
)
loop_start_resp = self._workflow_cycle_manager._workflow_loop_start_to_stream_response(
session=session,
task_id=self._application_generate_entity.task_id,
workflow_run=workflow_run,
event=event,
)
yield loop_start_resp
elif isinstance(event, QueueLoopNextEvent):
if not self._workflow_run_id:
raise ValueError("workflow run not initialized.")
with Session(db.engine, expire_on_commit=False) as session:
workflow_run = self._workflow_cycle_manager._get_workflow_run(
session=session, workflow_run_id=self._workflow_run_id
)
loop_next_resp = self._workflow_cycle_manager._workflow_loop_next_to_stream_response(
session=session,
task_id=self._application_generate_entity.task_id,
workflow_run=workflow_run,
event=event,
)
yield loop_next_resp
elif isinstance(event, QueueLoopCompletedEvent):
if not self._workflow_run_id:
raise ValueError("workflow run not initialized.")
with Session(db.engine, expire_on_commit=False) as session:
workflow_run = self._workflow_cycle_manager._get_workflow_run(
session=session, workflow_run_id=self._workflow_run_id
)
loop_finish_resp = self._workflow_cycle_manager._workflow_loop_completed_to_stream_response(
session=session,
task_id=self._application_generate_entity.task_id,
workflow_run=workflow_run,
event=event,
)
yield loop_finish_resp
elif isinstance(event, QueueWorkflowSucceededEvent):
if not self._workflow_run_id:
raise ValueError("workflow run not initialized.")
@ -582,6 +640,15 @@ class AdvancedChatAppGenerateTaskPipeline:
session.commit()
yield workflow_finish_resp
elif event.stopped_by in (
QueueStopEvent.StopBy.INPUT_MODERATION,
QueueStopEvent.StopBy.ANNOTATION_REPLY,
):
# When hitting input-moderation or annotation-reply, the workflow will not start
with Session(db.engine, expire_on_commit=False) as session:
# Save message
self._save_message(session=session)
session.commit()
yield self._message_end_to_stream_response()
break

View File

@ -149,9 +149,7 @@ class AgentChatAppGenerator(MessageBasedAppGenerator):
model_conf=ModelConfigConverter.convert(app_config),
file_upload_config=file_extra_config,
conversation_id=conversation.id if conversation else None,
inputs=conversation.inputs
if conversation
else self._prepare_user_inputs(
inputs=self._prepare_user_inputs(
user_inputs=inputs, variables=app_config.variables, tenant_id=app_model.tenant_id
),
query=query,

View File

@ -141,9 +141,7 @@ class ChatAppGenerator(MessageBasedAppGenerator):
model_conf=ModelConfigConverter.convert(app_config),
file_upload_config=file_extra_config,
conversation_id=conversation.id if conversation else None,
inputs=conversation.inputs
if conversation
else self._prepare_user_inputs(
inputs=self._prepare_user_inputs(
user_inputs=inputs, variables=app_config.variables, tenant_id=app_model.tenant_id
),
query=query,

View File

@ -42,7 +42,6 @@ class MessageBasedAppGenerator(BaseAppGenerator):
ChatAppGenerateEntity,
CompletionAppGenerateEntity,
AgentChatAppGenerateEntity,
AgentChatAppGenerateEntity,
],
queue_manager: AppQueueManager,
conversation: Conversation,

View File

@ -250,6 +250,60 @@ class WorkflowAppGenerator(BaseAppGenerator):
streaming=streaming,
)
def single_loop_generate(
self,
app_model: App,
workflow: Workflow,
node_id: str,
user: Account | EndUser,
args: Mapping[str, Any],
streaming: bool = True,
) -> Mapping[str, Any] | Generator[str | Mapping[str, Any], None, None]:
"""
Generate App response.
:param app_model: App
:param workflow: Workflow
:param user: account or end user
:param args: request args
:param invoke_from: invoke from source
:param stream: is stream
"""
if not node_id:
raise ValueError("node_id is required")
if args.get("inputs") is None:
raise ValueError("inputs is required")
# convert to app config
app_config = WorkflowAppConfigManager.get_app_config(app_model=app_model, workflow=workflow)
# init application generate entity
application_generate_entity = WorkflowAppGenerateEntity(
task_id=str(uuid.uuid4()),
app_config=app_config,
inputs={},
files=[],
user_id=user.id,
stream=streaming,
invoke_from=InvokeFrom.DEBUGGER,
extras={"auto_generate_conversation_name": False},
single_loop_run=WorkflowAppGenerateEntity.SingleLoopRunEntity(node_id=node_id, inputs=args["inputs"]),
workflow_run_id=str(uuid.uuid4()),
)
contexts.tenant_id.set(application_generate_entity.app_config.tenant_id)
contexts.plugin_tool_providers.set({})
contexts.plugin_tool_providers_lock.set(threading.Lock())
return self._generate(
app_model=app_model,
workflow=workflow,
user=user,
invoke_from=InvokeFrom.DEBUGGER,
application_generate_entity=application_generate_entity,
streaming=streaming,
)
def _generate_worker(
self,
flask_app: Flask,

View File

@ -81,6 +81,13 @@ class WorkflowAppRunner(WorkflowBasedAppRunner):
node_id=self.application_generate_entity.single_iteration_run.node_id,
user_inputs=self.application_generate_entity.single_iteration_run.inputs,
)
elif self.application_generate_entity.single_loop_run:
# if only single loop run is requested
graph, variable_pool = self._get_graph_and_variable_pool_of_single_loop(
workflow=workflow,
node_id=self.application_generate_entity.single_loop_run.node_id,
user_inputs=self.application_generate_entity.single_loop_run.inputs,
)
else:
inputs = self.application_generate_entity.inputs
files = self.application_generate_entity.files

View File

@ -18,9 +18,13 @@ from core.app.entities.queue_entities import (
QueueIterationCompletedEvent,
QueueIterationNextEvent,
QueueIterationStartEvent,
QueueLoopCompletedEvent,
QueueLoopNextEvent,
QueueLoopStartEvent,
QueueNodeExceptionEvent,
QueueNodeFailedEvent,
QueueNodeInIterationFailedEvent,
QueueNodeInLoopFailedEvent,
QueueNodeRetryEvent,
QueueNodeStartedEvent,
QueueNodeSucceededEvent,
@ -323,7 +327,13 @@ class WorkflowAppGenerateTaskPipeline:
if node_success_response:
yield node_success_response
elif isinstance(event, QueueNodeFailedEvent | QueueNodeInIterationFailedEvent | QueueNodeExceptionEvent):
elif isinstance(
event,
QueueNodeFailedEvent
| QueueNodeInIterationFailedEvent
| QueueNodeInLoopFailedEvent
| QueueNodeExceptionEvent,
):
with Session(db.engine, expire_on_commit=False) as session:
workflow_node_execution = self._workflow_cycle_manager._handle_workflow_node_execution_failed(
session=session,
@ -429,6 +439,57 @@ class WorkflowAppGenerateTaskPipeline:
yield iter_finish_resp
elif isinstance(event, QueueLoopStartEvent):
if not self._workflow_run_id:
raise ValueError("workflow run not initialized.")
with Session(db.engine, expire_on_commit=False) as session:
workflow_run = self._workflow_cycle_manager._get_workflow_run(
session=session, workflow_run_id=self._workflow_run_id
)
loop_start_resp = self._workflow_cycle_manager._workflow_loop_start_to_stream_response(
session=session,
task_id=self._application_generate_entity.task_id,
workflow_run=workflow_run,
event=event,
)
yield loop_start_resp
elif isinstance(event, QueueLoopNextEvent):
if not self._workflow_run_id:
raise ValueError("workflow run not initialized.")
with Session(db.engine, expire_on_commit=False) as session:
workflow_run = self._workflow_cycle_manager._get_workflow_run(
session=session, workflow_run_id=self._workflow_run_id
)
loop_next_resp = self._workflow_cycle_manager._workflow_loop_next_to_stream_response(
session=session,
task_id=self._application_generate_entity.task_id,
workflow_run=workflow_run,
event=event,
)
yield loop_next_resp
elif isinstance(event, QueueLoopCompletedEvent):
if not self._workflow_run_id:
raise ValueError("workflow run not initialized.")
with Session(db.engine, expire_on_commit=False) as session:
workflow_run = self._workflow_cycle_manager._get_workflow_run(
session=session, workflow_run_id=self._workflow_run_id
)
loop_finish_resp = self._workflow_cycle_manager._workflow_loop_completed_to_stream_response(
session=session,
task_id=self._application_generate_entity.task_id,
workflow_run=workflow_run,
event=event,
)
yield loop_finish_resp
elif isinstance(event, QueueWorkflowSucceededEvent):
if not self._workflow_run_id:
raise ValueError("workflow run not initialized.")

View File

@ -9,9 +9,13 @@ from core.app.entities.queue_entities import (
QueueIterationCompletedEvent,
QueueIterationNextEvent,
QueueIterationStartEvent,
QueueLoopCompletedEvent,
QueueLoopNextEvent,
QueueLoopStartEvent,
QueueNodeExceptionEvent,
QueueNodeFailedEvent,
QueueNodeInIterationFailedEvent,
QueueNodeInLoopFailedEvent,
QueueNodeRetryEvent,
QueueNodeStartedEvent,
QueueNodeSucceededEvent,
@ -38,7 +42,12 @@ from core.workflow.graph_engine.entities.event import (
IterationRunNextEvent,
IterationRunStartedEvent,
IterationRunSucceededEvent,
LoopRunFailedEvent,
LoopRunNextEvent,
LoopRunStartedEvent,
LoopRunSucceededEvent,
NodeInIterationFailedEvent,
NodeInLoopFailedEvent,
NodeRunExceptionEvent,
NodeRunFailedEvent,
NodeRunRetrieverResourceEvent,
@ -173,6 +182,96 @@ class WorkflowBasedAppRunner(AppRunner):
return graph, variable_pool
def _get_graph_and_variable_pool_of_single_loop(
self,
workflow: Workflow,
node_id: str,
user_inputs: dict,
) -> tuple[Graph, VariablePool]:
"""
Get variable pool of single loop
"""
# fetch workflow graph
graph_config = workflow.graph_dict
if not graph_config:
raise ValueError("workflow graph not found")
graph_config = cast(dict[str, Any], graph_config)
if "nodes" not in graph_config or "edges" not in graph_config:
raise ValueError("nodes or edges not found in workflow graph")
if not isinstance(graph_config.get("nodes"), list):
raise ValueError("nodes in workflow graph must be a list")
if not isinstance(graph_config.get("edges"), list):
raise ValueError("edges in workflow graph must be a list")
# filter nodes only in loop
node_configs = [
node
for node in graph_config.get("nodes", [])
if node.get("id") == node_id or node.get("data", {}).get("loop_id", "") == node_id
]
graph_config["nodes"] = node_configs
node_ids = [node.get("id") for node in node_configs]
# filter edges only in loop
edge_configs = [
edge
for edge in graph_config.get("edges", [])
if (edge.get("source") is None or edge.get("source") in node_ids)
and (edge.get("target") is None or edge.get("target") in node_ids)
]
graph_config["edges"] = edge_configs
# init graph
graph = Graph.init(graph_config=graph_config, root_node_id=node_id)
if not graph:
raise ValueError("graph not found in workflow")
# fetch node config from node id
loop_node_config = None
for node in node_configs:
if node.get("id") == node_id:
loop_node_config = node
break
if not loop_node_config:
raise ValueError("loop node id not found in workflow graph")
# Get node class
node_type = NodeType(loop_node_config.get("data", {}).get("type"))
node_version = loop_node_config.get("data", {}).get("version", "1")
node_cls = NODE_TYPE_CLASSES_MAPPING[node_type][node_version]
# init variable pool
variable_pool = VariablePool(
system_variables={},
user_inputs={},
environment_variables=workflow.environment_variables,
)
try:
variable_mapping = node_cls.extract_variable_selector_to_variable_mapping(
graph_config=workflow.graph_dict, config=loop_node_config
)
except NotImplementedError:
variable_mapping = {}
WorkflowEntry.mapping_user_inputs_to_variable_pool(
variable_mapping=variable_mapping,
user_inputs=user_inputs,
variable_pool=variable_pool,
tenant_id=workflow.tenant_id,
)
return graph, variable_pool
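The filtering above keeps the loop node itself plus every node whose data.loop_id points at it, then keeps only edges whose endpoints survived. A toy illustration with a hypothetical loop "loop_1":

graph_config = {
    "nodes": [
        {"id": "loop_1", "data": {"type": "loop"}},
        {"id": "llm_1", "data": {"loop_id": "loop_1"}},
        {"id": "outside", "data": {}},
    ],
    "edges": [
        {"source": "loop_1", "target": "llm_1"},
        {"source": "outside", "target": "loop_1"},
    ],
}
node_id = "loop_1"
nodes = [
    n for n in graph_config["nodes"]
    if n["id"] == node_id or n.get("data", {}).get("loop_id", "") == node_id
]
ids = [n["id"] for n in nodes]
edges = [
    e for e in graph_config["edges"]
    if (e.get("source") is None or e["source"] in ids)
    and (e.get("target") is None or e["target"] in ids)
]
assert ids == ["loop_1", "llm_1"]
assert edges == [{"source": "loop_1", "target": "llm_1"}]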
def _handle_event(self, workflow_entry: WorkflowEntry, event: GraphEngineEvent) -> None:
"""
Handle event
@ -216,6 +315,7 @@ class WorkflowBasedAppRunner(AppRunner):
node_run_index=event.route_node_state.index,
predecessor_node_id=event.predecessor_node_id,
in_iteration_id=event.in_iteration_id,
in_loop_id=event.in_loop_id,
parallel_mode_run_id=event.parallel_mode_run_id,
inputs=inputs,
process_data=process_data,
@ -240,6 +340,7 @@ class WorkflowBasedAppRunner(AppRunner):
node_run_index=event.route_node_state.index,
predecessor_node_id=event.predecessor_node_id,
in_iteration_id=event.in_iteration_id,
in_loop_id=event.in_loop_id,
parallel_mode_run_id=event.parallel_mode_run_id,
agent_strategy=event.agent_strategy,
)
@ -272,6 +373,7 @@ class WorkflowBasedAppRunner(AppRunner):
outputs=outputs,
execution_metadata=execution_metadata,
in_iteration_id=event.in_iteration_id,
in_loop_id=event.in_loop_id,
)
)
elif isinstance(event, NodeRunFailedEvent):
@ -302,6 +404,7 @@ class WorkflowBasedAppRunner(AppRunner):
if event.route_node_state.node_run_result
else {},
in_iteration_id=event.in_iteration_id,
in_loop_id=event.in_loop_id,
)
)
elif isinstance(event, NodeRunExceptionEvent):
@ -332,6 +435,7 @@ class WorkflowBasedAppRunner(AppRunner):
if event.route_node_state.node_run_result
else {},
in_iteration_id=event.in_iteration_id,
in_loop_id=event.in_loop_id,
)
)
elif isinstance(event, NodeInIterationFailedEvent):
@ -362,18 +466,49 @@ class WorkflowBasedAppRunner(AppRunner):
error=event.error,
)
)
elif isinstance(event, NodeInLoopFailedEvent):
self._publish_event(
QueueNodeInLoopFailedEvent(
node_execution_id=event.id,
node_id=event.node_id,
node_type=event.node_type,
node_data=event.node_data,
parallel_id=event.parallel_id,
parallel_start_node_id=event.parallel_start_node_id,
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
start_at=event.route_node_state.start_at,
inputs=event.route_node_state.node_run_result.inputs
if event.route_node_state.node_run_result
else {},
process_data=event.route_node_state.node_run_result.process_data
if event.route_node_state.node_run_result
else {},
outputs=event.route_node_state.node_run_result.outputs or {}
if event.route_node_state.node_run_result
else {},
execution_metadata=event.route_node_state.node_run_result.metadata
if event.route_node_state.node_run_result
else {},
in_loop_id=event.in_loop_id,
error=event.error,
)
)
elif isinstance(event, NodeRunStreamChunkEvent):
self._publish_event(
QueueTextChunkEvent(
text=event.chunk_content,
from_variable_selector=event.from_variable_selector,
in_iteration_id=event.in_iteration_id,
in_loop_id=event.in_loop_id,
)
)
elif isinstance(event, NodeRunRetrieverResourceEvent):
self._publish_event(
QueueRetrieverResourcesEvent(
retriever_resources=event.retriever_resources, in_iteration_id=event.in_iteration_id
retriever_resources=event.retriever_resources,
in_iteration_id=event.in_iteration_id,
in_loop_id=event.in_loop_id,
)
)
elif isinstance(event, AgentLogEvent):
@ -387,6 +522,7 @@ class WorkflowBasedAppRunner(AppRunner):
status=event.status,
data=event.data,
metadata=event.metadata,
node_id=event.node_id,
)
)
elif isinstance(event, ParallelBranchRunStartedEvent):
@ -397,6 +533,7 @@ class WorkflowBasedAppRunner(AppRunner):
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
in_iteration_id=event.in_iteration_id,
in_loop_id=event.in_loop_id,
)
)
elif isinstance(event, ParallelBranchRunSucceededEvent):
@ -407,6 +544,7 @@ class WorkflowBasedAppRunner(AppRunner):
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
in_iteration_id=event.in_iteration_id,
in_loop_id=event.in_loop_id,
)
)
elif isinstance(event, ParallelBranchRunFailedEvent):
@ -417,6 +555,7 @@ class WorkflowBasedAppRunner(AppRunner):
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
in_iteration_id=event.in_iteration_id,
in_loop_id=event.in_loop_id,
error=event.error,
)
)
@ -476,6 +615,62 @@ class WorkflowBasedAppRunner(AppRunner):
error=event.error if isinstance(event, IterationRunFailedEvent) else None,
)
)
elif isinstance(event, LoopRunStartedEvent):
self._publish_event(
QueueLoopStartEvent(
node_execution_id=event.loop_id,
node_id=event.loop_node_id,
node_type=event.loop_node_type,
node_data=event.loop_node_data,
parallel_id=event.parallel_id,
parallel_start_node_id=event.parallel_start_node_id,
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
start_at=event.start_at,
node_run_index=workflow_entry.graph_engine.graph_runtime_state.node_run_steps,
inputs=event.inputs,
predecessor_node_id=event.predecessor_node_id,
metadata=event.metadata,
)
)
elif isinstance(event, LoopRunNextEvent):
self._publish_event(
QueueLoopNextEvent(
node_execution_id=event.loop_id,
node_id=event.loop_node_id,
node_type=event.loop_node_type,
node_data=event.loop_node_data,
parallel_id=event.parallel_id,
parallel_start_node_id=event.parallel_start_node_id,
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
index=event.index,
node_run_index=workflow_entry.graph_engine.graph_runtime_state.node_run_steps,
output=event.pre_loop_output,
parallel_mode_run_id=event.parallel_mode_run_id,
duration=event.duration,
)
)
elif isinstance(event, LoopRunSucceededEvent | LoopRunFailedEvent):
self._publish_event(
QueueLoopCompletedEvent(
node_execution_id=event.loop_id,
node_id=event.loop_node_id,
node_type=event.loop_node_type,
node_data=event.loop_node_data,
parallel_id=event.parallel_id,
parallel_start_node_id=event.parallel_start_node_id,
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
start_at=event.start_at,
node_run_index=workflow_entry.graph_engine.graph_runtime_state.node_run_steps,
inputs=event.inputs,
outputs=event.outputs,
metadata=event.metadata,
steps=event.steps,
error=event.error if isinstance(event, LoopRunFailedEvent) else None,
)
)
def get_workflow(self, app_model: App, workflow_id: str) -> Optional[Workflow]:
"""

View File

@ -63,9 +63,9 @@ class ModelConfigWithCredentialsEntity(BaseModel):
model_schema: AIModelEntity
mode: str
provider_model_bundle: ProviderModelBundle
credentials: dict[str, Any] = {}
parameters: dict[str, Any] = {}
stop: list[str] = []
credentials: dict[str, Any] = Field(default_factory=dict)
parameters: dict[str, Any] = Field(default_factory=dict)
stop: list[str] = Field(default_factory=list)
# pydantic configs
model_config = ConfigDict(protected_namespaces=())
@ -94,7 +94,7 @@ class AppGenerateEntity(BaseModel):
call_depth: int = 0
# extra parameters, like: auto_generate_conversation_name
extras: dict[str, Any] = {}
extras: dict[str, Any] = Field(default_factory=dict)
# tracing instance
trace_manager: Optional[TraceQueueManager] = None
@ -187,6 +187,16 @@ class AdvancedChatAppGenerateEntity(ConversationAppGenerateEntity):
single_iteration_run: Optional[SingleIterationRunEntity] = None
class SingleLoopRunEntity(BaseModel):
"""
Single Loop Run Entity.
"""
node_id: str
inputs: Mapping
single_loop_run: Optional[SingleLoopRunEntity] = None
class WorkflowAppGenerateEntity(AppGenerateEntity):
"""
@ -206,3 +216,13 @@ class WorkflowAppGenerateEntity(AppGenerateEntity):
inputs: dict
single_iteration_run: Optional[SingleIterationRunEntity] = None
class SingleLoopRunEntity(BaseModel):
"""
Single Loop Run Entity.
"""
node_id: str
inputs: dict
single_loop_run: Optional[SingleLoopRunEntity] = None

View File

@ -30,6 +30,9 @@ class QueueEvent(StrEnum):
ITERATION_START = "iteration_start"
ITERATION_NEXT = "iteration_next"
ITERATION_COMPLETED = "iteration_completed"
LOOP_START = "loop_start"
LOOP_NEXT = "loop_next"
LOOP_COMPLETED = "loop_completed"
NODE_STARTED = "node_started"
NODE_SUCCEEDED = "node_succeeded"
NODE_FAILED = "node_failed"
@ -149,6 +152,89 @@ class QueueIterationCompletedEvent(AppQueueEvent):
error: Optional[str] = None
class QueueLoopStartEvent(AppQueueEvent):
"""
QueueLoopStartEvent entity
"""
event: QueueEvent = QueueEvent.LOOP_START
node_execution_id: str
node_id: str
node_type: NodeType
node_data: BaseNodeData
parallel_id: Optional[str] = None
"""parallel id if node is in parallel"""
parallel_start_node_id: Optional[str] = None
"""parallel start node id if node is in parallel"""
parent_parallel_id: Optional[str] = None
"""parent parallel id if node is in parallel"""
parent_parallel_start_node_id: Optional[str] = None
"""parent parallel start node id if node is in parallel"""
start_at: datetime
node_run_index: int
inputs: Optional[Mapping[str, Any]] = None
predecessor_node_id: Optional[str] = None
metadata: Optional[Mapping[str, Any]] = None
class QueueLoopNextEvent(AppQueueEvent):
"""
QueueLoopNextEvent entity
"""
event: QueueEvent = QueueEvent.LOOP_NEXT
index: int
node_execution_id: str
node_id: str
node_type: NodeType
node_data: BaseNodeData
parallel_id: Optional[str] = None
"""parallel id if node is in parallel"""
parallel_start_node_id: Optional[str] = None
"""parallel start node id if node is in parallel"""
parent_parallel_id: Optional[str] = None
"""parent parallel id if node is in parallel"""
parent_parallel_start_node_id: Optional[str] = None
"""parent parallel start node id if node is in parallel"""
parallel_mode_run_id: Optional[str] = None
"""iteratoin run in parallel mode run id"""
node_run_index: int
output: Optional[Any] = None # output for the current loop
duration: Optional[float] = None
class QueueLoopCompletedEvent(AppQueueEvent):
"""
QueueLoopCompletedEvent entity
"""
event: QueueEvent = QueueEvent.LOOP_COMPLETED
node_execution_id: str
node_id: str
node_type: NodeType
node_data: BaseNodeData
parallel_id: Optional[str] = None
"""parallel id if node is in parallel"""
parallel_start_node_id: Optional[str] = None
"""parallel start node id if node is in parallel"""
parent_parallel_id: Optional[str] = None
"""parent parallel id if node is in parallel"""
parent_parallel_start_node_id: Optional[str] = None
"""parent parallel start node id if node is in parallel"""
start_at: datetime
node_run_index: int
inputs: Optional[Mapping[str, Any]] = None
outputs: Optional[Mapping[str, Any]] = None
metadata: Optional[Mapping[str, Any]] = None
steps: int = 0
error: Optional[str] = None
class QueueTextChunkEvent(AppQueueEvent):
"""
QueueTextChunkEvent entity
@ -160,6 +246,8 @@ class QueueTextChunkEvent(AppQueueEvent):
"""from variable selector"""
in_iteration_id: Optional[str] = None
"""iteration id if node is in iteration"""
in_loop_id: Optional[str] = None
"""loop id if node is in loop"""
class QueueAgentMessageEvent(AppQueueEvent):
@ -189,6 +277,8 @@ class QueueRetrieverResourcesEvent(AppQueueEvent):
retriever_resources: list[dict]
in_iteration_id: Optional[str] = None
"""iteration id if node is in iteration"""
in_loop_id: Optional[str] = None
"""loop id if node is in loop"""
class QueueAnnotationReplyEvent(AppQueueEvent):
@ -278,6 +368,8 @@ class QueueNodeStartedEvent(AppQueueEvent):
"""parent parallel start node id if node is in parallel"""
in_iteration_id: Optional[str] = None
"""iteration id if node is in iteration"""
in_loop_id: Optional[str] = None
"""loop id if node is in loop"""
start_at: datetime
parallel_mode_run_id: Optional[str] = None
"""iteratoin run in parallel mode run id"""
@ -305,6 +397,8 @@ class QueueNodeSucceededEvent(AppQueueEvent):
"""parent parallel start node id if node is in parallel"""
in_iteration_id: Optional[str] = None
"""iteration id if node is in iteration"""
in_loop_id: Optional[str] = None
"""loop id if node is in loop"""
start_at: datetime
inputs: Optional[Mapping[str, Any]] = None
@ -315,6 +409,8 @@ class QueueNodeSucceededEvent(AppQueueEvent):
error: Optional[str] = None
"""single iteration duration map"""
iteration_duration_map: Optional[dict[str, float]] = None
"""single loop duration map"""
loop_duration_map: Optional[dict[str, float]] = None
class QueueAgentLogEvent(AppQueueEvent):
@ -331,6 +427,7 @@ class QueueAgentLogEvent(AppQueueEvent):
status: str
data: Mapping[str, Any]
metadata: Optional[Mapping[str, Any]] = None
node_id: str
class QueueNodeRetryEvent(QueueNodeStartedEvent):
@ -368,6 +465,41 @@ class QueueNodeInIterationFailedEvent(AppQueueEvent):
"""parent parallel start node id if node is in parallel"""
in_iteration_id: Optional[str] = None
"""iteration id if node is in iteration"""
in_loop_id: Optional[str] = None
"""loop id if node is in loop"""
start_at: datetime
inputs: Optional[Mapping[str, Any]] = None
process_data: Optional[Mapping[str, Any]] = None
outputs: Optional[Mapping[str, Any]] = None
execution_metadata: Optional[Mapping[NodeRunMetadataKey, Any]] = None
error: str
class QueueNodeInLoopFailedEvent(AppQueueEvent):
"""
QueueNodeInLoopFailedEvent entity
"""
event: QueueEvent = QueueEvent.NODE_FAILED
node_execution_id: str
node_id: str
node_type: NodeType
node_data: BaseNodeData
parallel_id: Optional[str] = None
"""parallel id if node is in parallel"""
parallel_start_node_id: Optional[str] = None
"""parallel start node id if node is in parallel"""
parent_parallel_id: Optional[str] = None
"""parent parallel id if node is in parallel"""
parent_parallel_start_node_id: Optional[str] = None
"""parent parallel start node id if node is in parallel"""
in_iteration_id: Optional[str] = None
"""iteration id if node is in iteration"""
in_loop_id: Optional[str] = None
"""loop id if node is in loop"""
start_at: datetime
inputs: Optional[Mapping[str, Any]] = None
@ -399,6 +531,8 @@ class QueueNodeExceptionEvent(AppQueueEvent):
"""parent parallel start node id if node is in parallel"""
in_iteration_id: Optional[str] = None
"""iteration id if node is in iteration"""
in_loop_id: Optional[str] = None
"""loop id if node is in loop"""
start_at: datetime
inputs: Optional[Mapping[str, Any]] = None
@ -430,6 +564,8 @@ class QueueNodeFailedEvent(AppQueueEvent):
"""parent parallel start node id if node is in parallel"""
in_iteration_id: Optional[str] = None
"""iteration id if node is in iteration"""
in_loop_id: Optional[str] = None
"""loop id if node is in loop"""
start_at: datetime
inputs: Optional[Mapping[str, Any]] = None
@ -549,6 +685,8 @@ class QueueParallelBranchRunStartedEvent(AppQueueEvent):
"""parent parallel start node id if node is in parallel"""
in_iteration_id: Optional[str] = None
"""iteration id if node is in iteration"""
in_loop_id: Optional[str] = None
"""loop id if node is in loop"""
class QueueParallelBranchRunSucceededEvent(AppQueueEvent):
@ -566,6 +704,8 @@ class QueueParallelBranchRunSucceededEvent(AppQueueEvent):
"""parent parallel start node id if node is in parallel"""
in_iteration_id: Optional[str] = None
"""iteration id if node is in iteration"""
in_loop_id: Optional[str] = None
"""loop id if node is in loop"""
class QueueParallelBranchRunFailedEvent(AppQueueEvent):
@ -583,4 +723,6 @@ class QueueParallelBranchRunFailedEvent(AppQueueEvent):
"""parent parallel start node id if node is in parallel"""
in_iteration_id: Optional[str] = None
"""iteration id if node is in iteration"""
in_loop_id: Optional[str] = None
"""loop id if node is in loop"""
error: str
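
Taken together, the three new queue events mirror the existing iteration trio. A minimal sketch of how a loop node might publish them — the NodeType.LOOP member and all field values are illustrative assumptions, not taken from this diff:

from datetime import UTC, datetime

# hypothetical usage; node_data is a BaseNodeData instance for the loop node
start_event = QueueLoopStartEvent(
    node_execution_id="exec-123",
    node_id="loop_1",
    node_type=NodeType.LOOP,  # assumed enum member
    node_data=node_data,
    start_at=datetime.now(UTC).replace(tzinfo=None),
    node_run_index=0,
    inputs={"items": [1, 2, 3]},
)
next_event = QueueLoopNextEvent(
    index=1,
    node_execution_id="exec-123",
    node_id="loop_1",
    node_type=NodeType.LOOP,
    node_data=node_data,
    node_run_index=1,
    output={"last": 1},  # output for the loop round that just finished
)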

View File

@ -59,6 +59,9 @@ class StreamEvent(Enum):
ITERATION_STARTED = "iteration_started"
ITERATION_NEXT = "iteration_next"
ITERATION_COMPLETED = "iteration_completed"
LOOP_STARTED = "loop_started"
LOOP_NEXT = "loop_next"
LOOP_COMPLETED = "loop_completed"
TEXT_CHUNK = "text_chunk"
TEXT_REPLACE = "text_replace"
AGENT_LOG = "agent_log"
@ -248,6 +251,7 @@ class NodeStartStreamResponse(StreamResponse):
parent_parallel_id: Optional[str] = None
parent_parallel_start_node_id: Optional[str] = None
iteration_id: Optional[str] = None
loop_id: Optional[str] = None
parallel_run_id: Optional[str] = None
agent_strategy: Optional[AgentNodeStrategyInit] = None
@ -275,6 +279,7 @@ class NodeStartStreamResponse(StreamResponse):
"parent_parallel_id": self.data.parent_parallel_id,
"parent_parallel_start_node_id": self.data.parent_parallel_start_node_id,
"iteration_id": self.data.iteration_id,
"loop_id": self.data.loop_id,
},
}
@ -310,6 +315,7 @@ class NodeFinishStreamResponse(StreamResponse):
parent_parallel_id: Optional[str] = None
parent_parallel_start_node_id: Optional[str] = None
iteration_id: Optional[str] = None
loop_id: Optional[str] = None
event: StreamEvent = StreamEvent.NODE_FINISHED
workflow_run_id: str
@ -342,6 +348,7 @@ class NodeFinishStreamResponse(StreamResponse):
"parent_parallel_id": self.data.parent_parallel_id,
"parent_parallel_start_node_id": self.data.parent_parallel_start_node_id,
"iteration_id": self.data.iteration_id,
"loop_id": self.data.loop_id,
},
}
@ -377,6 +384,7 @@ class NodeRetryStreamResponse(StreamResponse):
parent_parallel_id: Optional[str] = None
parent_parallel_start_node_id: Optional[str] = None
iteration_id: Optional[str] = None
loop_id: Optional[str] = None
retry_index: int = 0
event: StreamEvent = StreamEvent.NODE_RETRY
@ -410,6 +418,7 @@ class NodeRetryStreamResponse(StreamResponse):
"parent_parallel_id": self.data.parent_parallel_id,
"parent_parallel_start_node_id": self.data.parent_parallel_start_node_id,
"iteration_id": self.data.iteration_id,
"loop_id": self.data.loop_id,
"retry_index": self.data.retry_index,
},
}
@ -430,6 +439,7 @@ class ParallelBranchStartStreamResponse(StreamResponse):
parent_parallel_id: Optional[str] = None
parent_parallel_start_node_id: Optional[str] = None
iteration_id: Optional[str] = None
loop_id: Optional[str] = None
created_at: int
event: StreamEvent = StreamEvent.PARALLEL_BRANCH_STARTED
@ -452,6 +462,7 @@ class ParallelBranchFinishedStreamResponse(StreamResponse):
parent_parallel_id: Optional[str] = None
parent_parallel_start_node_id: Optional[str] = None
iteration_id: Optional[str] = None
loop_id: Optional[str] = None
status: str
error: Optional[str] = None
created_at: int
@ -548,6 +559,93 @@ class IterationNodeCompletedStreamResponse(StreamResponse):
data: Data
class LoopNodeStartStreamResponse(StreamResponse):
"""
LoopNodeStartStreamResponse entity
"""
class Data(BaseModel):
"""
Data entity
"""
id: str
node_id: str
node_type: str
title: str
created_at: int
extras: dict = {}
metadata: Mapping = {}
inputs: Mapping = {}
parallel_id: Optional[str] = None
parallel_start_node_id: Optional[str] = None
event: StreamEvent = StreamEvent.LOOP_STARTED
workflow_run_id: str
data: Data
class LoopNodeNextStreamResponse(StreamResponse):
"""
LoopNodeNextStreamResponse entity
"""
class Data(BaseModel):
"""
Data entity
"""
id: str
node_id: str
node_type: str
title: str
index: int
created_at: int
pre_loop_output: Optional[Any] = None
extras: dict = {}
parallel_id: Optional[str] = None
parallel_start_node_id: Optional[str] = None
parallel_mode_run_id: Optional[str] = None
duration: Optional[float] = None
event: StreamEvent = StreamEvent.LOOP_NEXT
workflow_run_id: str
data: Data
class LoopNodeCompletedStreamResponse(StreamResponse):
"""
LoopNodeCompletedStreamResponse entity
"""
class Data(BaseModel):
"""
Data entity
"""
id: str
node_id: str
node_type: str
title: str
outputs: Optional[Mapping] = None
created_at: int
extras: Optional[dict] = None
inputs: Optional[Mapping] = None
status: WorkflowNodeExecutionStatus
error: Optional[str] = None
elapsed_time: float
total_tokens: int
execution_metadata: Optional[Mapping] = None
finished_at: int
steps: int
parallel_id: Optional[str] = None
parallel_start_node_id: Optional[str] = None
event: StreamEvent = StreamEvent.LOOP_COMPLETED
workflow_run_id: str
data: Data
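
For SSE consumers, the new loop responses serialize like their iteration counterparts; a hypothetical loop_started payload built from these entities (all values invented for illustration):

import time

response = LoopNodeStartStreamResponse(
    task_id="task-1",
    workflow_run_id="run-1",
    data=LoopNodeStartStreamResponse.Data(
        id="loop_1",
        node_id="loop_1",
        node_type="loop",
        title="My Loop",
        created_at=int(time.time()),
        inputs={"items": [1, 2, 3]},
    ),
)
# response.event is StreamEvent.LOOP_STARTED, i.e. "loop_started" on the wire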
class TextChunkStreamResponse(StreamResponse):
"""
TextChunkStreamResponse entity
@ -719,6 +817,7 @@ class AgentLogStreamResponse(StreamResponse):
status: str
data: Mapping[str, Any]
metadata: Optional[Mapping[str, Any]] = None
node_id: str
event: StreamEvent = StreamEvent.AGENT_LOG
data: Data

View File

@ -14,9 +14,13 @@ from core.app.entities.queue_entities import (
QueueIterationCompletedEvent,
QueueIterationNextEvent,
QueueIterationStartEvent,
QueueLoopCompletedEvent,
QueueLoopNextEvent,
QueueLoopStartEvent,
QueueNodeExceptionEvent,
QueueNodeFailedEvent,
QueueNodeInIterationFailedEvent,
QueueNodeInLoopFailedEvent,
QueueNodeRetryEvent,
QueueNodeStartedEvent,
QueueNodeSucceededEvent,
@ -29,6 +33,9 @@ from core.app.entities.task_entities import (
IterationNodeCompletedStreamResponse,
IterationNodeNextStreamResponse,
IterationNodeStartStreamResponse,
LoopNodeCompletedStreamResponse,
LoopNodeNextStreamResponse,
LoopNodeStartStreamResponse,
NodeFinishStreamResponse,
NodeRetryStreamResponse,
NodeStartStreamResponse,
@ -304,6 +311,7 @@ class WorkflowCycleManage:
{
NodeRunMetadataKey.PARALLEL_MODE_RUN_ID: event.parallel_mode_run_id,
NodeRunMetadataKey.ITERATION_ID: event.in_iteration_id,
NodeRunMetadataKey.LOOP_ID: event.in_loop_id,
}
)
workflow_node_execution.created_at = datetime.now(UTC).replace(tzinfo=None)
@ -344,7 +352,10 @@ class WorkflowCycleManage:
self,
*,
session: Session,
event: QueueNodeFailedEvent | QueueNodeInIterationFailedEvent | QueueNodeExceptionEvent,
event: QueueNodeFailedEvent
| QueueNodeInIterationFailedEvent
| QueueNodeInLoopFailedEvent
| QueueNodeExceptionEvent,
) -> WorkflowNodeExecution:
"""
Workflow node execution failed
@ -396,6 +407,7 @@ class WorkflowCycleManage:
origin_metadata = {
NodeRunMetadataKey.ITERATION_ID: event.in_iteration_id,
NodeRunMetadataKey.PARALLEL_MODE_RUN_ID: event.parallel_mode_run_id,
NodeRunMetadataKey.LOOP_ID: event.in_loop_id,
}
merged_metadata = (
{**jsonable_encoder(event.execution_metadata), **origin_metadata}
@ -540,6 +552,7 @@ class WorkflowCycleManage:
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
iteration_id=event.in_iteration_id,
loop_id=event.in_loop_id,
parallel_run_id=event.parallel_mode_run_id,
agent_strategy=event.agent_strategy,
),
@ -563,6 +576,7 @@ class WorkflowCycleManage:
event: QueueNodeSucceededEvent
| QueueNodeFailedEvent
| QueueNodeInIterationFailedEvent
| QueueNodeInLoopFailedEvent
| QueueNodeExceptionEvent,
task_id: str,
workflow_node_execution: WorkflowNodeExecution,
@ -601,6 +615,7 @@ class WorkflowCycleManage:
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
iteration_id=event.in_iteration_id,
loop_id=event.in_loop_id,
),
)
@ -646,6 +661,7 @@ class WorkflowCycleManage:
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
iteration_id=event.in_iteration_id,
loop_id=event.in_loop_id,
retry_index=event.retry_index,
),
)
@ -664,6 +680,7 @@ class WorkflowCycleManage:
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
iteration_id=event.in_iteration_id,
loop_id=event.in_loop_id,
created_at=int(time.time()),
),
)
@ -687,6 +704,7 @@ class WorkflowCycleManage:
parent_parallel_id=event.parent_parallel_id,
parent_parallel_start_node_id=event.parent_parallel_start_node_id,
iteration_id=event.in_iteration_id,
loop_id=event.in_loop_id,
status="succeeded" if isinstance(event, QueueParallelBranchRunSucceededEvent) else "failed",
error=event.error if isinstance(event, QueueParallelBranchRunFailedEvent) else None,
created_at=int(time.time()),
@ -770,6 +788,83 @@ class WorkflowCycleManage:
),
)
def _workflow_loop_start_to_stream_response(
self, *, session: Session, task_id: str, workflow_run: WorkflowRun, event: QueueLoopStartEvent
) -> LoopNodeStartStreamResponse:
# receive session to make sure the workflow_run won't be expired, need a more elegant way to handle this
_ = session
return LoopNodeStartStreamResponse(
task_id=task_id,
workflow_run_id=workflow_run.id,
data=LoopNodeStartStreamResponse.Data(
id=event.node_id,
node_id=event.node_id,
node_type=event.node_type.value,
title=event.node_data.title,
created_at=int(time.time()),
extras={},
inputs=event.inputs or {},
metadata=event.metadata or {},
parallel_id=event.parallel_id,
parallel_start_node_id=event.parallel_start_node_id,
),
)
def _workflow_loop_next_to_stream_response(
self, *, session: Session, task_id: str, workflow_run: WorkflowRun, event: QueueLoopNextEvent
) -> LoopNodeNextStreamResponse:
# receive session to make sure the workflow_run won't be expired, need a more elegant way to handle this
_ = session
return LoopNodeNextStreamResponse(
task_id=task_id,
workflow_run_id=workflow_run.id,
data=LoopNodeNextStreamResponse.Data(
id=event.node_id,
node_id=event.node_id,
node_type=event.node_type.value,
title=event.node_data.title,
index=event.index,
pre_loop_output=event.output,
created_at=int(time.time()),
extras={},
parallel_id=event.parallel_id,
parallel_start_node_id=event.parallel_start_node_id,
parallel_mode_run_id=event.parallel_mode_run_id,
duration=event.duration,
),
)
def _workflow_loop_completed_to_stream_response(
self, *, session: Session, task_id: str, workflow_run: WorkflowRun, event: QueueLoopCompletedEvent
) -> LoopNodeCompletedStreamResponse:
# receive session to make sure the workflow_run won't be expired, need a more elegant way to handle this
_ = session
return LoopNodeCompletedStreamResponse(
task_id=task_id,
workflow_run_id=workflow_run.id,
data=LoopNodeCompletedStreamResponse.Data(
id=event.node_id,
node_id=event.node_id,
node_type=event.node_type.value,
title=event.node_data.title,
outputs=event.outputs,
created_at=int(time.time()),
extras={},
inputs=event.inputs or {},
status=WorkflowNodeExecutionStatus.SUCCEEDED
if event.error is None
else WorkflowNodeExecutionStatus.FAILED,
error=None,
elapsed_time=(datetime.now(UTC).replace(tzinfo=None) - event.start_at).total_seconds(),
total_tokens=event.metadata.get("total_tokens", 0) if event.metadata else 0,
execution_metadata=event.metadata,
finished_at=int(time.time()),
steps=event.steps,
parallel_id=event.parallel_id,
parallel_start_node_id=event.parallel_start_node_id,
),
)
def _fetch_files_from_node_outputs(self, outputs_dict: Mapping[str, Any]) -> Sequence[Mapping[str, Any]]:
"""
Fetch files from node outputs
@ -844,7 +939,7 @@ class WorkflowCycleManage:
if node_execution_id not in self._workflow_node_executions:
raise ValueError(f"Workflow node execution not found: {node_execution_id}")
cached_workflow_node_execution = self._workflow_node_executions[node_execution_id]
return cached_workflow_node_execution
return session.merge(cached_workflow_node_execution)
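
The one-line change above is easy to miss: returning the cached object directly can hand callers an instance bound to (or detached from) a different SQLAlchemy session, while session.merge() returns a copy tracked by the current one. The pattern in isolation, assuming plain SQLAlchemy:

from sqlalchemy.orm import Session

def get_cached_execution(session: Session, cache: dict, key: str):
    cached = cache[key]
    # merge() re-attaches the cached instance to this session, so later
    # attribute access and flushes work even after the original session
    # that loaded it has closed
    return session.merge(cached)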
def _handle_agent_log(self, task_id: str, event: QueueAgentLogEvent) -> AgentLogStreamResponse:
"""
@ -864,5 +959,6 @@ class WorkflowCycleManage:
status=event.status,
data=event.data,
metadata=event.metadata,
node_id=event.node_id,
),
)

View File

@ -1,9 +1,11 @@
from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom
from core.app.entities.app_invoke_entities import InvokeFrom
from core.app.entities.queue_entities import QueueRetrieverResourcesEvent
from core.rag.index_processor.constant.index_type import IndexType
from core.rag.models.document import Document
from extensions.ext_database import db
from models.dataset import DatasetQuery, DocumentSegment
from models.dataset import ChildChunk, DatasetQuery, DocumentSegment
from models.dataset import Document as DatasetDocument
from models.model import DatasetRetrieverResource
@ -41,15 +43,29 @@ class DatasetIndexToolCallbackHandler:
"""Handle tool end."""
for document in documents:
if document.metadata is not None:
query = db.session.query(DocumentSegment).filter(
DocumentSegment.index_node_id == document.metadata["doc_id"]
)
dataset_document = DatasetDocument.query.filter(
DatasetDocument.id == document.metadata["document_id"]
).first()
if dataset_document.doc_form == IndexType.PARENT_CHILD_INDEX:
child_chunk = ChildChunk.query.filter(
ChildChunk.index_node_id == document.metadata["doc_id"],
ChildChunk.dataset_id == dataset_document.dataset_id,
ChildChunk.document_id == dataset_document.id,
).first()
if child_chunk:
segment = DocumentSegment.query.filter(DocumentSegment.id == child_chunk.segment_id).update(
{DocumentSegment.hit_count: DocumentSegment.hit_count + 1}, synchronize_session=False
)
else:
query = db.session.query(DocumentSegment).filter(
DocumentSegment.index_node_id == document.metadata["doc_id"]
)
if "dataset_id" in document.metadata:
query = query.filter(DocumentSegment.dataset_id == document.metadata["dataset_id"])
if "dataset_id" in document.metadata:
query = query.filter(DocumentSegment.dataset_id == document.metadata["dataset_id"])
# add hit count to document segment
query.update({DocumentSegment.hit_count: DocumentSegment.hit_count + 1}, synchronize_session=False)
# add hit count to document segment
query.update({DocumentSegment.hit_count: DocumentSegment.hit_count + 1}, synchronize_session=False)
db.session.commit()

View File

@ -6,10 +6,9 @@ from collections.abc import Iterator, Sequence
from json import JSONDecodeError
from typing import Optional
from pydantic import BaseModel, ConfigDict
from pydantic import BaseModel, ConfigDict, Field
from constants import HIDDEN_VALUE
from core.entities import DEFAULT_PLUGIN_ID
from core.entities.model_entities import ModelStatus, ModelWithProviderEntity, SimpleModelProviderEntity
from core.entities.provider_entities import (
CustomConfiguration,
@ -28,6 +27,7 @@ from core.model_runtime.entities.provider_entities import (
)
from core.model_runtime.model_providers.__base.ai_model import AIModel
from core.model_runtime.model_providers.model_provider_factory import ModelProviderFactory
from core.plugin.entities.plugin import ModelProviderID
from extensions.ext_database import db
from models.provider import (
LoadBalancingModelConfig,
@ -179,22 +179,35 @@ class ProviderConfiguration(BaseModel):
else [],
)
def _get_custom_provider_credentials(self) -> Provider | None:
"""
Get custom provider credentials.
"""
# get provider
model_provider_id = ModelProviderID(self.provider.provider)
provider_names = [self.provider.provider]
if model_provider_id.is_langgenius():
provider_names.append(model_provider_id.provider_name)
provider_record = (
db.session.query(Provider)
.filter(
Provider.tenant_id == self.tenant_id,
Provider.provider_type == ProviderType.CUSTOM.value,
Provider.provider_name.in_(provider_names),
)
.first()
)
return provider_record
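
The pattern introduced here, and repeated in the other new helpers in this file, is to match either the fully-qualified provider name or the bare legacy name when the plugin is a langgenius one; condensed into a standalone sketch (the helper name is mine):

def candidate_provider_names(provider: str) -> list[str]:
    # e.g. "langgenius/openai/openai" should also match rows stored
    # under the legacy name "openai"
    model_provider_id = ModelProviderID(provider)
    names = [provider]
    if model_provider_id.is_langgenius():
        names.append(model_provider_id.provider_name)
    return names

Each query then filters with provider_name.in_(names) instead of an exact equality match.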
def custom_credentials_validate(self, credentials: dict) -> tuple[Provider | None, dict]:
"""
Validate custom credentials.
:param credentials: provider credentials
:return:
"""
# get provider
provider_record = (
db.session.query(Provider)
.filter(
Provider.tenant_id == self.tenant_id,
Provider.provider_name == self.provider.provider,
Provider.provider_type == ProviderType.CUSTOM.value,
)
.first()
)
provider_record = self._get_custom_provider_credentials()
# Get provider credential secret variables
provider_credential_secret_variables = self.extract_secret_variables(
@ -275,15 +288,7 @@ class ProviderConfiguration(BaseModel):
:return:
"""
# get provider
provider_record = (
db.session.query(Provider)
.filter(
Provider.tenant_id == self.tenant_id,
Provider.provider_name == self.provider.provider,
Provider.provider_type == ProviderType.CUSTOM.value,
)
.first()
)
provider_record = self._get_custom_provider_credentials()
# delete provider
if provider_record:
@ -330,6 +335,33 @@ class ProviderConfiguration(BaseModel):
return None
def _get_custom_model_credentials(
self,
model_type: ModelType,
model: str,
) -> ProviderModel | None:
"""
Get custom model credentials.
"""
# get provider model
model_provider_id = ModelProviderID(self.provider.provider)
provider_names = [self.provider.provider]
if model_provider_id.is_langgenius():
provider_names.append(model_provider_id.provider_name)
provider_model_record = (
db.session.query(ProviderModel)
.filter(
ProviderModel.tenant_id == self.tenant_id,
ProviderModel.provider_name.in_(provider_names),
ProviderModel.model_name == model,
ProviderModel.model_type == model_type.to_origin_model_type(),
)
.first()
)
return provider_model_record
def custom_model_credentials_validate(
self, model_type: ModelType, model: str, credentials: dict
) -> tuple[ProviderModel | None, dict]:
@ -342,16 +374,7 @@ class ProviderConfiguration(BaseModel):
:return:
"""
# get provider model
provider_model_record = (
db.session.query(ProviderModel)
.filter(
ProviderModel.tenant_id == self.tenant_id,
ProviderModel.provider_name == self.provider.provider,
ProviderModel.model_name == model,
ProviderModel.model_type == model_type.to_origin_model_type(),
)
.first()
)
provider_model_record = self._get_custom_model_credentials(model_type, model)
# Get provider credential secret variables
provider_credential_secret_variables = self.extract_secret_variables(
@ -432,16 +455,7 @@ class ProviderConfiguration(BaseModel):
:return:
"""
# get provider model
provider_model_record = (
db.session.query(ProviderModel)
.filter(
ProviderModel.tenant_id == self.tenant_id,
ProviderModel.provider_name == self.provider.provider,
ProviderModel.model_name == model,
ProviderModel.model_type == model_type.to_origin_model_type(),
)
.first()
)
provider_model_record = self._get_custom_model_credentials(model_type, model)
# delete provider model
if provider_model_record:
@ -456,6 +470,26 @@ class ProviderConfiguration(BaseModel):
provider_model_credentials_cache.delete()
def _get_provider_model_setting(self, model_type: ModelType, model: str) -> ProviderModelSetting | None:
"""
Get provider model setting.
"""
model_provider_id = ModelProviderID(self.provider.provider)
provider_names = [self.provider.provider]
if model_provider_id.is_langgenius():
provider_names.append(model_provider_id.provider_name)
return (
db.session.query(ProviderModelSetting)
.filter(
ProviderModelSetting.tenant_id == self.tenant_id,
ProviderModelSetting.provider_name.in_(provider_names),
ProviderModelSetting.model_type == model_type.to_origin_model_type(),
ProviderModelSetting.model_name == model,
)
.first()
)
def enable_model(self, model_type: ModelType, model: str) -> ProviderModelSetting:
"""
Enable model.
@ -463,16 +497,7 @@ class ProviderConfiguration(BaseModel):
:param model: model name
:return:
"""
model_setting = (
db.session.query(ProviderModelSetting)
.filter(
ProviderModelSetting.tenant_id == self.tenant_id,
ProviderModelSetting.provider_name == self.provider.provider,
ProviderModelSetting.model_type == model_type.to_origin_model_type(),
ProviderModelSetting.model_name == model,
)
.first()
)
model_setting = self._get_provider_model_setting(model_type, model)
if model_setting:
model_setting.enabled = True
@ -497,16 +522,7 @@ class ProviderConfiguration(BaseModel):
:param model: model name
:return:
"""
model_setting = (
db.session.query(ProviderModelSetting)
.filter(
ProviderModelSetting.tenant_id == self.tenant_id,
ProviderModelSetting.provider_name == self.provider.provider,
ProviderModelSetting.model_type == model_type.to_origin_model_type(),
ProviderModelSetting.model_name == model,
)
.first()
)
model_setting = self._get_provider_model_setting(model_type, model)
if model_setting:
model_setting.enabled = False
@ -531,13 +547,24 @@ class ProviderConfiguration(BaseModel):
:param model: model name
:return:
"""
return self._get_provider_model_setting(model_type, model)
def _get_load_balancing_config(self, model_type: ModelType, model: str) -> Optional[LoadBalancingModelConfig]:
"""
Get load balancing config.
"""
model_provider_id = ModelProviderID(self.provider.provider)
provider_names = [self.provider.provider]
if model_provider_id.is_langgenius():
provider_names.append(model_provider_id.provider_name)
return (
db.session.query(ProviderModelSetting)
db.session.query(LoadBalancingModelConfig)
.filter(
ProviderModelSetting.tenant_id == self.tenant_id,
ProviderModelSetting.provider_name == self.provider.provider,
ProviderModelSetting.model_type == model_type.to_origin_model_type(),
ProviderModelSetting.model_name == model,
LoadBalancingModelConfig.tenant_id == self.tenant_id,
LoadBalancingModelConfig.provider_name.in_(provider_names),
LoadBalancingModelConfig.model_type == model_type.to_origin_model_type(),
LoadBalancingModelConfig.model_name == model,
)
.first()
)
@ -549,11 +576,16 @@ class ProviderConfiguration(BaseModel):
:param model: model name
:return:
"""
model_provider_id = ModelProviderID(self.provider.provider)
provider_names = [self.provider.provider]
if model_provider_id.is_langgenius():
provider_names.append(model_provider_id.provider_name)
load_balancing_config_count = (
db.session.query(LoadBalancingModelConfig)
.filter(
LoadBalancingModelConfig.tenant_id == self.tenant_id,
LoadBalancingModelConfig.provider_name == self.provider.provider,
LoadBalancingModelConfig.provider_name.in_(provider_names),
LoadBalancingModelConfig.model_type == model_type.to_origin_model_type(),
LoadBalancingModelConfig.model_name == model,
)
@ -563,16 +595,7 @@ class ProviderConfiguration(BaseModel):
if load_balancing_config_count <= 1:
raise ValueError("Model load balancing configuration must be more than 1.")
model_setting = (
db.session.query(ProviderModelSetting)
.filter(
ProviderModelSetting.tenant_id == self.tenant_id,
ProviderModelSetting.provider_name == self.provider.provider,
ProviderModelSetting.model_type == model_type.to_origin_model_type(),
ProviderModelSetting.model_name == model,
)
.first()
)
model_setting = self._get_provider_model_setting(model_type, model)
if model_setting:
model_setting.load_balancing_enabled = True
@ -597,11 +620,16 @@ class ProviderConfiguration(BaseModel):
:param model: model name
:return:
"""
model_provider_id = ModelProviderID(self.provider.provider)
provider_names = [self.provider.provider]
if model_provider_id.is_langgenius():
provider_names.append(model_provider_id.provider_name)
model_setting = (
db.session.query(ProviderModelSetting)
.filter(
ProviderModelSetting.tenant_id == self.tenant_id,
ProviderModelSetting.provider_name == self.provider.provider,
ProviderModelSetting.provider_name.in_(provider_names),
ProviderModelSetting.model_type == model_type.to_origin_model_type(),
ProviderModelSetting.model_name == model,
)
@ -658,11 +686,16 @@ class ProviderConfiguration(BaseModel):
return
# get preferred provider
model_provider_id = ModelProviderID(self.provider.provider)
provider_names = [self.provider.provider]
if model_provider_id.is_langgenius():
provider_names.append(model_provider_id.provider_name)
preferred_model_provider = (
db.session.query(TenantPreferredModelProvider)
.filter(
TenantPreferredModelProvider.tenant_id == self.tenant_id,
TenantPreferredModelProvider.provider_name == self.provider.provider,
TenantPreferredModelProvider.provider_name.in_(provider_names),
)
.first()
)
@ -996,7 +1029,7 @@ class ProviderConfigurations(BaseModel):
"""
tenant_id: str
configurations: dict[str, ProviderConfiguration] = {}
configurations: dict[str, ProviderConfiguration] = Field(default_factory=dict)
def __init__(self, tenant_id: str):
super().__init__(tenant_id=tenant_id)
@ -1052,7 +1085,7 @@ class ProviderConfigurations(BaseModel):
def __getitem__(self, key):
if "/" not in key:
key = f"{DEFAULT_PLUGIN_ID}/{key}/{key}"
key = str(ModelProviderID(key))
return self.configurations[key]
@ -1067,7 +1100,7 @@ class ProviderConfigurations(BaseModel):
def get(self, key, default=None) -> ProviderConfiguration | None:
if "/" not in key:
key = f"{DEFAULT_PLUGIN_ID}/{key}/{key}"
key = str(ModelProviderID(key))
return self.configurations.get(key, default) # type: ignore

View File

@ -97,32 +97,18 @@ class File(BaseModel):
return text
def generate_url(self) -> Optional[str]:
if self.type == FileType.IMAGE:
if self.transfer_method == FileTransferMethod.REMOTE_URL:
return self.remote_url
elif self.transfer_method == FileTransferMethod.LOCAL_FILE:
if self.related_id is None:
raise ValueError("Missing file related_id")
return helpers.get_signed_file_url(upload_file_id=self.related_id)
elif self.transfer_method == FileTransferMethod.TOOL_FILE:
assert self.related_id is not None
assert self.extension is not None
return ToolFileParser.get_tool_file_manager().sign_file(
tool_file_id=self.related_id, extension=self.extension
)
else:
if self.transfer_method == FileTransferMethod.REMOTE_URL:
return self.remote_url
elif self.transfer_method == FileTransferMethod.LOCAL_FILE:
if self.related_id is None:
raise ValueError("Missing file related_id")
return helpers.get_signed_file_url(upload_file_id=self.related_id)
elif self.transfer_method == FileTransferMethod.TOOL_FILE:
assert self.related_id is not None
assert self.extension is not None
return ToolFileParser.get_tool_file_manager().sign_file(
tool_file_id=self.related_id, extension=self.extension
)
if self.transfer_method == FileTransferMethod.REMOTE_URL:
return self.remote_url
elif self.transfer_method == FileTransferMethod.LOCAL_FILE:
if self.related_id is None:
raise ValueError("Missing file related_id")
return helpers.get_signed_file_url(upload_file_id=self.related_id)
elif self.transfer_method == FileTransferMethod.TOOL_FILE:
assert self.related_id is not None
assert self.extension is not None
return ToolFileParser.get_tool_file_manager().sign_file(
tool_file_id=self.related_id, extension=self.extension
)
def to_plugin_parameter(self) -> dict[str, Any]:
return {

View File

@ -41,9 +41,13 @@ class HostedModerationConfig(BaseModel):
class HostingConfiguration:
provider_map: dict[str, HostingProvider] = {}
provider_map: dict[str, HostingProvider]
moderation_config: Optional[HostedModerationConfig] = None
def __init__(self) -> None:
self.provider_map = {}
self.moderation_config = None
def init_app(self, app: Flask) -> None:
if dify_config.EDITION != "CLOUD":
return
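
HostingConfiguration is a plain class, so its class-level provider_map = {} was a single dict shared by every instance; moving the assignment into __init__ gives each instance its own. The pitfall in miniature:

class Bad:
    cache: dict = {}  # one dict shared across all instances

class Good:
    def __init__(self) -> None:
        self.cache = {}  # a fresh dict per instance

The same class-attribute cleanup appears in ModelProviderFactory below; for pydantic models, which generally copy mutable defaults per instance, the switch to Field(default_factory=dict) on ProviderConfigurations earlier in this diff is about making the declaration explicit rather than fixing shared state.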

View File

@ -1,8 +1,11 @@
import decimal
import hashlib
from threading import Lock
from typing import Optional
from pydantic import BaseModel, ConfigDict, Field
import contexts
from core.model_runtime.entities.common_entities import I18nObject
from core.model_runtime.entities.defaults import PARAMETER_RULE_TEMPLATE
from core.model_runtime.entities.model_entities import (
@ -139,15 +142,35 @@ class AIModel(BaseModel):
:return: model schema
"""
plugin_model_manager = PluginModelManager()
return plugin_model_manager.get_model_schema(
tenant_id=self.tenant_id,
user_id="unknown",
plugin_id=self.plugin_id,
provider=self.provider_name,
model_type=self.model_type.value,
model=model,
credentials=credentials or {},
)
cache_key = f"{self.tenant_id}:{self.plugin_id}:{self.provider_name}:{self.model_type.value}:{model}"
# sort credentials
sorted_credentials = sorted(credentials.items()) if credentials else []
cache_key += ":".join([hashlib.md5(f"{k}:{v}".encode()).hexdigest() for k, v in sorted_credentials])
try:
contexts.plugin_model_schemas.get()
except LookupError:
contexts.plugin_model_schemas.set({})
contexts.plugin_model_schema_lock.set(Lock())
with contexts.plugin_model_schema_lock.get():
if cache_key in contexts.plugin_model_schemas.get():
return contexts.plugin_model_schemas.get()[cache_key]
schema = plugin_model_manager.get_model_schema(
tenant_id=self.tenant_id,
user_id="unknown",
plugin_id=self.plugin_id,
provider=self.provider_name,
model_type=self.model_type.value,
model=model,
credentials=credentials or {},
)
if schema:
contexts.plugin_model_schemas.get()[cache_key] = schema
return schema
def get_customizable_model_schema_from_credentials(self, model: str, credentials: dict) -> Optional[AIModelEntity]:
"""
@ -157,14 +180,9 @@ class AIModel(BaseModel):
:param credentials: model credentials
:return: model schema
"""
return self._get_customizable_model_schema(model, credentials)
def _get_customizable_model_schema(self, model: str, credentials: dict) -> Optional[AIModelEntity]:
"""
Get customizable model schema and fill in the template
"""
# get customizable model schema
schema = self.get_customizable_model_schema(model, credentials)
if not schema:
return None
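
The caching added to get_model_schema (and duplicated in ModelProviderFactory below) hinges on a deterministic key: tenant, plugin, provider, model type, and model, plus one MD5 digest per sorted credential pair so raw secrets never land in the key. Isolated as a standalone helper — the function name is mine, the logic is the diff's:

import hashlib
from typing import Optional

def schema_cache_key(
    tenant_id: str, plugin_id: str, provider: str, model_type: str, model: str,
    credentials: Optional[dict] = None,
) -> str:
    key = f"{tenant_id}:{plugin_id}:{provider}:{model_type}:{model}"
    sorted_credentials = sorted(credentials.items()) if credentials else []
    # hash each credential pair so the key stays stable and secret-free
    key += ":".join(hashlib.md5(f"{k}:{v}".encode()).hexdigest() for k, v in sorted_credentials)
    return key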

View File

@ -228,7 +228,7 @@ class LargeLanguageModel(AIModel):
:return: result generator
"""
callbacks = callbacks or []
prompt_message = AssistantPromptMessage(content="")
assistant_message = AssistantPromptMessage(content="")
usage = None
system_fingerprint = None
real_model = model
@ -250,7 +250,7 @@ class LargeLanguageModel(AIModel):
callbacks=callbacks,
)
prompt_message.content += chunk.delta.message.content
assistant_message.content += chunk.delta.message.content
real_model = chunk.model
if chunk.delta.usage:
usage = chunk.delta.usage
@ -265,7 +265,7 @@ class LargeLanguageModel(AIModel):
result=LLMResult(
model=real_model,
prompt_messages=prompt_messages,
message=prompt_message,
message=assistant_message,
usage=usage or LLMUsage.empty_usage(),
system_fingerprint=system_fingerprint,
),

View File

@ -1,3 +1,4 @@
import hashlib
import logging
import os
from collections.abc import Sequence
@ -7,7 +8,6 @@ from typing import Optional
from pydantic import BaseModel
import contexts
from core.entities import DEFAULT_PLUGIN_ID
from core.helper.position_helper import get_provider_position_map, sort_to_dict_by_position_map
from core.model_runtime.entities.model_entities import AIModelEntity, ModelType
from core.model_runtime.entities.provider_entities import ProviderConfig, ProviderEntity, SimpleProviderEntity
@ -34,9 +34,11 @@ class ModelProviderExtension(BaseModel):
class ModelProviderFactory:
provider_position_map: dict[str, int] = {}
provider_position_map: dict[str, int]
def __init__(self, tenant_id: str) -> None:
self.provider_position_map = {}
self.tenant_id = tenant_id
self.plugin_model_manager = PluginModelManager()
@ -205,17 +207,35 @@ class ModelProviderFactory:
Get model schema
"""
plugin_id, provider_name = self.get_plugin_id_and_provider_name_from_provider(provider)
model_schema = self.plugin_model_manager.get_model_schema(
tenant_id=self.tenant_id,
user_id="unknown",
plugin_id=plugin_id,
provider=provider_name,
model_type=model_type.value,
model=model,
credentials=credentials,
)
cache_key = f"{self.tenant_id}:{plugin_id}:{provider_name}:{model_type.value}:{model}"
# sort credentials
sorted_credentials = sorted(credentials.items()) if credentials else []
cache_key += ":".join([hashlib.md5(f"{k}:{v}".encode()).hexdigest() for k, v in sorted_credentials])
return model_schema
try:
contexts.plugin_model_schemas.get()
except LookupError:
contexts.plugin_model_schemas.set({})
contexts.plugin_model_schema_lock.set(Lock())
with contexts.plugin_model_schema_lock.get():
if cache_key in contexts.plugin_model_schemas.get():
return contexts.plugin_model_schemas.get()[cache_key]
schema = self.plugin_model_manager.get_model_schema(
tenant_id=self.tenant_id,
user_id="unknown",
plugin_id=plugin_id,
provider=provider_name,
model_type=model_type.value,
model=model,
credentials=credentials or {},
)
if schema:
contexts.plugin_model_schemas.get()[cache_key] = schema
return schema
def get_models(
self,
@ -360,11 +380,5 @@ class ModelProviderFactory:
:param provider: provider name
:return: plugin id and provider name
"""
plugin_id = DEFAULT_PLUGIN_ID
provider_name = provider
if "/" in provider:
# get the plugin_id before provider
plugin_id = "/".join(provider.split("/")[:-1])
provider_name = provider.split("/")[-1]
return str(plugin_id), provider_name
provider_id = ModelProviderID(provider)
return provider_id.plugin_id, provider_id.provider_name

View File

@ -0,0 +1,22 @@
- claude-3-haiku@20240307
- claude-3-opus@20240229
- claude-3-sonnet@20240229
- claude-3-5-sonnet-v2@20241022
- claude-3-5-sonnet@20240620
- gemini-1.0-pro-vision-001
- gemini-1.0-pro-002
- gemini-1.5-flash-001
- gemini-1.5-flash-002
- gemini-1.5-pro-001
- gemini-1.5-pro-002
- gemini-2.0-flash-001
- gemini-2.0-flash-exp
- gemini-2.0-flash-lite-preview-02-05
- gemini-2.0-flash-thinking-exp-01-21
- gemini-2.0-flash-thinking-exp-1219
- gemini-2.0-pro-exp-02-05
- gemini-exp-1114
- gemini-exp-1121
- gemini-exp-1206
- gemini-flash-experimental
- gemini-pro-experimental

View File

@ -5,6 +5,7 @@ from collections.abc import Mapping
from typing import Any, Optional
from pydantic import BaseModel, Field, model_validator
from werkzeug.exceptions import NotFound
from core.agent.plugin_entities import AgentStrategyProviderEntity
from core.model_runtime.entities.provider_entities import ProviderEntity
@ -153,17 +154,22 @@ class GenericProviderID:
return f"{self.organization}/{self.plugin_name}/{self.provider_name}"
def __init__(self, value: str, is_hardcoded: bool = False) -> None:
if not value:
raise NotFound("plugin not found, please add plugin")
# check if the value is a valid plugin id with format: $organization/$plugin_name/$provider_name
if not re.match(r"^[a-z0-9_-]+\/[a-z0-9_-]+\/[a-z0-9_-]+$", value):
# if it matches [a-z0-9_-]+, expand it to langgenius/$value/$value
if re.match(r"^[a-z0-9_-]+$", value):
value = f"langgenius/{value}/{value}"
else:
raise ValueError("Invalid plugin id")
raise ValueError(f"Invalid plugin id {value}")
self.organization, self.plugin_name, self.provider_name = value.split("/")
self.is_hardcoded = is_hardcoded
def is_langgenius(self) -> bool:
return self.organization == "langgenius"
@property
def plugin_id(self) -> str:
return f"{self.organization}/{self.plugin_name}"
@ -180,7 +186,7 @@ class ToolProviderID(GenericProviderID):
def __init__(self, value: str, is_hardcoded: bool = False) -> None:
super().__init__(value, is_hardcoded)
if self.organization == "langgenius":
if self.provider_name in ["jina", "siliconflow"]:
if self.provider_name in ["jina", "siliconflow", "stepfun", "gitee_ai"]:
self.plugin_name = f"{self.provider_name}_tool"
@ -212,3 +218,9 @@ class PluginDependency(BaseModel):
type: Type
value: Github | Marketplace | Package
current_identifier: Optional[str] = None
class MissingPluginDependency(BaseModel):
plugin_unique_identifier: str
current_identifier: Optional[str] = None

View File

@ -3,6 +3,7 @@ from collections.abc import Sequence
from core.plugin.entities.bundle import PluginBundleDependency
from core.plugin.entities.plugin import (
GenericProviderID,
MissingPluginDependency,
PluginDeclaration,
PluginEntity,
PluginInstallation,
@ -175,14 +176,16 @@ class PluginInstallationManager(BasePluginManager):
headers={"Content-Type": "application/json"},
)
def fetch_missing_dependencies(self, tenant_id: str, plugin_unique_identifiers: list[str]) -> list[str]:
def fetch_missing_dependencies(
self, tenant_id: str, plugin_unique_identifiers: list[str]
) -> list[MissingPluginDependency]:
"""
Fetch missing dependencies
"""
return self._request_with_plugin_daemon_response(
"POST",
f"plugin/{tenant_id}/management/installation/missing",
list[str],
list[MissingPluginDependency],
data={"plugin_unique_identifiers": plugin_unique_identifiers},
headers={"Content-Type": "application/json"},
)

View File

@ -3,7 +3,7 @@ from typing import Any, Optional
from pydantic import BaseModel
from core.plugin.entities.plugin import GenericProviderID
from core.plugin.entities.plugin import GenericProviderID, ToolProviderID
from core.plugin.entities.plugin_daemon import PluginBasicBooleanResponse, PluginToolProviderEntity
from core.plugin.manager.base import BasePluginManager
from core.tools.entities.tool_entities import ToolInvokeMessage, ToolParameter
@ -45,7 +45,7 @@ class PluginToolManager(BasePluginManager):
"""
Fetch tool provider for the given tenant and plugin.
"""
tool_provider_id = GenericProviderID(provider)
tool_provider_id = ToolProviderID(provider)
def transformer(json_response: dict[str, Any]) -> dict:
data = json_response.get("data")

View File

@ -100,8 +100,23 @@ class ProviderManager:
tenant_id, provider_name_to_provider_records_dict
)
# append providers with langgenius/openai/openai
provider_name_list = list(provider_name_to_provider_records_dict.keys())
for provider_name in provider_name_list:
provider_id = ModelProviderID(provider_name)
if str(provider_id) not in provider_name_list:
provider_name_to_provider_records_dict[str(provider_id)] = provider_name_to_provider_records_dict[
provider_name
]
# Get all provider model records of the workspace
provider_name_to_provider_model_records_dict = self._get_all_provider_models(tenant_id)
for provider_name in list(provider_name_to_provider_model_records_dict.keys()):
provider_id = ModelProviderID(provider_name)
if str(provider_id) not in provider_name_to_provider_model_records_dict:
provider_name_to_provider_model_records_dict[str(provider_id)] = (
provider_name_to_provider_model_records_dict[provider_name]
)
# Get all provider entities
model_provider_factory = ModelProviderFactory(tenant_id)
@ -360,7 +375,8 @@ class ProviderManager:
provider_name_to_provider_records_dict = defaultdict(list)
for provider in providers:
provider_name_to_provider_records_dict[provider.provider_name].append(provider)
# TODO: Use provider name with prefix after the data migration
provider_name_to_provider_records_dict[str(ModelProviderID(provider.provider_name))].append(provider)
return provider_name_to_provider_records_dict
@ -454,11 +470,9 @@ class ProviderManager:
provider_name_to_provider_load_balancing_model_configs_dict = defaultdict(list)
for provider_load_balancing_config in provider_load_balancing_configs:
(
provider_name_to_provider_load_balancing_model_configs_dict[
provider_load_balancing_config.provider_name
].append(provider_load_balancing_config)
)
provider_name_to_provider_load_balancing_model_configs_dict[
provider_load_balancing_config.provider_name
].append(provider_load_balancing_config)
return provider_name_to_provider_load_balancing_model_configs_dict
@ -501,7 +515,8 @@ class ProviderManager:
# FIXME: ignore the type error; only TrialHostingQuota has a limit, need to change the logic
provider_record = Provider(
tenant_id=tenant_id,
provider_name=provider_name,
# TODO: Use provider name with prefix after the data migration.
provider_name=ModelProviderID(provider_name).provider_name,
provider_type=ProviderType.SYSTEM.value,
quota_type=ProviderQuotaType.TRIAL.value,
quota_limit=quota.quota_limit, # type: ignore
@ -516,13 +531,12 @@ class ProviderManager:
db.session.query(Provider)
.filter(
Provider.tenant_id == tenant_id,
Provider.provider_name == provider_name,
Provider.provider_name == ModelProviderID(provider_name).provider_name,
Provider.provider_type == ProviderType.SYSTEM.value,
Provider.quota_type == ProviderQuotaType.TRIAL.value,
)
.first()
)
if provider_record and not provider_record.is_valid:
provider_record.is_valid = True
db.session.commit()

View File

@ -1,8 +1,12 @@
import threading
import concurrent.futures
import json
from concurrent.futures import ThreadPoolExecutor
from typing import Optional
from flask import Flask, current_app
from sqlalchemy.orm import load_only
from configs import dify_config
from core.rag.data_post_processor.data_post_processor import DataPostProcessor
from core.rag.datasource.keyword.keyword_factory import Keyword
from core.rag.datasource.vdb.vector_factory import Vector
@ -26,6 +30,7 @@ default_retrieval_model = {
class RetrievalService:
# Cache precompiled regular expressions to avoid repeated compilation
@classmethod
def retrieve(
cls,
@ -40,74 +45,62 @@ class RetrievalService:
):
if not query:
return []
dataset = db.session.query(Dataset).filter(Dataset.id == dataset_id).first()
if not dataset:
return []
dataset = cls._get_dataset(dataset_id)
if not dataset or dataset.available_document_count == 0 or dataset.available_segment_count == 0:
return []
all_documents: list[Document] = []
threads: list[threading.Thread] = []
exceptions: list[str] = []
# retrieval_model source with keyword
if retrieval_method == "keyword_search":
keyword_thread = threading.Thread(
target=RetrievalService.keyword_search,
kwargs={
"flask_app": current_app._get_current_object(), # type: ignore
"dataset_id": dataset_id,
"query": query,
"top_k": top_k,
"all_documents": all_documents,
"exceptions": exceptions,
},
)
threads.append(keyword_thread)
keyword_thread.start()
# retrieval_model source with semantic
if RetrievalMethod.is_support_semantic_search(retrieval_method):
embedding_thread = threading.Thread(
target=RetrievalService.embedding_search,
kwargs={
"flask_app": current_app._get_current_object(), # type: ignore
"dataset_id": dataset_id,
"query": query,
"top_k": top_k,
"score_threshold": score_threshold,
"reranking_model": reranking_model,
"all_documents": all_documents,
"retrieval_method": retrieval_method,
"exceptions": exceptions,
},
)
threads.append(embedding_thread)
embedding_thread.start()
# retrieval source with full text
if RetrievalMethod.is_support_fulltext_search(retrieval_method):
full_text_index_thread = threading.Thread(
target=RetrievalService.full_text_index_search,
kwargs={
"flask_app": current_app._get_current_object(), # type: ignore
"dataset_id": dataset_id,
"query": query,
"retrieval_method": retrieval_method,
"score_threshold": score_threshold,
"top_k": top_k,
"reranking_model": reranking_model,
"all_documents": all_documents,
"exceptions": exceptions,
},
)
threads.append(full_text_index_thread)
full_text_index_thread.start()
for thread in threads:
thread.join()
# Optimize multithreading with thread pools
with ThreadPoolExecutor(max_workers=dify_config.RETRIEVAL_SERVICE_EXECUTORS) as executor: # type: ignore
futures = []
if retrieval_method == "keyword_search":
futures.append(
executor.submit(
cls.keyword_search,
flask_app=current_app._get_current_object(), # type: ignore
dataset_id=dataset_id,
query=query,
top_k=top_k,
all_documents=all_documents,
exceptions=exceptions,
)
)
if RetrievalMethod.is_support_semantic_search(retrieval_method):
futures.append(
executor.submit(
cls.embedding_search,
flask_app=current_app._get_current_object(), # type: ignore
dataset_id=dataset_id,
query=query,
top_k=top_k,
score_threshold=score_threshold,
reranking_model=reranking_model,
all_documents=all_documents,
retrieval_method=retrieval_method,
exceptions=exceptions,
)
)
if RetrievalMethod.is_support_fulltext_search(retrieval_method):
futures.append(
executor.submit(
cls.full_text_index_search,
flask_app=current_app._get_current_object(), # type: ignore
dataset_id=dataset_id,
query=query,
top_k=top_k,
score_threshold=score_threshold,
reranking_model=reranking_model,
all_documents=all_documents,
retrieval_method=retrieval_method,
exceptions=exceptions,
)
)
concurrent.futures.wait(futures, timeout=30, return_when=concurrent.futures.ALL_COMPLETED)
if exceptions:
exception_message = ";\n".join(exceptions)
raise ValueError(exception_message)
raise ValueError(";\n".join(exceptions))
if retrieval_method == RetrievalMethod.HYBRID_SEARCH.value:
data_post_processor = DataPostProcessor(
@ -132,18 +125,21 @@ class RetrievalService:
)
return all_documents
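
Stripped of the retrieval specifics, the threading change above reduces to this skeleton; max_workers stands in for dify_config.RETRIEVAL_SERVICE_EXECUTORS, and error handling stays inside the submitted callables, which append to a shared exceptions list exactly as before:

import concurrent.futures
from concurrent.futures import ThreadPoolExecutor

def run_concurrently(tasks, max_workers: int = 4, timeout: float = 30) -> None:
    # tasks: iterable of (callable, kwargs) pairs
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(fn, **kwargs) for fn, kwargs in tasks]
        concurrent.futures.wait(
            futures, timeout=timeout, return_when=concurrent.futures.ALL_COMPLETED
        )

Note that leaving the with block still calls shutdown(wait=True), so the 30-second timeout bounds the wait() call itself rather than total execution time.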
@classmethod
def _get_dataset(cls, dataset_id: str) -> Optional[Dataset]:
return db.session.query(Dataset).filter(Dataset.id == dataset_id).first()
@classmethod
def keyword_search(
cls, flask_app: Flask, dataset_id: str, query: str, top_k: int, all_documents: list, exceptions: list
):
with flask_app.app_context():
try:
dataset = db.session.query(Dataset).filter(Dataset.id == dataset_id).first()
dataset = cls._get_dataset(dataset_id)
if not dataset:
raise ValueError("dataset not found")
keyword = Keyword(dataset=dataset)
documents = keyword.search(cls.escape_query_for_search(query), top_k=top_k)
all_documents.extend(documents)
except Exception as e:
@ -164,14 +160,13 @@ class RetrievalService:
):
with flask_app.app_context():
try:
dataset = db.session.query(Dataset).filter(Dataset.id == dataset_id).first()
dataset = cls._get_dataset(dataset_id)
if not dataset:
raise ValueError("dataset not found")
vector = Vector(dataset=dataset)
documents = vector.search_by_vector(
cls.escape_query_for_search(query),
query,
search_type="similarity_score_threshold",
top_k=top_k,
score_threshold=score_threshold,
@ -186,7 +181,7 @@ class RetrievalService:
and retrieval_method == RetrievalMethod.SEMANTIC_SEARCH.value
):
data_post_processor = DataPostProcessor(
str(dataset.tenant_id), RerankMode.RERANKING_MODEL.value, reranking_model, None, False
str(dataset.tenant_id), str(RerankMode.RERANKING_MODEL.value), reranking_model, None, False
)
all_documents.extend(
data_post_processor.invoke(
@ -216,13 +211,11 @@ class RetrievalService:
):
with flask_app.app_context():
try:
dataset = db.session.query(Dataset).filter(Dataset.id == dataset_id).first()
dataset = cls._get_dataset(dataset_id)
if not dataset:
raise ValueError("dataset not found")
vector_processor = Vector(
dataset=dataset,
)
vector_processor = Vector(dataset=dataset)
documents = vector_processor.search_by_full_text(cls.escape_query_for_search(query), top_k=top_k)
if documents:
@ -233,7 +226,7 @@ class RetrievalService:
and retrieval_method == RetrievalMethod.FULL_TEXT_SEARCH.value
):
data_post_processor = DataPostProcessor(
str(dataset.tenant_id), RerankMode.RERANKING_MODEL.value, reranking_model, None, False
str(dataset.tenant_id), str(RerankMode.RERANKING_MODEL.value), reranking_model, None, False
)
all_documents.extend(
data_post_processor.invoke(
@ -250,66 +243,106 @@ class RetrievalService:
@staticmethod
def escape_query_for_search(query: str) -> str:
return query.replace('"', '\\"')
return json.dumps(query).strip('"')
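
json.dumps escapes everything JSON requires — quotes, backslashes, control characters — rather than only double quotes as the old replace() did; stripping the surrounding JSON delimiters leaves the escaped body:

import json

def escape_query_for_search(query: str) -> str:
    return json.dumps(query).strip('"')

escape_query_for_search('say "hi" now')  # -> 'say \\"hi\\" now'
escape_query_for_search("tab\there")     # -> 'tab\\there'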
@classmethod
def format_retrieval_documents(cls, documents: list[Document]) -> list[RetrievalSegments]:
"""Format retrieval documents with optimized batch processing"""
if not documents:
return []
try:
# Collect document IDs
document_ids = {doc.metadata.get("document_id") for doc in documents if "document_id" in doc.metadata}
if not document_ids:
return []
# Batch query dataset documents
dataset_documents = {
doc.id: doc
for doc in db.session.query(DatasetDocument)
.filter(DatasetDocument.id.in_(document_ids))
.options(load_only(DatasetDocument.id, DatasetDocument.doc_form, DatasetDocument.dataset_id))
.all()
}
records = []
include_segment_ids = set()
segment_child_map = {}
# Process documents
for document in documents:
document_id = document.metadata.get("document_id")
if document_id not in dataset_documents:
continue
dataset_document = dataset_documents[document_id]
@staticmethod
def format_retrieval_documents(documents: list[Document]) -> list[RetrievalSegments]:
records = []
include_segment_ids = []
segment_child_map = {}
for document in documents:
document_id = document.metadata.get("document_id")
dataset_document = db.session.query(DatasetDocument).filter(DatasetDocument.id == document_id).first()
if dataset_document:
if dataset_document.doc_form == IndexType.PARENT_CHILD_INDEX:
# Handle parent-child documents
child_index_node_id = document.metadata.get("doc_id")
result = (
db.session.query(ChildChunk, DocumentSegment)
.join(DocumentSegment, ChildChunk.segment_id == DocumentSegment.id)
child_chunk = (
db.session.query(ChildChunk).filter(ChildChunk.index_node_id == child_index_node_id).first()
)
if not child_chunk:
continue
segment = (
db.session.query(DocumentSegment)
.filter(
ChildChunk.index_node_id == child_index_node_id,
DocumentSegment.dataset_id == dataset_document.dataset_id,
DocumentSegment.enabled == True,
DocumentSegment.status == "completed",
DocumentSegment.id == child_chunk.segment_id,
)
.options(
load_only(
DocumentSegment.id,
DocumentSegment.content,
DocumentSegment.answer,
)
)
.first()
)
if result:
child_chunk, segment = result
if not segment:
continue
if segment.id not in include_segment_ids:
include_segment_ids.append(segment.id)
child_chunk_detail = {
"id": child_chunk.id,
"content": child_chunk.content,
"position": child_chunk.position,
"score": document.metadata.get("score", 0.0),
}
map_detail = {
"max_score": document.metadata.get("score", 0.0),
"child_chunks": [child_chunk_detail],
}
segment_child_map[segment.id] = map_detail
record = {
"segment": segment,
}
records.append(record)
else:
child_chunk_detail = {
"id": child_chunk.id,
"content": child_chunk.content,
"position": child_chunk.position,
"score": document.metadata.get("score", 0.0),
}
segment_child_map[segment.id]["child_chunks"].append(child_chunk_detail)
segment_child_map[segment.id]["max_score"] = max(
segment_child_map[segment.id]["max_score"], document.metadata.get("score", 0.0)
)
else:
if not segment:
continue
if segment.id not in include_segment_ids:
include_segment_ids.add(segment.id)
child_chunk_detail = {
"id": child_chunk.id,
"content": child_chunk.content,
"position": child_chunk.position,
"score": document.metadata.get("score", 0.0),
}
map_detail = {
"max_score": document.metadata.get("score", 0.0),
"child_chunks": [child_chunk_detail],
}
segment_child_map[segment.id] = map_detail
record = {
"segment": segment,
}
records.append(record)
else:
child_chunk_detail = {
"id": child_chunk.id,
"content": child_chunk.content,
"position": child_chunk.position,
"score": document.metadata.get("score", 0.0),
}
segment_child_map[segment.id]["child_chunks"].append(child_chunk_detail)
segment_child_map[segment.id]["max_score"] = max(
segment_child_map[segment.id]["max_score"], document.metadata.get("score", 0.0)
)
else:
index_node_id = document.metadata["doc_id"]
# Handle normal documents
index_node_id = document.metadata.get("doc_id")
if not index_node_id:
continue
segment = (
db.session.query(DocumentSegment)
@ -324,16 +357,21 @@ class RetrievalService:
if not segment:
continue
include_segment_ids.append(segment.id)
include_segment_ids.add(segment.id)
record = {
"segment": segment,
"score": document.metadata.get("score", None),
"score": document.metadata.get("score"), # type: ignore
}
records.append(record)
# Add child chunks information to records
for record in records:
if record["segment"].id in segment_child_map:
record["child_chunks"] = segment_child_map[record["segment"].id].get("child_chunks", None)
record["child_chunks"] = segment_child_map[record["segment"].id].get("child_chunks") # type: ignore
record["score"] = segment_child_map[record["segment"].id]["max_score"]
return [RetrievalSegments(**record) for record in records]
return [RetrievalSegments(**record) for record in records]
except Exception as e:
db.session.rollback()
raise e

View File

@ -111,8 +111,9 @@ class ChromaVector(BaseVector):
for index in range(len(ids)):
distance = distances[index]
metadata = dict(metadatas[index])
if distance >= score_threshold:
metadata["score"] = distance
score = 1 - distance
if score > score_threshold:
metadata["score"] = score
doc = Document(
page_content=documents[index],
metadata=metadata,

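Chroma reports distances, where smaller means more similar, while the score threshold expects a similarity where larger is better; the fix inverts the value before comparing:

def distance_to_score(distance: float) -> float:
    # assuming a normalized distance in [0, 1]:
    # 0.0 (identical) -> 1.0, 1.0 (far apart) -> 0.0
    return 1 - distance

assert distance_to_score(0.0) == 1.0 and distance_to_score(1.0) == 0.0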
View File

@ -72,8 +72,18 @@ class MilvusVector(BaseVector):
self._client = self._init_client(config)
self._consistency_level = "Session" # Consistency level for Milvus operations
self._fields: list[str] = [] # List of fields in the collection
if self._client.has_collection(collection_name):
self._load_collection_fields()
self._hybrid_search_enabled = self._check_hybrid_search_support() # Check if hybrid search is supported
def _load_collection_fields(self, fields: Optional[list[str]] = None) -> None:
if fields is None:
# Load collection fields from remote server
collection_info = self._client.describe_collection(self._collection_name)
fields = [field["name"] for field in collection_info["fields"]]
# Since primary field is auto-id, no need to track it
self._fields = [f for f in fields if f != Field.PRIMARY_KEY.value]
def _check_hybrid_search_support(self) -> bool:
"""
Check if the current Milvus version supports hybrid search.
@ -306,10 +316,7 @@ class MilvusVector(BaseVector):
)
schema.add_function(bm25_function)
for x in schema.fields:
self._fields.append(x.name)
# Since primary field is auto-id, no need to track it
self._fields.remove(Field.PRIMARY_KEY.value)
self._load_collection_fields([f.name for f in schema.fields])
# Create Index params for the collection
index_params_obj = IndexParams()

Some files were not shown because too many files have changed in this diff.