Compare commits

..

1002 Commits

Author SHA1 Message Date
b8236b29f3 Merge branch 'feat/end-user-oauth' into deploy/end-user-oauth 2025-12-04 17:50:16 +08:00
eb0e63c336 Merge remote-tracking branch 'origin/main' into feat/end-user-oauth
# Conflicts:
#	web/app/components/app/configuration/config/agent/agent-tools/index.tsx
2025-12-04 16:16:04 +08:00
f62926f0ca fix: bump pyarrow to 17.0.0, werkzeug to 3.1.4, urllib3 to 2.5.0 (#29089)
Signed-off-by: kenwoodjw <blackxin55+@gmail.com>
2025-12-04 15:39:31 +08:00
b033bb02fc chore: upgrade React to 19.2.1, fix CVE-2025-55182 (#29121)
Co-authored-by: zhsama <torvalds@linux.do>
2025-12-04 14:44:52 +08:00
031cba81b4 Fix/app list compatible (#29123) 2025-12-04 14:44:24 +08:00
693ab6ad82 fix(web): disable tooltip delay to avoid tooltip flickering (#29104) 2025-12-04 14:16:56 +08:00
541fd7daa2 chore: update Next.js dev dependencies to 15.5.7 (#29120) 2025-12-04 14:16:45 +08:00
61d79a1502 feat: Unify environment variables for database connection and authentication (#29092) 2025-12-04 14:16:11 +08:00
03357ff1ec fix: catch error in response converter (#29056)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-12-04 11:25:16 +08:00
b4bed94cc5 chore(deps): bump next from 15.5.6 to 15.5.7 in /web (#29105)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-04 10:14:50 +08:00
e924dc7b30 chore: ignore redis lock not owned error (#29064)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-12-04 10:14:28 +08:00
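A minimal sketch of the idea in e924dc7b30, under assumed names (not Dify's actual code): if a Redis lock expires before release, redis-py raises `LockNotOwnedError`, and treating that as benign on release avoids noisy errors.

```python
from redis import Redis
from redis.exceptions import LockNotOwnedError

redis_client = Redis()

def release_quietly(lock) -> None:
    try:
        lock.release()
    except LockNotOwnedError:
        # The lock timed out and was reclaimed elsewhere; the guarded
        # work already finished, so this is safe to ignore.
        pass

lock = redis_client.lock("example-task-lock", timeout=60)  # hypothetical lock name
if lock.acquire(blocking=False):
    try:
        ...  # do the work guarded by the lock
    finally:
        release_quietly(lock)
```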
4b969bdce3 fix: MySQL does not support 'returning' (#29069) 2025-12-04 10:14:19 +08:00
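A hedged sketch of the portability issue behind 4b969bdce3: MySQL has no RETURNING clause, so instead of `insert(...).returning(...)` the portable SQLAlchemy path fetches generated values after the INSERT. Table and column names here are hypothetical.

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, insert, select)

metadata = MetaData()
items = Table("items", metadata,
              Column("id", Integer, primary_key=True),
              Column("name", String(50)))

engine = create_engine("sqlite://")  # stand-in for a MySQL URL
metadata.create_all(engine)

with engine.begin() as conn:
    # Portable path: rely on inserted_primary_key instead of RETURNING.
    result = conn.execute(insert(items).values(name="demo"))
    new_id = result.inserted_primary_key[0]
    row = conn.execute(select(items).where(items.c.id == new_id)).one()
```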
d07afb38a0 fix: trigger call workflow_as_tool error (#29058) 2025-12-04 10:13:18 +08:00
5bb715ee2f fix: remove chat conversation api dead arg message_count_gte (#29097) 2025-12-04 10:12:47 +08:00
3e5f683e90 feat: dark theme icon support (#28858) 2025-12-04 09:29:00 +08:00
31481581e8 refactor: simplify marketplace component structure by removing unused… (#29095) 2025-12-03 21:30:24 +08:00
16dc744908 Merge remote-tracking branch 'origin/main' into feat/end-user-oauth 2025-12-03 18:44:20 +08:00
yyh 2e0c2e8482 refactor/marketplace react query (#29028)
Co-authored-by: zhsama <torvalds@linux.do>
2025-12-03 18:30:20 +08:00
0343374d52 feat: add ReactScan component for enhanced development scanning (#29086) 2025-12-03 18:19:12 +08:00
c1fe394c0e fix: slow education verify API check may cause page redirects when modal closes (#29078) 2025-12-03 17:11:57 +08:00
f3258bab9e Merge remote-tracking branch 'origin/main' into feat/end-user-oauth 2025-12-03 16:36:24 +08:00
876f48df76 chore: remove useless mock files (#29068) 2025-12-03 15:34:11 +08:00
fbb2d076f4 integrate Amplitude analytics into the application (#29049)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
Co-authored-by: Joel <iamjoel007@gmail.com>
2025-12-03 14:22:12 +08:00
c7d2a13524 fix: improve chat message log feedback (#29045)
Co-authored-by: yyh <yuanyouhuilyz@gmail.com>
2025-12-03 13:42:40 +08:00
9b9588f20d fix: CVE-2025-64718 (#29027)
Signed-off-by: kenwoodjw <blackxin55+@gmail.com>
2025-12-02 21:49:57 +08:00
d6bbf0f975 chore: enhance test (#29002)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-12-02 21:49:08 +08:00
f48522e923 feat: add x-trace-id to http responses and logs (#29015)
Introduce trace id to http responses and logs to facilitate debugging process.
2025-12-02 17:22:34 +08:00
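A minimal sketch (not Dify's actual implementation) of the idea in f48522e923: attach an `X-Trace-Id` header to every HTTP response and include the same id in log records so responses and logs can be correlated.

```python
import logging
import uuid

from flask import Flask, g, request

app = Flask(__name__)
log = logging.getLogger("app")

@app.before_request
def assign_trace_id():
    # Reuse an incoming id if the caller supplied one, else mint a new one.
    g.trace_id = request.headers.get("X-Trace-Id") or uuid.uuid4().hex

@app.after_request
def attach_trace_id(response):
    response.headers["X-Trace-Id"] = g.trace_id
    return response

@app.route("/ping")
def ping():
    log.info("ping handled, trace_id=%s", g.trace_id)
    return "pong"
```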
yyh f8b10c2272 Refactor apps service toward TanStack Query (#29004) 2025-12-02 15:18:33 +08:00
369892634d [Bugfix] Fixed an issue with UUID type queries in MySQL databases (#28941) 2025-12-02 14:37:23 +08:00
yyh 8e5cb86409 Stop showing slash commands in general Go to Anything search (#29012) 2025-12-02 14:24:21 +08:00
472f8a8a2b Merge branch 'origin-main' into feat/end-user-oauth 2025-12-02 13:30:53 +08:00
a85afe4d07 feat: complete test script of plugin manager (#28967)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-12-02 11:25:08 +08:00
e8f93380d1 Fix validation (#28985)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-12-02 10:25:52 +08:00
yyh 0a22bc5d05 fix(web): use atomic selectors in AccessControlItem (#28983) 2025-12-01 19:23:42 +08:00
yyh 626d4f3e35 fix(web): use atomic selectors to fix Zustand v5 infinite loop (#28977) 2025-12-01 15:45:50 +08:00
f4db5f9973 chore(deps): bump faker from 32.1.0 to 38.2.0 in /api (#28964)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 15:45:39 +08:00
70dabe318c feat: complete test script of mail send task (#28963) 2025-12-01 15:45:22 +08:00
f94972f662 chore(deps): bump @lexical/list from 0.36.2 to 0.38.2 in /web (#28961)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 15:44:52 +08:00
c8807d3f89 feat: add service connection panel and translations for service connection messages 2025-12-01 15:23:46 +08:00
d162f7e5ef feat(api): automatic NODE_TYPE_CLASSES_MAPPING generation from node class definitions (#28525) 2025-12-01 14:14:19 +08:00
17e1de18d5 Merge branch 'feat/end-user-oauth' into deploy/end-user-oauth 2025-12-01 12:26:29 +08:00
5e6053b367 Merge branch 'origin-main' into feat/end-user-oauth 2025-12-01 12:26:01 +08:00
2f8cb2a1af chore(deps): bump @lexical/text from 0.36.2 to 0.38.2 in /web (#28960)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 09:56:58 +08:00
b91d22375f fix: moving focus after navigations (#28937) 2025-12-01 09:55:04 +08:00
yyh a087ace697 chore(web): upgrade zustand from v4.5.7 to v5.0.9 (#28943) 2025-12-01 09:53:19 +08:00
0af8a7b958 feat: enhance OceanBase vector database with SQL injection fixes, unified processing, and improved error handling (#28951) 2025-12-01 09:51:47 +08:00
861098714b feat: complete test script of plugin runtime (#28955)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-12-01 09:51:31 +08:00
63b345110e chore(deps): bump echarts-for-react from 3.0.2 to 3.0.5 in /web (#28958)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-01 09:51:22 +08:00
f5e36a8a2b Merge branch 'origin-main' into feat/end-user-oauth 2025-12-01 02:25:44 +08:00
247069c7e9 refactor: port reqparse to Pydantic model (#28913)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-30 16:09:42 +09:00
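A hedged before/after sketch for 247069c7e9: Flask-RESTful's `reqparse` replaced by a Pydantic model that validates the same payload. Field names are illustrative, not the actual endpoint schema.

```python
from pydantic import BaseModel, Field

class CreateAppPayload(BaseModel):
    # Previously something like:
    #   parser = reqparse.RequestParser()
    #   parser.add_argument("name", type=str, required=True)
    #   args = parser.parse_args()
    name: str = Field(min_length=1)
    description: str = ""
    mode: str = "chat"

payload = CreateAppPayload.model_validate({"name": "demo"})
print(payload.mode)  # defaults applied and types enforced by the model
```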
bb096f4ae3 Feat/implement test script of content moderation (#28923) 2025-11-30 12:43:58 +08:00
a37497ffb5 fix(web): prevent navbar clearing app state on cmd+click (#28935) 2025-11-30 12:43:47 +08:00
02adf4ff06 chore(i18n): translate i18n files and update type definitions (#28933)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2025-11-30 12:43:02 +08:00
acbc886ecd fix: implement score_threshold filtering for OceanBase vector search (#28536)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-29 18:50:21 +08:00
0a2d478749 Feat: Add "Open Workflow" link in workflow side panel (#28898) 2025-11-29 18:47:12 +08:00
95528ad8e5 fix: ensure "No apps found" text is visible on small screens (#28929)
Co-authored-by: yyh <92089059+lyzno1@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-29 17:21:39 +08:00
ddad2460f3 feat: complete test script of dataset indexing task (#28897)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 21:31:03 +08:00
660efda7f5 feat: enhance AgentTools component with expandable provider sections and improved UI interactions 2025-11-28 18:30:55 +08:00
5e96e4dda6 feat: introduce CredentialConfigHeader and EndUserCredentialSection components for enhanced plugin authentication UI 2025-11-28 18:12:11 +08:00
a8491c26ea fix: add explicit default to httpx.timeout (#28836) 2025-11-28 04:02:07 -06:00
0aed7afdc0 feat: Add comprehensive unit tests for TagService with extensive docu… (#28885)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-28 18:01:01 +08:00
18b800a33b feat: complete test script of sensitive word filter (#28879)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 18:00:54 +08:00
c64fe595d3 test: add comprehensive unit tests for ExternalDatasetService (#28872) 2025-11-28 17:59:02 +08:00
a31eea8389 feat: refactor AgentTools component to group tools by provider and enhance UI interactions 2025-11-28 16:42:52 +08:00
dd3b1ccd45 refactor(workflow): remove redundant get_base_node_data() method (#28803) 2025-11-28 15:38:46 +08:00
6f927b4a62 test: add comprehensive unit tests for RecommendedAppService (#28869) 2025-11-28 15:10:24 +08:00
c76bb8ffa0 feat: complete test script of file upload (#28843)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 15:10:12 +08:00
4dcd871cef test: add comprehensive unit tests for AudioService (#28860) 2025-11-28 14:43:35 +08:00
abe1d31ae0 test: add comprehensive unit tests for SavedMessageService (#28845) 2025-11-28 14:42:54 +08:00
2d71fff2b2 test: add comprehensive unit tests for TagService (#28854) 2025-11-28 14:41:57 +08:00
c4f61b8ae7 Fix CODEOWNERS workflow owner handle (#28866) 2025-11-28 14:41:20 +08:00
c51ab6ec37 fix: the consistency of the go-to-anything interaction (#28857) 2025-11-28 14:29:15 +08:00
1fc2255219 test: add comprehensive unit tests for EndUserService (#28840) 2025-11-28 14:22:19 +08:00
6aa0c9e5cc Merge branch 'refs/heads/origin-main' into feat/end-user-oauth 2025-11-28 14:19:42 +08:00
037389137d feat: complete test script of indexing runner (#28828)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-28 14:18:59 +08:00
8cd3e84c06 chore: bump dify plugin version in docker.middleware (#28847) 2025-11-28 13:55:13 +08:00
b3c6ac1430 chore: assign code owners to frontend and backend modules in CODEOWNERS (#28713) 2025-11-28 12:42:58 +08:00
68bb97919a feat: add comprehensive unit tests for MessageService (#28837)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-28 12:36:15 +08:00
f268d7c7be feat: complete test script of website crawl (#28826) 2025-11-28 12:34:27 +08:00
d695a79ba1 test: add comprehensive unit tests for DocumentIndexingTaskProxy (#28830)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-28 12:30:54 +08:00
cd5a745bd2 feat: complete test script of notion provider (#28833) 2025-11-28 12:30:45 +08:00
51e5f422c4 test: add comprehensive unit tests for VectorService and Vector classes (#28834)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 12:30:02 +08:00
ec3b2b40c2 test: add comprehensive unit tests for FeedbackService (#28771)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 11:33:56 +08:00
67ae3e9253 docker: use COPY --chown in api Dockerfile to avoid adding layers by explicit chown calls (#28756) 2025-11-28 11:33:06 +08:00
d38e3b7792 test: add unit tests for document service status management (#28804)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 11:25:36 +08:00
43d27edef2 feat: complete test script of embedding service (#28817)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 11:24:30 +08:00
94b87eac72 feat: add comprehensive unit tests for provider models (#28702)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 11:24:20 +08:00
yyh fd31af6012 fix(ci): use dynamic branch name for i18n workflow to prevent race condition (#28823)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-28 11:23:28 +08:00
yyh 228deccec2 chore: update packageManager version in package.json to pnpm@10.24.0 (#28820) 2025-11-28 11:23:20 +08:00
639f1d31f7 feat: complete test script of text splitter (#28813)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 11:22:52 +08:00
ec786fe236 test: add unit tests for document service validation and configuration (#28810)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 11:21:45 +08:00
fe3a6ef049 feat: complete test script of reranker (#28806)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-28 11:21:35 +08:00
8b761319f6 Refactor workflow nodes to use generic node_data (#28782) 2025-11-27 20:46:56 +08:00
002d8769b0 chore: translate i18n files and update type definitions (#28784)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2025-11-27 20:28:17 +08:00
5aba111297 Feat zen mode (#28794) 2025-11-27 20:10:50 +08:00
dc9b3a7e03 refactor: rename VariableAssignerNodeData to VariableAggregatorNodeData (#28780) 2025-11-27 17:45:48 +08:00
8fe47a3c04 Merge remote-tracking branch 'origin/main' into feat/end-user-oauth 2025-11-27 17:45:28 +08:00
cb4670cd68 feat: enhance agent tools with end user credential support 2025-11-27 17:45:05 +08:00
1400b9c6e2 refactor(plugin-auth): enhance end-user type selection UI and remove unused state 2025-11-27 17:45:05 +08:00
b7d9483bc2 feat(end-user): implement end-user credential management in plugin-auth component
- Added support for end-user credentials with options for OAuth and API Key.
- Introduced new props in PluginAuth for managing end-user credential types and their states.
- Updated workflow types to include end-user credential fields.
- Enhanced UI to allow users to select and manage end-user credentials.
- Added translations for new UI elements related to end-user credentials.
2025-11-27 17:45:05 +08:00
5f2e0d6347 perf: reduce next step components re-render (#28783) 2025-11-27 17:12:00 +08:00
1f72571c06 edit analyze-component (#28781)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
Co-authored-by: 姜涵煦 <hanxujiang@jianghanxudeMacBook-Pro.local>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 16:54:44 +08:00
820925a866 feat(workflow): workflow as tool output schema (#26241)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: Novice <novice12185727@gmail.com>
2025-11-27 16:50:48 +08:00
299bd351fd perf: reduce reRender in candidate node (#28776) 2025-11-27 15:57:36 +08:00
13bf6547ee Refactor: centralize node data hydration (#27771) 2025-11-27 15:41:56 +08:00
1b733abe82 feat: creates logs immediately when workflows start (not at completion) (#28701) 2025-11-27 15:22:33 +08:00
5782e26ab2 test: add unit tests for dataset service update/delete operations (#28757)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 15:01:43 +08:00
38d329e75a test: add unit tests for dataset permission service (#28760)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 15:00:55 +08:00
58f448a926 chore: remove outdated model config doc (#28765) 2025-11-27 14:40:06 +08:00
7a7fea40d9 feat: complete test script of dataset retrieval (#28762)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 14:39:33 +08:00
0309545ff1 Feat/test script of workflow service (#28726)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-27 11:23:55 +08:00
6deabfdad3 Use naive_utc_now in graph engine tests (#28735)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 11:23:20 +08:00
f9b4c31344 fix: MCP tool time configuration not working (#28740) 2025-11-27 11:22:49 +08:00
8d8800e632 upgrade docker compose milvus version to 2.6.0 to fix installation error (#26618)
Co-authored-by: crazywoola <427733928@qq.com>
2025-11-27 11:01:14 +08:00
4ca4493084 Add comprehensive unit tests for MetadataService (dataset metadata CRUD operations and filtering) (#28748)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 11:00:10 +08:00
7efa0df1fd Add comprehensive API/controller tests for dataset endpoints (list, create, update, delete, documents, segments, hit testing, external datasets) (#28750)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 10:59:17 +08:00
b786e101e5 fix: querying and setting the system default model (#28743) 2025-11-27 11:58:35 +09:00
09a8046b10 fix: querying webhook trigger issue (#28753) 2025-11-27 10:56:21 +08:00
2f6b3f1c5f hotfix: fix _extract_filename for rfc 5987 (#26230)
Signed-off-by: NeatGuyCoding <15627489+NeatGuyCoding@users.noreply.github.com>
2025-11-27 10:54:00 +08:00
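A hedged sketch of the RFC 5987 handling referenced in 2f6b3f1c5f (the real `_extract_filename` is not shown here): a Content-Disposition `filename*` value carries `charset''percent-encoded` text, which must be decoded rather than used literally.

```python
from urllib.parse import unquote

def extract_rfc5987_filename(value: str) -> str:
    # value looks like: UTF-8''%E4%BE%8B%E5%AD%90.txt
    # (charset, optional language, then the percent-encoded name)
    charset, _, encoded = value.split("'", 2)
    return unquote(encoded, encoding=charset or "utf-8")

print(extract_rfc5987_filename("UTF-8''%E4%BE%8B%E5%AD%90.txt"))  # -> 例子.txt
```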
2551f6f279 feat: add APP_DEFAULT_ACTIVE_REQUESTS as the default value for APP_AC… (#26930) 2025-11-27 10:51:48 +08:00
01afa56166 chore: enhance the test script of current billing service (#28747)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 10:37:24 +08:00
5815950092 add unit tests for iteration node (#28719)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 10:36:47 +08:00
766e16b26f add unit tests for code node (#28717) 2025-11-27 10:36:37 +08:00
0fdb4e7c12 chore: enhance the test script of conversation service (#28739)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 09:57:52 +08:00
64babb35e2 feat: Add comprehensive unit tests for DatasetCollectionBindingService (dataset collection binding methods) (#28724)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-27 09:55:42 +08:00
38522e5dfa fix: use default_factory for callable defaults in ORM dataclasses (#28730) 2025-11-27 09:39:49 +09:00
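The classic pitfall fixed in 38522e5dfa, sketched generically: a callable default like `datetime.now` must go through `default_factory`, otherwise the value is evaluated once at class definition time and shared by every instance.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Record:
    # Wrong: created_at: datetime = datetime.now()  -- frozen at import time.
    created_at: datetime = field(default_factory=datetime.now)
    # Mutable defaults follow the same rule, or all instances share one list.
    tags: list[str] = field(default_factory=list)
```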
9df9db3a8f feat: use enums instead of literals (#28733) 2025-11-26 13:31:12 -06:00
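A minimal sketch of the cleanup in 9df9db3a8f / a659cbf71d: replacing bare string literals with an enum so typos fail fast. Member names here are illustrative; `StrEnum` requires Python 3.11+.

```python
from enum import StrEnum

class CredentialType(StrEnum):
    OAUTH = "oauth"
    API_KEY = "api-key"

def handle(credential_type: CredentialType) -> None:
    if credential_type is CredentialType.OAUTH:
        ...  # run the OAuth flow

handle(CredentialType("oauth"))  # unknown values raise ValueError immediately
```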
bf4f9b04bf Merge branch 'feat/end-user-oauth' into feat/end-user-add-dsl-field 2025-11-26 13:29:54 -06:00
a659cbf71d ECO-184: use enums instead of literals 2025-11-26 13:28:23 -06:00
72514904ea Revert "feat: add dsl field for end user oauth" (#28732) 2025-11-26 13:24:07 -06:00
f3fbd4f90e Revert "feat: add dsl field for end user oauth" 2025-11-26 13:23:58 -06:00
5947cc2bab feat: add dsl field for end user oauth (#28687) 2025-11-26 13:23:27 -06:00
4ccc150fd1 test: add comprehensive unit tests for ExternalDatasetService (external knowledge API integration) (#28716)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-26 23:33:46 +08:00
a4c57017d5 add: badges (#28722) 2025-11-26 23:30:41 +08:00
b2a7cec644 add unit tests for template transform node (#28595)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-26 22:50:20 +08:00
ddc5cbe865 feat: complete test script of dataset service (#28710)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-26 22:48:08 +08:00
1e23957657 fix(ops): add streaming metrics and LLM span for agent-chat traces (#28320)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2025-11-26 22:45:20 +08:00
2731b04ff9 Pydantic models (#28697)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 22:44:14 +08:00
e8ca80a61a add unit tests for list operator node (#28597)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-26 22:43:30 +08:00
e76129b5a4 test: add comprehensive unit tests for HitTestingService Fix: #28667 (#28668)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-26 22:42:58 +08:00
6635ea62c2 fix: changing an existing node to a webhook node raises 404 (#28686) 2025-11-26 22:41:52 +08:00
6b8c649876 fix: prevent auto-scrolling from stopping in chat (#28690)
Signed-off-by: Yuichiro Utsumi <utsumi.yuichiro@fujitsu.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-26 22:39:29 +08:00
af587f3869 chore: update packageManager version to pnpm@10.23.0 (#28708) 2025-11-26 22:37:05 +08:00
1c1f124891 Enhanced GraphEngine Pause Handling (#28196)
This commit: 

1. Convert `pause_reason` to `pause_reasons` in `GraphExecution` and relevant classes. Change the field from a scalar value to a list that can contain multiple `PauseReason` objects, ensuring all pause events are properly captured.
2. Introduce a new `WorkflowPauseReason` model to record reasons associated with a specific `WorkflowPause`.

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-26 19:59:34 +08:00
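A hedged sketch of the shape change in 1c1f124891: `GraphExecution` keeps a list of `PauseReason` objects instead of a single scalar, so no pause event is lost when several arrive before resumption. The fields below are simplified stand-ins for the real classes.

```python
from dataclasses import dataclass, field

@dataclass
class PauseReason:
    node_id: str
    detail: str

@dataclass
class GraphExecution:
    pause_reasons: list[PauseReason] = field(default_factory=list)

    def record_pause(self, reason: PauseReason) -> None:
        # Append rather than overwrite -- a scalar field would silently
        # drop every pause after the first.
        self.pause_reasons.append(reason)
```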
b353a126d8 chore: bump version to 1.10.1 (#28696) 2025-11-26 18:32:10 +08:00
ef0e1031b0 perf: reduce the number of useNodes re-renders (#28682)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 16:52:47 +08:00
d7010f582f Fix 500 error in knowledge base, select weightedScore and click retrieve. (#28586)
Signed-off-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 16:44:00 +08:00
d696b9f35e Use pnpm dev in dev/start-web (#28684) 2025-11-26 16:24:01 +08:00
665d49d375 Fixes session scope bug in FileService.delete_file (#27911)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
2025-11-26 16:21:33 +08:00
31109735a3 fix: add dependency 2025-11-26 01:31:10 -06:00
df025ac400 Merge branch 'feat/end-user-oauth' into feat/end-user-add-dsl-field 2025-11-26 01:30:18 -06:00
f4a3e290cb fix: change end user table to uuidv7 (#28689) 2025-11-26 01:29:15 -06:00
66e85ca16c fix: change end user table to use uuid v7 2025-11-26 01:27:07 -06:00
26a1c84881 chore: upgrade system libraries and Python dependencies (#28624)
Signed-off-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: Xiyuan Chen <52963600+GareArc@users.noreply.github.com>
2025-11-26 15:25:28 +08:00
dbecba710b frontend auto testing rules (#28679)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
Co-authored-by: 姜涵煦 <hanxujiang@jianghanxudeMacBook-Pro.local>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-11-26 15:18:07 +08:00
ac8136cca4 ECO-184: add new dsl field to tool node, ready for enduser auth 2025-11-26 01:13:57 -06:00
8c111de6a9 ECO-184: add new dsl field to tool node, ready for enduser auth 2025-11-26 01:13:35 -06:00
591414307a fix: workflow-as-tool files field returning empty (#27925)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: QuantumGhost <obelisk.reg+git@gmail.com>
2025-11-26 14:00:36 +08:00
1241cab113 chore: enhance the hint when the user triggers an invalid webhook request (#28671) 2025-11-26 14:00:16 +08:00
490b7ac43c fix: feedback like or dislike not displayed in logs (#28652) 2025-11-26 13:59:47 +08:00
d784a0432c fix: migration file conflicts (#28665) 2025-11-25 21:27:26 -06:00
1636b228db ECO-183: fix migration file conflicts 2025-11-25 17:24:49 -06:00
d5beccf0da feat: add deployment workflow for end-user development environment 2025-11-25 15:05:55 +08:00
e36d460d67 Merge branch 'refs/heads/origin-main' into feat/end-user-oauth 2025-11-25 14:38:40 +08:00
68f195c5e9 Merge branch 'main' into feat/end-user-oauth 2025-11-24 21:29:42 -06:00
200e801182 ECO-171: remove unused index statement from migration script 2025-11-24 11:12:46 -06:00
5bcf5be874 ECO-171: remove unused index statement from migration script 2025-11-24 11:11:15 -06:00
b60ba0b192 Merge branch 'main' into feat/end-user-oauth 2025-11-24 11:04:39 -06:00
26dc4d43bf feat: add new table of end user oauth (#28351) 2025-11-24 01:28:40 -06:00
0e355079fa ECO-171: removing duplicated index 2025-11-24 01:26:54 -06:00
b51bf33b1e ECO-171: adding comments for expire_at as clarification 2025-11-24 00:02:37 -06:00
30594978f9 ECO-171: update to be compatible with MySQL 2025-11-23 23:43:59 -06:00
2e2d7a5345 Apply suggestion from @gemini-code-assist[bot]
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-24 13:23:31 +08:00
cf2457a03c ECO-171: update credential cast 2025-11-23 23:17:24 -06:00
4590dab046 ECO-171: update migration script with sql alchemy auto naming 2025-11-23 23:10:24 -06:00
f2df8af4c8 ECO-171: update to use server default time stamp 2025-11-23 23:00:10 -06:00
5b93ed3fcd ECO-171: update unique constraints 2025-11-23 21:29:23 -06:00
5fd83e9425 ECO-171: remove postgresql specific features 2025-11-23 20:41:13 -06:00
8f6937eea6 ECO-171: resolving comments 2025-11-21 17:05:16 -06:00
74fc026fe0 merge main 2025-11-21 09:12:46 -06:00
bbd466eaba ECO-171: fix comments 2025-11-20 23:08:00 -06:00
153609b968 ECO-171: remove server defaults 2025-11-20 22:57:37 -06:00
6cd7ab4719 Update api/models/tools.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-20 21:18:25 -06:00
39de9e7248 Update api/models/tools.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-20 21:18:14 -06:00
5e93a61865 ECO-171: edited migration script 2025-11-20 21:11:12 -06:00
76069b5d6d ECO-171: add migration script 2025-11-20 21:11:12 -06:00
adf673d031 Apply suggestion from @gemini-code-assist[bot]
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-11-19 17:18:50 +09:00
2fa6684c4d Merge remote-tracking branch 'origin/main' into feat/trigger 2025-11-13 10:59:57 +08:00
87439b8fec Merge main 2025-11-12 17:49:17 +08:00
08034532f6 Update trigger.py 2025-11-12 17:42:55 +08:00
c5f47ebccd Merge branch 'main' into feat/trigger 2025-11-12 17:14:35 +08:00
6744306818 Merge branch 'main' into feat/trigger 2025-11-12 17:04:31 +08:00
3ff14ccc89 Merge branch 'main' into feat/trigger 2025-11-12 16:49:08 +08:00
9c30f16e4b Update api/tasks/trigger_processing_tasks.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-12 16:42:12 +08:00
c31933c163 chore: downgrade dify-api version to 1.9.2 in uv.lock 2025-11-12 16:20:22 +08:00
574eb1a10a chore: downgrade version to 1.9.2 in pyproject.toml(ready for merge) 2025-11-12 16:18:19 +08:00
e6ac783fc3 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-11-12 16:01:19 +08:00
689a75f44a Merge remote-tracking branch 'origin/main' into feat/trigger 2025-11-12 15:07:05 +08:00
a25f469bde fix(trigger): Prevent marketplace tool downloads from retriggering on tab change 2025-11-12 13:40:56 +08:00
044ee7ef54 feat(workflow): always render featured tools section 2025-11-12 10:37:39 +08:00
2725f28fa8 fix(trigger): incorrect behavior when node uninstalled on the canvas 2025-11-12 10:12:49 +08:00
c493e08df1 add new table of end user oauth 2025-11-11 20:05:11 -06:00
36ad784251 feat(workflow-header): add conditional logic to disable publish and refresh actions based on workflow node presence 2025-11-11 20:08:09 +08:00
0a39e5c092 Fix modal query sync for settings & pricing 2025-11-11 19:32:48 +08:00
9169a5e35b Merge branch 'main' into feat/trigger 2025-11-11 18:05:23 +08:00
bfdcb79e19 feat(card-view): enhance CardView to conditionally render AppCards based on trigger node presence in workflow 2025-11-11 16:54:06 +08:00
c37cce000f refactor: replace TRIGGER_NODE_TYPES with isTriggerNode utility for improved node type checks across workflow components 2025-11-11 16:54:06 +08:00
6d3fb9b769 [autofix.ci] apply automated fixes 2025-11-11 08:37:13 +00:00
c04913ecf8 refactor(tests): remove redundant graph validation tests from WorkflowService unit tests
- Deleted tests for graph initialization and error propagation that were deemed unnecessary.
- Cleaned up the test suite to improve maintainability and focus on essential validation scenarios.
2025-11-11 16:35:19 +08:00
3a84a64c32 refactor: unify account setting tab constants and tighten modal types 2025-11-11 16:16:41 +08:00
b344d4add1 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-11-11 16:16:22 +08:00
405a4ec9f8 feat: add URL parameter support for settings modal using action=showSettings 2025-11-11 16:16:06 +08:00
8bb11a588c fix(tests): fix end node missing outputs 2025-11-11 16:14:00 +08:00
44f451bd7d refactor(api): improve graph validation logic in WorkflowService
- Updated the validate_graph_structure method to handle empty graph cases gracefully.
- Introduced a variable for workflow_id to ensure consistent handling of unknown workflow IDs.
- Enhanced code readability and maintainability by refining the method's structure.
2025-11-11 16:14:00 +08:00
35d914e755 Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-11-11 15:22:39 +08:00
9c37f8c1cb feat(trigger): add support for trigger nodes and user input node management in workflow components 2025-11-11 15:21:04 +08:00
9de0e3c3a7 fix: add missing TimePicker type definitions for notClearable, triggerFullWidth, showTimezone and placement 2025-11-11 15:18:14 +08:00
707c94f86e feat: add URL parameter support for pricing modal using action=showPricing 2025-11-11 15:15:40 +08:00
81afd087f6 feat: add trigger events, workflow execution, and start nodes to billing plan features
- Add three new feature items to cloud plan list:
  - Trigger Events (varies by plan: 3K for sandbox, 20K/month for pro, unlimited for team)
  - Workflow Execution (standard/faster/priority based on plan)
  - Start Nodes (limited to 2 for sandbox, unlimited for pro/team)
- Add i18n translations for en-US and zh-Hans
- Position new items below document processing priority and above divider
2025-11-11 15:00:25 +08:00
0f952f328f feat: add api rate limit and trigger events billing card 2025-11-11 15:00:25 +08:00
50619fba0a [autofix.ci] apply automated fixes 2025-11-11 06:54:03 +00:00
aad31bb703 feat(api): enhance workflow validation and structure checks
- Added a new validation class to ensure that trigger nodes do not coexist with UserInput (start) nodes in the workflow graph.
- Implemented a method in WorkflowService to validate the graph structure before persisting workflows, leveraging the new validation logic.
- Updated unit tests to cover the new validation scenarios and ensure proper error propagation.
2025-11-11 14:52:13 +08:00
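A hedged sketch of the rule described in aad31bb703: reject a graph that mixes trigger nodes with a UserInput (start) node. The node shape and type strings are assumptions, not Dify's exact schema.

```python
# Hypothetical node-type names for illustration only.
TRIGGER_TYPES = {"trigger-webhook", "trigger-schedule", "trigger-plugin"}

def validate_entry_nodes(nodes: list[dict]) -> None:
    types = {node.get("data", {}).get("type") for node in nodes}
    has_trigger = bool(types & TRIGGER_TYPES)
    has_start = "start" in types
    if has_trigger and has_start:
        # Mirrors the validation intent: the two entry styles are exclusive.
        raise ValueError("trigger nodes cannot coexist with a start node")
```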
7484a020e1 fix(trigger): subscription schema use bool field cause pydantic error 2025-11-11 14:05:11 +08:00
186828c13a [autofix.ci] apply automated fixes 2025-11-11 04:47:22 +00:00
203fb95391 chore(api): update dependencies and default queue configurations
- Updated `revision` in `uv.lock` from 3 to 2.
- Added `croniter` package version 6.0.0 with dependencies in `uv.lock`.
- Updated `dify-api` version to 1.10.0rc1 and added `croniter` as a dependency.
- Modified default queue names in `entrypoint.sh` for both CLOUD and SELF_HOSTED editions to include `priority_dataset`.
2025-11-11 12:45:02 +08:00
a94e650ffd Merge remote-tracking branch 'origin/main' into feat/trigger
# Conflicts:
#	api/docker/entrypoint.sh
#	api/uv.lock
#	dev/start-worker
#	docker/.env.example
#	docker/docker-compose.yaml
#	web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/chart-view.tsx
#	web/app/components/base/date-and-time-picker/date-picker/index.tsx
#	web/app/components/base/date-and-time-picker/types.ts
2025-11-11 12:42:01 +08:00
00fdd06179 fix(trigger): subscription schema config not display field description 2025-11-10 13:43:56 +08:00
62fbc90389 refactor: update free plan rate limit description in pricing modal 2025-11-10 10:48:32 +08:00
f19a21da11 fix prepare userinput logic 2025-11-10 09:59:20 +08:00
7401792063 feat(last-run): add handling for Listening status in run result calculation 2025-11-07 16:39:30 +08:00
79e46c8a81 feat(step-run): add resolvedStatus calculation for improved run result handling 2025-11-07 15:19:40 +08:00
7658c92cf9 feat(trigger): improve trigger node in useOneStepRun for getting system variables 2025-11-07 13:17:57 +08:00
85a5c78b80 feat: enhance workflow log components with detailed trigger metadata and type safety 2025-11-06 16:16:34 +08:00
9d7b47c784 trigger CI
Signed-off-by: lyzno1 <yuanyouhuilyz@gmail.com>
2025-11-06 15:41:54 +08:00
e4c6ed9c60 Merge branch 'main' into feat/trigger 2025-11-06 15:25:19 +08:00
fcfade4778 fix: update checkValidFns type to Partial and ensure error message fallback in useOneStepRun 2025-11-06 14:51:32 +08:00
000e8bd12b fix: improve error handling in useOneStepRun and useWorkflowRun to provide structured error messages 2025-11-06 14:05:45 +08:00
ed8da2c760 fix: return structured error response for PluginInvokeError in workflow trigger APIs 2025-11-06 12:41:17 +08:00
fb6dc14e9b refactor: simplify syncWorkflowDraft parameters 2025-11-06 12:19:09 +08:00
77e6e98234 fix CI 2025-11-06 09:46:43 +08:00
fb3699ec5e fix(web): resolve type checks in app operations 2025-11-05 20:08:59 +08:00
4601be8b67 fix: update hasConnectedUserInput function to use specific types for nodes and edges 2025-11-05 18:12:57 +08:00
cc4d4adfb9 feat: enhance workflow draft processing by adding hydration and sanitization functions 2025-11-05 16:54:49 +08:00
6a08623949 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-11-05 16:50:53 +08:00
f0127ffc9a feat: enhance trigger metadata structure by adding type field and updating trigger_metadata handling 2025-11-05 16:49:29 +08:00
f1e513830c feat: implement logging for failed trigger invocations in workflow processing 2025-11-05 16:49:29 +08:00
7de533a643 feat: add start-web script to automate web project setup and execution 2025-11-05 16:49:29 +08:00
052127c473 [autofix.ci] apply automated fixes 2025-11-05 05:07:06 +00:00
7a4be5c0d2 Update package metadata in uv.lock: increment revision to 2, add upload times for sdist and wheels, and update aiohttp sdist version to 3.13.2. 2025-11-05 13:00:28 +08:00
a6208feed8 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-11-05 12:59:25 +08:00
c8f55549d7 Remove global pnpm installation from script 2025-11-05 10:27:45 +08:00
132a86dcb3 fix: output node vars check 2025-11-05 10:15:57 +08:00
9f59baed10 fix(urlValidation): remove specific check for Dify cloud trigger debug URLs 2025-11-04 18:34:41 +08:00
ce56286329 feat(validation): implement isPrivateOrLocalAddress utility and integrate into webhook components for improved URL validation 2025-11-04 18:32:19 +08:00
6e76f2aff2 fix(api): update TriggerInvokeEventResponse to use Field for default value of cancelled 2025-11-04 18:15:13 +08:00
49edd58722 fix(trigger): enhance credential handling by decrypting and masking subscription properties and parameters 2025-11-04 18:15:13 +08:00
6a28aee13e Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-11-04 16:51:37 +08:00
79c70d09c9 feat(marketplace): introduce IS_MARKETPLACE flag and update X-Dify-Version header logic 2025-11-04 16:09:50 +08:00
b9bb97887b fix(workflow): handle node inspection variable deletion when not fetched 2025-11-04 15:25:15 +08:00
7df6d9f1aa Merge remote-tracking branch 'origin/main' into feat/trigger 2025-11-04 09:56:02 +08:00
587f83bc34 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-11-04 09:55:43 +08:00
d81b2e6820 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-11-04 09:52:08 +08:00
8315e0c74b Merge remote-tracking branch 'origin/main' into feat/trigger 2025-11-04 09:44:13 +08:00
3cc6690356 fix(trigger): global var dialogue_count and timestamp auto converted to string 2025-11-03 17:12:58 +08:00
6507263b28 fix(trigger): timestamp should be number type 2025-11-03 15:40:33 +08:00
8fa0bb48df fix CI 2025-11-03 14:57:01 +08:00
637a675681 fix(trigger): workflow checklist not working for knowledge pipeline 2025-11-03 09:42:33 +08:00
085ada86e6 chore: trigger ci
Signed-off-by: lyzno1 <yuanyouhuilyz@gmail.com>
2025-10-31 20:15:14 +08:00
f59d430219 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-31 20:01:16 +08:00
c415e5b893 Merge branch 'feat/trigger' of https://github.com/langgenius/dify into feat/trigger 2025-10-31 20:00:51 +08:00
67b6b3612c fix: trigger docs link 2025-10-31 20:00:39 +08:00
229b0e190f bump version 2025-10-30 23:21:56 +08:00
09d412cf2a [autofix.ci] apply automated fixes 2025-10-30 15:18:20 +00:00
2842cbf1e1 refactor: update error handling to use BadRequest for plugin invocation errors 2025-10-30 23:16:14 +08:00
e2543bcf30 refactor: remove unused error imports in TriggerManager 2025-10-30 23:05:08 +08:00
3f75aa6848 refactor: simplify error handling in TriggerManager 2025-10-30 23:04:54 +08:00
57719f3ce9 fix: docker env 2025-10-30 22:21:14 +08:00
45677ac57c bump plugin daemon image version to 0.4.0-local 2025-10-30 22:19:24 +08:00
4eacbf37ff bump docker compose daemon version 2025-10-30 21:22:09 +08:00
ef256ac276 bump version 2025-10-30 21:20:58 +08:00
2733e04039 fix CI 2025-10-30 20:28:47 +08:00
e49ec82258 fix CI 2025-10-30 20:25:35 +08:00
cf301eb1d9 fix CI 2025-10-30 20:23:22 +08:00
98b9ba2b2e fix CI 2025-10-30 20:22:01 +08:00
2126c64468 fix CI 2025-10-30 20:17:11 +08:00
271a1b4f98 fix CI 2025-10-30 20:10:49 +08:00
9be3c62c04 fix CI 2025-10-30 20:08:16 +08:00
04bfa235a9 fix: test 2025-10-30 19:49:08 +08:00
3b37ae1b4e fix: dotenv lint 2025-10-30 19:37:45 +08:00
c1cb93cd26 fix 2025-10-30 18:55:08 +08:00
75fa161c46 apply fix 2025-10-30 18:49:06 +08:00
d6d82cff33 apply linter 2025-10-30 18:48:16 +08:00
5c266fecf9 fix: types 2025-10-30 18:36:08 +08:00
7244978b24 fix 2025-10-30 18:33:46 +08:00
623021dcff cleanup 2025-10-30 18:32:05 +08:00
5af165fce9 fix: change timestamp type to integer 2025-10-30 18:24:30 +08:00
9503fafc53 fix 2025-10-30 18:15:03 +08:00
99fac21bdb fix: type 2025-10-30 18:11:52 +08:00
bc95678c5e fix(trigger): appmode type 2025-10-30 18:03:12 +08:00
3f34f38635 fix: types 2025-10-30 18:02:58 +08:00
30f771369b fix: types 2025-10-30 18:01:12 +08:00
20bd059a6c fix CI 2025-10-30 17:58:59 +08:00
f5eb406394 fix: types 2025-10-30 17:58:31 +08:00
cbebac1d45 fix: webhook container tests 2025-10-30 17:46:59 +08:00
030da43ae3 fix: type 2025-10-30 17:44:43 +08:00
b7f1394403 fix(trigger): add default icon 2025-10-30 17:42:10 +08:00
ceb6a09387 exclude test file in tsc 2025-10-30 17:39:02 +08:00
14ad800967 Revert "rm type check"
This reverts commit 34d1f86f76.
2025-10-30 17:34:45 +08:00
34d1f86f76 rm type check 2025-10-30 17:28:38 +08:00
b9b9f8eae3 fix: type 2025-10-30 17:24:36 +08:00
0de8596afe Merge branch 'feat/trigger' of https://github.com/langgenius/dify into feat/trigger 2025-10-30 17:14:11 +08:00
4dbd26ff66 [autofix.ci] apply automated fixes 2025-10-30 09:14:08 +00:00
d018ef9033 apply autofix to autofix CI 2025-10-30 17:11:49 +08:00
979c985804 Merge branch 'feat/trigger' of https://github.com/langgenius/dify into feat/trigger 2025-10-30 17:11:08 +08:00
291e9a3aee fix: ruff 2025-10-30 17:10:21 +08:00
5861ca773e fix: mapping to dict 2025-10-30 17:09:40 +08:00
eb3b5f751a [autofix.ci] apply automated fixes 2025-10-30 09:08:11 +00:00
9bbfbf1c5f Merge branch 'feat/trigger' of https://github.com/langgenius/dify into feat/trigger 2025-10-30 17:06:27 +08:00
8cbd124b80 delete: remove cron-parser unit tests 2025-10-30 17:05:41 +08:00
d137d0eed0 rm test 2025-10-30 17:05:27 +08:00
58c5db3b00 Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-10-30 17:05:14 +08:00
8750796f9f Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-30 16:59:56 +08:00
7d4bb45f94 fix(workflow): align plugin lock overlay with install availability 2025-10-30 16:55:07 +08:00
db744444f2 fix(CI): fix CI errors 2025-10-30 16:54:17 +08:00
b25d379ef4 cleanup 2025-10-30 16:33:39 +08:00
e1e95f7ccd fix(CI): fix CI errors 2025-10-30 16:33:04 +08:00
edd50420ec apply test fix 2025-10-30 16:24:43 +08:00
9af8fe085b fix: apply docker template 2025-10-30 16:20:26 +08:00
ed6bb121bb fix: update types 2025-10-30 16:16:55 +08:00
4635b99153 Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-10-30 16:16:21 +08:00
282fde9a04 feat(next.config): add console log removal configuration for production 2025-10-30 16:16:11 +08:00
e9078eedbd fix: Variable e is not accessed (reportUnusedVariable) 2025-10-30 16:14:13 +08:00
501698d844 Potential fix for code scanning alert no. 243: Information exposure through an exception
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-10-30 16:11:32 +08:00
dd089b1b21 fix: coding style 2025-10-30 16:10:54 +08:00
6260a1a28c fix: cycle imports 2025-10-30 16:09:39 +08:00
8bc5035624 Potential fix for code scanning alert no. 211: Information exposure through an exception
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-10-30 16:00:57 +08:00
2dbfd9ea5a Potential fix for code scanning alert no. 241: Use of a broken or weak cryptographic hashing algorithm on sensitive data
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-10-30 16:00:44 +08:00
08e61d76d6 Potential fix for code scanning alert no. 244: Information exposure through an exception
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-10-30 16:00:24 +08:00
447127cee4 Update api/controllers/console/app/workflow.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-30 15:59:37 +08:00
49ebbd05b5 Update api/controllers/console/app/workflow.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-30 15:58:58 +08:00
defea962f6 Update api/controllers/console/workspace/trigger_providers.py
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-10-30 15:58:41 +08:00
b3866288e0 fix: docker compose 2025-10-30 15:54:25 +08:00
bed2ce69bb Merge branch 'feat/trigger' of https://github.com/langgenius/dify into feat/trigger 2025-10-30 15:35:39 +08:00
4d37d61851 feat: dim node 2025-10-30 15:34:50 +08:00
8a48db6d0d Improve workflow tool install flow 2025-10-30 15:34:49 +08:00
ff0f645e54 Fix plugin install detection for tool nodes 2025-10-30 15:34:49 +08:00
6e0765fbaf feat: add install check for tools, triggers and datasources 2025-10-30 15:34:49 +08:00
1d03e0e9fc fix(trigger): hide input params when no subscription 2025-10-30 15:28:34 +08:00
cac60a25bb cleanup: migrations 2025-10-30 15:27:02 +08:00
57c65ec625 fix: typing 2025-10-30 14:58:30 +08:00
ffc3c61d00 merge workflow pausing 2025-10-30 14:54:14 +08:00
aa3b16a136 fix: migrations 2025-10-30 14:45:26 +08:00
6e0b408dd5 Merge branch 'main' into feat/trigger 2025-10-30 14:43:27 +08:00
be9eeff6c2 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-30 12:14:47 +08:00
ca9d92b1e5 fix(variable): opening history mode does not close the global var panel 2025-10-30 09:53:00 +08:00
0607db41e5 fix(variable): draft-running a workflow causes global var panel misalignment 2025-10-30 09:02:38 +08:00
48b1829b14 chore: improve toggle env/conversation/global var panel 2025-10-29 22:08:04 +08:00
6767a8f72c chore: i18n for system var 2025-10-29 21:10:26 +08:00
1e477af05f feat(trigger): add system variables to webhook node outputs
Enhanced the TriggerWebhookNode to include system variables as outputs. This change allows for better accessibility of system variables during node execution, improving the overall functionality of the webhook trigger process. A TODO comment has been added to address future improvements for direct access to system variables.
2025-10-29 18:15:36 +08:00
9b5e5f0f50 refactor(api): replace dict type hints with Mapping for improved type safety
Updated type hints in several services to use Mapping instead of dict for better compatibility with various dictionary-like objects. Adjusted credential handling to ensure consistent encryption and decryption processes across ToolManager, DatasourceProviderService, ApiToolManageService, BuiltinToolManageService, and MCPToolManageService. This change enhances code clarity and adheres to strong typing practices.
2025-10-29 18:10:38 +08:00
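A small sketch of the typing change in 9b5e5f0f50: accepting `Mapping` instead of `dict` lets callers pass any read-only dict-like object and signals that the function will not mutate its input. The function name is a hypothetical stand-in.

```python
from collections.abc import Mapping

def encrypt_credentials(credentials: Mapping[str, str]) -> dict[str, str]:
    # Read-only access only; return a new dict rather than mutating input.
    return {key: f"encrypted({value})" for key, value in credentials.items()}

print(encrypt_credentials({"api_key": "secret"}))
```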
fb12f31df2 feat(trigger): system variables for trigger nodes
Added a timestamp field to the SystemVariable model and updated the WorkflowAppRunner to include the current timestamp during execution. Enhanced node type checks to recognize trigger nodes in various services, ensuring proper handling of system variables and node outputs in TriggerEventNode and TriggerScheduleNode. This improves the overall workflow execution context and maintains consistency across node types.
2025-10-29 18:10:38 +08:00
db2c6678e4 fix(trigger): show subscription url & add readme in trigger plugin node 2025-10-29 16:16:29 +08:00
bc3421add8 refactor(variable): update global variable names and types for consistency 2025-10-29 15:53:37 +08:00
61d8809a0f fix(workflow): enhance validation before running workflows by integrating warning notifications 2025-10-29 15:53:13 +08:00
d37cc9f9c8 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-29 15:16:28 +08:00
0db082f6d0 feat(workflow): persist RAG recommendation panel collapse state 2025-10-29 15:10:45 +08:00
c94dc52310 fix: remove duplicate RAG tool heading and fix select callback type 2025-10-29 14:57:38 +08:00
bebcbfd80e chore: improve deletion of app-related tables 2025-10-29 14:29:59 +08:00
dfc5e3609d refactor(trigger): streamline OAuth client existence check
Replaced the method for checking the existence of a system OAuth client with a new dedicated method `is_oauth_system_client_exists` in the TriggerProviderService. This improves code clarity and encapsulates the logic for verifying the presence of a system-level OAuth client. Updated the TriggerOAuthClientManageApi to utilize the new method for better readability.
2025-10-29 14:22:56 +08:00
fa5765ae82 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-29 12:56:31 +08:00
852d851996 fix(workflow): add empty array validation for required checklist fields in trigger plugin
The checkValid function was not properly validating required checklist fields when they had empty array values. This caused required fields to pass validation even when no options were selected.

Added array length check to the constant type validation to ensure required checklist fields must have at least one selected option.
2025-10-29 12:36:43 +08:00
0b599b44b0 chore: also delete related trigger tables when deleting an app 2025-10-29 12:15:34 +08:00
f06dc3ef90 fix: localize workflow block search filters 2025-10-29 11:55:30 +08:00
f9df61e648 feat: add inline code copy styling for variable inspect webhook url 2025-10-29 10:14:50 +08:00
6e76e02dba fix: trigger plugin help link 2025-10-29 09:35:45 +08:00
dc24450e29 feat(workflow): add webhook debug URL display in variable inspection 2025-10-29 04:38:27 +08:00
8bcecce627 feat(workflow): add toast notifications for warning nodes during execution 2025-10-29 01:40:27 +08:00
66cb963df3 feat(workflow): enhance validation by integrating warning nodes into last run checks. 2025-10-29 01:28:31 +08:00
5c95c77604 refactor(trigger): streamline workflow argument handling in DraftWorkflowTriggerNodeApi
- Simplified retrieval of workflow arguments by directly accessing event.workflow_args.
- Removed unnecessary conditional checks for user inputs, ensuring cleaner code.
- Enhanced TriggerEventNode to use deepcopy for user inputs to prevent unintended mutations.
2025-10-29 01:04:37 +08:00
a264a609db feat(workflow): integrate workflow run validation before execution 2025-10-29 00:36:48 +08:00
3a876fd437 Merge branch 'main' into feat/trigger 2025-10-29 00:08:50 +08:00
13bc68a646 feat(trigger): enhance runScheduleSingleRun to handle API response 2025-10-29 00:07:08 +08:00
b41538d8c7 feat(trigger): reinforce schedule trigger debugging with cron calculation
- Implemented a caching mechanism for schedule trigger debug events using Redis to optimize performance.
- Added methods to create and manage schedule debug runtime configurations, including cron expression handling.
- Updated the ScheduleTriggerDebugEventPoller to utilize the new caching and event creation logic.
- Removed the deprecated build_schedule_pool_key function from event handling.
2025-10-28 23:34:08 +08:00
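A hedged sketch of the cron calculation referenced in b41538d8c7 (croniter was added as a dependency earlier in this range, see 203fb95391): compute the next fire times for a schedule trigger's cron expression.

```python
from datetime import datetime
from croniter import croniter

def next_runs(expression: str, base: datetime, count: int = 3) -> list[datetime]:
    # croniter iterates fire times forward from the given base time.
    it = croniter(expression, base)
    return [it.get_next(datetime) for _ in range(count)]

print(next_runs("*/15 * * * *", datetime(2025, 10, 28, 23, 0)))
```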
720480d05e chore(tests): remove deprecated test files for schedule and webhook triggers 2025-10-28 22:42:29 +08:00
71b1af69c5 feat(trigger): request condition param 2025-10-28 22:35:03 +08:00
18fd79fbe6 feat(trigger): add event_name to PluginTriggerMetadata for enhanced trigger handling
- Introduced event_name attribute in PluginTriggerMetadata to improve metadata clarity.
- Updated dispatch_triggered_workflow function to include event_name when dispatching triggered workflows.
2025-10-28 18:32:06 +08:00
c16421df27 refactor: improve trigger metadata handling and streamline workflow service
- Updated ScheduleTriggerDebugEventPoller to include an empty files list in workflow_args.
- Enhanced WorkflowAppService to handle trigger metadata more effectively, including a new method for processing metadata and removing the deprecated _safe_json_loads function.
- Adjusted PluginTriggerMetadata to use icon_filename and icon_dark_filename for better clarity.
- Simplified async workflow task parameters by changing triggered_from to trigger_from for consistency.
2025-10-28 17:50:06 +08:00
0d686fc6ae refactor: streamline trigger event node metadata handling and update async workflow service for JSON serialization
- Removed unnecessary input data from the TriggerEventNode's metadata.
- Updated AsyncWorkflowService to use model_dump_json() for trigger metadata serialization.
- Added a comment in WorkflowAppService to address the large size of the workflow_app_log table and the use of an additional details field.
2025-10-28 17:50:06 +08:00
db352c0a18 Merge branch 'main' into feat/trigger 2025-10-28 17:11:15 +08:00
bf7b18d442 feat(trigger): dynamic options opt 2025-10-28 16:20:01 +08:00
7d56ca5294 fix: add time-picker placement prop 2025-10-28 15:49:10 +08:00
0b1015e221 feat(workflow): enhance variable inspector to support schedule trigger events with next execution time display 2025-10-28 14:12:20 +08:00
96f0d648fa feat: invalidate trigger plugin queries after marketplace installs 2025-10-28 11:31:02 +08:00
c4996f9563 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-28 11:28:06 +08:00
850c5fec32 fix(trigger): invalid subscription 2025-10-27 21:27:08 +08:00
b1f79c34d1 fix(workflow): add support for schedule triggers in workflow run hook 2025-10-27 20:52:44 +08:00
96a461646e fix(trigger): readme portal zindex 2025-10-27 20:05:02 +08:00
5df94fd866 fix(workflow): enhance node-run to include schedule triggers 2025-10-27 19:44:26 +08:00
e074ba84d1 fix(workflow): avoid nested buttons in subscription selector to stop hydration warning 2025-10-27 17:23:58 +08:00
1335be8d60 Revert "feat: propagate trigger metadata for plugin icons across UI"
This reverts commit 3bd62f3fdf.
2025-10-27 17:06:40 +08:00
c79d75b32d Revert "fix: display plugin trigger labels in logs using i18n metadata"
This reverts commit 651cc81cfe.
2025-10-27 17:06:35 +08:00
f18054847e Revert "fix: workflow_trigger"
This reverts commit cc219cc81c.
2025-10-27 17:06:30 +08:00
b2b81f3822 Revert "fix: trigger by display translations"
This reverts commit 33daedd7aa.
2025-10-27 17:06:25 +08:00
90753b2782 fix(trigger): readme style 2025-10-27 16:33:40 +08:00
c05fa9963a fix plugin name incorrectly encoded 2025-10-27 16:08:47 +08:00
9de7a7d48f fix(trigger): update outputs in TriggerEventNode to use inputs directly 2025-10-27 15:35:09 +08:00
29cddc449f fix(trigger): add clickOutsideNotClose prop 2025-10-27 14:30:08 +08:00
dfed14ba67 refactor: simplify syncWorkflowDraft parameters by removing payload sanitization 2025-10-27 13:46:27 +08:00
440262a51b fix: serialize workflow draft sync operations (#27487) 2025-10-27 13:29:40 +08:00
d705fece9d fix(plugin): update trigger field type to allow None and add field validator for parameters in EventEntity 2025-10-27 12:02:22 +08:00
d08cc48368 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-27 11:37:07 +08:00
b94ad084c3 feat: surface featured trigger recommendations in start tab (#27319) 2025-10-27 11:33:02 +08:00
9453148233 chore(migrations): remove obsolete migration files for workflow webhook and schedule plan tables 2025-10-27 00:36:27 +08:00
1857d0e53f chore(migrations): remove obsolete migration files for workflow trigger logs, app triggers, and plugin triggers 2025-10-27 00:28:07 +08:00
ae422c2628 fix(trigger): simplify return logic in TriggerProviderService by removing unnecessary None return 2025-10-26 23:48:50 +08:00
d933116e46 fix(workflow): improve error handling in DraftWorkflowTriggerNodeApi by returning JSON response 2025-10-26 23:48:50 +08:00
5cf4afd7b2 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-26 11:54:17 +08:00
f7853f3b27 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-25 14:58:32 +08:00
913d85302c fix: hide replay button for non app-run workflow logs 2025-10-25 14:42:26 +08:00
33daedd7aa fix: trigger by display translations 2025-10-25 14:36:29 +08:00
cc219cc81c fix: workflow_trigger 2025-10-25 14:21:52 +08:00
945295adc3 chore: add some ja-JP translations 2025-10-25 14:00:21 +08:00
651cc81cfe fix: display plugin trigger labels in logs using i18n metadata 2025-10-25 12:41:37 +08:00
3bd62f3fdf feat: propagate trigger metadata for plugin icons across UI 2025-10-25 12:15:21 +08:00
e3484c8dc3 fix: ruff format 2025-10-25 12:08:22 +08:00
eecbe533a1 fix: ruff check 2025-10-25 12:07:46 +08:00
4221e99362 update(docker): add triggered workflow dispatcher and refresh executor to default Celery queues 2025-10-24 21:31:38 +08:00
5c69521973 feat: align with params 2025-10-24 21:20:12 +08:00
ffcaa67a56 feat: align with path 2025-10-24 20:47:54 +08:00
64a070f6b0 feat: align with path 2025-10-24 19:56:06 +08:00
c61656c759 fix: request param 2025-10-24 19:40:50 +08:00
6d34e4e99b fix: request path 2025-10-24 19:33:19 +08:00
d3a767364b fix: request path 2025-10-24 19:15:18 +08:00
f3c6d1ca1d fix: param passing 2025-10-24 18:27:23 +08:00
6098dc0242 feat(workflow): enhance listening descriptions for plugin and webhook triggers 2025-10-24 16:18:07 +08:00
29ec3c7d5c feat(workflow): update listening descriptions when trigger nodes start test-run 2025-10-24 15:32:23 +08:00
4597ab4efb Merge branch 'refs/heads/main' into feat/trigger 2025-10-24 14:44:15 +08:00
1dddcf1194 fix(workflow): resolve occasional issues with syncing historical states or clearing DSL in draft mode (#27391) 2025-10-24 12:33:14 +08:00
c4c38a51d9 fix: merge 2025-10-24 11:58:13 +08:00
8a8c0703b1 feat: add datasource node readme 2025-10-24 11:46:58 +08:00
1b74869b04 fix: plugin readme params 2025-10-24 10:48:59 +08:00
f065504ed6 fix(app-overview): soften tooltip styling 2025-10-24 10:34:16 +08:00
3f5485605f chore: update docs link 2025-10-24 10:31:09 +08:00
399bb522e0 chore: add ts-node for test 2025-10-24 10:20:58 +08:00
9fffa9a996 refactor(workflow): clean up entry node status and colocate store types 2025-10-24 10:20:38 +08:00
aee9a8366f refactor: move marketplace footer outside scroll containers 2025-10-24 10:07:02 +08:00
c3eec7ea8a Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-24 09:47:28 +08:00
4b4ec3438f feat: add plugin readme 2025-10-24 01:18:58 +08:00
9aa43c9165 feat(workflow): enhance trigger node handling with event listening and state management 2025-10-23 18:21:22 +08:00
4ae23ed0f9 feat(workflow): remove unused trigger status logic and simplify entry node status handling 2025-10-23 18:19:06 +08:00
efe68d5aa6 Merge branch 'main' into feat/trigger 2025-10-23 18:05:59 +08:00
1604db02b5 fix(workflow): change description field to be required in TriggerPluginNodePayload 2025-10-23 16:56:00 +08:00
e7192de9c0 Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-10-23 16:46:35 +08:00
f822b38a00 feat(workflow): integrate payload sanitization for workflow draft synchronization 2025-10-23 16:45:28 +08:00
5a5c7f38d1 fix(plugin): stop loading when uninstall fails 2025-10-23 16:18:52 +08:00
42a9a88ae2 refactor(trigger-plugin): enhance variable type resolution and encapsulate output variable logic in a dedicated function 2025-10-23 16:01:18 +08:00
aea3fc6281 Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-10-23 15:33:31 +08:00
bcdc11396a refactor(trigger-plugin): streamline variable type resolution and output variable construction 2025-10-23 15:33:19 +08:00
ecd1d44d23 chore: update trigger dsl version to 0.5.0 2025-10-23 15:05:39 +08:00
6df786248c fix: _raw var not displayed on the panel when draft-running the webhook node 2025-10-23 13:35:45 +08:00
37e75f7791 Ensure workflow tools tab always shows marketplace footer 2025-10-23 13:08:59 +08:00
7ada2385b3 feat: add toggle behavior for featured tools 2025-10-23 12:27:42 +08:00
a77aab96f5 fix: align all workflow trigger docs link 2025-10-23 12:17:27 +08:00
13af48800b fix: merge 2025-10-23 12:13:33 +08:00
6e7fb59638 Merge branch 'feat/plugin-readme' into feat/trigger
# Conflicts:
#	api/controllers/console/workspace/plugin.py
#	api/core/plugin/entities/plugin_daemon.py
2025-10-23 12:10:44 +08:00
863b4f8fe9 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-23 11:54:35 +08:00
949ac9d930 feat(trigger): add formitem desc 2025-10-23 10:57:42 +08:00
06d1a2e2fd feat(trigger): remove redundant component & disable caching for triggers API 2025-10-23 10:57:41 +08:00
d478f62b49 Optimize workflow tool sync after plugin install (#27280) 2025-10-23 09:58:54 +08:00
128bc2241d feat(checkbox): adjust styles for checkbox component layout 2025-10-22 17:35:04 +08:00
b2730d680c fix(trigger): add missing fields in TriggerEventNode configuration 2025-10-22 16:55:55 +08:00
52b180104a Limit workflow tool/trigger search to provider and item names 2025-10-22 16:53:23 +08:00
9cdb62da93 fix(trigger): reset inputs in TriggerEventNode to an empty dictionary 2025-10-22 15:58:43 +08:00
5af08edfda chore: add missing icon 2025-10-22 15:38:54 +08:00
24fa5f33d7 fix(trigger): update input handling in TriggerEventNode to correctly retrieve and set outputs 2025-10-22 15:37:00 +08:00
caf0bf34dd fix(trigger): add event node status & min-height in modal 2025-10-22 15:31:03 +08:00
181a1ae7f3 fix: hover tooltip 2025-10-22 15:09:01 +08:00
e18ecead2c fix: hide All tools when searching 2025-10-22 15:07:46 +08:00
36a26adab2 fix: install from marketplace 2025-10-22 14:59:49 +08:00
bc2edf5107 refactor(workflow): replace fetch function with base/post in use-one-step-run and use-workflow-run hooks in polling 2025-10-22 14:55:57 +08:00
50bbac5973 Add double-arrow icon swap on featured tools “Show more” hover 2025-10-22 14:47:19 +08:00
45b221659b Tighten featured tools header arrow and add “All tools” section divider 2025-10-22 14:41:18 +08:00
16957f14f1 fix: featured icons 2025-10-22 14:33:00 +08:00
0d7dde0639 Align featured tool hover layout and widen action dropdown 2025-10-22 14:19:22 +08:00
37805184d9 Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-10-22 12:58:15 +08:00
df9932088f avoid time slice strategy in community edition 2025-10-22 12:50:11 +08:00
d101a83be8 Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-10-22 12:49:41 +08:00
94ea289c75 fix: suggestions tools list 2025-10-22 12:46:22 +08:00
e2539e91eb fix: view docs 2025-10-22 12:46:22 +08:00
77e9bae3ff feat(workflow): polish featured tools recommendations 2025-10-22 12:46:21 +08:00
d99644237b chore: align help link translations 2025-10-22 12:46:21 +08:00
5cb268e99b feat: suggestions ui 2025-10-22 12:46:21 +08:00
f179b03d6e fix: constrain rag pipeline datasource selector width 2025-10-22 12:46:21 +08:00
28fe58f3dd feat: try to add tools suggestions 2025-10-22 12:46:21 +08:00
14acd05846 fix 2025-10-22 12:41:19 +08:00
ccce135bf5 fix(workflow): add setShowVariableInspectPanel for specific block types in useLastRun hook 2025-10-22 12:38:03 +08:00
cb5607fc8c refactor: TimeSliceLayer 2025-10-22 12:13:12 +08:00
7f70d1de1c ASYNC_WORKFLOW_SCHEDULER_GRANULARITY 2025-10-22 12:10:12 +08:00
c36173f5a9 fix: typing 2025-10-22 11:55:26 +08:00
7acbe981e2 fix: incorrect elapsed_time 2025-10-22 11:49:02 +08:00
dd6ab7c68c feat: support pausing workflow trigger log 2025-10-22 11:45:16 +08:00
a1ea256e79 fix: global icon in inspect 2025-10-22 11:36:01 +08:00
14942c9ee9 fix: page crash 2025-10-22 11:28:28 +08:00
b0b316ed48 fix: rag pipeline not showing sys vars 2025-10-22 11:07:08 +08:00
871cfbd40c fix: CredentialsSchema missing help field display 2025-10-22 09:23:59 +08:00
9a3ca0ce3b fix(trigger): check subscription removed 2025-10-21 20:01:16 +08:00
c90df5c12c refactor(entry-node): remove showIndicator prop and related logic for cleaner component structure 2025-10-21 19:53:29 +08:00
f4acc78f66 fix(trigger): deal with empty manualPropertiesSchema 2025-10-21 19:49:42 +08:00
3d5e2c5ca1 feat(trigger): add suspend/timeslice layers and workflow CFS scheduler
- add suspend, timeslice, and trigger post engine layers
- introduce CFS workflow scheduler tasks and supporting entities
- update async workflow, trigger, and webhook services to wire in the new scheduling flow
2025-10-21 19:20:54 +08:00
55bf9196dc feat(trigger): add TriggerSchedule to node type checks for workflow execution 2025-10-21 18:57:57 +08:00
18a52b4937 fix(trigger): subscription removed in workflow 2025-10-21 18:43:15 +08:00
439727746c fix: trigger timestamp show place 2025-10-21 18:21:37 +08:00
04b55177b5 feat: support show global vars 2025-10-21 17:59:37 +08:00
2793ede875 feat: update checkbox component in the panel and refactor form types for checkbox and boolean 2025-10-21 17:28:25 +08:00
dc4801c014 refactor(trigger): refactor app mode type to enum 2025-10-21 16:50:18 +08:00
d5e2649608 fix(trigger): disable some options when no start node 2025-10-21 15:25:36 +08:00
4102f0bc9d feat: vars to new place 2025-10-21 15:03:16 +08:00
25e4203cb1 main 2025-10-21 14:44:24 +08:00
e1a3ead941 main 2025-10-21 14:42:27 +08:00
6251090893 Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-10-21 14:39:10 +08:00
aa5e04b70e Merge branch 'main' into feat/trigger
# Conflicts:
#	web/app/components/workflow/hooks-store/store.ts
#	web/package.json
#	web/pnpm-lock.yaml
2025-10-21 14:36:07 +08:00
8ac25c29ee feat(trigger): implement subscription refresh logic with enhanced error handling and logging 2025-10-21 14:11:27 +08:00
f4517d667b feat(trigger): enhance trigger provider refresh task with locking mechanism and due filter logic 2025-10-21 14:11:27 +08:00
dc2481c805 feat: docs 2025-10-21 13:56:31 +08:00
8d7435a51b docs: introduce agent skill for trigger 2025-10-21 12:23:19 +08:00
bb28c718df fix: correct webhook trigger node id parsing 2025-10-21 11:48:50 +08:00
1b7a5b6209 fix: immer breaking change 2025-10-21 11:42:31 +08:00
448622b4fd fix: pnpm lock file 2025-10-21 11:39:39 +08:00
e9dda03e8d fix: immer version and ref in code base (#27130) 2025-10-21 11:38:44 +08:00
8d3d177932 fix: pnpm lock file 2025-10-21 11:35:41 +08:00
f0af4d692a fix: breaking change 2025-10-21 11:32:20 +08:00
075173e67d fix(workflow): reset onboarding auto-open flag across flows 2025-10-21 11:19:36 +08:00
f02d575379 Merge branch 'main' into feat/trigger 2025-10-21 11:09:26 +08:00
735ebf6c59 fix(trigger): oauth client params 2025-10-21 09:27:10 +08:00
96f0b7abe3 fix(trigger): handle missing 'inputs' key in trigger data retrieval 2025-10-20 21:47:49 +08:00
eb1686f04b Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-10-20 20:27:02 +08:00
d4b5d9a02a feat(trigger): add trigger validation logic and utility functions for improved checklist integration 2025-10-20 20:26:40 +08:00
f87f77ce7b feat(trigger): add configuration for trigger provider refresh task 2025-10-20 20:02:12 +08:00
24619e74f6 fix(trigger): update error handling and credential expiration field 2025-10-20 20:02:12 +08:00
f5c1646f79 fix(dynamic-options): fix the dynamic options in plugin trigger 2025-10-20 19:34:41 +08:00
e26d77e78c fix(checklist): enhance type safety by refining BlockEnum usage in checklist components 2025-10-20 19:34:41 +08:00
8e1e81732a fix(trigger): formitem boolean layout 2025-10-20 19:27:21 +08:00
801f8c1592 fix(trigger): oauth client default values 2025-10-20 18:21:38 +08:00
fd4b234171 feat: improve oauth client info api 2025-10-20 16:50:03 +08:00
dff536ab6d feat(trigger): trigger plugin protocol improvements 2025-10-20 16:50:03 +08:00
a152ce45d3 fix: start/stop button on the node control not working 2025-10-20 16:43:16 +08:00
6a164f8811 refactor: use EnumText 2025-10-20 15:48:11 +08:00
a03ff39f3e chore: add to .env.example 2025-10-20 15:42:29 +08:00
a6373e357a fix: typing 2025-10-20 15:38:54 +08:00
538b639bef fix: unify trigger url generation 2025-10-20 15:34:51 +08:00
fe0457b257 fix(trigger): show text 2025-10-20 14:17:52 +08:00
d5b228f234 fix(end-node): adjust required status and update end node terminology to output in i18n 2025-10-20 14:00:14 +08:00
1c2f95eeb6 fix(migrations): chain messages.app_mode upgrade after plugin trigger 2025-10-20 13:40:37 +08:00
81b3436ec4 fix(trigger): resolve circular import in models 2025-10-20 09:23:11 +08:00
3e4f2bcf14 optimize: TriggerDispatchResponse 2025-10-18 20:40:59 +08:00
c7696964b9 fix: refine 2025-10-18 20:27:22 +08:00
fb8ecf7b5a refactor: move enums out to a dedicated file 2025-10-18 20:22:21 +08:00
e3c2345b21 fix: typing 2025-10-18 20:17:23 +08:00
bfe0d14409 fix: typing 2025-10-18 20:16:10 +08:00
c7498c3a11 fix: typing 2025-10-18 20:14:00 +08:00
5fba41688a refactor: clean up terrible data 2025-10-18 20:12:20 +08:00
b63b9c32f7 refactor: models 2025-10-18 20:06:46 +08:00
65c6203ad7 fix: correct building reference 2025-10-18 19:54:06 +08:00
3a18337129 refactor: untangle confusing abstract class 2025-10-18 19:47:23 +08:00
b6b433626e fix: typing 2025-10-18 19:43:00 +08:00
5d6b9b0cb1 refactor 2025-10-18 19:41:53 +08:00
6d09330f98 chore: rename PluginTriggerManager to PluginTriggerClient 2025-10-18 19:33:08 +08:00
5df9afa91a fix: typing 2025-10-18 19:32:08 +08:00
30a341331f chore: unify request handling 2025-10-18 19:29:00 +08:00
31cf4b6619 fix: query parameter does not exist in workflow 2025-10-18 19:19:36 +08:00
dd0da3218c feat: introduce payload field to plugin trigger processing 2025-10-18 19:15:46 +08:00
11c9219848 chore: better exception handling 2025-10-18 19:15:09 +08:00
b1ffd2ef2b refine: use enum reference to avoid plain text declarations 2025-10-18 19:14:24 +08:00
86cf7952fb refactor: add typing annotation 2025-10-18 19:13:07 +08:00
d790d2b6bc feat: introduce payload field to TriggerDispatchResponse and a better typing 2025-10-18 19:12:43 +08:00
a711a8e759 refactor: better typing 2025-10-18 19:11:50 +08:00
8a18b6e13b refactor webhook service end-user operations 2025-10-18 19:11:15 +08:00
95aeb61d7c fix: missing backwards invocation 2025-10-18 19:10:22 +08:00
e8b0144cf7 refactor: remove common end user operations out of wraps.py and move it into EndUserService 2025-10-18 19:09:55 +08:00
2c8c1860ca fix(trigger): show event output 2025-10-18 16:28:26 +08:00
5edfbd5305 fix: meaningless error messages 2025-10-18 16:27:12 +08:00
4ceae655bd fix: prevent selecting time text in picker 2025-10-18 15:50:15 +08:00
6ae76d108b feat: add cursor pointer to marketplace actions 2025-10-17 21:31:40 +08:00
9cc3cfb63e fix: hide footer in all start blocks when search returns no results 2025-10-17 21:28:57 +08:00
58e4c0793a feat: align tool selector empty state with start blocks 2025-10-17 21:25:28 +08:00
80f2c1be67 fix(trigger): enhance error handling and refactor end user creation in trigger workflows
- Improved error handling in `TriggerSubscriptionListApi` to return a 404 response for ValueErrors.
- Refactored end user creation logic in `service_api/wraps.py` to use `get_or_create_end_user` for better clarity and consistency.
- Introduced a new method `create_end_user_batch` for batch creation of end users, optimizing database interactions.
- Updated various trigger-related services to utilize the new end user handling, ensuring proper user context during trigger dispatching.
2025-10-17 21:00:57 +08:00
8a5174d078 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-17 19:21:15 +08:00
d0f357a690 feat(workflow): enhance listening functionality with multiple trigger node support 2025-10-17 19:09:55 +08:00
fbe3df5658 fix(plugin-detail-panel): update provider reference to use trigger identity name 2025-10-17 18:23:35 +08:00
21e3ef91eb fix(trigger): show event detail 2025-10-17 18:23:04 +08:00
3f116dc74b feat(variable-inspect): improve listening description resolution in Listening component 2025-10-17 18:11:26 +08:00
32731c4622 render other input types from autoCommonParametersSchema 2025-10-17 18:09:14 +08:00
3c1f0e1aec fix(trigger): fix authentication status check 2025-10-17 17:13:07 +08:00
685e48636d fix: problem showing global vars in if tag 2025-10-17 16:57:42 +08:00
7c4edaa636 fix: variableValid in prompt editor 2025-10-17 16:48:27 +08:00
35867707d0 fix: global var type render in node 2025-10-17 15:24:49 +08:00
5b884d750f feat(trigger): add run all triggers test-run and implement TriggerType enum 2025-10-17 14:56:05 +08:00
bc0d5f4e41 fix(trigger): enhance subscription retrieval error handling in TriggerService
- Added exception handling for `get_subscription_by_endpoint` to return a 404 response when the plugin is not found and a 500 response for other errors.
- Improved overall robustness of the subscription retrieval process.
2025-10-17 14:43:43 +08:00
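A minimal TypeScript sketch of the status mapping bc0d5f4e41 describes — the actual change lives in the Python TriggerService, so every name and shape below is illustrative, not the real code:

```ts
class NotFoundError extends Error {}

interface Subscription { id: string; endpoint: string }

// Toy store standing in for the real subscription lookup (hypothetical).
const store = new Map<string, Subscription>([
  ['/hooks/abc', { id: 'sub-1', endpoint: '/hooks/abc' }],
])

async function getSubscriptionByEndpoint(endpoint: string): Promise<Subscription> {
  const sub = store.get(endpoint)
  if (!sub)
    throw new NotFoundError(`no subscription for ${endpoint}`)
  return sub
}

// Map a missing plugin/subscription to 404 and anything else to 500.
async function handleSubscriptionLookup(endpoint: string) {
  try {
    return { status: 200, body: await getSubscriptionByEndpoint(endpoint) }
  }
  catch (e) {
    return e instanceof NotFoundError
      ? { status: 404, body: { error: e.message } }
      : { status: 500, body: { error: 'internal error' } }
  }
}
```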
f20452622a fix(trigger): improve event retrieval handling in PluginTriggerProviderController
- Updated the `get_event` method to return `None` instead of raising a ValueError when an event is not found, enhancing error handling.
- Adjusted the `get_event_parameters` method to handle cases where the event may be `None`, returning an empty dictionary instead of causing an error.
- Improved type hinting for better clarity and type safety.
2025-10-17 14:43:43 +08:00
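f20452622a swaps a raised ValueError for an optional return. In TypeScript terms the same contract might look like this (the real code is Python and returns an empty dictionary; the shapes here are assumptions):

```ts
interface EventParameter { name: string; type: string }
interface TriggerEvent { name: string; parameters: EventParameter[] }

const events: TriggerEvent[] = [
  { name: 'issue_opened', parameters: [{ name: 'repo', type: 'string' }] },
]

// Return undefined instead of throwing when the event is unknown...
function getEvent(name: string): TriggerEvent | undefined {
  return events.find(e => e.name === name)
}

// ...so parameter lookup can degrade to an empty result instead of erroring.
function getEventParameters(name: string): EventParameter[] {
  return getEvent(name)?.parameters ?? []
}
```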
6ba26cf7b5 fix: global var show in node 2025-10-17 14:39:30 +08:00
7510e0654b fix: show global vars in picker 2025-10-17 14:24:20 +08:00
564bb22d8b feat: system var icon 2025-10-17 13:57:26 +08:00
5e2d5f0d83 feat: allow trigger schedule TimePicker to stretch with panel 2025-10-17 13:52:26 +08:00
d90ffbcf14 rm unused ensureWebhookRawVariable 2025-10-17 13:49:33 +08:00
771cc72dcf fix auto-generated webhook URL 2025-10-17 13:41:03 +08:00
04c91111e9 fix(trigger): trigger node is marked as 'branch' type 2025-10-17 13:37:46 +08:00
5a13daefdb fix(trigger): close portal after select a subscription 2025-10-17 13:31:00 +08:00
c033c05ec1 fix: resolve trigger plugin icons in workflow checklist 2025-10-17 12:55:41 +08:00
5b2f323a87 improve webhook request headers 2025-10-17 11:27:48 +08:00
b855d95430 feat: can choose global vars 2025-10-17 11:02:27 +08:00
fe4b63210e fix(trigger): oauth client config 2025-10-17 10:52:42 +08:00
84c09ec59d chore: show user input/output vars 2025-10-17 10:21:11 +08:00
40e17ef801 fix: merging main caused current_user to be undefined 2025-10-17 09:49:09 +08:00
f1fcb92691 feat(trigger): add category trigger 2025-10-16 18:30:54 +08:00
3865555113 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-16 18:30:33 +08:00
95e46806a4 fix: marketplace item install hover 2025-10-16 18:01:00 +08:00
c9c3d03878 fix: keep start tab search results restorable 2025-10-16 17:56:32 +08:00
b28ec4be6e fix: start block ui 2025-10-16 17:48:24 +08:00
29d7023fae - Update all-tools.tsx so provider search results keep only relevant items: full list retained when the provider matches; otherwise the provider is cloned with just matching tools.
- Mirror the same filtering strategy for Start-tab trigger plugins in trigger-plugin/list.tsx, ensuring only matching events render when searching.
2025-10-16 17:46:44 +08:00
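A sketch of the filtering strategy 29d7023fae describes, under assumed provider/tool shapes (not the actual all-tools.tsx code): keep the whole provider when its own name matches, otherwise clone it with only the matching items.

```ts
interface Tool { name: string }
interface ToolProvider { name: string; tools: Tool[] }

function filterProviders(providers: ToolProvider[], query: string): ToolProvider[] {
  const q = query.toLowerCase()
  return providers.flatMap((provider) => {
    // Provider name matches: keep its full tool list.
    if (provider.name.toLowerCase().includes(q))
      return [provider]
    // Otherwise keep only the tools whose names match; drop empty providers.
    const tools = provider.tools.filter(t => t.name.toLowerCase().includes(q))
    return tools.length > 0 ? [{ ...provider, tools }] : []
  })
}
```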
22f6c23780 refactor: remove empty search placeholder from tool selector 2025-10-16 17:39:35 +08:00
548db29a47 add var name check for webhook node 2025-10-16 16:59:46 +08:00
1089c5bf04 add _webhook_raw to downstream nodes 2025-10-16 16:35:05 +08:00
559cf6583f fix: adding a candidate webhook node raised an error 2025-10-16 15:33:18 +08:00
b04f92715c feat(trigger): plugin category type 2025-10-16 15:30:04 +08:00
671aba6ab7 fix(trigger): handle missing subscription constructor gracefully in PluginTriggerProviderController
- Updated the logic in `PluginTriggerProviderController` to return an empty list instead of raising a ValueError when the subscription constructor is not found, improving error handling and flow.
2025-10-16 15:09:13 +08:00
beaeb30dcc fix(trigger): enhance credential encryption handling in TriggerProviderService
- Introduced conditional initialization of credential_encrypter based on credential_type to prevent errors when unauthorized.
- Updated the encryption logic to handle cases where credential_encrypter may be None, ensuring robustness in credential processing.
2025-10-16 15:07:05 +08:00
56abca1f41 webhook i18n 2025-10-16 14:52:15 +08:00
52d5f219e1 fix(workflow): include trigger node type in available blocks check 2025-10-16 14:24:44 +08:00
d4516e942c fix(trigger): improve error handling in DraftWorkflowTriggerNodeApi and update input class naming
- Removed specific exception handling for ValueError and PluginInvokeError in `DraftWorkflowTriggerNodeApi`, allowing a more general exception to be raised.
- Renamed `PluginTriggerInput` to `TriggerEventInput` in `TriggerEventNodeData` for better clarity and consistency.
- Updated validation logic in `TriggerEventInput` to ensure correct type checks for input values.
2025-10-16 14:04:44 +08:00
1c17a16830 feat(trigger): format event_parameters and improve 2025-10-16 14:00:21 +08:00
1f6ab13fc5 fix(workflow): auto run single start node without dropdown 2025-10-16 09:37:18 +08:00
7344df87e5 Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-10-15 20:47:20 +08:00
29353bd7c2 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-15 20:47:02 +08:00
7b6f5d6860 fix(trigger): show tool credentials in workflow 2025-10-15 20:42:14 +08:00
2ccb20bf3a fix(workflow): gate “publish as tool” on published user input node validity 2025-10-15 20:26:12 +08:00
34b7e5cbca fix: enable scrolling in start selector tab 2025-10-15 19:09:23 +08:00
a595e2df06 fix(trigger): skip validation when updating properties 2025-10-15 18:44:05 +08:00
729e0e9b1e feat(workflow): add disableVariableInsertion prop to form input and trigger components 2025-10-15 18:20:13 +08:00
c03b790888 feat(trigger): add event_parameters to PluginTriggerNode configuration 2025-10-15 18:14:43 +08:00
112b5f63dd feat(workflow): enhance single run handling 2025-10-15 18:14:33 +08:00
334e5f19bf fix(trigger): handle missing subscription constructor in trigger subscription builder
- Updated the `TriggerSubscriptionBuilderService` to return an empty dictionary when the subscription constructor is not available, improving robustness in subscription handling.
2025-10-15 17:44:51 +08:00
35bbf67175 refactor(trigger): Rename and replace PluginTriggerNode with TriggerEventNode
- Updated references from `PluginTriggerNode` to `TriggerEventNode` across multiple files to reflect the new naming convention.
- Modified `PluginTriggerNodeData` to `TriggerEventNodeData`, including changes to event parameters for better clarity and consistency in data handling.
- Removed the deprecated `trigger_plugin_node.py` file as part of the refactor.
2025-10-15 17:30:42 +08:00
9aec255ee9 feat(trigger): update subscription list after saving draft 2025-10-15 17:22:14 +08:00
b07e80e6ae fix(trigger): update error type for event handling in trigger manager
- Changed the error type check from "TriggerIgnoreEventError" to "EventIgnoreError" in the `TriggerManager` class to improve clarity in error handling during trigger invocations.
2025-10-15 17:14:44 +08:00
ad2b910d73 refactor(trigger): Enhance error handling and parameter resolution in trigger workflows
- Improved error handling in `DraftWorkflowTriggerRunApi`, `DraftWorkflowTriggerNodeApi`, and `DraftWorkflowTriggerRunAllApi` to raise exceptions directly, providing clearer error messages.
- Introduced `get_event_parameters` method in `PluginTriggerProviderController` to retrieve event parameters for triggers.
- Updated `PluginTriggerNodeData` to include a new method for resolving parameters based on event schemas, ensuring better validation and handling of trigger inputs.
- Refactored `TriggerService` to utilize the new parameter resolution method, enhancing the clarity and reliability of trigger invocations.
2025-10-15 17:05:51 +08:00
f28a7218cd fix(trigger): optimize subscription entry in workflow 2025-10-15 16:13:00 +08:00
4164e1191e fix: hide checklist navigation for missing nodes 2025-10-15 16:10:34 +08:00
bd31c6f90b refactor(trigger): Reinstate DraftWorkflowTriggerNodeApi with improved structure
- Restored the `DraftWorkflowTriggerNodeApi` class to handle polling for trigger events in draft workflows.
- Enhanced the implementation to utilize `TriggerDebugEvent` and `TriggerDebugEventPoller` for better event management.
- Improved error handling and response structure for node execution, ensuring clarity in API responses.
- Updated API documentation to reflect the restored functionality and parameters.
2025-10-15 14:45:00 +08:00
8f7bef9509 fix(trigger): Update API routes for draft workflow trigger
- Changed the endpoint for triggering draft workflows from `/trigger/plugin/run` to `/trigger/run` in both backend and frontend to ensure consistency and clarity in the API structure.
- Adjusted the URL construction in the `useWorkflowRun` hook to reflect the updated route.
2025-10-15 14:44:00 +08:00
06c91fbcbd refactor(trigger): Unify the Trigger Debug interface and event handling and enhance error management
- Updated `DraftWorkflowTriggerNodeApi` to utilize the new `TriggerDebugEvent` and `TriggerDebugEventPoller` for improved event polling.
- Removed deprecated `poll_debug_event` methods from `TriggerService`, `ScheduleService`, and `WebhookService`, consolidating functionality into the new event structure.
- Enhanced error handling in `invoke_trigger_event` to utilize `TriggerPluginInvokeError` for better clarity on invocation issues.
- Updated frontend API routes to reflect changes in trigger event handling, ensuring consistency across the application.
2025-10-15 14:41:53 +08:00
dab4e521af feat(trigger): enhance trigger event handling and introduce new debug event polling
- Refactored the `DraftWorkflowTriggerNodeApi` and related services to utilize the new `TriggerService` for polling debug events, improving modularity and clarity.
- Added `poll_debug_event` methods in `TriggerService`, `ScheduleService`, and `WebhookService` to streamline event handling for different trigger types.
- Introduced `ScheduleDebugEvent` and updated `PluginTriggerDebugEvent` to include a more structured approach for event data.
- Enhanced the `invoke_trigger_event` method to improve error handling and data validation during trigger invocations.
- Updated frontend API calls to align with the new event structure, removing deprecated parameters for cleaner integration.
2025-10-15 11:04:09 +08:00
b20f61356c Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-15 09:53:03 +08:00
4ec23eea00 fix: add i18n key 2025-10-14 21:23:24 +08:00
270fd9cb07 fix: incorrect entity reference 2025-10-14 20:14:13 +08:00
c7c5e07d43 fix(trigger): add tooltip when only one creation type 2025-10-14 18:39:22 +08:00
c1ba83f0d4 feat(trigger): add validation for subscription in PluginTrigger node 2025-10-14 18:13:02 +08:00
d71200ee32 feat: enhance block selector and change block components with flow type handling 2025-10-14 16:42:21 +08:00
16ac05ebd5 feat: support search in checkbox list 2025-10-14 16:24:44 +08:00
ac77b9b735 Merge remote-tracking branch 'origin/feat/trigger' into feat/trigger 2025-10-14 15:28:35 +08:00
0fa4b77ff8 feat(style): adjust minimum and maximum width for block-selector and data source components 2025-10-14 15:23:28 +08:00
6773dda657 feat(trigger): enhance trigger handling with new data validation and logging improvements
- Added validation for `PluginTriggerData` and `ScheduleTriggerData` in the `WorkflowService` to support new trigger types.
- Updated debug event return strings in `PluginTriggerDebugEvent` and `WebhookDebugEvent` for clarity and consistency.
- Enhanced logging in `dispatch_triggered_workflows_async` to include subscription and provider IDs, improving traceability during trigger dispatching.
2025-10-14 14:36:52 +08:00
bf42386c5b feat(trigger): add PluginTrigger node support and enhance output variable handling 2025-10-14 11:55:12 +08:00
90fc06a494 refactor(trigger): update TriggerApiEntity description type to TypeWithI18N
- Changed the description field type in `TriggerApiEntity` from `TriggerDescription` to `TypeWithI18N` for improved internationalization support.
- Adjusted the usage of the description field in the `convertToTriggerWithProvider` function to align with the new type definition.
2025-10-13 22:24:12 +08:00
8dfe693529 refactor(trigger): rename TriggerApiEntity to EventApiEntity and update related references
- Changed `TriggerApiEntity` to `EventApiEntity` in the trigger provider and subscription models to better reflect its purpose.
- Updated the description field type from `EventDescription` to `I18nObject` for improved consistency in event descriptions.
- Adjusted imports and references across multiple files to accommodate the renaming and type changes, ensuring proper functionality in trigger processing.
2025-10-13 21:10:31 +08:00
d65d27a6bb fix: creating button style 2025-10-13 20:53:06 +08:00
e6a6bde8e2 feat(i18n): add draft reminder to app overview tooltips 2025-10-13 20:18:54 +08:00
c7d0a7be04 feat(trigger): enable triggers by default after workflow publish 2025-10-13 19:59:39 +08:00
e0f1b03cf0 fix(trigger): clear subscription_id in trigger plugin processing
- Updated the `AppDslService` to clear the `subscription_id` when processing nodes of type `TRIGGER_PLUGIN`. This change ensures that sensitive subscription data is not retained unnecessarily, enhancing data security during workflow execution.
2025-10-13 18:42:54 +08:00
902737b262 feat(trigger): enhance subscription decryption in trigger processing
- Added functionality to decrypt subscription credentials and properties within the `dispatch_triggered_workflows_async` method. This ensures that sensitive data is securely handled before processing, improving the overall security of trigger invocations.
2025-10-13 18:10:53 +08:00
429cd05a0f fix(trigger): serialize subscription model in trigger invocation
- Updated the `PluginTriggerManager` to serialize the `subscription` parameter using `model_dump()` before passing it during trigger invocation. This change ensures that the subscription data is correctly formatted for processing.
2025-10-13 18:07:51 +08:00
46e7e99c5a feat(trigger): add subscription parameter to trigger invocation methods
- Enhanced `PluginTriggerManager`, `PluginTriggerProviderController`, and `TriggerManager` to accept a `subscription` parameter in their trigger invocation methods.
- Updated `TriggerService` to pass the subscription entity when invoking trigger events, improving the handling of subscription-related data during trigger execution.
2025-10-13 17:47:40 +08:00
d19ce15f3d Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-13 17:28:47 +08:00
49af7eb370 fix(trigger-schedule): make timezone field optional to match actual usage 2025-10-13 17:28:40 +08:00
8e235dc92c feat(workflow): hide timezone in node next execution, keep in panel next 5 executions 2025-10-13 17:28:40 +08:00
3b3963b055 refactor(workflow): remove timezone required validation as it is auto-filled by use-config 2025-10-13 17:28:40 +08:00
378c2afcd3 fix(workflow): remove hardcoded UTC timezone from new schedule node to use user timezone 2025-10-13 17:28:40 +08:00
d709f20e1f fix(workflow): update community feedback link to plugin request template 2025-10-13 17:28:40 +08:00
99d9657af8 feat(workflow): integrate timezone display into execution time format for better readability 2025-10-13 17:28:40 +08:00
62efdd7f7a fix(workflow): preserve saved timezone in trigger-schedule to match backend fixed-timezone design 2025-10-13 17:28:39 +08:00
ebcf98c137 revert(workflow): remove timezone label from trigger-schedule node display 2025-10-13 17:28:39 +08:00
7560e2427d fix(timezone): support half-hour and 45-minute timezone offsets
Critical regression fix for convertTimezoneToOffsetStr:

Issues Fixed:
- Previous regex /^([+-]?\d{1,2}):00/ only matched :00 offsets
- This caused half-hour offsets (e.g., India +05:30) to return UTC+0
- Even if matched, parseInt only parsed hours, losing minute info

Changes:
- Update regex to /^([+-]?\d{1,2}):(\d{2})/ to match all offset formats
- Parse both hours and minutes separately
- Output format: "UTC+5:30" for non-zero minutes, "UTC+8" for whole hours
- Preserve leading zeros in minute part (e.g., "UTC+5:30" not "UTC+5:3")

Test Coverage:
- Added 8 comprehensive tests covering:
  * Default/invalid timezone handling
  * Whole hour offsets (positive/negative)
  * Zero offset (UTC)
  * Half-hour offsets (India +5:30, Australia +9:30)
  * 45-minute offset (Chatham +12:45)
  * Leading zero preservation in minutes

All 14 tests passing. Verified with timezone.json entries at lines 967, 1135, 1251.
2025-10-13 17:28:39 +08:00
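The commit above spells out the fix precisely, so a sketch of convertTimezoneToOffsetStr is easy to reconstruct; assume the offset string (e.g. "+05:30") has already been resolved from timezone.json, which is not shown here:

```ts
function convertTimezoneToOffsetStr(offset?: string): string {
  // Match hours AND minutes, not just ":00" offsets.
  const match = offset?.match(/^([+-]?\d{1,2}):(\d{2})/)
  if (!match)
    return 'UTC+0'
  const hours = parseInt(match[1], 10) // numeric parse strips leading zeros
  const minutes = match[2] // keep as string to preserve "30" / "45"
  const sign = hours >= 0 ? '+' : '-'
  const h = Math.abs(hours)
  return minutes === '00' ? `UTC${sign}${h}` : `UTC${sign}${h}:${minutes}`
}

// convertTimezoneToOffsetStr('+05:30') => 'UTC+5:30' (India)
// convertTimezoneToOffsetStr('+12:45') => 'UTC+12:45' (Chatham)
// convertTimezoneToOffsetStr('-11:00') => 'UTC-11'
```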
920a608e5d fix(trigger-schedule): prevent timezone label truncation in node
- Change layout to ensure timezone label always visible with shrink-0
- Time text can truncate but timezone label stays intact
- Improves readability in constrained node space
2025-10-13 17:28:39 +08:00
4dfb8b988c feat(time-picker): add showTimezone prop with comprehensive tests
- Add showTimezone prop to TimePickerProps for optional inline timezone display
- Integrate TimezoneLabel component into TimePicker when showTimezone=true
- Add 6 comprehensive test cases covering all showTimezone scenarios:
  * Default behavior (no timezone label)
  * Explicit disable with showTimezone=false
  * Enable with showTimezone=true
  * Inline prop correctly passed
  * No display when timezone is missing
  * Correct styling classes applied
- Update trigger-schedule panel to use showTimezone prop
- All 15 tests passing with good coverage
2025-10-13 17:28:39 +08:00
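A plausible shape for the showTimezone surface described in 4dfb8b988c; the TimezoneLabel stub and class names are stand-ins, not the real components:

```tsx
import React from 'react'

// Stand-in for the reusable TimezoneLabel introduced in the next commit.
const TimezoneLabel = ({ timezone, inline }: { timezone: string; inline?: boolean }) => (
  <span className={inline ? 'ml-1 text-xs' : 'text-xs'}>{timezone}</span>
)

type TimePickerProps = {
  value: string
  timezone?: string
  showTimezone?: boolean
  onChange: (v: string) => void
}

// showTimezone defaults to false, so existing call sites keep their behavior;
// the label only renders when a timezone is actually available.
const TimePicker = ({ value, timezone, showTimezone = false, onChange }: TimePickerProps) => (
  <div className="flex items-center">
    <input value={value} onChange={e => onChange(e.target.value)} />
    {showTimezone && timezone && <TimezoneLabel timezone={timezone} inline />}
  </div>
)
```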
af6dae3498 fix(timezone): fix UTC offset display bug and add timezone labels
- Fixed convertTimezoneToOffsetStr() that only extracted first digit
  * UTC-11 was incorrectly displayed as UTC-1, UTC+10 as UTC+0
  * Now correctly extracts full offset using regex and removes leading zeros
- Created reusable TimezoneLabel component with inline mode support
- Added comprehensive unit tests with 100% coverage
- Integrated timezone labels into 3 locations:
  * Panel time picker (next to time input)
  * Node next execution display
  * Panel next 5 executions list
2025-10-13 17:28:39 +08:00
ee21b4d435 feat: support copy to clipboard in input component 2025-10-13 17:21:26 +08:00
654adccfbf fix(trigger): implement plugin single run functionality and update node status handling 2025-10-13 17:02:44 +08:00
b283a2b3d9 feat(trigger): add API endpoint to retrieve trigger plugin icons and enhance workflow response handling
- Introduced `TriggerProviderIconApi` to fetch icons for trigger plugins based on tenant and provider ID.
- Updated `WorkflowResponseConverter` to include trigger plugin icons in the response.
- Implemented `get_trigger_plugin_icon` method in `TriggerManager` for icon retrieval logic.
- Adjusted `Node` class to correctly set provider information for trigger plugins.
- Modified TypeScript types to accommodate new provider ID field in workflow nodes.
2025-10-13 16:50:32 +08:00
cce729916a fix(trigger-schedule): pass time string directly to TimePicker to avoid double timezone conversion 2025-10-13 16:00:13 +08:00
4f8bf97935 fix: creating modal style 2025-10-13 14:54:24 +08:00
ba88c7b25b fix(workflow): handle plugin run mode correctly by setting status 2025-10-13 14:50:12 +08:00
0ec5d53e5b fix(trigger): log style 2025-10-13 14:46:08 +08:00
f3b415c095 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-13 13:21:51 +08:00
6fb657a89e refactor(subscription): enhance subscription count handling in selector view
- Introduced a subscriptionCount variable to improve readability and performance when checking the number of subscriptions.
- Updated the rendering logic to use subscriptionCount, ensuring consistent and clear display of subscription information in the component.
2025-10-13 11:22:25 +08:00
90240cb6db refactor(subscription): optimize subscription count handling in list view
- Replaced direct length checks on subscriptions with a computed subscriptionCount variable for improved readability and performance.
- Updated the CreateSubscriptionButton to conditionally render based on the new subscriptionCount variable, enhancing clarity in the component logic.
- Adjusted className logic for the button to account for multiple supported methods, ensuring better user experience.
2025-10-12 23:56:27 +08:00
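The two subscriptionCount commits boil down to hoisting a repeated length check into one variable; a minimal sketch (component names hypothetical):

```tsx
import React from 'react'

const CreateSubscriptionButton = () => <button>Create subscription</button>

type Subscription = { id: string; name: string }

const SubscriptionList = ({ subscriptions }: { subscriptions: Subscription[] }) => {
  // Compute once instead of sprinkling subscriptions.length checks through the JSX.
  const subscriptionCount = subscriptions.length
  return (
    <div>
      <div>{subscriptionCount === 1 ? '1 subscription' : `${subscriptionCount} subscriptions`}</div>
      {subscriptionCount === 0 && <CreateSubscriptionButton />}
    </div>
  )
}
```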
cca48f07aa feat(trigger): implement atomic update and verification for subscription builders
- Introduced atomic operations for updating and verifying subscription builders to prevent race conditions.
- Added distributed locking mechanism to ensure data consistency during concurrent updates and builds.
- Refactored existing methods to utilize the new atomic update and verification logic, enhancing the reliability of trigger subscription handling.
2025-10-12 21:27:38 +08:00
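cca48f07aa mentions distributed locking around builder updates; the standard Redis pattern (SET NX PX plus a token-checked release) would look roughly like this in TypeScript with ioredis — key name, TTL, and error handling are assumptions, not the branch's actual implementation:

```ts
import Redis from 'ioredis'
import { randomUUID } from 'node:crypto'

const redis = new Redis()

async function withLock<T>(key: string, ttlMs: number, fn: () => Promise<T>): Promise<T> {
  const token = randomUUID()
  // NX: only set if absent; PX: auto-expire so a crashed worker can't wedge the lock.
  const acquired = await redis.set(key, token, 'PX', ttlMs, 'NX')
  if (acquired !== 'OK')
    throw new Error(`${key} is locked by another worker`)
  try {
    return await fn()
  }
  finally {
    // Release only if we still own the lock (compare token before deleting).
    await redis.eval(
      "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end",
      1, key, token,
    )
  }
}

// Usage sketch (builderId and updateAndVerify are hypothetical):
// await withLock(`trigger:builder:${builderId}`, 30_000, () => updateAndVerify(builderId))
```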
beff639c3d fix(trigger): improve trigger subscription query with AppTrigger join
- Updated the trigger subscription query to join with the AppTrigger model, ensuring only enabled app triggers are considered.
- Enhanced the filtering criteria for retrieving subscribers based on the AppTrigger status, improving the accuracy of the trigger subscription handling.
2025-10-12 19:24:54 +08:00
00359830c2 refactor(trigger): streamline response handling in trigger subscription dispatch
- Removed the redundant response extraction from the dispatch call and directly assigned the response to a variable for clarity.
- Enhanced logging by appending the request log after dispatching, ensuring better traceability of requests and responses in the trigger subscription workflow.
2025-10-11 22:16:18 +08:00
f23e098b9a fix(trigger): handle exceptions in trigger subscription dispatch
- Wrapped the dispatch call in a try-except block to catch exceptions and return a 500 error response if an error occurs.
- Enhanced logging of the request and error response for better traceability in the trigger subscription workflow.
2025-10-11 22:13:36 +08:00
42f75b6602 feat(trigger): enhance trigger subscription handling with credential support
- Added `credentials` and `credential_type` parameters to various methods in `PluginTriggerManager`, `PluginTriggerProviderController`, and `TriggerManager` to support improved credential management for trigger subscriptions.
- Updated the `Subscription` model to include `parameters` for better subscription data handling.
- Refactored related services to accommodate the new credential handling, ensuring consistency across the trigger workflow.
2025-10-11 21:12:27 +08:00
4f65cc312d feat: delete confirm opt 2025-10-11 20:19:27 +08:00
854a091f82 feat: add validation status for formitem 2025-10-11 19:50:05 +08:00
63dbc7c63d fix(trigger): update provider_id reference to plugin_id in useToolIcon hook 2025-10-11 19:05:57 +08:00
a4e80640fe chore(trigger): remove debug console logs 2025-10-11 18:54:47 +08:00
fe0a139c89 fix(trigger): update provider_id references to plugin_id in BasePanel component 2025-10-11 18:52:15 +08:00
ac2616545b fix(trigger): update provider_id field in TriggerPluginActionItem component 2025-10-11 17:10:29 +08:00
c9e7922a14 refactor(trigger): update trigger-related types and field names / values 2025-10-11 17:06:43 +08:00
12a7402291 fix: create button not working in manual creation mode 2025-10-11 15:36:37 +08:00
33d7b48e49 fix: error when fetching info while switching plugins 2025-10-11 15:00:35 +08:00
ee89e9eb2f refactor(trigger): update type parameter naming in PluginTriggerManager
- Changed the parameter name from 'type' to 'type_' in multiple method calls within the PluginTriggerManager class to avoid conflicts with the built-in type function and improve code clarity.
2025-10-11 13:09:25 +08:00
e793f9e871 refactor(trigger): remove unnecessary whitespace in trigger-related files
- Cleaned up the code by removing extraneous whitespace in `trigger.py` and `workflow_plugin_trigger_service.py`, improving readability and maintaining code style consistency.
2025-10-11 12:44:54 +08:00
18b02370a2 chore(workflows): update deployment configurations
- Modified the build-push workflow to trigger on all branches under "deploy/**" for broader deployment coverage.
- Changed the SSH host secret in the deploy-dev workflow from RAG_SSH_HOST to DEV_SSH_HOST for improved clarity.
- Removed the obsolete deploy-rag-dev workflow to streamline the CI/CD process.
2025-10-11 12:26:31 +08:00
d53399e546 refactor(trigger): rename trigger-related fields and methods for consistency
- Updated the naming convention from 'trigger_name' to 'event_name' across various models and services to align with the new event-driven architecture.
- Refactored methods in PluginTriggerManager and PluginTriggerProviderController to use 'invoke_trigger_event' instead of 'invoke_trigger'.
- Adjusted database migration scripts to reflect changes in the schema, including the addition of 'event_name' and 'subscription_id' fields in the workflow_plugin_triggers table.
- Removed deprecated trigger-related methods in WorkflowPluginTriggerService to streamline the codebase.
2025-10-11 12:26:08 +08:00
622d12137a feat: change subscription field in workflow 2025-10-10 20:58:56 +08:00
bae8e44b32 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-10 19:43:23 +08:00
24b5387fd1 fix(workflow): streamline stopping workflow run process 2025-10-10 18:45:43 +08:00
0c65824cad fix(workflow): update API route for DraftWorkflowTriggerRunApi
- Changed the route from "/apps/<uuid:app_id>/workflows/draft/trigger/run" to "/apps/<uuid:app_id>/workflows/draft/trigger/plugin/run" to reflect the new plugin-based trigger structure.
- Updated corresponding URL in the useWorkflowRun hook to maintain consistency across the application.
2025-10-10 18:13:28 +08:00
31c9d9da3f fix(workflow): enhance response structure in DraftWorkflowTriggerRunApi
- Added a "retry_in" field to the response when no event is found, improving the API's feedback during workflow execution.
2025-10-10 18:05:50 +08:00
8f854e6a45 fix(workflow): add root_node_id to DraftWorkflowTriggerRunApi for improved response handling
- Included root_node_id in the API call to enhance the response structure during workflow execution.
2025-10-10 18:05:50 +08:00
75b3f5ac5a feat: change subscription field 2025-10-10 17:37:20 +08:00
323e183775 refactor(trigger): improve config value formatting in PluginTriggerNode 2025-10-10 17:28:41 +08:00
380ef52331 refactor(trigger): update API and service to use 'event' terminology
- Renamed 'trigger_name' to 'event_name' in the DraftWorkflowTriggerNodeApi for consistency with the new naming convention.
- Added 'provider_id' to the API request model to enhance functionality.
- Updated the PluginTriggerDebugEvent and TriggerDebugService to reflect changes in naming and improve address formatting.
- Adjusted frontend utility to align with the updated variable names.
2025-10-10 15:48:42 +08:00
b8862293b6 fix: resolve semantic conflict in TimePicker notClearable logic 2025-10-10 15:17:19 +08:00
85f1cf1d90 Merge branch 'main' into feat/trigger 2025-10-10 15:16:00 +08:00
1d4e36d58f fix: display correct icon for trigger nodes in listening panel 2025-10-10 15:04:58 +08:00
90ae5e5865 refactor(trigger): enhance update method to use explicit None checks
- Updated the `update` method in `SubscriptionBuilderUpdater` to use 'is not None' checks instead of truthy evaluations for better handling of empty values.
- This change improves clarity and ensures that empty dictionaries or strings are correctly processed during updates.
2025-10-10 14:52:03 +08:00
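The change in 90ae5e5865 is Python (`is not None`), but the pitfall is the same in TypeScript: truthiness checks silently drop empty strings and other falsy-but-valid values, while an explicit undefined comparison applies every field the caller actually set. A sketch:

```ts
interface Builder { name: string; properties: Record<string, string> }
interface BuilderPatch { name?: string; properties?: Record<string, string> }

function applyPatch(builder: Builder, patch: BuilderPatch): Builder {
  return {
    // `patch.name ? ... : ...` would skip '', losing an intentional clear.
    name: patch.name !== undefined ? patch.name : builder.name,
    properties: patch.properties !== undefined ? patch.properties : builder.properties,
  }
}

// applyPatch({ name: 'old', properties: { a: '1' } }, { name: '' })
//   => { name: '', properties: { a: '1' } }  (empty string is honored)
```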
755fb96a33 feat(trigger): add plugin trigger test-run handling to workflow 2025-10-10 10:43:13 +08:00
b8ca480b07 refactor(trigger): update variable names for clarity and consistency
- Renamed variables related to triggers to use 'trigger' terminology consistently across the codebase.
- Adjusted filtering logic in `TriggerPluginList` to reference 'events' instead of 'triggers' for improved clarity.
- Updated the `getTriggerIcon` function to reflect the new naming conventions and ensure proper icon rendering.
2025-10-09 12:23:48 +08:00
8a5fbf183b Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-09 09:07:01 +08:00
91318d3d04 refactor(trigger): rename trigger references to event for consistency
- Updated variable names and types from 'trigger' to 'event' across multiple files to enhance clarity and maintain consistency in the codebase.
- Adjusted related data structures and API responses to reflect the new naming convention.
- Improved type annotations and error handling in the workflow trigger run API and associated services.
2025-10-09 03:12:35 +08:00
a33d04d1ac refactor(trigger): unify debug event handling and improve polling mechanism
- Introduced a base class for debug events to streamline event handling.
- Refactored `TriggerDebugService` to support multiple event types through a generic dispatch/poll interface.
- Updated webhook and plugin trigger debug services to utilize the new event structure.
- Enhanced the dispatch logic in `dispatch_triggered_workflows_async` to accommodate the new event model.
2025-10-08 17:31:16 +08:00
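a33d04d1ac describes a base debug-event type with one dispatch/poll surface shared by webhook, schedule, and plugin trigger debugging. A rough, in-memory illustration of that contract (every name here is illustrative):

```ts
interface TriggerDebugEvent {
  triggerType: 'webhook' | 'schedule' | 'plugin'
  nodeId: string
  payload: Record<string, unknown>
}

interface DebugEventPoller {
  dispatch(event: TriggerDebugEvent): Promise<void>
  poll(nodeId: string, timeoutMs: number): Promise<TriggerDebugEvent | null>
}

class InMemoryPoller implements DebugEventPoller {
  private queue: TriggerDebugEvent[] = []

  async dispatch(event: TriggerDebugEvent): Promise<void> {
    this.queue.push(event)
  }

  async poll(nodeId: string, timeoutMs: number): Promise<TriggerDebugEvent | null> {
    const deadline = Date.now() + timeoutMs
    while (Date.now() < deadline) {
      const i = this.queue.findIndex(e => e.nodeId === nodeId)
      if (i >= 0)
        return this.queue.splice(i, 1)[0]
      await new Promise(resolve => setTimeout(resolve, 50))
    }
    return null // caller can respond with a "retry_in" hint, as in the draft-run API
  }
}
```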
02222752f0 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-07 18:25:43 +08:00
04d94e3337 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-06 19:12:16 +08:00
b98c36db48 fix trigger-related API 404 2025-10-06 14:36:07 +08:00
d05d11e67f add webhook node draft single run 2025-10-06 14:35:12 +08:00
3370736e09 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-04 11:30:26 +08:00
cc5a315039 fix pyright exception 2025-10-02 20:26:29 +08:00
6ea10cdaaf debugging a webhook no longer requires publishing the app 2025-10-02 20:07:57 +08:00
9643fa1c9a fix: use StopCircle icon in variable inspect listening panel 2025-10-02 10:02:19 +08:00
937a58d0dd Merge remote-tracking branch 'origin/main' into feat/trigger 2025-10-02 09:18:21 +08:00
d9faa1329a move workflow_plugin_trigger_service to trigger sub dir 2025-10-02 00:31:33 +08:00
fec09e7ed3 move trigger_service to trigger sub dir 2025-10-02 00:29:53 +08:00
31b15b492e move trigger_debug_service to trigger sub dir 2025-10-02 00:27:48 +08:00
f96bd4eb18 move schedule service to trigger sub dir 2025-10-02 00:24:32 +08:00
a4109088c9 move webhook service to trigger sub dir 2025-10-02 00:18:37 +08:00
f827e8e1b7 add more code comment 2025-10-02 00:14:35 +08:00
82f2f76dc4 ruff format code 2025-10-01 23:39:46 +08:00
e6a44a0860 allow debugging when the webhook is disabled 2025-10-01 23:39:37 +08:00
604651873e refactor webhook service 2025-10-01 12:46:42 +08:00
9114881623 fix: update frontend trigger field mapping from triggers to events
- Update TriggerProviderApiEntity type to use events field (aligned with backend commit 32f4d1af8)
- Update conversion function in use-triggers.ts to map provider.events to TriggerWithProvider.triggers
- Fix trigger-events-list.tsx to use providerInfo.events (TriggerProviderApiEntity type)
- Fix parameters-form.tsx to use provider.triggers (TriggerWithProvider type)
2025-10-01 09:53:45 +08:00
080cdda4fa backend support for webhook query params 2025-09-30 21:21:39 +08:00
32f4d1af8b Refactor: Rename triggers to events in trigger-related entities and services
- Updated class and variable names from 'triggers' to 'events' across multiple files to improve clarity and consistency.
- Adjusted related data structures and methods to reflect the new naming convention, including changes in API entities, service methods, and trigger management logic.
- Ensured all references to triggers are replaced with events to align with the updated terminology.
2025-09-30 20:18:33 +08:00
1bfa8e6662 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-09-30 18:56:21 +08:00
7c97ea4a9e fix: correct entry node alignment for wrapper offset
- Add ENTRY_NODE_WRAPPER_OFFSET constant (x: 0, y: 21) for Start/Trigger nodes
- Implement getNodeAlignPosition() to calculate actual inner node positions
- Fix horizontal/vertical helpline rendering to account for wrapper offset
- Fix snap-to-align logic to properly align inner nodes instead of wrapper
- Correct helpline width/height calculation by subtracting offset for entry nodes
- Ensure backward compatibility: only affects Start/Trigger nodes with EntryNodeContainer wrapper

This fix ensures that Start and Trigger nodes (which have an EntryNodeContainer wrapper
with status indicator) align based on their inner node boundaries rather than the wrapper
boundaries, matching the alignment behavior of regular nodes.
2025-09-30 18:36:49 +08:00
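7c97ea4a9e gives the offset constant outright, so the core of the fix can be sketched directly; the node shape is simplified:

```ts
// Entry nodes (Start / TriggerWebhook / TriggerSchedule / TriggerPlugin) are
// wrapped by EntryNodeContainer, which adds a status indicator above the node.
const ENTRY_NODE_WRAPPER_OFFSET = { x: 0, y: 21 }

interface FlowNode {
  position: { x: number; y: number }
  isEntry: boolean
}

// Helplines and snap-to-align should use the inner node's position,
// not the wrapper's; regular nodes are unaffected.
function getNodeAlignPosition(node: FlowNode): { x: number; y: number } {
  if (!node.isEntry)
    return node.position
  return {
    x: node.position.x + ENTRY_NODE_WRAPPER_OFFSET.x,
    y: node.position.y + ENTRY_NODE_WRAPPER_OFFSET.y,
  }
}
```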
bea11b08d7 refactor: hide workflow features button in workflow mode, keep it visible in chatflow mode 2025-09-30 17:51:01 +08:00
8547032a87 Revert "refactor: app publisher"
This reverts commit 8feef2c1a9.
2025-09-30 17:46:27 +08:00
43574c852d add variable type to webhook request parameters panel 2025-09-30 16:31:21 +08:00
5ecc006805 add listening status for variable panel 2025-09-30 15:18:07 +08:00
15413108f0 chore: remove unused empty enums.py file 2025-09-30 13:52:33 +08:00
831c888b84 feat: sort output variables by table display order in webhook trigger 2025-09-30 12:34:09 +08:00
f0ed09a8d4 feat: add output variables display to webhook trigger node (#26478)
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-30 12:26:42 +08:00
a80f30f9ef add nginx /triggers endpoint 2025-09-30 11:08:14 +08:00
fd2f0df097 move isListening status into useStore 2025-09-30 10:48:38 +08:00
d72a3e1879 fix: translations 2025-09-30 10:01:33 +08:00
4a6903fdb4 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-09-30 08:00:16 +08:00
8106df1d7d fix: types 2025-09-29 20:53:50 +08:00
5e3e6b0bd8 refactor(api): update subscription handling in trigger provider
- Replaced SubscriptionSchema with SubscriptionConstructor in various parts of the trigger provider implementation to streamline subscription management.
- Enhanced the PluginTriggerProviderController to utilize the new subscription constructor for retrieving default properties and credential schemas.
- Removed the deprecated get_provider_subscription_schema method from TriggerManager.
- Updated TriggerSubscriptionBuilderService to reflect changes in subscription handling, ensuring compatibility with the new structure.

These changes improve the clarity and maintainability of the subscription handling within the trigger provider architecture.
2025-09-29 18:28:10 +08:00
a06d2892f8 fix(plugin): handle optional property in llm_description assignment
- Updated the llm_description assignment in the ToolParameter to safely access the en_US property of paramDescription, ensuring it defaults to an empty string if not present. This change improves the robustness of the parameter handling in the plugin detail panel.
2025-09-29 18:28:10 +08:00
e377e90666 feat(api): add CHECKBOX parameter type to plugin and tool entities
- Introduced CHECKBOX as a new parameter type in CommonParameterType and PluginParameterType.
- Updated as_normal_type and cast_parameter_value functions to handle CHECKBOX type.
- Enhanced ToolParameter class to include CHECKBOX for consistency across parameter types.

These changes expand the parameter capabilities within the API, allowing for more versatile input options.
2025-09-29 18:28:10 +08:00
19cc67561b refactor(api): improve error handling in trigger providers
- Removed unnecessary ValueError handling in TriggerSubscriptionBuilderCreateApi and TriggerSubscriptionBuilderBuildApi, allowing for more streamlined exception management.
- Updated TriggerSubscriptionBuilderVerifyApi and TriggerSubscriptionBuilderBuildApi to raise ValueError with the original exception context for better debugging.
- Enhanced trigger_endpoint in trigger.py to log errors and return a JSON response for not found endpoints, improving user feedback and error reporting.

These changes enhance the robustness and clarity of error handling across the API.
2025-09-29 18:28:10 +08:00
92f2ca1866 add listening status in the run panel result 2025-09-29 17:55:53 +08:00
1949074e2f add shortcut for open test run panel 2025-09-29 14:39:44 +08:00
1c0068e95b fix: webhook debug can't be stopped 2025-09-29 13:34:05 +08:00
4b43196295 feat: add specialized trigger icons to workflow logs
- Create TriggerByDisplay component with appropriate colored icons
- Add dedicated Code icon for debugging triggers (blue background)
- Add KnowledgeRetrieval icon for RAG pipeline triggers (green background)
- Use existing webhook, schedule, and plugin icons with proper colors
- Add comprehensive i18n translations for Chinese, Japanese, and English
- Integrate icon display into workflow logs table
- Follow project color standards from block-icon component
2025-09-29 12:53:35 +08:00
2c3cf9a25e Merge remote-tracking branch 'origin/main' into feat/trigger 2025-09-29 12:13:39 +08:00
67fbfc0b8f fix: adjust MoreActions menu position based on sidebar state 2025-09-29 12:13:18 +08:00
6e6198c64e debug webhook node 2025-09-29 09:28:19 +08:00
6b677c16ce refactor: use Tailwind className for MiniMap node colors instead of CSS variables 2025-09-29 08:09:38 +08:00
973b937ba5 feat: add subscription in node 2025-09-28 22:40:31 +08:00
48597ef193 feat: enhance minimap node color handling 2025-09-28 21:11:46 +08:00
ffbc007f82 feat(i18n): add tooltip and placeholder for callback URL in plugin-trigger translations 2025-09-28 20:13:10 +08:00
8fc88f8cbf Merge remote-tracking branch 'origin/main' into feat/trigger 2025-09-28 19:32:33 +08:00
a4b932c78b feat: integrate chat mode detection in ChangeBlock component 2025-09-28 17:10:09 +08:00
2ff4af8ce3 add debug run schedule node 2025-09-28 16:37:37 +08:00
6795015d00 refactor: enhance type definitions and update import paths in form input and trigger components 2025-09-28 15:42:38 +08:00
b100ce15cd refactor: update import paths and remove unused props in block selector components 2025-09-28 15:21:44 +08:00
3edf1e2f59 feat: add checkbox list 2025-09-28 15:12:17 +08:00
4d49db0ff9 Unify SearchBox styles with Input component and add autoFocus 2025-09-28 14:33:27 +08:00
7da22e4915 Add toast notifications to TriggerCard toggle operations 2025-09-28 14:21:51 +08:00
8d4a9df6b1 fix: more button dropdown menu visibility and auto-close behavior 2025-09-28 14:15:33 +08:00
f620e78b20 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-09-28 13:40:46 +08:00
8df80781d9 _node_type has changed to node_type in new version 2025-09-28 09:36:45 +08:00
edec065fee fix can't subtract offset-naive and offset-aware datetimes 2025-09-28 09:10:21 +08:00
0fe529c3aa Merge remote-tracking branch 'origin/main' into feat/trigger 2025-09-27 21:32:38 +08:00
bcfdd07f85 feat(plugin): enhance trigger events list with dynamic tool integration
- Refactor TriggerEventsList component to utilize provider information for dynamic tool rendering.
- Implement locale-aware text handling for trigger descriptions and labels.
- Introduce utility functions for better management of tool parameters and trigger descriptions.
- Improve user experience by ensuring consistent display of trigger events based on available provider data.

This update enhances the functionality and maintainability of the trigger events list, aligning with the project's metadata-driven approach.
2025-09-26 23:27:27 +08:00
a9a118aaf9 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-09-26 22:48:15 +08:00
60c86dd8d1 fix(workflow): replace hardcoded trigger node logic with metadata-driven approach
- Add isStart: true to all trigger nodes (TriggerWebhook, TriggerSchedule, TriggerPlugin)
- Replace hardcoded BlockEnum checks in use-checklist.ts with metadata-driven logic
- Update trigger node tests to validate metadata instead of obsolete methods
- Add webhook URL validation to TriggerWebhook node
- Ensure backward compatibility with existing workflow configurations

This change migrates from hardcoded trigger node identification to a
centralized metadata-driven approach, improving maintainability and
consistency across the workflow system.
2025-09-26 22:35:21 +08:00
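60c86dd8d1 replaces hardcoded BlockEnum checks with an isStart flag on each node type's metadata; a condensed sketch of that shape (enum members trimmed for brevity):

```ts
enum BlockEnum {
  Start = 'start',
  TriggerWebhook = 'trigger-webhook',
  TriggerSchedule = 'trigger-schedule',
  TriggerPlugin = 'trigger-plugin',
  LLM = 'llm',
}

const nodeMetadata: Record<BlockEnum, { isStart: boolean }> = {
  [BlockEnum.Start]: { isStart: true },
  [BlockEnum.TriggerWebhook]: { isStart: true },
  [BlockEnum.TriggerSchedule]: { isStart: true },
  [BlockEnum.TriggerPlugin]: { isStart: true },
  [BlockEnum.LLM]: { isStart: false },
}

// One lookup replaces scattered `type === BlockEnum.Start || type === ...` chains.
const isEntryNodeType = (type: BlockEnum) => nodeMetadata[type].isStart
```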
8feef2c1a9 refactor: app publisher 2025-09-26 22:06:05 +08:00
4ba99db88c feat: Restore complete test run functionality and fix workflow block selector system
This comprehensive restore includes:

## Test Run System Restoration
- Restore test-run-menu.tsx component with multi-trigger support and keyboard shortcuts
- Restore use-dynamic-test-run-options.tsx hook for dynamic trigger option generation
- Restore workflow-entry.ts utilities for entry node detection and validation
- Integrate complete test run functionality back into run-mode.tsx

## Block Selector System Fixes
- Fix workflow block selector constants by uncommenting BLOCKS and START_BLOCKS arrays
- Restore proper i18n translations for trigger node descriptions using workflow.blocksAbout keys
- Filter trigger types from Blocks tab to prevent duplication with Start tab
- Fix trigger node handle display to match start node behavior (hide left input handles)

## Workflow Validation System Improvements
- Restore unified workflow validation using correct getValidTreeNodes(nodes, edges) signature
- Remove duplicate Start node validation from isRequired mechanism
- Eliminate "user input must be added" validation error by setting Start node isRequired: false
- Fix end node connectivity validation to properly detect valid workflow chains

## Component Integration
- Verify all dependencies exist (TriggerAll icon, useAllTriggerPlugins hook)
- Maintain keyboard shortcut integration (Alt+R, ~, 0-9 keys)
- Preserve portal-based dropdown positioning and tooltip structure
- Support multiple trigger types: user_input, schedule, webhook, plugin, all

This restores the complete test run functionality that was missing from feat/trigger branch
by systematically analyzing and restoring components from feat/trigger-backup-before-merge.
2025-09-26 21:34:08 +08:00
b4801adfbd refactor(workflow): Remove Start node from isRequired mechanism
- Set Start node isRequired: false since entry node validation is handled by unified logic
- Remove conditional skip logic in checklist since Start is no longer in isRequiredNodesType
- Cleaner separation of concerns: unified entry node check vs individual required nodes
- Eliminates architectural inconsistency where Start was both individually required and part of group validation
2025-09-26 21:09:48 +08:00
08e8f8676e fix(workflow): Remove duplicate Start node validation
- Skip Start node requirement in isRequiredNodesType loop since it's already covered by unified entry node validation
- Eliminates duplicate 'User Input must be added' error when trigger nodes are present
- Both useChecklist and useChecklistBeforePublish now consistently handle entry node validation
- Resolves UI showing redundant validation errors for Start vs Trigger nodes
2025-09-26 21:08:21 +08:00
2dca0c20db fix: restore unified workflow validation system
Major fixes to workflow checklist validation:

## Fixed getValidTreeNodes function (workflow.ts)
- Restore original function signature: (nodes, edges) instead of (startNode, nodes, edges)
- Re-implement automatic start node discovery for all entry types
- Unified traversal from Start, TriggerWebhook, TriggerSchedule, TriggerPlugin nodes
- Single call now discovers all valid connected nodes correctly

## Simplified useChecklist validation (use-checklist.ts)
- Remove complex manual start node iteration and result aggregation
- Unified entry node validation concept for all start node types
- Remove dependency on getStartNodes() utility
- Simplified validation logic matching backup branch approach

## Resolved Issues
- End node connectivity: Now correctly detects connections from any entry node
- Unified entry validation: All start types (Start/Triggers) validated consistently
- Simplified architecture: Restored proven validation approach from backup branch

This restores the reliable workflow validation system while maintaining trigger node support.
2025-09-26 20:54:28 +08:00
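The restored utility is TypeScript (workflow.ts); as a language-neutral illustration, a Python sketch of the unified `(nodes, edges)` traversal, with node/edge shapes and type strings assumed:

```python
from collections import deque

# Entry-node type strings are assumptions based on the node names in the
# commits above (Start, TriggerWebhook, TriggerSchedule, TriggerPlugin).
ENTRY_TYPES = {"start", "trigger-webhook", "trigger-schedule", "trigger-plugin"}

def get_valid_tree_nodes(nodes: list[dict], edges: list[dict]) -> list[dict]:
    # Restored signature: (nodes, edges). Discover every entry node, then
    # walk outward once to collect all reachable (valid) nodes.
    outgoing: dict[str, list[str]] = {}
    for edge in edges:
        outgoing.setdefault(edge["source"], []).append(edge["target"])
    by_id = {node["id"]: node for node in nodes}
    queue = deque(n["id"] for n in nodes if n["data"]["type"] in ENTRY_TYPES)
    seen = set(queue)
    while queue:
        node_id = queue.popleft()
        for target in outgoing.get(node_id, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return [by_id[node_id] for node_id in seen]
```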
6f57aa3f53 fix: hide left input handles for all trigger node types
- Extend handle hiding logic to include TriggerWebhook, TriggerSchedule, TriggerPlugin
- Make trigger nodes behave like Start nodes without left-side input handles
- Apply fix to both main workflow and preview node handle components
- Ensures consistent UX where all start-type nodes have no input handles
2025-09-26 20:39:29 +08:00
1aafe915e4 fix: trigger tooltip descriptions and filter trigger types from Nodes tab
- Fix trigger tooltip descriptions to use workflow.blocksAbout translations
- Filter TriggerWebhook/TriggerSchedule/TriggerPlugin from Blocks component
- Ensure trigger types only appear in Start tab, not Nodes tab
2025-09-26 20:28:59 +08:00
6d4d25ee6f feat(workflow): Restore block selector functionality
- Restore BLOCKS constant array and useBlocks hook
- Add intelligent fallback mechanism for blocks prop
- Fix metadata access in StartBlocks tooltip
- Restore defaultActiveTab support in NodeSelector
- Improve component robustness with graceful degradation
- Fix TypeScript errors and component interfaces

Phase 1-3 of atomic refactoring complete:
- Critical fixes: Constants, hooks, components
- Interface fixes: Props, tabs, modal integration
- Architecture improvements: Metadata, wrappers
2025-09-26 20:05:59 +08:00
6b94d30a5f fix: oauth subscription 2025-09-26 17:44:57 +08:00
1a9798c559 fix(workflow): Fix onboarding node creation after knowledge pipeline refactor (#26289) 2025-09-26 16:43:36 +08:00
764436ed8e feat(workflow): Enable keyboard delete for all node types including Start
Removes explicit Start node exclusion from handleNodesDelete function:
- Remove BlockEnum.Start filter from bundled nodes selection
- Remove BlockEnum.Start filter from selected node detection
- Allows DEL/Backspace keys to delete Start nodes same as other nodes
- Button delete already worked, now keyboard delete works too

Fixes: Start nodes can now be deleted via both button and keyboard shortcuts

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-26 14:10:28 +08:00
2a1c5ff57b feat(workflow): Enable entry node deletion and fix draft sync
Complete workflow liberalization following PR #24627:

1. Remove Start node deletion restriction by removing isUndeletable property
2. Fix draft sync blocking when no Start node exists
3. Restore isWorkflowDataLoaded protection to prevent race conditions
4. Ensure all entry nodes (Start + 3 trigger types) have equal deletion rights

This allows workflows with only trigger nodes and fixes the issue where
added nodes would disappear after page refresh due to sync API blocking.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-26 13:54:52 +08:00
cc4ba1a3a9 chore: add settings.local.json to .gitignore 2025-09-26 13:26:51 +08:00
d68a9f1850 Merge remote-tracking branch 'origin/main' into feat/trigger
Resolve merge conflict in use-workflow.ts:
- Keep trigger branch workflow-entry utilities imports
- Preserve SUPPORT_OUTPUT_VARS_NODE from main branch
- Remove unused PARALLEL_DEPTH_LIMIT import

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-26 13:17:14 +08:00
4f460160d2 refactor(api): reorganize migration files 2025-09-26 12:31:44 +08:00
d5ff89f6d3 refactor(api): enhance request handling and time management
- Initialized `response` variable in `trigger.py` to ensure proper handling in the trigger endpoint.
- Updated `http_parser.py` to conditionally set `CONTENT_TYPE` and `CONTENT_LENGTH` headers for improved robustness.
- Changed `datetime.utcnow()` to `datetime.now(UTC)` in `sqlalchemy_workflow_trigger_log_repository.py` and `rate_limiter.py` for consistent time zone handling.
- Refactored `async_workflow_service.py` to use the public method `get_tenant_owner_timezone` for better encapsulation.
- Simplified subscription retrieval logic in `plugin_parameter_service.py` for clarity.

These changes improve code reliability and maintainability while ensuring accurate time management and request processing.
2025-09-25 19:46:52 +08:00
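For the timezone change above: `datetime.utcnow()` returns a naive datetime (and is deprecated since Python 3.12), while `datetime.now(UTC)` is timezone-aware, so comparisons and arithmetic against other aware values behave consistently. A minimal illustration:

```python
from datetime import UTC, datetime  # the UTC alias requires Python 3.11+

naive = datetime.utcnow()   # no tzinfo; deprecated since Python 3.12
aware = datetime.now(UTC)   # carries tzinfo, safe to compare with other
                            # timezone-aware values

assert naive.tzinfo is None
assert aware.tzinfo is not None
# Comparing the two directly (naive < aware) raises TypeError, which is
# exactly the class of bug consistent tz-aware handling avoids.
```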
452588dded refactor(api): fix pyright check
- Replaced `is_editor` checks with `has_edit_permission` in `workflow_trigger.py` and `workflow.py` to enhance clarity and consistency in permission handling.
- Updated the rate limiter to use `datetime.now(UTC)` instead of `datetime.utcnow()` for accurate time handling.
- Added `__all__` declaration in `trigger/__init__.py` for better module export management.
- Initialized `debug_dispatched` variable in `trigger_processing_tasks.py` to ensure proper tracking during workflow dispatching.

These changes improve code readability and maintainability while ensuring correct permission checks and time management.
2025-09-25 18:32:22 +08:00
aef862d9ce refactor(api): remove unused PluginTriggerApi route
- Removed the `PluginTriggerApi` resource route from `workflow_trigger.py` to streamline the API and improve maintainability. This change contributes to a cleaner and more organized codebase.
2025-09-25 18:23:17 +08:00
896f3252b8 refactor(api): refactor all
- Replaced direct imports of `TriggerProviderID` and `ToolProviderID` from `core.plugin.entities.plugin` with imports from `models.provider_ids` for better organization.
- Refactored workflow node classes to inherit from a unified `Node` class, improving consistency and maintainability.
- Removed unused code and comments to clean up the implementation, particularly in the `workflow_trigger.py` and `builtin_tools_manage_service.py` files.

These changes enhance the clarity and structure of the codebase, facilitating easier future modifications.
2025-09-25 18:22:30 +08:00
6853a699e1 Merge branch 'main' into feat/trigger 2025-09-25 17:43:39 +08:00
cd07eef639 Merge remote-tracking branch 'origin/main' into feat/trigger 2025-09-25 17:14:24 +08:00
ef9a741781 feat(trigger): enhance trigger management with new error handling and response structure
- Added `TriggerInvokeError` and `TriggerIgnoreEventError` for better error categorization during trigger invocation.
- Updated `TriggerInvokeResponse` to include a `cancelled` field, indicating if a trigger was ignored.
- Enhanced `TriggerManager` to handle specific errors and return appropriate responses.
- Refactored `dispatch_triggered_workflows` to improve workflow execution logic and error handling.

These changes improve the robustness and clarity of the trigger management system.
2025-09-23 16:01:59 +08:00
c5de91ba94 refactor(trigger): update cache expiration constants and log key format
- Renamed validation-related constants to builder-related ones for clarity.
- Updated cache expiration from milliseconds to seconds for consistency.
- Adjusted log key format to reflect the builder context instead of validation.

These changes enhance the readability and maintainability of the TriggerSubscriptionBuilderService.
2025-09-22 13:37:46 +08:00
bc1e6e011b fix(trigger): update cache key format in TriggerSubscriptionBuilderService
- Changed the cache key format in the `encode_cache_key` method from `trigger:subscription:validation:{subscription_id}` to `trigger:subscription:builder:{subscription_id}` to better reflect its purpose.

This update improves clarity in cache key usage for trigger subscriptions.
2025-09-22 13:37:46 +08:00
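The two entries above both touch the subscription-builder cache (key prefix renamed from `validation` to `builder`, expiry expressed in seconds). A minimal redis-py sketch of the combined result; the constant name and expiry value are assumptions:

```python
import redis

SUBSCRIPTION_BUILDER_CACHE_EXPIRE_SECONDS = 5 * 60  # assumed value

def encode_cache_key(subscription_id: str) -> str:
    # Previously: f"trigger:subscription:validation:{subscription_id}"
    return f"trigger:subscription:builder:{subscription_id}"

def cache_builder(r: redis.Redis, subscription_id: str, payload: str) -> None:
    # redis-py's `ex` is in seconds; a millisecond value passed here would
    # keep entries roughly 1000x longer than intended.
    r.set(encode_cache_key(subscription_id), payload,
          ex=SUBSCRIPTION_BUILDER_CACHE_EXPIRE_SECONDS)
```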
906028b1fb fix: start node validation 2025-09-22 12:58:20 +08:00
034602969f feat(schedule-trigger): enhance cron parser with mature library and comprehensive testing (#26002) 2025-09-22 10:01:48 +08:00
4ca14bfdad chore: improve webhook (#25998) 2025-09-21 12:16:31 +08:00
59f56d8c94 feat: schedule trigger default daily midnight (#25937) 2025-09-19 08:05:00 +08:00
63d26f0478 fix: api key params 2025-09-18 17:35:34 +08:00
eae65e55ce feat: oauth config opt & add dynamic options 2025-09-18 17:12:48 +08:00
0edf06329f fix: apply suggestions 2025-09-18 17:04:02 +08:00
6943a379c9 feat: show placeholder '--' for invalid cron expressions in node display
- Return '--' placeholder when cron mode has empty or invalid expressions
- Prevents displaying fallback dates that confuse users
- Maintains consistent UX for invalid schedule configurations
2025-09-18 17:04:02 +08:00
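The display logic itself is frontend TypeScript; the guard can be sketched with a croniter-style check (croniter is an assumption here, though the schedule-trigger work in this branch mentions adopting a mature cron library):

```python
from datetime import datetime

from croniter import croniter

def next_run_display(expression: str) -> str:
    # Return the '--' placeholder instead of a misleading fallback date
    # when the cron expression is empty or unparsable.
    if not expression or not croniter.is_valid(expression):
        return "--"
    next_run = croniter(expression, datetime.now()).get_next(datetime)
    return next_run.isoformat(sep=" ", timespec="minutes")
```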
e49534b70c fix: make frequency optional 2025-09-18 17:04:02 +08:00
344616ca2f fix: clear opposite mode data only when editing, preserve data during mode switching 2025-09-18 17:04:02 +08:00
0e287a9c93 chore: add missing translations 2025-09-18 13:25:57 +08:00
8141f53af5 fix: add preventDefaultSubmit prop to BaseForm to prevent unwanted page refresh on Enter key 2025-09-18 12:48:26 +08:00
5a6cb0d887 feat: enhance API key modal step indicator with active dots and improved styling 2025-09-18 12:44:11 +08:00
26e7677595 fix: align width and use rounded xl 2025-09-18 12:08:21 +08:00
814b0e1fe8 feat: oauth config init 2025-09-18 00:00:50 +08:00
a173dc5c9d feat(provider): add multiple option support in ProviderConfig
- Introduced a new field `multiple` in the `ProviderConfig` class to allow for multiple selections, enhancing the configuration capabilities for providers.
- This addition improves flexibility in provider settings and aligns with the evolving requirements for provider configurations.
2025-09-17 22:12:01 +08:00
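A trimmed pydantic sketch of the new field; the surrounding fields and defaults are assumptions:

```python
from pydantic import BaseModel

class ProviderConfig(BaseModel):
    # Trimmed sketch: only the new field comes from the commit; the rest
    # of the model (and its defaults) is assumed.
    name: str
    type: str = "select"
    multiple: bool = False  # new: allows multi-select values
```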
a567facf2b refactor(trigger): streamline encrypter creation in TriggerProviderService
- Replaced calls to `create_trigger_provider_encrypter` and `create_trigger_provider_oauth_encrypter` with a unified `create_provider_encrypter` method, simplifying the encrypter creation process.
- Updated the parameters passed to the new method to enhance configuration management and cache handling.

These changes improve code clarity and maintainability in the trigger provider service.
2025-09-17 21:47:11 +08:00
e76d80defe fix(trigger): update client parameter handling in TriggerProviderService
- Modified the `create_provider_encrypter` call to include a cache assignment, ensuring proper management of encryption resources.
- Added a cache deletion step after updating client parameters, enhancing the integrity of the parameter handling process.

These changes improve the reliability of client parameter updates within the trigger provider service.
2025-09-17 20:57:52 +08:00
4a17025467 fix(trigger): update session management in TriggerProviderService
- Changed session management in `TriggerProviderService` from `autoflush=True` to `expire_on_commit=False` for improved control over session state.
- This change enhances the reliability of database interactions by preventing automatic expiration of objects after commit, ensuring data consistency during trigger operations.

These updates contribute to better session handling and stability in trigger-related functionalities.
2025-09-16 18:01:44 +08:00
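A self-contained SQLAlchemy 2.0 sketch of why `expire_on_commit=False` matters here (the model is a stand-in, not Dify's):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class TriggerSubscriptionDemo(Base):  # stand-in model, not Dify's
    __tablename__ = "trigger_subscription_demo"
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]

engine = create_engine("sqlite://")  # in-memory DB for the sketch
Base.metadata.create_all(engine)

# expire_on_commit=False keeps attribute values loaded after commit(), so
# objects stay readable after the transaction without a refresh query (or
# a detached-instance error once the session is closed).
with Session(engine, expire_on_commit=False) as session:
    sub = TriggerSubscriptionDemo(name="demo")
    session.add(sub)
    session.commit()

print(sub.name)  # safe: the instance was not expired on commit
```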
bd1fcd3525 feat(trigger): add TriggerProviderInfoApi and enhance trigger provider service
- Introduced `TriggerProviderInfoApi` to retrieve information for a specific trigger provider, improving API capabilities.
- Added `get_trigger_provider` method in `TriggerProviderService` to fetch trigger provider details, enhancing data retrieval.
- Updated route configurations to include the new API endpoint for trigger provider information.

These changes enhance the functionality and usability of trigger provider interactions within the application.
2025-09-16 17:03:52 +08:00
0cb0cea167 feat(trigger): enhance trigger plugin data structure and error handling
- Added `plugin_unique_identifier` to `PluginTriggerData` and `TriggerProviderApiEntity` to improve identification of trigger plugins.
- Introduced `PluginTriggerDispatchData` for structured dispatch data in Celery tasks, enhancing the clarity of trigger dispatching.
- Updated `dispatch_triggered_workflows_async` to utilize the new dispatch data structure, improving error handling and logging for trigger invocations.
- Enhanced metadata handling in `TriggerPluginNode` to include trigger information, aiding in debugging and tracking.

These changes improve the robustness and maintainability of trigger plugin interactions within the workflow system.
2025-09-16 15:39:40 +08:00
ee68a685a7 fix(workflow): enforce non-nullable arguments in DraftWorkflowTriggerRunApi
- Updated the argument definitions in the DraftWorkflowTriggerRunApi to include `nullable=False` for `node_id`, `trigger_name`, and `subscription_id`. This change ensures that these fields are always provided in the request, improving the robustness of the API.

This fix enhances input validation and prevents potential errors related to missing arguments.
2025-09-16 11:25:16 +08:00
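A sketch of the stricter reqparse definitions; the field names come from the commit, while the `required`/`location` settings are assumptions:

```python
from flask_restful import reqparse

parser = reqparse.RequestParser()
parser.add_argument("node_id", type=str, required=True, nullable=False, location="json")
parser.add_argument("trigger_name", type=str, required=True, nullable=False, location="json")
parser.add_argument("subscription_id", type=str, required=True, nullable=False, location="json")

# Inside the Resource method, parse_args() now rejects any request where
# one of these fields is missing or explicitly null, returning a 400.
```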
c78bd492af feat(trigger): add supported creation methods to TriggerProviderApiEntity
- Introduced a new field `supported_creation_methods` in `TriggerProviderApiEntity` to specify the available methods for creating triggers, including OAUTH, APIKEY, and MANUAL.
- Updated the `PluginTriggerProviderController` to populate this field based on the entity's schemas, enhancing the API's clarity and usability.

These changes improve the flexibility and configurability of trigger providers within the application.
2025-09-15 17:01:29 +08:00
6857bb4406 feat(trigger): implement plugin trigger synchronization and subscription management in workflow
- Added a new event handler for syncing plugin trigger relationships when a draft workflow is synced, ensuring that the database reflects the current state of plugin triggers.
- Introduced subscription management features in the frontend, allowing users to select, add, and remove subscriptions for trigger plugins.
- Updated various components to support subscription handling, including the addition of new UI elements for subscription selection and removal.
- Enhanced internationalization support by adding new translation keys related to subscription management.

These changes improve the overall functionality and user experience of trigger plugins within workflows.
2025-09-15 15:49:07 +08:00
dcf3ee6982 fix(trigger): update trigger label assignment for improved clarity
- Changed the label assignment in the convertToTriggerWithProvider function from trigger.description.human to trigger.identity.label, ensuring the label reflects the correct identity format.

This update enhances the accuracy of trigger data representation in the application.
2025-09-15 14:50:56 +08:00
76850749e4 feat(trigger): enhance trigger debugging with polling API and new subscription retrieval
- Refactored DraftWorkflowTriggerNodeApi and DraftWorkflowTriggerRunApi to implement polling for trigger events instead of listening, improving responsiveness and reliability.
- Introduced TriggerSubscriptionBuilderGetApi to retrieve subscription instances for trigger providers, enhancing the API's capabilities.
- Removed deprecated trigger event classes and streamlined event handling in TriggerDebugService, ensuring a cleaner architecture.
- Updated Queue and Stream entities to reflect the changes in trigger event handling, improving overall clarity and maintainability.

These enhancements significantly improve the trigger debugging experience and API usability.
2025-09-14 19:12:31 +08:00
91e5e33440 feat: add modal style opt 2025-09-12 20:22:33 +08:00
11e55088c9 fix: restore id prop passing to node children in BaseNode (#25520) 2025-09-11 17:54:31 +08:00
57c0bc9fb6 feat(trigger): refactor trigger debug event handling and improve response structures
- Renamed and refactored trigger debug event classes to enhance clarity and consistency, including changes from `TriggerDebugEventData` to `TriggerEventData` and related response classes.
- Updated `DraftWorkflowTriggerNodeApi` and `DraftWorkflowTriggerRunApi` to utilize the new event structures, improving the handling of trigger events.
- Removed the `TriggerDebugEventGenerator` class, consolidating event generation directly within the API logic for streamlined processing.
- Enhanced error handling and response formatting for trigger events, ensuring structured outputs for better integration and debugging.

This refactor improves the overall architecture of trigger debugging, making it more intuitive and maintainable.
2025-09-11 16:55:58 +08:00
c3ebb22a4b feat(trigger): add workflows_in_use field to TriggerProviderSubscriptionApiEntity
- Introduced a new field `workflows_in_use` to the TriggerProviderSubscriptionApiEntity to track the number of workflows utilizing each subscription.
- Enhanced the TriggerProviderService to populate this field by querying the WorkflowPluginTrigger model for usage counts associated with each subscription.

This addition improves the visibility of subscription usage within the trigger provider context.
2025-09-11 16:55:58 +08:00
1562d00037 feat(trigger): implement trigger debugging functionality
- Added DraftWorkflowTriggerNodeApi and DraftWorkflowTriggerRunApi for debugging trigger nodes and workflows.
- Enhanced TriggerDebugService to manage trigger debugging sessions and event listening.
- Introduced structured event responses for trigger debugging, including listening started, received, node finished, and workflow started events.
- Updated Queue and Stream entities to support new trigger debug events.
- Refactored trigger input handling to streamline the process of creating inputs from trigger data.

This implementation improves the debugging capabilities for trigger nodes and workflows, providing clearer event handling and structured responses.
2025-09-11 16:55:58 +08:00
e9e843b27d fix(tool): standardize tool naming across components
- Updated references from `trigger_name` to `tool_name` in multiple components for consistency.
- Adjusted type definitions to reflect the change in naming convention, enhancing clarity in the codebase.
2025-09-11 16:55:57 +08:00
ec33b9908e fix(trigger): improve formatting of OAuth client response in TriggerOAuthClientManageApi
- Refactored the return statement in the TriggerOAuthClientManageApi to enhance readability and maintainability.
- Ensured consistent formatting of the response structure for better clarity in API responses.
2025-09-11 16:55:57 +08:00
67004368d9 feat: sub card style 2025-09-11 16:22:59 +08:00
94ecbd44e4 feat: add API endpoint to extract plugin assets 2025-09-11 14:48:42 +08:00
ba76312248 feat: adapt to plugin_daemon endpoint 2025-09-11 14:46:12 +08:00
50bff270b6 feat: add subscription 2025-09-10 23:21:33 +08:00
bd5cf1c272 fix(trigger): enhance OAuth client response in TriggerOAuthClientManageApi
- Integrated TriggerManager to retrieve the trigger provider's OAuth client schema.
- Updated the return structure to include the redirect URI and OAuth client schema for improved API response clarity.
2025-09-10 17:35:30 +08:00
d22404994a chore: add comments on generate_webhook_id 2025-09-10 17:23:29 +08:00
9898730cc5 feat: add webhook node limit validation (max 5 per workflow)
- Add MAX_WEBHOOK_NODES_PER_WORKFLOW constant set to 5
- Validate webhook node count in sync_webhook_relationships method
- Raise ValueError when workflow exceeds webhook node limit
- Block workflow save when limit is exceeded to ensure data integrity
- Provide clear error message indicating current count and maximum allowed

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-10 17:22:09 +08:00
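A minimal sketch of the limit check; the node shape and the "trigger-webhook" type string are assumptions, and the real check lives in sync_webhook_relationships:

```python
MAX_WEBHOOK_NODES_PER_WORKFLOW = 5

def validate_webhook_node_count(nodes: list[dict]) -> None:
    count = sum(
        1 for node in nodes
        if node.get("data", {}).get("type") == "trigger-webhook"
    )
    if count > MAX_WEBHOOK_NODES_PER_WORKFLOW:
        # Blocks the workflow save with a clear current/maximum message.
        raise ValueError(
            f"Workflow contains {count} webhook nodes; "
            f"the maximum allowed is {MAX_WEBHOOK_NODES_PER_WORKFLOW}."
        )
```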
b0f1e55a87 refactor: remove triggered_by field from webhook triggers and use automatic sync
- Remove triggered_by field from WorkflowWebhookTrigger model
- Replace manual webhook creation/deletion APIs with automatic sync via WebhookService
- Keep only GET API for retrieving webhook information
- Use same webhook ID for both debug and production environments (differentiated by endpoint)
- Add sync_webhook_relationships to automatically manage webhook lifecycle
- Update tests to remove triggered_by references
- Clean up unused imports and fix type checking issues

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-10 17:17:19 +08:00
6566824807 fix(trigger): update return type in TriggerSubscriptionBuilderService
- Changed the return type of the method in `TriggerSubscriptionBuilderService` from `SubscriptionBuilder` to `SubscriptionBuilderApiEntity` for improved clarity and alignment with API entity structures.
- Updated the return statement to utilize the new method for converting the builder to the API entity.
2025-09-10 15:48:32 +08:00
9249a2af0d fix(trigger): update event data publishing in TriggerDebugService
- Changed the event data publishing method in `TriggerDebugService` to use `model_dump()` for improved data structure handling when publishing to Redis Pub/Sub.
2025-09-10 15:48:32 +08:00
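A minimal pydantic + redis-py sketch of the publishing change; the model fields and channel handling are assumptions:

```python
import json

import redis
from pydantic import BaseModel

class TriggerEventData(BaseModel):  # field names are assumptions
    node_id: str
    payload: dict

def publish_debug_event(r: redis.Redis, channel: str, event: TriggerEventData) -> None:
    # model_dump() turns the model into a plain dict, which serializes
    # cleanly before being pushed onto the Pub/Sub channel.
    r.publish(channel, json.dumps(event.model_dump()))
```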
112fc3b1d1 fix: clear schedule config when exporting data 2025-09-10 13:50:37 +08:00
37299b3bd7 fix: rename migration 2025-09-10 13:41:50 +08:00
8f65ce995a fix: migrations 2025-09-10 13:38:34 +08:00
4a743e6dc1 feat: add workflow schedule trigger support (#24428)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-09-10 13:24:23 +08:00
07dda61929 fix/tooltip and onboarding ui (#25451) 2025-09-10 10:40:14 +08:00
0d8438ef40 fix(trigger): add 'trigger' category key to plugin constants to avoid errors 2025-09-10 10:34:33 +08:00
96bb638969 fix: limits 2025-09-09 23:32:51 +08:00
e74962272e fix: only workflow apps can use the trigger api (#25443) 2025-09-09 23:14:10 +08:00
5a15419baf feat(trigger): implement debug session capabilities for trigger nodes
- Added `DraftWorkflowTriggerNodeApi` to handle debugging of trigger nodes, allowing for real-time event listening and session management.
- Introduced `TriggerDebugService` for managing debug sessions and event dispatching using Redis Pub/Sub.
- Updated `TriggerService` to support dispatching events to debug sessions and refactored related methods for improved clarity and functionality.
- Enhanced data structures in `request.py` and `entities.py` to accommodate new debug event data requirements.

These changes significantly improve the debugging capabilities for trigger nodes in draft workflows, facilitating better development and troubleshooting processes.
2025-09-09 21:27:31 +08:00
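The listening side of such a Redis Pub/Sub debug session can be sketched as a generator; the channel naming and message framing are assumptions:

```python
import redis

def listen_debug_events(r: redis.Redis, session_channel: str):
    # Yields raw event payloads published to the debug session's channel.
    pubsub = r.pubsub()
    pubsub.subscribe(session_channel)
    try:
        for message in pubsub.listen():
            # listen() also yields subscribe/unsubscribe confirmations;
            # only 'message' entries carry published event data.
            if message["type"] == "message":
                yield message["data"]
    finally:
        pubsub.unsubscribe(session_channel)
        pubsub.close()
```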
e8403977b9 feat(plugin): add triggers field to PluginDeclaration for enhanced functionality
- Introduced a new `triggers` field in the `PluginDeclaration` class to support trigger functionalities within plugins.
- This addition improves the integration of triggers in the plugin architecture, aligning with recent updates to the trigger entity structures.

These changes enhance the overall capabilities of the plugin system.
2025-09-09 17:22:11 +08:00
add2ca85f2 refactor(trigger): update plugin and trigger entity structures
- Removed unnecessary newline in `TriggerPluginNode` class for consistency.
- Made `provider` in `TriggerIdentity` optional to enhance flexibility.
- Added `trigger` field to `PluginDeclaration` and updated `PluginCategory` to include `Trigger`, improving the integration of trigger functionalities within the plugin architecture.

These changes streamline the entity definitions and enhance the overall structure of the trigger and plugin components.
2025-09-09 17:16:44 +08:00
fbb7b02e90 fix(webhook): prevent SimpleSelect from resetting user selections (#25423)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2025-09-09 17:11:11 +08:00
249b62c9de fix: workflow header (#25411) 2025-09-09 15:34:15 +08:00
b433322e8d feat/trigger plugin apikey (#25388) 2025-09-09 15:01:06 +08:00
1c8850fc95 feat: adjust scroll to selected node position to top-left area (#25403) 2025-09-09 14:58:42 +08:00
dc16f1b65a refactor(trigger): simplify provider path handling in workflow components
- Updated various components to directly use `provider.name` instead of constructing a path with `provider.plugin_id` and `provider.name`.
- Adjusted related calls to `invalidateSubscriptions` and other functions to reflect this change.

These modifications enhance code clarity and streamline the handling of provider information in the trigger plugin components.
2025-09-09 00:17:20 +08:00
ff30395dc1 fix(OAuthClientConfigModal): simplify provider path handling in OAuth configuration
- Updated the provider path handling in `OAuthClientConfigModal` to directly use `provider.name` instead of constructing a path with `provider.plugin_id` and `provider.name`.
- Adjusted the corresponding calls to `invalidateOAuthConfig` and `configureTriggerOAuth` to reflect this change.

These modifications enhance code clarity and streamline the OAuth configuration process in the trigger plugin component.
2025-09-09 00:10:04 +08:00
8e600f3302 feat(trigger): optimize trigger parameter schema handling in useConfig
- Refactored the trigger parameter schema construction in `useConfig` to utilize a Map for improved efficiency and clarity.
- Updated the return value to ensure unique schema entries, enhancing the integrity of the trigger configuration.

These changes streamline the management of trigger parameters, improving performance and maintainability in the workflow component.
2025-09-08 23:39:44 +08:00
5a1e0a8379 feat(FormInputItem): enhance UI components for improved user experience
- Added loading indicator using `RiLoader4Line` to `FormInputItem` for better feedback during option fetching.
- Refactored button and option styles for improved accessibility and visual consistency.
- Updated text color classes to enhance readability based on loading state and selection.

These changes improve the overall user experience and visual clarity of the form input components.
2025-09-08 23:19:33 +08:00
2a3ce6baa9 feat(trigger): enhance plugin and trigger integration with updated naming conventions
- Refactored `PluginFetchDynamicSelectOptionsApi` to replace the `extra` argument with `credential_id`, improving clarity in dynamic option fetching.
- Updated `ProviderConfigEncrypter` to rename `mask_tool_credentials` to `mask_credentials` for consistency, and added a new method to maintain backward compatibility.
- Enhanced `PluginParameterService` to utilize `credential_id` for fetching subscriptions, improving the handling of trigger credentials.
- Adjusted various components and types in the frontend to replace `tool_name` with `trigger_name`, ensuring consistency across the application.
- Introduced `multiple` property in `TriggerParameter` to support multi-select functionality.

These changes improve the integration of triggers and plugins, enhance code clarity, and align naming conventions across the codebase.
2025-09-08 23:14:50 +08:00
01b2f9cff6 feat: add providerType prop to form components for dynamic behavior
- Introduced `providerType` prop in `FormInputItem`, `ToolForm`, and `ToolFormItem` components to support both 'tool' and 'trigger' types, enhancing flexibility in handling different provider scenarios.
- Updated the `useFetchDynamicOptions` function to accept `provider_type` as 'tool' | 'trigger', allowing for more dynamic option fetching based on the provider type.

These changes improve the adaptability of the form components and streamline the integration of different provider types in the workflow.
2025-09-08 18:29:48 +08:00
ac38614171 refactor(trigger): streamline trigger provider verification and update imports
- Updated `TriggerSubscriptionBuilderVerifyApi` to directly return the result of `verify_trigger_subscription_builder`, improving clarity.
- Refactored import statement in `trigger_plugin/__init__.py` to point to the correct module, enhancing code organization.
- Removed the obsolete `node.py` file, cleaning up the codebase by eliminating unused components.

These changes enhance the maintainability and clarity of the trigger provider functionality.
2025-09-08 18:25:04 +08:00
eb95c5cd07 feat(trigger): enhance subscription builder management and update API
- Introduced `SubscriptionBuilderUpdater` class to streamline updates to subscription builders, encapsulating properties like name, parameters, and credentials.
- Refactored API endpoints to utilize the new updater class, improving code clarity and maintainability.
- Adjusted OAuth handling to create and update subscription builders more effectively, ensuring proper credential management.

This change enhances the overall functionality and organization of the trigger subscription builder API.
2025-09-08 15:09:47 +08:00
a799b54b9e feat: initialize trigger status at application level to prevent canvas refresh state issues (#25329) 2025-09-08 09:34:28 +08:00
98ba0236e6 feat: implement trigger plugin authentication UI (#25310) 2025-09-07 21:53:22 +08:00
b6c552df07 fix: add stable sorting for trigger list to prevent position changes (#25328) 2025-09-07 21:52:41 +08:00
e2827e475d feat: implement trigger-plugin support with real-time status sync (#25326) 2025-09-07 21:29:53 +08:00
58cbd337b5 fix: improve test run menu and checklist ui (#25300) 2025-09-06 22:54:36 +08:00
a91e59d544 feat: implement trigger plugin frontend integration (#25283) 2025-09-06 16:18:46 +08:00
814787677a feat(trigger): update plugin trigger API and model to use trigger_name
- Modified `PluginTriggerApi` to accept `trigger_name` as a JSON argument and return encoded plugin triggers.
- Updated `WorkflowPluginTrigger` model to replace `trigger_id` with `trigger_name` for better clarity.
- Adjusted `WorkflowPluginTriggerService` to handle the new `trigger_name` field and ensure proper error handling for subscriptions.
- Enhanced `workflow_trigger_fields` to include `trigger_name` in the plugin trigger schema.

This change improves the API's clarity and aligns the model with the updated naming conventions.
2025-09-05 15:56:13 +08:00
85caa5bd0c fix(trigger): clean up whitespace in encryption utility and trigger provider service
- Removed unnecessary blank lines in `encryption.py` and `trigger_provider_service.py` for improved code readability.
- This minor adjustment enhances the overall code quality without altering functionality.

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-09-05 15:56:13 +08:00
e04083fc0e feat: add icon support for trigger plugin workflow nodes (#25241) 2025-09-05 15:50:54 +08:00
cf532e5e0d feat(trigger): add context caching for trigger providers
- Add plugin_trigger_providers and plugin_trigger_providers_lock to contexts module
- Implement caching mechanism in TriggerManager.get_trigger_provider() method
- Cache fetched trigger providers to reduce repeated daemon calls
- Use double-check locking pattern for thread-safe cache access

This follows the same pattern as ToolManager.get_plugin_provider() to improve performance
by avoiding redundant requests to the daemon when accessing trigger providers.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-05 14:30:10 +08:00
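A sketch of the double-check locking pattern described above; the module-level names mirror the commit's description, and the daemon call is stubbed:

```python
import threading

# Module-level cache and lock, as in the contexts module the commit
# describes; everything else here is illustrative.
_plugin_trigger_providers: dict[str, dict] = {}
_plugin_trigger_providers_lock = threading.Lock()

def _fetch_provider_from_daemon(provider_id: str) -> dict:
    # Stand-in for the real plugin-daemon request.
    return {"id": provider_id}

def get_trigger_provider(provider_id: str) -> dict:
    # First check without the lock: the common, already-cached fast path.
    cached = _plugin_trigger_providers.get(provider_id)
    if cached is not None:
        return cached
    with _plugin_trigger_providers_lock:
        # Second check under the lock: another thread may have populated
        # the cache while this one was waiting.
        cached = _plugin_trigger_providers.get(provider_id)
        if cached is not None:
            return cached
        provider = _fetch_provider_from_daemon(provider_id)
        _plugin_trigger_providers[provider_id] = provider
        return provider
```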
c097fc2c48 refactor(trigger): add uuid import to trigger provider service
- Imported `uuid` in `trigger_provider_service.py` to support unique identifier generation.
- This change prepares the service for future enhancements that may require UUID functionality.
2025-09-05 14:30:10 +08:00
0371d71409 feat(trigger): enhance trigger subscription management and cache handling
- Added `name` parameter to `TriggerSubscriptionBuilderCreateApi` for better subscription identification.
- Implemented `delete_cache_for_subscription` function to clear cache associated with trigger subscriptions.
- Updated `WorkflowPluginTriggerService` to check for existing subscriptions before creating new plugin triggers, improving error handling.
- Refactored `TriggerProviderService` to utilize the new cache deletion method during provider deletion.

This improves the overall management of trigger subscriptions and enhances cache efficiency.
2025-09-05 14:30:10 +08:00
81ef7343d4 chore: (trigger) refactor webhook service (#25229) 2025-09-05 14:00:20 +08:00
8e4b59c90c feat: improve trigger plugin UI layout and responsiveness (#25232) 2025-09-05 14:00:14 +08:00
68f73410fc chore: (trigger) add WEBHOOK_REQUEST_BODY_MAX_SIZE (#25217) 2025-09-05 12:23:11 +08:00
88af8ed374 fix: block selector ui (#25228) 2025-09-05 12:22:13 +08:00
015f82878e feat(trigger): integrate plugin icon retrieval into trigger provider
- Added `get_plugin_icon_url` method in `PluginService` to fetch plugin icons.
- Updated `PluginTriggerProviderController` to use the new method for icon handling.
- Refactored `ToolTransformService` to utilize `PluginService` for consistent icon URL generation.

This enhances the trigger provider's ability to manage plugin icons effectively.
2025-09-05 12:01:41 +08:00
3874e58dc2 refactor(trigger): enhance trigger provider deletion process and session management 2025-09-05 11:31:57 +08:00
9f8c159583 feat(trigger): implement trigger plugin block selector following tools pattern (#25204) 2025-09-05 10:20:47 +08:00
d8f6f9ce19 chore: (trigger) change content type from form to application/octet-stream (#25167) 2025-09-05 09:54:07 +08:00
eab03e63d4 refactor(trigger): rename request logs API and enhance logging functionality
- Renamed `TriggerSubscriptionBuilderRequestLogsApi` to `TriggerSubscriptionBuilderLogsApi` for clarity.
- Updated the API endpoint to retrieve logs for subscription builders.
- Enhanced logging functionality in `TriggerSubscriptionBuilderService` to append and list logs more effectively.
- Refactored trigger processing tasks to improve naming consistency and clarity in logging.

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-09-04 21:11:25 +08:00
461829274a feat: (trigger) support file upload in webhook (#25159) 2025-09-04 18:33:42 +08:00
e751c0c535 refactor(trigger): update trigger provider API and clean up unused classes
- Renamed the API endpoint for trigger providers from `/workspaces/current/trigger-providers` to `/workspaces/current/triggers` for consistency.
- Removed unused `TriggerProviderCredentialsCache` and `TriggerProviderOAuthClientParamsCache` classes to streamline the codebase.
- Enhanced the `TriggerProviderApiEntity` to include additional properties and improved the conversion logic in `PluginTriggerProviderController`.

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-09-04 17:45:15 +08:00
1fffc79c32 fix: prevent empty workflow draft sync during page navigation (#25140) 2025-09-04 17:13:49 +08:00
83fab4bc19 chore: (webhook) when content type changed clear the body variables (#25136) 2025-09-04 15:09:54 +08:00
f60e28d2f5 feat(trigger): enhance user role validation and add request logs API for trigger providers
- Updated user role validation in PluginTriggerApi and WebhookTriggerApi to assert current_user as an Account and check tenant ID.
- Introduced TriggerSubscriptionBuilderRequestLogsApi to retrieve request logs for subscription instances, ensuring proper user authentication and error handling.
- Added new API endpoint for accessing request logs related to trigger providers.

🤖 Generated with [Claude Code](https://claude.ai/code)
2025-09-04 14:44:02 +08:00
a62d7aa3ee feat(trigger): add plugin trigger workflow support and refactor trigger system
- Add new workflow plugin trigger service for managing plugin-based triggers
- Implement trigger provider encryption utilities for secure credential storage
- Add custom trigger errors module for better error handling
- Refactor trigger provider and manager classes for improved plugin integration
- Update API endpoints to support plugin trigger workflows
- Add database migration for plugin trigger workflow support

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-04 13:20:43 +08:00
cc84a45244 chore: (webhook) use variable instead of InputVar (#25119) 2025-09-04 11:10:42 +08:00
5cf3d24018 fix(webhook): selected type ui style (#25106) 2025-09-04 10:59:08 +08:00
4bdbe617fe fix: uuidv7 (#25097) 2025-09-04 08:44:14 +08:00
33c867fd8c feat(workflow): enhance webhook status code input with increment/decrement controls (#25099) 2025-09-03 22:26:00 +08:00
2013ceb9d2 chore: validate param type of application/json when call a webhook (#25074) 2025-09-03 15:49:07 +08:00
7120c6414c fix: content type of webhook (#25032) 2025-09-03 15:13:01 +08:00
5ce7b2d98d refactor(migrations): remove obsolete plugin_trigger migration file
- Deleted the plugin_trigger migration file as it is no longer needed in the codebase.
- Updated model imports in `__init__.py` to include new trigger-related classes for better organization.
2025-09-03 15:02:17 +08:00
cb82198271 refactor(trigger): update trigger provider classes and API endpoints
- Renamed classes for trigger subscription management to improve clarity, including TriggerProviderSubscriptionListApi to TriggerSubscriptionListApi and TriggerSubscriptionsDeleteApi to TriggerSubscriptionDeleteApi.
- Updated API endpoint paths to reflect the new naming conventions for trigger subscriptions.
- Removed deprecated TriggerOAuthRefreshTokenApi class to streamline the codebase.
- Added trigger_providers import to the console controller for better organization.
2025-09-03 14:53:27 +08:00
5e5ffaa416 feat(tool-form): add extraParams prop to ToolForm and ToolFormItem components
- Introduced extraParams prop to both ToolForm and ToolFormItem components for enhanced flexibility in passing additional parameters.
- Updated component usage to accommodate the new prop, improving the overall functionality of the tool forms.
2025-09-03 14:53:27 +08:00
4b253e1f73 feat(trigger): plugin trigger workflow 2025-09-03 14:53:27 +08:00
dd929dbf0e fix(dynamic_select): implement function 2025-09-03 14:53:27 +08:00
97a9d34e96 feat(trigger): introduce plugin trigger management and enhance trigger processing
- Remove the debug endpoint for cleaner API structure
- Add support for TRIGGER_PLUGIN in NodeType enumeration
- Implement WorkflowPluginTrigger model to map plugin triggers to workflow nodes
- Enhance TriggerService to process plugin triggers and store trigger data in Redis
- Update node mapping to include TriggerPluginNode for workflow execution

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-03 14:53:27 +08:00
602070ec9c refactor(trigger): improve method signature formatting in TriggerService
- Adjust the formatting of the `process_triggered_workflows` method signature for better readability
- Ensure consistent style across method definitions in the TriggerService class

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-03 14:53:27 +08:00
afd8989150 feat(trigger): introduce subscription builder and enhance trigger management
- Refactor trigger provider classes to improve naming consistency, including renaming classes for subscription management
- Implement new TriggerSubscriptionBuilderService for creating and verifying subscription builders
- Update API endpoints to support subscription builder creation and verification
- Enhance data models to include new attributes for subscription builders
- Remove the deprecated TriggerSubscriptionValidationService to streamline the codebase

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-03 14:53:27 +08:00
694197a701 refactor(trigger): clean up imports and optimize trigger-related code
- Remove unused imports in trigger-related files for better clarity and maintainability
- Streamline import statements across various modules to enhance code quality

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-03 14:53:27 +08:00
2f08306695 feat(trigger): enhance trigger subscription management and processing
- Refactor trigger provider classes to improve naming consistency and clarity
- Introduce new methods for managing trigger subscriptions, including validation and dispatching
- Update API endpoints to reflect changes in subscription handling
- Implement logging and request management for endpoint interactions
- Enhance data models to support subscription attributes and lifecycle management

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-03 14:53:27 +08:00
6acc77d86d feat(trigger): refactor trigger provider to subscription model
- Rename classes and methods to reflect the transition from credentials to subscriptions
- Update API endpoints for managing trigger subscriptions
- Modify data models and entities to support subscription attributes
- Enhance service methods for listing, adding, updating, and deleting subscriptions
- Adjust encryption utilities to handle subscription data

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-03 14:53:27 +08:00
5ddd5e49ee feat(trigger): enhance subscription schema and provider configuration
- Update ProviderConfig to allow a list as a default value
- Introduce SubscriptionSchema for better organization of subscription-related configurations
- Modify TriggerProviderApiEntity to use Optional for subscription_schema
- Add custom_model_schema to TriggerProviderEntity for additional configuration options

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-03 14:53:27 +08:00
72f9e77368 refactor(trigger): clean up and optimize trigger-related code
- Remove unused classes and imports in encryption utilities
- Simplify method signatures for better readability
- Enhance code quality by adding newlines for clarity
- Update tests to reflect changes in import paths

Co-authored-by: Claude <noreply@anthropic.com>
2025-09-03 14:53:26 +08:00
a46c9238fa feat(trigger): implement complete OAuth authorization flow for trigger providers
- Add OAuth authorization URL generation API endpoint
- Implement OAuth callback handler for credential storage
- Support both system-level and tenant-level OAuth clients
- Add trigger provider credential encryption utilities
- Refactor trigger entities into separate modules
- Update trigger provider service with OAuth client management
- Add credential cache for trigger providers

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-09-03 14:53:26 +08:00
87120ad4ac feat(trigger): add trigger provider management and webhook handling functionality 2025-09-03 14:53:26 +08:00
7544b5ec9a fix: delete var of webhook (#25038) 2025-09-03 14:49:56 +08:00
ff4a62d1e7 chore: limit webhook status code 200~399 (#25045) 2025-09-03 14:48:18 +08:00
41daa51988 fix: missing key for translation path (#25059) 2025-09-03 14:43:40 +08:00
d522350c99 fix(webhook-trigger): request array type adjustment (#25005) 2025-09-02 23:20:12 +08:00
1d1bb9451e fix: prevent workflow canvas clearing due to race condition and viewport errors (#25003) 2025-09-02 20:53:44 +08:00
1fce1a61d4 feat(workflow-log): enhance workflow logs UI with sorting and status filters (#24978) 2025-09-02 16:43:11 +08:00
883a6caf96 feat: add trigger by of app log (#24973) 2025-09-02 16:04:08 +08:00
a239c39f09 fix: webhook http method should be case insensitive (#24957) 2025-09-02 14:47:24 +08:00
e925a8ab99 fix(app-cards): restrict toggle enable to Start nodes only (#24918) 2025-09-01 22:52:23 +08:00
bccaf939e6 fix: migrations 2025-09-01 18:07:21 +08:00
676648e0b3 Merge branch 'main' into feat/trigger 2025-09-01 18:05:31 +08:00
4ae19e6dde fix(webhook-trigger): remove error handling (#24902) 2025-09-01 17:11:49 +08:00
4d0ff5c281 feat: implement variable synchronization for webhook node (#24874)
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-01 16:58:06 +08:00
327b354cc2 refactor: unify trigger node architecture and clean up technical debt (#24886)
Co-authored-by: hjlarry <hjlarry@163.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-09-01 15:47:44 +08:00
6d307cc9fc Fix test run shortcut consistency and improve dropdown styling (#24849) 2025-09-01 14:47:21 +08:00
adc7134af5 fix: improve TimePicker footer layout and button styling (#24831) 2025-09-01 13:34:53 +08:00
10f19cd0c2 fix(webhook): add content-type aware parameter type handling (#24865) 2025-09-01 10:06:26 +08:00
9ed45594c6 fix: improve schedule trigger and quick settings app-operation btns ui (#24843) 2025-08-31 16:59:49 +08:00
c138f4c3a6 fix: check AppTrigger status before webhook execution (#24829)
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-30 16:40:21 +08:00
a35be05790 Fix workflow card toggle logic and implement minimal state UI (#24822) 2025-08-30 16:35:34 +08:00
60b5ed8e5d fix: enhance webhook trigger panel UI consistency and user experience (#24780) 2025-08-29 17:41:42 +08:00
d8ddbc4d87 feat: enhance webhook trigger panel UI consistency and interactivity (#24759)
Co-authored-by: hjlarry <hjlarry@163.com>
2025-08-29 14:24:23 +08:00
19c0fc85e2 feat: when add/delete webhook trigger call the API (#24755) 2025-08-29 14:23:50 +08:00
a58df35ead fix: improve trigger card layout spacing and remove dividers (#24756) 2025-08-29 13:37:44 +08:00
9789bd02d8 feat: implement trigger card component with auto-refresh (#24743) 2025-08-29 11:57:08 +08:00
d94e54923f Improve tooltip design for trigger blocks (#24724) 2025-08-28 23:18:00 +08:00
64c7be59b7 Improve workflow block selector search functionality (#24707) 2025-08-28 17:21:34 +08:00
89ad6ad902 feat: add app trigger list api (#24693) 2025-08-28 15:23:08 +08:00
4f73bc9693 fix(schedule): add time logic to weekly frequency mode for consistent behavior with daily mode (#24673) 2025-08-28 14:40:11 +08:00
add6b79231 UI enhancements for workflow checklist component (#24647) 2025-08-28 10:10:10 +08:00
c90dad566f feat: enhance workflow error handling and internationalization (#24648) 2025-08-28 09:41:22 +08:00
5cbe6bf8f8 fix(schedule): correct weekly frequency weekday calculation algorithm (#24641) 2025-08-27 18:20:09 +08:00
4ef6ff217e fix: improve code quality in webhook services and controllers (#24634)
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-27 17:50:51 +08:00
87abfbf515 Allow empty workflows and improve workflow validation (#24627) 2025-08-27 17:49:09 +08:00
73e65fd838 feat: align trigger webhook style with schedule node and fix selection border truncation (#24635) 2025-08-27 17:47:14 +08:00
e53edb0fc2 refactor: optimize TenantDailyRateLimiter to use UTC internally with timezone-aware error messages (#24632)
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-27 17:35:04 +08:00
17908fbf6b fix: only workflow apps should display the start modal (#24623) 2025-08-27 16:20:31 +08:00
3dae108f84 refactor(sidebar): Restructure app operations with toggle functionality (#24625) 2025-08-27 16:20:17 +08:00
5bbf685035 feat: fix i18n missing keys and merge upstream/main (#24615)
Signed-off-by: -LAN- <laipz8200@outlook.com>
Signed-off-by: kenwoodjw <blackxin55+@gmail.com>
Signed-off-by: Yongtao Huang <yongtaoh2022@gmail.com>
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: GuanMu <ballmanjq@gmail.com>
Co-authored-by: Davide Delbianco <davide.delbianco@outlook.com>
Co-authored-by: NeatGuyCoding <15627489+NeatGuyCoding@users.noreply.github.com>
Co-authored-by: kenwoodjw <blackxin55+@gmail.com>
Co-authored-by: Yongtao Huang <yongtaoh2022@gmail.com>
Co-authored-by: Yongtao Huang <99629139+hyongtao-db@users.noreply.github.com>
Co-authored-by: Qiang Lee <18018968632@163.com>
Co-authored-by: 李强04 <liqiang04@gaotu.cn>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Asuka Minato <i@asukaminato.eu.org>
Co-authored-by: Matri Qi <matrixdom@126.com>
Co-authored-by: huayaoyue6 <huayaoyue@163.com>
Co-authored-by: Bowen Liang <liangbowen@gf.com.cn>
Co-authored-by: znn <jubinkumarsoni@gmail.com>
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: yihong <zouzou0208@gmail.com>
Co-authored-by: Muke Wang <shaodwaaron@gmail.com>
Co-authored-by: wangmuke <wangmuke@kingsware.cn>
Co-authored-by: Wu Tianwei <30284043+WTW0313@users.noreply.github.com>
Co-authored-by: quicksand <quicksandzn@gmail.com>
Co-authored-by: 非法操作 <hjlarry@163.com>
Co-authored-by: zxhlyh <jasonapring2015@outlook.com>
Co-authored-by: Eric Guo <eric.guocz@gmail.com>
Co-authored-by: Zhedong Cen <cenzhedong2@126.com>
Co-authored-by: jiangbo721 <jiangbo721@163.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: hjlarry <25834719+hjlarry@users.noreply.github.com>
Co-authored-by: lxsummer <35754229+lxjustdoit@users.noreply.github.com>
Co-authored-by: 湛露先生 <zhanluxianshen@163.com>
Co-authored-by: Guangdong Liu <liugddx@gmail.com>
Co-authored-by: QuantumGhost <obelisk.reg+git@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Yessenia-d <yessenia.contact@gmail.com>
Co-authored-by: huangzhuo1949 <167434202+huangzhuo1949@users.noreply.github.com>
Co-authored-by: huangzhuo <huangzhuo1@xiaomi.com>
Co-authored-by: 17hz <0x149527@gmail.com>
Co-authored-by: Amy <1530140574@qq.com>
Co-authored-by: Joel <iamjoel007@gmail.com>
Co-authored-by: Nite Knite <nkCoding@gmail.com>
Co-authored-by: Yeuoly <45712896+Yeuoly@users.noreply.github.com>
Co-authored-by: Petrus Han <petrus.hanks@gmail.com>
Co-authored-by: iamjoel <2120155+iamjoel@users.noreply.github.com>
Co-authored-by: Kalo Chin <frog.beepers.0n@icloud.com>
Co-authored-by: Ujjwal Maurya <ujjwalsbx@gmail.com>
Co-authored-by: Maries <xh001x@hotmail.com>
2025-08-27 15:07:28 +08:00
a63d1e87b1 feat: webhook trigger backend api (#24387)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-08-27 14:42:45 +08:00
7129de98cd feat: implement workflow onboarding modal system (#24551) 2025-08-27 13:31:22 +08:00
2984dbc0df fix: can't open service api when workflow has no start node (#24564) 2025-08-26 18:06:11 +08:00
392db7f611 fix: can't save when workflow has only a trigger node (#24546) 2025-08-26 16:41:47 +08:00
5a427b8daa refactor: rename RunAllTriggers icon to TriggerAll for semantic clarity (#24478)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-08-25 17:51:04 +08:00
18f2e6f166 refactor: Use specific error types for workflow execution (#24475)
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-25 16:19:12 +08:00
e78903302f feat(trigger-schedule): simplify timezone handling with user-centric approach (#24401) 2025-08-24 21:03:59 +08:00
4084ade86c refactor(trigger-webhook): remove redundant WebhookParam type and sim… (#24390) 2025-08-24 00:21:47 +08:00
6b0d919dbd feat: webhook trigger frontend (#24311) 2025-08-23 23:54:41 +08:00
a7b558b38b feat/trigger: support specifying root node (#24388)
Co-authored-by: Claude <noreply@anthropic.com>
2025-08-23 20:44:03 +08:00
6aed7e3ff4 feat/trigger universal entry (#24358)
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2025-08-23 20:18:08 +08:00
8e93a8a2e2 refactor: comprehensive schedule trigger component redesign (#24359)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: zhangxuhe1 <xuhezhang6@gmail.com>
2025-08-23 11:03:18 +08:00
e38a86e37b Merge branch 'main' into feat/trigger 2025-08-22 20:11:49 +08:00
392e3530bf feat: replace mock data with dynamic workflow options in test run dropdown (#24320) 2025-08-22 16:36:09 +08:00
833c902b2b feat(workflow): Plugin Trigger Node with Unified Entry Node System (#24205) 2025-08-20 23:49:10 +08:00
6eaea64b3f feat: implement multi-select monthly trigger schedule (#24247) 2025-08-20 06:23:30 -07:00
5303b50737 fix: initialize recur fields when switching to hourly frequency (#24181) 2025-08-20 09:32:05 +08:00
6acbcfe679 UI improvements: fix translation and custom icons for schedule trigger (#24167) 2025-08-19 18:27:07 +08:00
16ef5ebb97 fix: remove duplicate weekdays keys in i18n workflow files (#24157) 2025-08-19 14:55:16 +08:00
acfb95f9c2 Refactor Start node UI to User Input and optimize EntryNodeContainer (#24156) 2025-08-19 14:40:24 +08:00
aacea166d7 fix: resolve merge conflict between Features removal and validation enhancement (#24150) 2025-08-19 13:47:38 +08:00
f7bb3b852a feat: implement Schedule Trigger validation with multi-start node topology support (#24134) 2025-08-19 11:55:15 +08:00
d4ff1e031a Remove workflow features button (#24085) 2025-08-19 09:32:07 +08:00
6a3d135d49 fix: simplify trigger-schedule hourly mode calculation and improve UI consistency (#24082)
Co-authored-by: zhangxuhe1 <xuhezhang6@gmail.com>
2025-08-18 23:37:57 +08:00
5c4bf7aabd feat: Test Run dropdown with dynamic trigger selection (#24113) 2025-08-18 17:46:36 +08:00
e9c7dc7464 feat: update workflow run button to Test Run with keyboard shortcut (#24071) 2025-08-18 10:44:17 +08:00
74ad21b145 feat: comprehensive trigger node system with Schedule Trigger implementation (#24039)
Co-authored-by: zhangxuhe1 <xuhezhang6@gmail.com>
2025-08-18 09:23:16 +08:00
f214eeb7b1 feat: add scroll to selected node button in workflow header (#24030)
Co-authored-by: zhangxuhe1 <xuhezhang6@gmail.com>
2025-08-16 19:26:44 +08:00
ae25f90f34 Replace export button with more actions button in workflow control panel (#24033) 2025-08-16 19:25:18 +08:00
514 changed files with 69193 additions and 13270 deletions

6
.cursorrules Normal file

@@ -0,0 +1,6 @@
# Cursor Rules for Dify Project
## Automated Test Generation
- Use `web/testing/testing.md` as the canonical instruction set for generating frontend automated tests.
- When proposing or saving tests, re-read that document and follow every requirement.

226
.github/CODEOWNERS vendored Normal file

@@ -0,0 +1,226 @@
# CODEOWNERS
# This file defines code ownership for the Dify project.
# Each line is a file pattern followed by one or more owners.
# Owners can be @username, @org/team-name, or email addresses.
# For more information, see: https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners
* @crazywoola @laipz8200 @Yeuoly
# Backend (default owner, more specific rules below will override)
api/ @QuantumGhost
# Backend - Workflow - Engine (Core graph execution engine)
api/core/workflow/graph_engine/ @laipz8200 @QuantumGhost
api/core/workflow/runtime/ @laipz8200 @QuantumGhost
api/core/workflow/graph/ @laipz8200 @QuantumGhost
api/core/workflow/graph_events/ @laipz8200 @QuantumGhost
api/core/workflow/node_events/ @laipz8200 @QuantumGhost
api/core/model_runtime/ @laipz8200 @QuantumGhost
# Backend - Workflow - Nodes (Agent, Iteration, Loop, LLM)
api/core/workflow/nodes/agent/ @Nov1c444
api/core/workflow/nodes/iteration/ @Nov1c444
api/core/workflow/nodes/loop/ @Nov1c444
api/core/workflow/nodes/llm/ @Nov1c444
# Backend - RAG (Retrieval Augmented Generation)
api/core/rag/ @JohnJyong
api/services/rag_pipeline/ @JohnJyong
api/services/dataset_service.py @JohnJyong
api/services/knowledge_service.py @JohnJyong
api/services/external_knowledge_service.py @JohnJyong
api/services/hit_testing_service.py @JohnJyong
api/services/metadata_service.py @JohnJyong
api/services/vector_service.py @JohnJyong
api/services/entities/knowledge_entities/ @JohnJyong
api/services/entities/external_knowledge_entities/ @JohnJyong
api/controllers/console/datasets/ @JohnJyong
api/controllers/service_api/dataset/ @JohnJyong
api/models/dataset.py @JohnJyong
api/tasks/rag_pipeline/ @JohnJyong
api/tasks/add_document_to_index_task.py @JohnJyong
api/tasks/batch_clean_document_task.py @JohnJyong
api/tasks/clean_document_task.py @JohnJyong
api/tasks/clean_notion_document_task.py @JohnJyong
api/tasks/document_indexing_task.py @JohnJyong
api/tasks/document_indexing_sync_task.py @JohnJyong
api/tasks/document_indexing_update_task.py @JohnJyong
api/tasks/duplicate_document_indexing_task.py @JohnJyong
api/tasks/recover_document_indexing_task.py @JohnJyong
api/tasks/remove_document_from_index_task.py @JohnJyong
api/tasks/retry_document_indexing_task.py @JohnJyong
api/tasks/sync_website_document_indexing_task.py @JohnJyong
api/tasks/batch_create_segment_to_index_task.py @JohnJyong
api/tasks/create_segment_to_index_task.py @JohnJyong
api/tasks/delete_segment_from_index_task.py @JohnJyong
api/tasks/disable_segment_from_index_task.py @JohnJyong
api/tasks/disable_segments_from_index_task.py @JohnJyong
api/tasks/enable_segment_to_index_task.py @JohnJyong
api/tasks/enable_segments_to_index_task.py @JohnJyong
api/tasks/clean_dataset_task.py @JohnJyong
api/tasks/deal_dataset_index_update_task.py @JohnJyong
api/tasks/deal_dataset_vector_index_task.py @JohnJyong
# Backend - Plugins
api/core/plugin/ @Mairuis @Yeuoly @Stream29
api/services/plugin/ @Mairuis @Yeuoly @Stream29
api/controllers/console/workspace/plugin.py @Mairuis @Yeuoly @Stream29
api/controllers/inner_api/plugin/ @Mairuis @Yeuoly @Stream29
api/tasks/process_tenant_plugin_autoupgrade_check_task.py @Mairuis @Yeuoly @Stream29
# Backend - Trigger/Schedule/Webhook
api/controllers/trigger/ @Mairuis @Yeuoly
api/controllers/console/app/workflow_trigger.py @Mairuis @Yeuoly
api/controllers/console/workspace/trigger_providers.py @Mairuis @Yeuoly
api/core/trigger/ @Mairuis @Yeuoly
api/core/app/layers/trigger_post_layer.py @Mairuis @Yeuoly
api/services/trigger/ @Mairuis @Yeuoly
api/models/trigger.py @Mairuis @Yeuoly
api/fields/workflow_trigger_fields.py @Mairuis @Yeuoly
api/repositories/workflow_trigger_log_repository.py @Mairuis @Yeuoly
api/repositories/sqlalchemy_workflow_trigger_log_repository.py @Mairuis @Yeuoly
api/libs/schedule_utils.py @Mairuis @Yeuoly
api/services/workflow/scheduler.py @Mairuis @Yeuoly
api/schedule/trigger_provider_refresh_task.py @Mairuis @Yeuoly
api/schedule/workflow_schedule_task.py @Mairuis @Yeuoly
api/tasks/trigger_processing_tasks.py @Mairuis @Yeuoly
api/tasks/trigger_subscription_refresh_tasks.py @Mairuis @Yeuoly
api/tasks/workflow_schedule_tasks.py @Mairuis @Yeuoly
api/tasks/workflow_cfs_scheduler/ @Mairuis @Yeuoly
api/events/event_handlers/sync_plugin_trigger_when_app_created.py @Mairuis @Yeuoly
api/events/event_handlers/update_app_triggers_when_app_published_workflow_updated.py @Mairuis @Yeuoly
api/events/event_handlers/sync_workflow_schedule_when_app_published.py @Mairuis @Yeuoly
api/events/event_handlers/sync_webhook_when_app_created.py @Mairuis @Yeuoly
# Backend - Async Workflow
api/services/async_workflow_service.py @Mairuis @Yeuoly
api/tasks/async_workflow_tasks.py @Mairuis @Yeuoly
# Backend - Billing
api/services/billing_service.py @hj24 @zyssyz123
api/controllers/console/billing/ @hj24 @zyssyz123
# Backend - Enterprise
api/configs/enterprise/ @GarfieldDai @GareArc
api/services/enterprise/ @GarfieldDai @GareArc
api/services/feature_service.py @GarfieldDai @GareArc
api/controllers/console/feature.py @GarfieldDai @GareArc
api/controllers/web/feature.py @GarfieldDai @GareArc
# Backend - Database Migrations
api/migrations/ @snakevash @laipz8200
# Frontend
web/ @iamjoel
# Frontend - App - Orchestration
web/app/components/workflow/ @iamjoel @zxhlyh
web/app/components/workflow-app/ @iamjoel @zxhlyh
web/app/components/app/configuration/ @iamjoel @zxhlyh
web/app/components/app/app-publisher/ @iamjoel @zxhlyh
# Frontend - WebApp - Chat
web/app/components/base/chat/ @iamjoel @zxhlyh
# Frontend - WebApp - Completion
web/app/components/share/text-generation/ @iamjoel @zxhlyh
# Frontend - App - List and Creation
web/app/components/apps/ @JzoNgKVO @iamjoel
web/app/components/app/create-app-dialog/ @JzoNgKVO @iamjoel
web/app/components/app/create-app-modal/ @JzoNgKVO @iamjoel
web/app/components/app/create-from-dsl-modal/ @JzoNgKVO @iamjoel
# Frontend - App - API Documentation
web/app/components/develop/ @JzoNgKVO @iamjoel
# Frontend - App - Logs and Annotations
web/app/components/app/workflow-log/ @JzoNgKVO @iamjoel
web/app/components/app/log/ @JzoNgKVO @iamjoel
web/app/components/app/log-annotation/ @JzoNgKVO @iamjoel
web/app/components/app/annotation/ @JzoNgKVO @iamjoel
# Frontend - App - Monitoring
web/app/(commonLayout)/app/(appDetailLayout)/\[appId\]/overview/ @JzoNgKVO @iamjoel
web/app/components/app/overview/ @JzoNgKVO @iamjoel
# Frontend - App - Settings
web/app/components/app-sidebar/ @JzoNgKVO @iamjoel
# Frontend - RAG - Hit Testing
web/app/components/datasets/hit-testing/ @JzoNgKVO @iamjoel
# Frontend - RAG - List and Creation
web/app/components/datasets/list/ @iamjoel @WTW0313
web/app/components/datasets/create/ @iamjoel @WTW0313
web/app/components/datasets/create-from-pipeline/ @iamjoel @WTW0313
web/app/components/datasets/external-knowledge-base/ @iamjoel @WTW0313
# Frontend - RAG - Orchestration (general rule first, specific rules below override)
web/app/components/rag-pipeline/ @iamjoel @WTW0313
web/app/components/rag-pipeline/components/rag-pipeline-main.tsx @iamjoel @zxhlyh
web/app/components/rag-pipeline/store/ @iamjoel @zxhlyh
# Frontend - RAG - Documents List
web/app/components/datasets/documents/list.tsx @iamjoel @WTW0313
web/app/components/datasets/documents/create-from-pipeline/ @iamjoel @WTW0313
# Frontend - RAG - Segments List
web/app/components/datasets/documents/detail/ @iamjoel @WTW0313
# Frontend - RAG - Settings
web/app/components/datasets/settings/ @iamjoel @WTW0313
# Frontend - Ecosystem - Plugins
web/app/components/plugins/ @iamjoel @zhsama
# Frontend - Ecosystem - Tools
web/app/components/tools/ @iamjoel @Yessenia-d
# Frontend - Ecosystem - MarketPlace
web/app/components/plugins/marketplace/ @iamjoel @Yessenia-d
# Frontend - Login and Registration
web/app/signin/ @douxc @iamjoel
web/app/signup/ @douxc @iamjoel
web/app/reset-password/ @douxc @iamjoel
web/app/install/ @douxc @iamjoel
web/app/init/ @douxc @iamjoel
web/app/forgot-password/ @douxc @iamjoel
web/app/account/ @douxc @iamjoel
# Frontend - Service Authentication
web/service/base.ts @douxc @iamjoel
# Frontend - WebApp Authentication and Access Control
web/app/(shareLayout)/components/ @douxc @iamjoel
web/app/(shareLayout)/webapp-signin/ @douxc @iamjoel
web/app/(shareLayout)/webapp-reset-password/ @douxc @iamjoel
web/app/components/app/app-access-control/ @douxc @iamjoel
# Frontend - Explore Page
web/app/components/explore/ @CodingOnStar @iamjoel
# Frontend - Personal Settings
web/app/components/header/account-setting/ @CodingOnStar @iamjoel
web/app/components/header/account-dropdown/ @CodingOnStar @iamjoel
# Frontend - Analytics
web/app/components/base/ga/ @CodingOnStar @iamjoel
# Frontend - Base Components
web/app/components/base/ @iamjoel @zxhlyh
# Frontend - Utils and Hooks
web/utils/classnames.ts @iamjoel @zxhlyh
web/utils/time.ts @iamjoel @zxhlyh
web/utils/format.ts @iamjoel @zxhlyh
web/utils/clipboard.ts @iamjoel @zxhlyh
web/hooks/use-document-title.ts @iamjoel @zxhlyh
# Frontend - Billing and Education
web/app/components/billing/ @iamjoel @zxhlyh
web/app/education-apply/ @iamjoel @zxhlyh
# Frontend - Workspace
web/app/components/header/account-dropdown/workplace-selector/ @iamjoel @zxhlyh

.github/copilot-instructions.md vendored Normal file

@ -0,0 +1,12 @@
# Copilot Instructions
GitHub Copilot must follow the unified frontend testing requirements documented in `web/testing/testing.md`.
Key reminders:
- Generate tests using the mandated tech stack, naming, and code style (AAA pattern, `fireEvent`, descriptive test names, mock cleanup).
- Cover rendering, prop combinations, and edge cases by default; extend coverage for hooks, routing, async flows, and domain-specific components when applicable.
- Target >95% line and branch coverage and 100% function/statement coverage.
- Apply the project's mocking conventions for i18n, toast notifications, and Next.js utilities.
Any suggestions from Copilot that conflict with `web/testing/testing.md` should be revised before acceptance.


@ -0,0 +1,28 @@
name: Deploy Trigger Dev
permissions:
contents: read
on:
workflow_run:
workflows: ["Build and Push API & Web"]
branches:
- "deploy/end-user-oauth"
types:
- completed
jobs:
deploy:
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'deploy/end-user-oauth'
steps:
- name: Deploy to server
uses: appleboy/ssh-action@v0.1.8
with:
host: ${{ secrets.TRIGGER_SSH_HOST }}
username: ${{ secrets.SSH_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
${{ vars.SSH_SCRIPT || secrets.SSH_SCRIPT }}


@ -77,12 +77,15 @@ jobs:
uses: peter-evans/create-pull-request@v6
with:
token: ${{ secrets.GITHUB_TOKEN }}
commit-message: Update i18n files and type definitions based on en-US changes
title: 'chore: translate i18n files and update type definitions'
commit-message: 'chore(i18n): update translations based on en-US changes'
title: 'chore(i18n): translate i18n files and update type definitions'
body: |
This PR was automatically created to update i18n files and TypeScript type definitions based on changes in en-US locale.
**Triggered by:** ${{ github.sha }}
**Changes included:**
- Updated translation files for all locales
- Regenerated TypeScript type definitions for type safety
branch: chore/automated-i18n-updates
branch: chore/automated-i18n-updates-${{ github.sha }}
delete-branch: true


@ -0,0 +1,5 @@
# Windsurf Testing Rules
- Use `web/testing/testing.md` as the single source of truth for frontend automated testing.
- Honor every requirement in that document when generating or accepting tests.
- When proposing or saving tests, re-read that document and follow every requirement.


@ -77,6 +77,8 @@ How we prioritize:
For setting up the frontend service, please refer to our comprehensive [guide](https://github.com/langgenius/dify/blob/main/web/README.md) in the `web/README.md` file. This document provides detailed instructions to help you set up the frontend environment properly.
**Testing**: All React components must have comprehensive test coverage. See [web/testing/testing.md](https://github.com/langgenius/dify/blob/main/web/testing/testing.md) for the canonical frontend testing guidelines and follow every requirement described there.
#### Backend
For setting up the backend service, kindly refer to our detailed [instructions](https://github.com/langgenius/dify/blob/main/api/README.md) in the `api/README.md` file. This document contains step-by-step guidance to help you get the backend up and running smoothly.


@ -36,6 +36,12 @@
<img alt="Issues closed" src="https://img.shields.io/github/issues-search?query=repo%3Alanggenius%2Fdify%20is%3Aclosed&label=issues%20closed&labelColor=%20%237d89b0&color=%20%235d6b98"></a>
<a href="https://github.com/langgenius/dify/discussions/" target="_blank">
<img alt="Discussion posts" src="https://img.shields.io/github/discussions/langgenius/dify?labelColor=%20%239b8afb&color=%20%237a5af8"></a>
<a href="https://insights.linuxfoundation.org/project/langgenius-dify" target="_blank">
<img alt="LFX Health Score" src="https://insights.linuxfoundation.org/api/badge/health-score?project=langgenius-dify"></a>
<a href="https://insights.linuxfoundation.org/project/langgenius-dify" target="_blank">
<img alt="LFX Contributors" src="https://insights.linuxfoundation.org/api/badge/contributors?project=langgenius-dify"></a>
<a href="https://insights.linuxfoundation.org/project/langgenius-dify" target="_blank">
<img alt="LFX Active Contributors" src="https://insights.linuxfoundation.org/api/badge/active-contributors?project=langgenius-dify"></a>
</p>
<p align="center">


@ -540,6 +540,7 @@ WORKFLOW_LOG_CLEANUP_BATCH_SIZE=100
# App configuration
APP_MAX_EXECUTION_TIME=1200
APP_DEFAULT_ACTIVE_REQUESTS=0
APP_MAX_ACTIVE_REQUESTS=0
# Celery beat configuration


@ -16,6 +16,7 @@ layers =
graph
nodes
node_events
runtime
entities
containers =
core.workflow


@ -48,6 +48,12 @@ ENV PYTHONIOENCODING=utf-8
WORKDIR /app/api
# Create non-root user
ARG dify_uid=1001
RUN groupadd -r -g ${dify_uid} dify && \
useradd -r -u ${dify_uid} -g ${dify_uid} -s /bin/bash dify && \
chown -R dify:dify /app
RUN \
apt-get update \
# Install dependencies
@ -57,7 +63,7 @@ RUN \
# for gmpy2 \
libgmp-dev libmpfr-dev libmpc-dev \
# For Security
expat libldap-2.5-0 perl libsqlite3-0 zlib1g \
expat libldap-2.5-0=2.5.13+dfsg-5 perl libsqlite3-0=3.40.1-2+deb12u2 zlib1g=1:1.2.13.dfsg-1 \
# install fonts to support the use of tools like pypdfium2
fonts-noto-cjk \
# install a package to improve the accuracy of guessing mime type and file extension
@ -69,7 +75,7 @@ RUN \
# Copy Python environment and packages
ENV VIRTUAL_ENV=/app/api/.venv
COPY --from=packages ${VIRTUAL_ENV} ${VIRTUAL_ENV}
COPY --from=packages --chown=dify:dify ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
# Download nltk data
@ -78,24 +84,20 @@ RUN mkdir -p /usr/local/share/nltk_data && NLTK_DATA=/usr/local/share/nltk_data
ENV TIKTOKEN_CACHE_DIR=/app/api/.tiktoken_cache
RUN python -c "import tiktoken; tiktoken.encoding_for_model('gpt2')"
RUN python -c "import tiktoken; tiktoken.encoding_for_model('gpt2')" \
&& chown -R dify:dify ${TIKTOKEN_CACHE_DIR}
# Copy source code
COPY . /app/api/
COPY --chown=dify:dify . /app/api/
# Copy entrypoint
COPY docker/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Prepare entrypoint script
COPY --chown=dify:dify --chmod=755 docker/entrypoint.sh /entrypoint.sh
# Create non-root user and set permissions
RUN groupadd -r -g 1001 dify && \
useradd -r -u 1001 -g 1001 -s /bin/bash dify && \
mkdir -p /home/dify && \
chown -R 1001:1001 /app /home/dify ${TIKTOKEN_CACHE_DIR} /entrypoint.sh
ARG COMMIT_SHA
ENV COMMIT_SHA=${COMMIT_SHA}
ENV NLTK_DATA=/usr/local/share/nltk_data
USER 1001
USER dify
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]


@ -1,6 +1,8 @@
import logging
import time
from opentelemetry.trace import get_current_span
from configs import dify_config
from contexts.wrapper import RecyclableContextVar
from dify_app import DifyApp
@ -26,8 +28,25 @@ def create_flask_app_with_configs() -> DifyApp:
# add a unique identifier to each request
RecyclableContextVar.increment_thread_recycles()
# add an after-request hook that injects the X-Trace-Id header from the OpenTelemetry span context
@dify_app.after_request
def add_trace_id_header(response):
try:
span = get_current_span()
ctx = span.get_span_context() if span else None
if ctx and ctx.is_valid:
trace_id_hex = format(ctx.trace_id, "032x")
# Avoid duplicates if some middleware added it
if "X-Trace-Id" not in response.headers:
response.headers["X-Trace-Id"] = trace_id_hex
except Exception:
# Never break the response due to tracing header injection
logger.warning("Failed to add trace ID to response header", exc_info=True)
return response
# Capture the decorator's return value to avoid pyright reportUnusedFunction
_ = before_request
_ = add_trace_id_header
return dify_app
@ -51,6 +70,7 @@ def initialize_extensions(app: DifyApp):
ext_commands,
ext_compress,
ext_database,
ext_forward_refs,
ext_hosting_provider,
ext_import_modules,
ext_logging,
@ -75,6 +95,7 @@ def initialize_extensions(app: DifyApp):
ext_warnings,
ext_import_modules,
ext_orjson,
ext_forward_refs,
ext_set_secretkey,
ext_compress,
ext_code_based_extension,
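Editor's note: the X-Trace-Id header added above gives callers a handle for correlating a failed request with server logs. A minimal client-side sketch, assuming a reachable Dify deployment at a placeholder URL (requests is not part of this diff):

import logging

import requests  # assumed available in the client environment

# Placeholder URL; substitute a real Dify endpoint.
resp = requests.get("https://dify.example.com/console/api/apps")
trace_id = resp.headers.get("X-Trace-Id", "unknown")

# Quoting this id when reporting a problem lets operators grep server logs
# for the same request, since the log format also prints %(trace_id)s.
logging.warning("request returned %s, trace_id=%s", resp.status_code, trace_id)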


@ -73,6 +73,10 @@ class AppExecutionConfig(BaseSettings):
description="Maximum allowed execution time for the application in seconds",
default=1200,
)
APP_DEFAULT_ACTIVE_REQUESTS: NonNegativeInt = Field(
description="Default number of concurrent active requests per app (0 for unlimited)",
default=0,
)
APP_MAX_ACTIVE_REQUESTS: NonNegativeInt = Field(
description="Maximum number of concurrent active requests per app (0 for unlimited)",
default=0,
@ -549,7 +553,10 @@ class LoggingConfig(BaseSettings):
LOG_FORMAT: str = Field(
description="Format string for log messages",
default="%(asctime)s.%(msecs)03d %(levelname)s [%(threadName)s] [%(filename)s:%(lineno)d] - %(message)s",
default=(
"%(asctime)s.%(msecs)03d %(levelname)s [%(threadName)s] "
"[%(filename)s:%(lineno)d] %(trace_id)s - %(message)s"
),
)
LOG_DATEFORMAT: str | None = Field(
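Editor's note: the new default LOG_FORMAT references %(trace_id)s, which plain LogRecords do not carry; a logging filter has to inject that attribute or %-formatting raises. A minimal sketch of such a filter, reusing the same OpenTelemetry span lookup as the app_factory hook above (Dify's actual wiring may differ):

import logging

from opentelemetry.trace import get_current_span


class TraceIdFilter(logging.Filter):
    """Attach the current OTel trace id to every record so %(trace_id)s resolves."""

    def filter(self, record: logging.LogRecord) -> bool:
        span = get_current_span()
        ctx = span.get_span_context() if span else None
        # Fall back to a dash so formatting never fails on untraced records.
        record.trace_id = format(ctx.trace_id, "032x") if ctx and ctx.is_valid else "-"
        return True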


@ -1,16 +1,23 @@
from flask_restx import Resource, fields, reqparse
from flask import request
from flask_restx import Resource, fields
from pydantic import BaseModel, Field
from controllers.console import console_ns
from controllers.console.wraps import account_initialization_required, setup_required
from libs.login import login_required
from services.advanced_prompt_template_service import AdvancedPromptTemplateService
parser = (
reqparse.RequestParser()
.add_argument("app_mode", type=str, required=True, location="args", help="Application mode")
.add_argument("model_mode", type=str, required=True, location="args", help="Model mode")
.add_argument("has_context", type=str, required=False, default="true", location="args", help="Whether has context")
.add_argument("model_name", type=str, required=True, location="args", help="Model name")
class AdvancedPromptTemplateQuery(BaseModel):
app_mode: str = Field(..., description="Application mode")
model_mode: str = Field(..., description="Model mode")
has_context: str = Field(default="true", description="Whether has context")
model_name: str = Field(..., description="Model name")
console_ns.schema_model(
AdvancedPromptTemplateQuery.__name__,
AdvancedPromptTemplateQuery.model_json_schema(ref_template="#/definitions/{model}"),
)
@ -18,7 +25,7 @@ parser = (
class AdvancedPromptTemplateList(Resource):
@console_ns.doc("get_advanced_prompt_templates")
@console_ns.doc(description="Get advanced prompt templates based on app mode and model configuration")
@console_ns.expect(parser)
@console_ns.expect(console_ns.models[AdvancedPromptTemplateQuery.__name__])
@console_ns.response(
200, "Prompt templates retrieved successfully", fields.List(fields.Raw(description="Prompt template data"))
)
@ -27,6 +34,6 @@ class AdvancedPromptTemplateList(Resource):
@login_required
@account_initialization_required
def get(self):
args = parser.parse_args()
args = AdvancedPromptTemplateQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
return AdvancedPromptTemplateService.get_prompt(args)
return AdvancedPromptTemplateService.get_prompt(args.model_dump())
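Editor's note: this file establishes the migration pattern repeated through the rest of the diff — a reqparse.RequestParser is replaced by a pydantic model, registered with flask-restx for Swagger, then validated against the flat query-string dict. A standalone sketch of the validation half (ExampleQuery is illustrative, not from the codebase):

from pydantic import BaseModel, Field, ValidationError


class ExampleQuery(BaseModel):
    app_mode: str = Field(..., description="Application mode")
    has_context: str = Field(default="true")


# Query strings arrive as flat dicts of strings, e.g. request.args.to_dict(flat=True).
args = ExampleQuery.model_validate({"app_mode": "chat"})
print(args.model_dump())  # {'app_mode': 'chat', 'has_context': 'true'}

try:
    ExampleQuery.model_validate({})  # missing required field
except ValidationError as exc:
    print(exc.errors()[0]["loc"])  # ('app_mode',)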


@ -1,9 +1,12 @@
import uuid
from typing import Literal
from flask_restx import Resource, fields, inputs, marshal, marshal_with, reqparse
from flask import request
from flask_restx import Resource, fields, marshal, marshal_with
from pydantic import BaseModel, Field, field_validator
from sqlalchemy import select
from sqlalchemy.orm import Session
from werkzeug.exceptions import BadRequest, abort
from werkzeug.exceptions import BadRequest
from controllers.console import console_ns
from controllers.console.app.wraps import get_app_model
@ -36,6 +39,130 @@ from services.enterprise.enterprise_service import EnterpriseService
from services.feature_service import FeatureService
ALLOW_CREATE_APP_MODES = ["chat", "agent-chat", "advanced-chat", "workflow", "completion"]
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class AppListQuery(BaseModel):
page: int = Field(default=1, ge=1, le=99999, description="Page number (1-99999)")
limit: int = Field(default=20, ge=1, le=100, description="Page size (1-100)")
mode: Literal["completion", "chat", "advanced-chat", "workflow", "agent-chat", "channel", "all"] = Field(
default="all", description="App mode filter"
)
name: str | None = Field(default=None, description="Filter by app name")
tag_ids: list[str] | None = Field(default=None, description="Comma-separated tag IDs")
is_created_by_me: bool | None = Field(default=None, description="Filter by creator")
@field_validator("tag_ids", mode="before")
@classmethod
def validate_tag_ids(cls, value: str | list[str] | None) -> list[str] | None:
if not value:
return None
if isinstance(value, str):
items = [item.strip() for item in value.split(",") if item.strip()]
elif isinstance(value, list):
items = [str(item).strip() for item in value if item and str(item).strip()]
else:
raise TypeError("Unsupported tag_ids type.")
if not items:
return None
try:
return [str(uuid.UUID(item)) for item in items]
except ValueError as exc:
raise ValueError("Invalid UUID format in tag_ids.") from exc
class CreateAppPayload(BaseModel):
name: str = Field(..., min_length=1, description="App name")
description: str | None = Field(default=None, description="App description (max 400 chars)")
mode: Literal["chat", "agent-chat", "advanced-chat", "workflow", "completion"] = Field(..., description="App mode")
icon_type: str | None = Field(default=None, description="Icon type")
icon: str | None = Field(default=None, description="Icon")
icon_background: str | None = Field(default=None, description="Icon background color")
@field_validator("description")
@classmethod
def validate_description(cls, value: str | None) -> str | None:
if value is None:
return value
return validate_description_length(value)
class UpdateAppPayload(BaseModel):
name: str = Field(..., min_length=1, description="App name")
description: str | None = Field(default=None, description="App description (max 400 chars)")
icon_type: str | None = Field(default=None, description="Icon type")
icon: str | None = Field(default=None, description="Icon")
icon_background: str | None = Field(default=None, description="Icon background color")
use_icon_as_answer_icon: bool | None = Field(default=None, description="Use icon as answer icon")
max_active_requests: int | None = Field(default=None, description="Maximum active requests")
@field_validator("description")
@classmethod
def validate_description(cls, value: str | None) -> str | None:
if value is None:
return value
return validate_description_length(value)
class CopyAppPayload(BaseModel):
name: str | None = Field(default=None, description="Name for the copied app")
description: str | None = Field(default=None, description="Description for the copied app")
icon_type: str | None = Field(default=None, description="Icon type")
icon: str | None = Field(default=None, description="Icon")
icon_background: str | None = Field(default=None, description="Icon background color")
@field_validator("description")
@classmethod
def validate_description(cls, value: str | None) -> str | None:
if value is None:
return value
return validate_description_length(value)
class AppExportQuery(BaseModel):
include_secret: bool = Field(default=False, description="Include secrets in export")
workflow_id: str | None = Field(default=None, description="Specific workflow ID to export")
class AppNamePayload(BaseModel):
name: str = Field(..., min_length=1, description="Name to check")
class AppIconPayload(BaseModel):
icon: str | None = Field(default=None, description="Icon data")
icon_background: str | None = Field(default=None, description="Icon background color")
class AppSiteStatusPayload(BaseModel):
enable_site: bool = Field(..., description="Enable or disable site")
class AppApiStatusPayload(BaseModel):
enable_api: bool = Field(..., description="Enable or disable API")
class AppTracePayload(BaseModel):
enabled: bool = Field(..., description="Enable or disable tracing")
tracing_provider: str = Field(..., description="Tracing provider")
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(AppListQuery)
reg(CreateAppPayload)
reg(UpdateAppPayload)
reg(CopyAppPayload)
reg(AppExportQuery)
reg(AppNamePayload)
reg(AppIconPayload)
reg(AppSiteStatusPayload)
reg(AppApiStatusPayload)
reg(AppTracePayload)
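Editor's note: the reg helper exists because flask-restx serves Swagger 2.0, where model references live under #/definitions, while pydantic defaults to JSON Schema's #/$defs. A sketch of what the ref_template changes (Icon/Payload are illustrative):

from pydantic import BaseModel


class Icon(BaseModel):
    url: str


class Payload(BaseModel):
    icon: Icon


print(Payload.model_json_schema()["properties"]["icon"])
# {'$ref': '#/$defs/Icon'} – pydantic's default, which Swagger 2.0 cannot resolve
print(Payload.model_json_schema(ref_template="#/definitions/{model}")["properties"]["icon"])
# {'$ref': '#/definitions/Icon'}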
# Register models for flask_restx to avoid dict type issues in Swagger
# Register base models first
@ -147,22 +274,7 @@ app_pagination_model = console_ns.model(
class AppListApi(Resource):
@console_ns.doc("list_apps")
@console_ns.doc(description="Get list of applications with pagination and filtering")
@console_ns.expect(
console_ns.parser()
.add_argument("page", type=int, location="args", help="Page number (1-99999)", default=1)
.add_argument("limit", type=int, location="args", help="Page size (1-100)", default=20)
.add_argument(
"mode",
type=str,
location="args",
choices=["completion", "chat", "advanced-chat", "workflow", "agent-chat", "channel", "all"],
default="all",
help="App mode filter",
)
.add_argument("name", type=str, location="args", help="Filter by app name")
.add_argument("tag_ids", type=str, location="args", help="Comma-separated tag IDs")
.add_argument("is_created_by_me", type=bool, location="args", help="Filter by creator")
)
@console_ns.expect(console_ns.models[AppListQuery.__name__])
@console_ns.response(200, "Success", app_pagination_model)
@setup_required
@login_required
@ -172,42 +284,12 @@ class AppListApi(Resource):
"""Get app list"""
current_user, current_tenant_id = current_account_with_tenant()
def uuid_list(value):
try:
return [str(uuid.UUID(v)) for v in value.split(",")]
except ValueError:
abort(400, message="Invalid UUID format in tag_ids.")
parser = (
reqparse.RequestParser()
.add_argument("page", type=inputs.int_range(1, 99999), required=False, default=1, location="args")
.add_argument("limit", type=inputs.int_range(1, 100), required=False, default=20, location="args")
.add_argument(
"mode",
type=str,
choices=[
"completion",
"chat",
"advanced-chat",
"workflow",
"agent-chat",
"channel",
"all",
],
default="all",
location="args",
required=False,
)
.add_argument("name", type=str, location="args", required=False)
.add_argument("tag_ids", type=uuid_list, location="args", required=False)
.add_argument("is_created_by_me", type=inputs.boolean, location="args", required=False)
)
args = parser.parse_args()
args = AppListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
args_dict = args.model_dump()
# get app list
app_service = AppService()
app_pagination = app_service.get_paginate_apps(current_user.id, current_tenant_id, args)
app_pagination = app_service.get_paginate_apps(current_user.id, current_tenant_id, args_dict)
if not app_pagination:
return {"data": [], "total": 0, "page": 1, "limit": 20, "has_more": False}
@ -242,10 +324,13 @@ class AppListApi(Resource):
NodeType.TRIGGER_PLUGIN,
}
for workflow in draft_workflows:
for _, node_data in workflow.walk_nodes():
if node_data.get("type") in trigger_node_types:
draft_trigger_app_ids.add(str(workflow.app_id))
break
try:
for _, node_data in workflow.walk_nodes():
if node_data.get("type") in trigger_node_types:
draft_trigger_app_ids.add(str(workflow.app_id))
break
except Exception:
continue
for app in app_pagination.items:
app.has_draft_trigger = str(app.id) in draft_trigger_app_ids
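Editor's note: the try/except added here trades completeness for resilience — one unparseable draft workflow no longer breaks the whole app list. A sketch of the pattern, assuming walk_nodes yields (node_id, node_data) pairs as the loop above implies:

def collect_trigger_app_ids(workflows, trigger_node_types):
    # Hypothetical helper mirroring the loop above; walk_nodes is assumed to
    # yield (node_id, node_data) pairs and may raise on corrupt graph JSON.
    found: set[str] = set()
    for workflow in workflows:
        try:
            for _, node_data in workflow.walk_nodes():
                if node_data.get("type") in trigger_node_types:
                    found.add(str(workflow.app_id))
                    break
        except Exception:
            continue  # skip the broken workflow instead of failing the request
    return found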
@ -254,19 +339,7 @@ class AppListApi(Resource):
@console_ns.doc("create_app")
@console_ns.doc(description="Create a new application")
@console_ns.expect(
console_ns.model(
"CreateAppRequest",
{
"name": fields.String(required=True, description="App name"),
"description": fields.String(description="App description (max 400 chars)"),
"mode": fields.String(required=True, enum=ALLOW_CREATE_APP_MODES, description="App mode"),
"icon_type": fields.String(description="Icon type"),
"icon": fields.String(description="Icon"),
"icon_background": fields.String(description="Icon background color"),
},
)
)
@console_ns.expect(console_ns.models[CreateAppPayload.__name__])
@console_ns.response(201, "App created successfully", app_detail_model)
@console_ns.response(403, "Insufficient permissions")
@console_ns.response(400, "Invalid request parameters")
@ -279,22 +352,10 @@ class AppListApi(Resource):
def post(self):
"""Create app"""
current_user, current_tenant_id = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("name", type=str, required=True, location="json")
.add_argument("description", type=validate_description_length, location="json")
.add_argument("mode", type=str, choices=ALLOW_CREATE_APP_MODES, location="json")
.add_argument("icon_type", type=str, location="json")
.add_argument("icon", type=str, location="json")
.add_argument("icon_background", type=str, location="json")
)
args = parser.parse_args()
if "mode" not in args or args["mode"] is None:
raise BadRequest("mode is required")
args = CreateAppPayload.model_validate(console_ns.payload)
app_service = AppService()
app = app_service.create_app(current_tenant_id, args, current_user)
app = app_service.create_app(current_tenant_id, args.model_dump(), current_user)
return app, 201
@ -326,20 +387,7 @@ class AppApi(Resource):
@console_ns.doc("update_app")
@console_ns.doc(description="Update application details")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.model(
"UpdateAppRequest",
{
"name": fields.String(required=True, description="App name"),
"description": fields.String(description="App description (max 400 chars)"),
"icon_type": fields.String(description="Icon type"),
"icon": fields.String(description="Icon"),
"icon_background": fields.String(description="Icon background color"),
"use_icon_as_answer_icon": fields.Boolean(description="Use icon as answer icon"),
"max_active_requests": fields.Integer(description="Maximum active requests"),
},
)
)
@console_ns.expect(console_ns.models[UpdateAppPayload.__name__])
@console_ns.response(200, "App updated successfully", app_detail_with_site_model)
@console_ns.response(403, "Insufficient permissions")
@console_ns.response(400, "Invalid request parameters")
@ -351,28 +399,18 @@ class AppApi(Resource):
@marshal_with(app_detail_with_site_model)
def put(self, app_model):
"""Update app"""
parser = (
reqparse.RequestParser()
.add_argument("name", type=str, required=True, nullable=False, location="json")
.add_argument("description", type=validate_description_length, location="json")
.add_argument("icon_type", type=str, location="json")
.add_argument("icon", type=str, location="json")
.add_argument("icon_background", type=str, location="json")
.add_argument("use_icon_as_answer_icon", type=bool, location="json")
.add_argument("max_active_requests", type=int, location="json")
)
args = parser.parse_args()
args = UpdateAppPayload.model_validate(console_ns.payload)
app_service = AppService()
args_dict: AppService.ArgsDict = {
"name": args["name"],
"description": args.get("description", ""),
"icon_type": args.get("icon_type", ""),
"icon": args.get("icon", ""),
"icon_background": args.get("icon_background", ""),
"use_icon_as_answer_icon": args.get("use_icon_as_answer_icon", False),
"max_active_requests": args.get("max_active_requests", 0),
"name": args.name,
"description": args.description or "",
"icon_type": args.icon_type or "",
"icon": args.icon or "",
"icon_background": args.icon_background or "",
"use_icon_as_answer_icon": args.use_icon_as_answer_icon or False,
"max_active_requests": args.max_active_requests or 0,
}
app_model = app_service.update_app(app_model, args_dict)
@ -401,18 +439,7 @@ class AppCopyApi(Resource):
@console_ns.doc("copy_app")
@console_ns.doc(description="Create a copy of an existing application")
@console_ns.doc(params={"app_id": "Application ID to copy"})
@console_ns.expect(
console_ns.model(
"CopyAppRequest",
{
"name": fields.String(description="Name for the copied app"),
"description": fields.String(description="Description for the copied app"),
"icon_type": fields.String(description="Icon type"),
"icon": fields.String(description="Icon"),
"icon_background": fields.String(description="Icon background color"),
},
)
)
@console_ns.expect(console_ns.models[CopyAppPayload.__name__])
@console_ns.response(201, "App copied successfully", app_detail_with_site_model)
@console_ns.response(403, "Insufficient permissions")
@setup_required
@ -426,15 +453,7 @@ class AppCopyApi(Resource):
# The role of the current user in the ta table must be admin, owner, or editor
current_user, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("name", type=str, location="json")
.add_argument("description", type=validate_description_length, location="json")
.add_argument("icon_type", type=str, location="json")
.add_argument("icon", type=str, location="json")
.add_argument("icon_background", type=str, location="json")
)
args = parser.parse_args()
args = CopyAppPayload.model_validate(console_ns.payload or {})
with Session(db.engine) as session:
import_service = AppDslService(session)
@ -443,11 +462,11 @@ class AppCopyApi(Resource):
account=current_user,
import_mode=ImportMode.YAML_CONTENT,
yaml_content=yaml_content,
name=args.get("name"),
description=args.get("description"),
icon_type=args.get("icon_type"),
icon=args.get("icon"),
icon_background=args.get("icon_background"),
name=args.name,
description=args.description,
icon_type=args.icon_type,
icon=args.icon,
icon_background=args.icon_background,
)
session.commit()
@ -462,11 +481,7 @@ class AppExportApi(Resource):
@console_ns.doc("export_app")
@console_ns.doc(description="Export application configuration as DSL")
@console_ns.doc(params={"app_id": "Application ID to export"})
@console_ns.expect(
console_ns.parser()
.add_argument("include_secret", type=bool, location="args", default=False, help="Include secrets in export")
.add_argument("workflow_id", type=str, location="args", help="Specific workflow ID to export")
)
@console_ns.expect(console_ns.models[AppExportQuery.__name__])
@console_ns.response(
200,
"App exported successfully",
@ -480,30 +495,23 @@ class AppExportApi(Resource):
@edit_permission_required
def get(self, app_model):
"""Export app"""
# Add include_secret params
parser = (
reqparse.RequestParser()
.add_argument("include_secret", type=inputs.boolean, default=False, location="args")
.add_argument("workflow_id", type=str, location="args")
)
args = parser.parse_args()
args = AppExportQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
return {
"data": AppDslService.export_dsl(
app_model=app_model, include_secret=args["include_secret"], workflow_id=args.get("workflow_id")
app_model=app_model,
include_secret=args.include_secret,
workflow_id=args.workflow_id,
)
}
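Editor's note: one behavioral nuance of validating query strings this way — include_secret reaches the model as a string, and pydantic's lax mode coerces the usual spellings to bool, roughly matching the old inputs.boolean. A quick check (ExportQuery mirrors AppExportQuery above):

from pydantic import BaseModel


class ExportQuery(BaseModel):
    include_secret: bool = False


print(ExportQuery.model_validate({"include_secret": "true"}).include_secret)  # True
print(ExportQuery.model_validate({"include_secret": "0"}).include_secret)     # False
print(ExportQuery.model_validate({}).include_secret)                          # False (default)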
parser = reqparse.RequestParser().add_argument("name", type=str, required=True, location="json", help="Name to check")
@console_ns.route("/apps/<uuid:app_id>/name")
class AppNameApi(Resource):
@console_ns.doc("check_app_name")
@console_ns.doc(description="Check if app name is available")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(parser)
@console_ns.expect(console_ns.models[AppNamePayload.__name__])
@console_ns.response(200, "Name availability checked")
@setup_required
@login_required
@ -512,10 +520,10 @@ class AppNameApi(Resource):
@marshal_with(app_detail_model)
@edit_permission_required
def post(self, app_model):
args = parser.parse_args()
args = AppNamePayload.model_validate(console_ns.payload)
app_service = AppService()
app_model = app_service.update_app_name(app_model, args["name"])
app_model = app_service.update_app_name(app_model, args.name)
return app_model
@ -525,16 +533,7 @@ class AppIconApi(Resource):
@console_ns.doc("update_app_icon")
@console_ns.doc(description="Update application icon")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.model(
"AppIconRequest",
{
"icon": fields.String(required=True, description="Icon data"),
"icon_type": fields.String(description="Icon type"),
"icon_background": fields.String(description="Icon background color"),
},
)
)
@console_ns.expect(console_ns.models[AppIconPayload.__name__])
@console_ns.response(200, "Icon updated successfully")
@console_ns.response(403, "Insufficient permissions")
@setup_required
@ -544,15 +543,10 @@ class AppIconApi(Resource):
@marshal_with(app_detail_model)
@edit_permission_required
def post(self, app_model):
parser = (
reqparse.RequestParser()
.add_argument("icon", type=str, location="json")
.add_argument("icon_background", type=str, location="json")
)
args = parser.parse_args()
args = AppIconPayload.model_validate(console_ns.payload or {})
app_service = AppService()
app_model = app_service.update_app_icon(app_model, args.get("icon") or "", args.get("icon_background") or "")
app_model = app_service.update_app_icon(app_model, args.icon or "", args.icon_background or "")
return app_model
@ -562,11 +556,7 @@ class AppSiteStatus(Resource):
@console_ns.doc("update_app_site_status")
@console_ns.doc(description="Enable or disable app site")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.model(
"AppSiteStatusRequest", {"enable_site": fields.Boolean(required=True, description="Enable or disable site")}
)
)
@console_ns.expect(console_ns.models[AppSiteStatusPayload.__name__])
@console_ns.response(200, "Site status updated successfully", app_detail_model)
@console_ns.response(403, "Insufficient permissions")
@setup_required
@ -576,11 +566,10 @@ class AppSiteStatus(Resource):
@marshal_with(app_detail_model)
@edit_permission_required
def post(self, app_model):
parser = reqparse.RequestParser().add_argument("enable_site", type=bool, required=True, location="json")
args = parser.parse_args()
args = AppSiteStatusPayload.model_validate(console_ns.payload)
app_service = AppService()
app_model = app_service.update_app_site_status(app_model, args["enable_site"])
app_model = app_service.update_app_site_status(app_model, args.enable_site)
return app_model
@ -590,11 +579,7 @@ class AppApiStatus(Resource):
@console_ns.doc("update_app_api_status")
@console_ns.doc(description="Enable or disable app API")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.model(
"AppApiStatusRequest", {"enable_api": fields.Boolean(required=True, description="Enable or disable API")}
)
)
@console_ns.expect(console_ns.models[AppApiStatusPayload.__name__])
@console_ns.response(200, "API status updated successfully", app_detail_model)
@console_ns.response(403, "Insufficient permissions")
@setup_required
@ -604,11 +589,10 @@ class AppApiStatus(Resource):
@get_app_model
@marshal_with(app_detail_model)
def post(self, app_model):
parser = reqparse.RequestParser().add_argument("enable_api", type=bool, required=True, location="json")
args = parser.parse_args()
args = AppApiStatusPayload.model_validate(console_ns.payload)
app_service = AppService()
app_model = app_service.update_app_api_status(app_model, args["enable_api"])
app_model = app_service.update_app_api_status(app_model, args.enable_api)
return app_model
@ -631,15 +615,7 @@ class AppTraceApi(Resource):
@console_ns.doc("update_app_trace")
@console_ns.doc(description="Update app tracing configuration")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.model(
"AppTraceRequest",
{
"enabled": fields.Boolean(required=True, description="Enable or disable tracing"),
"tracing_provider": fields.String(required=True, description="Tracing provider"),
},
)
)
@console_ns.expect(console_ns.models[AppTracePayload.__name__])
@console_ns.response(200, "Trace configuration updated successfully")
@console_ns.response(403, "Insufficient permissions")
@setup_required
@ -648,17 +624,12 @@ class AppTraceApi(Resource):
@edit_permission_required
def post(self, app_id):
# add app trace
parser = (
reqparse.RequestParser()
.add_argument("enabled", type=bool, required=True, location="json")
.add_argument("tracing_provider", type=str, required=True, location="json")
)
args = parser.parse_args()
args = AppTracePayload.model_validate(console_ns.payload)
OpsTraceManager.update_app_tracing_config(
app_id=app_id,
enabled=args["enabled"],
tracing_provider=args["tracing_provider"],
enabled=args.enabled,
tracing_provider=args.tracing_provider,
)
return {"result": "success"}


@ -1,7 +1,9 @@
import logging
from typing import Any, Literal
from flask import request
from flask_restx import Resource, fields, reqparse
from flask_restx import Resource
from pydantic import BaseModel, Field, field_validator
from werkzeug.exceptions import InternalServerError, NotFound
import services
@ -35,6 +37,41 @@ from services.app_task_service import AppTaskService
from services.errors.llm import InvokeRateLimitError
logger = logging.getLogger(__name__)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class BaseMessagePayload(BaseModel):
inputs: dict[str, Any]
model_config_data: dict[str, Any] = Field(..., alias="model_config")
files: list[Any] | None = Field(default=None, description="Uploaded files")
response_mode: Literal["blocking", "streaming"] = Field(default="blocking", description="Response mode")
retriever_from: str = Field(default="dev", description="Retriever source")
class CompletionMessagePayload(BaseMessagePayload):
query: str = Field(default="", description="Query text")
class ChatMessagePayload(BaseMessagePayload):
query: str = Field(..., description="User query")
conversation_id: str | None = Field(default=None, description="Conversation ID")
parent_message_id: str | None = Field(default=None, description="Parent message ID")
@field_validator("conversation_id", "parent_message_id")
@classmethod
def validate_uuid(cls, value: str | None) -> str | None:
if value is None:
return value
return uuid_value(value)
console_ns.schema_model(
CompletionMessagePayload.__name__,
CompletionMessagePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
ChatMessagePayload.__name__, ChatMessagePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
# define completion message api for user
@ -43,19 +80,7 @@ class CompletionMessageApi(Resource):
@console_ns.doc("create_completion_message")
@console_ns.doc(description="Generate completion message for debugging")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.model(
"CompletionMessageRequest",
{
"inputs": fields.Raw(required=True, description="Input variables"),
"query": fields.String(description="Query text", default=""),
"files": fields.List(fields.Raw(), description="Uploaded files"),
"model_config": fields.Raw(required=True, description="Model configuration"),
"response_mode": fields.String(enum=["blocking", "streaming"], description="Response mode"),
"retriever_from": fields.String(default="dev", description="Retriever source"),
},
)
)
@console_ns.expect(console_ns.models[CompletionMessagePayload.__name__])
@console_ns.response(200, "Completion generated successfully")
@console_ns.response(400, "Invalid request parameters")
@console_ns.response(404, "App not found")
@ -64,18 +89,10 @@ class CompletionMessageApi(Resource):
@account_initialization_required
@get_app_model(mode=AppMode.COMPLETION)
def post(self, app_model):
parser = (
reqparse.RequestParser()
.add_argument("inputs", type=dict, required=True, location="json")
.add_argument("query", type=str, location="json", default="")
.add_argument("files", type=list, required=False, location="json")
.add_argument("model_config", type=dict, required=True, location="json")
.add_argument("response_mode", type=str, choices=["blocking", "streaming"], location="json")
.add_argument("retriever_from", type=str, required=False, default="dev", location="json")
)
args = parser.parse_args()
args_model = CompletionMessagePayload.model_validate(console_ns.payload)
args = args_model.model_dump(exclude_none=True, by_alias=True)
streaming = args["response_mode"] != "blocking"
streaming = args_model.response_mode != "blocking"
args["auto_generate_name"] = False
try:
@ -137,21 +154,7 @@ class ChatMessageApi(Resource):
@console_ns.doc("create_chat_message")
@console_ns.doc(description="Generate chat message for debugging")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.model(
"ChatMessageRequest",
{
"inputs": fields.Raw(required=True, description="Input variables"),
"query": fields.String(required=True, description="User query"),
"files": fields.List(fields.Raw(), description="Uploaded files"),
"model_config": fields.Raw(required=True, description="Model configuration"),
"conversation_id": fields.String(description="Conversation ID"),
"parent_message_id": fields.String(description="Parent message ID"),
"response_mode": fields.String(enum=["blocking", "streaming"], description="Response mode"),
"retriever_from": fields.String(default="dev", description="Retriever source"),
},
)
)
@console_ns.expect(console_ns.models[ChatMessagePayload.__name__])
@console_ns.response(200, "Chat message generated successfully")
@console_ns.response(400, "Invalid request parameters")
@console_ns.response(404, "App or conversation not found")
@ -161,20 +164,10 @@ class ChatMessageApi(Resource):
@get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT])
@edit_permission_required
def post(self, app_model):
parser = (
reqparse.RequestParser()
.add_argument("inputs", type=dict, required=True, location="json")
.add_argument("query", type=str, required=True, location="json")
.add_argument("files", type=list, required=False, location="json")
.add_argument("model_config", type=dict, required=True, location="json")
.add_argument("conversation_id", type=uuid_value, location="json")
.add_argument("parent_message_id", type=uuid_value, required=False, location="json")
.add_argument("response_mode", type=str, choices=["blocking", "streaming"], location="json")
.add_argument("retriever_from", type=str, required=False, default="dev", location="json")
)
args = parser.parse_args()
args_model = ChatMessagePayload.model_validate(console_ns.payload)
args = args_model.model_dump(exclude_none=True, by_alias=True)
streaming = args["response_mode"] != "blocking"
streaming = args_model.response_mode != "blocking"
args["auto_generate_name"] = False
external_trace_id = get_external_trace_id(request)
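Editor's note: a detail worth calling out in these payload models — pydantic v2 reserves the attribute name model_config for class configuration, so the incoming "model_config" JSON key must be mapped onto a differently named attribute via an alias, then dumped back by_alias for downstream dict consumers. Minimal sketch:

from typing import Any

from pydantic import BaseModel, Field


class MsgPayload(BaseModel):
    # 'model_config' is reserved by pydantic v2, hence the aliased field name.
    model_config_data: dict[str, Any] = Field(..., alias="model_config")


p = MsgPayload.model_validate({"model_config": {"provider": "openai"}})
print(p.model_config_data)          # {'provider': 'openai'}
print(p.model_dump(by_alias=True))  # {'model_config': {'provider': 'openai'}}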


@ -1,7 +1,9 @@
from typing import Literal
import sqlalchemy as sa
from flask import abort
from flask_restx import Resource, fields, marshal_with, reqparse
from flask_restx.inputs import int_range
from flask import abort, request
from flask_restx import Resource, fields, marshal_with
from pydantic import BaseModel, Field, field_validator
from sqlalchemy import func, or_
from sqlalchemy.orm import joinedload
from werkzeug.exceptions import NotFound
@ -14,13 +16,53 @@ from extensions.ext_database import db
from fields.conversation_fields import MessageTextField
from fields.raws import FilesContainedField
from libs.datetime_utils import naive_utc_now, parse_time_range
from libs.helper import DatetimeString, TimestampField
from libs.helper import TimestampField
from libs.login import current_account_with_tenant, login_required
from models import Conversation, EndUser, Message, MessageAnnotation
from models.model import AppMode
from services.conversation_service import ConversationService
from services.errors.conversation import ConversationNotExistsError
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class BaseConversationQuery(BaseModel):
keyword: str | None = Field(default=None, description="Search keyword")
start: str | None = Field(default=None, description="Start date (YYYY-MM-DD HH:MM)")
end: str | None = Field(default=None, description="End date (YYYY-MM-DD HH:MM)")
annotation_status: Literal["annotated", "not_annotated", "all"] = Field(
default="all", description="Annotation status filter"
)
page: int = Field(default=1, ge=1, le=99999, description="Page number")
limit: int = Field(default=20, ge=1, le=100, description="Page size (1-100)")
@field_validator("start", "end", mode="before")
@classmethod
def blank_to_none(cls, value: str | None) -> str | None:
if value == "":
return None
return value
class CompletionConversationQuery(BaseConversationQuery):
pass
class ChatConversationQuery(BaseConversationQuery):
sort_by: Literal["created_at", "-created_at", "updated_at", "-updated_at"] = Field(
default="-updated_at", description="Sort field and direction"
)
console_ns.schema_model(
CompletionConversationQuery.__name__,
CompletionConversationQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
ChatConversationQuery.__name__,
ChatConversationQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
# Register models for flask_restx to avoid dict type issues in Swagger
# Register in dependency order: base models first, then dependent models
@ -283,22 +325,7 @@ class CompletionConversationApi(Resource):
@console_ns.doc("list_completion_conversations")
@console_ns.doc(description="Get completion conversations with pagination and filtering")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.parser()
.add_argument("keyword", type=str, location="args", help="Search keyword")
.add_argument("start", type=str, location="args", help="Start date (YYYY-MM-DD HH:MM)")
.add_argument("end", type=str, location="args", help="End date (YYYY-MM-DD HH:MM)")
.add_argument(
"annotation_status",
type=str,
location="args",
choices=["annotated", "not_annotated", "all"],
default="all",
help="Annotation status filter",
)
.add_argument("page", type=int, location="args", default=1, help="Page number")
.add_argument("limit", type=int, location="args", default=20, help="Page size (1-100)")
)
@console_ns.expect(console_ns.models[CompletionConversationQuery.__name__])
@console_ns.response(200, "Success", conversation_pagination_model)
@console_ns.response(403, "Insufficient permissions")
@setup_required
@ -309,32 +336,17 @@ class CompletionConversationApi(Resource):
@edit_permission_required
def get(self, app_model):
current_user, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("keyword", type=str, location="args")
.add_argument("start", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
.add_argument("end", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
.add_argument(
"annotation_status",
type=str,
choices=["annotated", "not_annotated", "all"],
default="all",
location="args",
)
.add_argument("page", type=int_range(1, 99999), default=1, location="args")
.add_argument("limit", type=int_range(1, 100), default=20, location="args")
)
args = parser.parse_args()
args = CompletionConversationQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
query = sa.select(Conversation).where(
Conversation.app_id == app_model.id, Conversation.mode == "completion", Conversation.is_deleted.is_(False)
)
if args["keyword"]:
if args.keyword:
query = query.join(Message, Message.conversation_id == Conversation.id).where(
or_(
Message.query.ilike(f"%{args['keyword']}%"),
Message.answer.ilike(f"%{args['keyword']}%"),
Message.query.ilike(f"%{args.keyword}%"),
Message.answer.ilike(f"%{args.keyword}%"),
)
)
@ -342,7 +354,7 @@ class CompletionConversationApi(Resource):
assert account.timezone is not None
try:
start_datetime_utc, end_datetime_utc = parse_time_range(args["start"], args["end"], account.timezone)
start_datetime_utc, end_datetime_utc = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
@ -354,11 +366,11 @@ class CompletionConversationApi(Resource):
query = query.where(Conversation.created_at < end_datetime_utc)
# FIXME, the type ignore in this file
if args["annotation_status"] == "annotated":
if args.annotation_status == "annotated":
query = query.options(joinedload(Conversation.message_annotations)).join( # type: ignore
MessageAnnotation, MessageAnnotation.conversation_id == Conversation.id
)
elif args["annotation_status"] == "not_annotated":
elif args.annotation_status == "not_annotated":
query = (
query.outerjoin(MessageAnnotation, MessageAnnotation.conversation_id == Conversation.id)
.group_by(Conversation.id)
@ -367,7 +379,7 @@ class CompletionConversationApi(Resource):
query = query.order_by(Conversation.created_at.desc())
conversations = db.paginate(query, page=args["page"], per_page=args["limit"], error_out=False)
conversations = db.paginate(query, page=args.page, per_page=args.limit, error_out=False)
return conversations
@ -419,31 +431,7 @@ class ChatConversationApi(Resource):
@console_ns.doc("list_chat_conversations")
@console_ns.doc(description="Get chat conversations with pagination, filtering and summary")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.parser()
.add_argument("keyword", type=str, location="args", help="Search keyword")
.add_argument("start", type=str, location="args", help="Start date (YYYY-MM-DD HH:MM)")
.add_argument("end", type=str, location="args", help="End date (YYYY-MM-DD HH:MM)")
.add_argument(
"annotation_status",
type=str,
location="args",
choices=["annotated", "not_annotated", "all"],
default="all",
help="Annotation status filter",
)
.add_argument("message_count_gte", type=int, location="args", help="Minimum message count")
.add_argument("page", type=int, location="args", default=1, help="Page number")
.add_argument("limit", type=int, location="args", default=20, help="Page size (1-100)")
.add_argument(
"sort_by",
type=str,
location="args",
choices=["created_at", "-created_at", "updated_at", "-updated_at"],
default="-updated_at",
help="Sort field and direction",
)
)
@console_ns.expect(console_ns.models[ChatConversationQuery.__name__])
@console_ns.response(200, "Success", conversation_with_summary_pagination_model)
@console_ns.response(403, "Insufficient permissions")
@setup_required
@ -454,31 +442,7 @@ class ChatConversationApi(Resource):
@edit_permission_required
def get(self, app_model):
current_user, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("keyword", type=str, location="args")
.add_argument("start", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
.add_argument("end", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
.add_argument(
"annotation_status",
type=str,
choices=["annotated", "not_annotated", "all"],
default="all",
location="args",
)
.add_argument("message_count_gte", type=int_range(1, 99999), required=False, location="args")
.add_argument("page", type=int_range(1, 99999), required=False, default=1, location="args")
.add_argument("limit", type=int_range(1, 100), required=False, default=20, location="args")
.add_argument(
"sort_by",
type=str,
choices=["created_at", "-created_at", "updated_at", "-updated_at"],
required=False,
default="-updated_at",
location="args",
)
)
args = parser.parse_args()
args = ChatConversationQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
subquery = (
db.session.query(
@ -490,8 +454,8 @@ class ChatConversationApi(Resource):
query = sa.select(Conversation).where(Conversation.app_id == app_model.id, Conversation.is_deleted.is_(False))
if args["keyword"]:
keyword_filter = f"%{args['keyword']}%"
if args.keyword:
keyword_filter = f"%{args.keyword}%"
query = (
query.join(
Message,
@ -514,12 +478,12 @@ class ChatConversationApi(Resource):
assert account.timezone is not None
try:
start_datetime_utc, end_datetime_utc = parse_time_range(args["start"], args["end"], account.timezone)
start_datetime_utc, end_datetime_utc = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
if start_datetime_utc:
match args["sort_by"]:
match args.sort_by:
case "updated_at" | "-updated_at":
query = query.where(Conversation.updated_at >= start_datetime_utc)
case "created_at" | "-created_at" | _:
@ -527,35 +491,27 @@ class ChatConversationApi(Resource):
if end_datetime_utc:
end_datetime_utc = end_datetime_utc.replace(second=59)
match args["sort_by"]:
match args.sort_by:
case "updated_at" | "-updated_at":
query = query.where(Conversation.updated_at <= end_datetime_utc)
case "created_at" | "-created_at" | _:
query = query.where(Conversation.created_at <= end_datetime_utc)
if args["annotation_status"] == "annotated":
if args.annotation_status == "annotated":
query = query.options(joinedload(Conversation.message_annotations)).join( # type: ignore
MessageAnnotation, MessageAnnotation.conversation_id == Conversation.id
)
elif args["annotation_status"] == "not_annotated":
elif args.annotation_status == "not_annotated":
query = (
query.outerjoin(MessageAnnotation, MessageAnnotation.conversation_id == Conversation.id)
.group_by(Conversation.id)
.having(func.count(MessageAnnotation.id) == 0)
)
if args["message_count_gte"] and args["message_count_gte"] >= 1:
query = (
query.options(joinedload(Conversation.messages)) # type: ignore
.join(Message, Message.conversation_id == Conversation.id)
.group_by(Conversation.id)
.having(func.count(Message.id) >= args["message_count_gte"])
)
if app_model.mode == AppMode.ADVANCED_CHAT:
query = query.where(Conversation.invoke_from != InvokeFrom.DEBUGGER)
match args["sort_by"]:
match args.sort_by:
case "created_at":
query = query.order_by(Conversation.created_at.asc())
case "-created_at":
@ -567,7 +523,7 @@ class ChatConversationApi(Resource):
case _:
query = query.order_by(Conversation.created_at.desc())
conversations = db.paginate(query, page=args["page"], per_page=args["limit"], error_out=False)
conversations = db.paginate(query, page=args.page, per_page=args.limit, error_out=False)
return conversations
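Editor's note: two conversions in this file are easy to miss — reqparse choices= becomes a Literal type, and the mode="before" validator maps the empty strings that HTML query params produce to None. A compact sketch of both (Query is illustrative):

from typing import Literal

from pydantic import BaseModel, ValidationError, field_validator


class Query(BaseModel):
    sort_by: Literal["created_at", "-created_at", "updated_at", "-updated_at"] = "-updated_at"
    start: str | None = None

    @field_validator("start", mode="before")
    @classmethod
    def blank_to_none(cls, value):
        return None if value == "" else value  # '?start=' arrives as '', not null


print(Query.model_validate({"start": ""}).start)  # None
try:
    Query.model_validate({"sort_by": "name"})
except ValidationError as exc:
    print(exc.errors()[0]["type"])  # 'literal_error' – the old 'choices' rejection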


@ -1,4 +1,6 @@
from flask_restx import Resource, fields, marshal_with, reqparse
from flask import request
from flask_restx import Resource, fields, marshal_with
from pydantic import BaseModel, Field
from sqlalchemy import select
from sqlalchemy.orm import Session
@ -14,6 +16,18 @@ from libs.login import login_required
from models import ConversationVariable
from models.model import AppMode
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class ConversationVariablesQuery(BaseModel):
conversation_id: str = Field(..., description="Conversation ID to filter variables")
console_ns.schema_model(
ConversationVariablesQuery.__name__,
ConversationVariablesQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
# Register models for flask_restx to avoid dict type issues in Swagger
# Register base model first
conversation_variable_model = console_ns.model("ConversationVariable", conversation_variable_fields)
@ -33,11 +47,7 @@ class ConversationVariablesApi(Resource):
@console_ns.doc("get_conversation_variables")
@console_ns.doc(description="Get conversation variables for an application")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.parser().add_argument(
"conversation_id", type=str, location="args", help="Conversation ID to filter variables"
)
)
@console_ns.expect(console_ns.models[ConversationVariablesQuery.__name__])
@console_ns.response(200, "Conversation variables retrieved successfully", paginated_conversation_variable_model)
@setup_required
@login_required
@ -45,18 +55,14 @@ class ConversationVariablesApi(Resource):
@get_app_model(mode=AppMode.ADVANCED_CHAT)
@marshal_with(paginated_conversation_variable_model)
def get(self, app_model):
parser = reqparse.RequestParser().add_argument("conversation_id", type=str, location="args")
args = parser.parse_args()
args = ConversationVariablesQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
stmt = (
select(ConversationVariable)
.where(ConversationVariable.app_id == app_model.id)
.order_by(ConversationVariable.created_at)
)
if args["conversation_id"]:
stmt = stmt.where(ConversationVariable.conversation_id == args["conversation_id"])
else:
raise ValueError("conversation_id is required")
stmt = stmt.where(ConversationVariable.conversation_id == args.conversation_id)
# NOTE: This is a temporary solution to avoid performance issues.
page = 1
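ConversationVariablesQuery declares conversation_id as required via Field(...), which subsumes the deleted else-branch that raised ValueError by hand: a missing parameter now fails during validation, before the query is built. A small sketch of that behavior, assuming Pydantic v2; note that Field(...) enforces presence, not non-emptiness:

    from pydantic import BaseModel, Field, ValidationError

    class ConversationVariablesQuery(BaseModel):
        conversation_id: str = Field(..., description="Conversation ID to filter variables")

    # A missing key now fails up front, replacing the deleted else-branch.
    try:
        ConversationVariablesQuery.model_validate({})
    except ValidationError as e:
        print(e.errors()[0]["type"])  # "missing"

    # Caveat: presence is checked, not non-emptiness, so a request like
    # ?conversation_id= (an empty string) would still validate here.
    print(ConversationVariablesQuery.model_validate({"conversation_id": ""}))  # conversation_id=''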

View File

@ -1,6 +1,8 @@
from collections.abc import Sequence
from typing import Any
from flask_restx import Resource, fields, reqparse
from flask_restx import Resource
from pydantic import BaseModel, Field
from controllers.console import console_ns
from controllers.console.app.error import (
@ -21,21 +23,54 @@ from libs.login import current_account_with_tenant, login_required
from models import App
from services.workflow_service import WorkflowService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class RuleGeneratePayload(BaseModel):
instruction: str = Field(..., description="Rule generation instruction")
model_config_data: dict[str, Any] = Field(..., alias="model_config", description="Model configuration")
no_variable: bool = Field(default=False, description="Whether to exclude variables")
class RuleCodeGeneratePayload(RuleGeneratePayload):
code_language: str = Field(default="javascript", description="Programming language for code generation")
class RuleStructuredOutputPayload(BaseModel):
instruction: str = Field(..., description="Structured output generation instruction")
model_config_data: dict[str, Any] = Field(..., alias="model_config", description="Model configuration")
class InstructionGeneratePayload(BaseModel):
flow_id: str = Field(..., description="Workflow/Flow ID")
node_id: str = Field(default="", description="Node ID for workflow context")
current: str = Field(default="", description="Current instruction text")
language: str = Field(default="javascript", description="Programming language (javascript/python)")
instruction: str = Field(..., description="Instruction for generation")
model_config_data: dict[str, Any] = Field(..., alias="model_config", description="Model configuration")
ideal_output: str = Field(default="", description="Expected ideal output")
class InstructionTemplatePayload(BaseModel):
type: str = Field(..., description="Instruction template type")
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(RuleGeneratePayload)
reg(RuleCodeGeneratePayload)
reg(RuleStructuredOutputPayload)
reg(InstructionGeneratePayload)
reg(InstructionTemplatePayload)
@console_ns.route("/rule-generate")
class RuleGenerateApi(Resource):
@console_ns.doc("generate_rule_config")
@console_ns.doc(description="Generate rule configuration using LLM")
@console_ns.expect(
console_ns.model(
"RuleGenerateRequest",
{
"instruction": fields.String(required=True, description="Rule generation instruction"),
"model_config": fields.Raw(required=True, description="Model configuration"),
"no_variable": fields.Boolean(required=True, default=False, description="Whether to exclude variables"),
},
)
)
@console_ns.expect(console_ns.models[RuleGeneratePayload.__name__])
@console_ns.response(200, "Rule configuration generated successfully")
@console_ns.response(400, "Invalid request parameters")
@console_ns.response(402, "Provider quota exceeded")
@ -43,21 +78,15 @@ class RuleGenerateApi(Resource):
@login_required
@account_initialization_required
def post(self):
parser = (
reqparse.RequestParser()
.add_argument("instruction", type=str, required=True, nullable=False, location="json")
.add_argument("model_config", type=dict, required=True, nullable=False, location="json")
.add_argument("no_variable", type=bool, required=True, default=False, location="json")
)
args = parser.parse_args()
args = RuleGeneratePayload.model_validate(console_ns.payload)
_, current_tenant_id = current_account_with_tenant()
try:
rules = LLMGenerator.generate_rule_config(
tenant_id=current_tenant_id,
instruction=args["instruction"],
model_config=args["model_config"],
no_variable=args["no_variable"],
instruction=args.instruction,
model_config=args.model_config_data,
no_variable=args.no_variable,
)
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)
@ -75,19 +104,7 @@ class RuleGenerateApi(Resource):
class RuleCodeGenerateApi(Resource):
@console_ns.doc("generate_rule_code")
@console_ns.doc(description="Generate code rules using LLM")
@console_ns.expect(
console_ns.model(
"RuleCodeGenerateRequest",
{
"instruction": fields.String(required=True, description="Code generation instruction"),
"model_config": fields.Raw(required=True, description="Model configuration"),
"no_variable": fields.Boolean(required=True, default=False, description="Whether to exclude variables"),
"code_language": fields.String(
default="javascript", description="Programming language for code generation"
),
},
)
)
@console_ns.expect(console_ns.models[RuleCodeGeneratePayload.__name__])
@console_ns.response(200, "Code rules generated successfully")
@console_ns.response(400, "Invalid request parameters")
@console_ns.response(402, "Provider quota exceeded")
@ -95,22 +112,15 @@ class RuleCodeGenerateApi(Resource):
@login_required
@account_initialization_required
def post(self):
parser = (
reqparse.RequestParser()
.add_argument("instruction", type=str, required=True, nullable=False, location="json")
.add_argument("model_config", type=dict, required=True, nullable=False, location="json")
.add_argument("no_variable", type=bool, required=True, default=False, location="json")
.add_argument("code_language", type=str, required=False, default="javascript", location="json")
)
args = parser.parse_args()
args = RuleCodeGeneratePayload.model_validate(console_ns.payload)
_, current_tenant_id = current_account_with_tenant()
try:
code_result = LLMGenerator.generate_code(
tenant_id=current_tenant_id,
instruction=args["instruction"],
model_config=args["model_config"],
code_language=args["code_language"],
instruction=args.instruction,
model_config=args.model_config_data,
code_language=args.code_language,
)
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)
@ -128,15 +138,7 @@ class RuleCodeGenerateApi(Resource):
class RuleStructuredOutputGenerateApi(Resource):
@console_ns.doc("generate_structured_output")
@console_ns.doc(description="Generate structured output rules using LLM")
@console_ns.expect(
console_ns.model(
"StructuredOutputGenerateRequest",
{
"instruction": fields.String(required=True, description="Structured output generation instruction"),
"model_config": fields.Raw(required=True, description="Model configuration"),
},
)
)
@console_ns.expect(console_ns.models[RuleStructuredOutputPayload.__name__])
@console_ns.response(200, "Structured output generated successfully")
@console_ns.response(400, "Invalid request parameters")
@console_ns.response(402, "Provider quota exceeded")
@ -144,19 +146,14 @@ class RuleStructuredOutputGenerateApi(Resource):
@login_required
@account_initialization_required
def post(self):
parser = (
reqparse.RequestParser()
.add_argument("instruction", type=str, required=True, nullable=False, location="json")
.add_argument("model_config", type=dict, required=True, nullable=False, location="json")
)
args = parser.parse_args()
args = RuleStructuredOutputPayload.model_validate(console_ns.payload)
_, current_tenant_id = current_account_with_tenant()
try:
structured_output = LLMGenerator.generate_structured_output(
tenant_id=current_tenant_id,
instruction=args["instruction"],
model_config=args["model_config"],
instruction=args.instruction,
model_config=args.model_config_data,
)
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)
@ -174,20 +171,7 @@ class RuleStructuredOutputGenerateApi(Resource):
class InstructionGenerateApi(Resource):
@console_ns.doc("generate_instruction")
@console_ns.doc(description="Generate instruction for workflow nodes or general use")
@console_ns.expect(
console_ns.model(
"InstructionGenerateRequest",
{
"flow_id": fields.String(required=True, description="Workflow/Flow ID"),
"node_id": fields.String(description="Node ID for workflow context"),
"current": fields.String(description="Current instruction text"),
"language": fields.String(default="javascript", description="Programming language (javascript/python)"),
"instruction": fields.String(required=True, description="Instruction for generation"),
"model_config": fields.Raw(required=True, description="Model configuration"),
"ideal_output": fields.String(description="Expected ideal output"),
},
)
)
@console_ns.expect(console_ns.models[InstructionGeneratePayload.__name__])
@console_ns.response(200, "Instruction generated successfully")
@console_ns.response(400, "Invalid request parameters or flow/workflow not found")
@console_ns.response(402, "Provider quota exceeded")
@ -195,79 +179,69 @@ class InstructionGenerateApi(Resource):
@login_required
@account_initialization_required
def post(self):
parser = (
reqparse.RequestParser()
.add_argument("flow_id", type=str, required=True, default="", location="json")
.add_argument("node_id", type=str, required=False, default="", location="json")
.add_argument("current", type=str, required=False, default="", location="json")
.add_argument("language", type=str, required=False, default="javascript", location="json")
.add_argument("instruction", type=str, required=True, nullable=False, location="json")
.add_argument("model_config", type=dict, required=True, nullable=False, location="json")
.add_argument("ideal_output", type=str, required=False, default="", location="json")
)
args = parser.parse_args()
args = InstructionGeneratePayload.model_validate(console_ns.payload)
_, current_tenant_id = current_account_with_tenant()
providers: list[type[CodeNodeProvider]] = [Python3CodeProvider, JavascriptCodeProvider]
code_provider: type[CodeNodeProvider] | None = next(
(p for p in providers if p.is_accept_language(args["language"])), None
(p for p in providers if p.is_accept_language(args.language)), None
)
code_template = code_provider.get_default_code() if code_provider else ""
try:
# Generate from nothing for a workflow node
if (args["current"] == code_template or args["current"] == "") and args["node_id"] != "":
app = db.session.query(App).where(App.id == args["flow_id"]).first()
if (args.current in (code_template, "")) and args.node_id != "":
app = db.session.query(App).where(App.id == args.flow_id).first()
if not app:
return {"error": f"app {args['flow_id']} not found"}, 400
return {"error": f"app {args.flow_id} not found"}, 400
workflow = WorkflowService().get_draft_workflow(app_model=app)
if not workflow:
return {"error": f"workflow {args['flow_id']} not found"}, 400
return {"error": f"workflow {args.flow_id} not found"}, 400
nodes: Sequence = workflow.graph_dict["nodes"]
node = [node for node in nodes if node["id"] == args["node_id"]]
node = [node for node in nodes if node["id"] == args.node_id]
if len(node) == 0:
return {"error": f"node {args['node_id']} not found"}, 400
return {"error": f"node {args.node_id} not found"}, 400
node_type = node[0]["data"]["type"]
match node_type:
case "llm":
return LLMGenerator.generate_rule_config(
current_tenant_id,
instruction=args["instruction"],
model_config=args["model_config"],
instruction=args.instruction,
model_config=args.model_config_data,
no_variable=True,
)
case "agent":
return LLMGenerator.generate_rule_config(
current_tenant_id,
instruction=args["instruction"],
model_config=args["model_config"],
instruction=args.instruction,
model_config=args.model_config_data,
no_variable=True,
)
case "code":
return LLMGenerator.generate_code(
tenant_id=current_tenant_id,
instruction=args["instruction"],
model_config=args["model_config"],
code_language=args["language"],
instruction=args.instruction,
model_config=args.model_config_data,
code_language=args.language,
)
case _:
return {"error": f"invalid node type: {node_type}"}
if args["node_id"] == "" and args["current"] != "": # For legacy app without a workflow
if args.node_id == "" and args.current != "": # For legacy app without a workflow
return LLMGenerator.instruction_modify_legacy(
tenant_id=current_tenant_id,
flow_id=args["flow_id"],
current=args["current"],
instruction=args["instruction"],
model_config=args["model_config"],
ideal_output=args["ideal_output"],
flow_id=args.flow_id,
current=args.current,
instruction=args.instruction,
model_config=args.model_config_data,
ideal_output=args.ideal_output,
)
if args["node_id"] != "" and args["current"] != "": # For workflow node
if args.node_id != "" and args.current != "": # For workflow node
return LLMGenerator.instruction_modify_workflow(
tenant_id=current_tenant_id,
flow_id=args["flow_id"],
node_id=args["node_id"],
current=args["current"],
instruction=args["instruction"],
model_config=args["model_config"],
ideal_output=args["ideal_output"],
flow_id=args.flow_id,
node_id=args.node_id,
current=args.current,
instruction=args.instruction,
model_config=args.model_config_data,
ideal_output=args.ideal_output,
workflow_service=WorkflowService(),
)
return {"error": "incompatible parameters"}, 400
@ -285,24 +259,15 @@ class InstructionGenerateApi(Resource):
class InstructionGenerationTemplateApi(Resource):
@console_ns.doc("get_instruction_template")
@console_ns.doc(description="Get instruction generation template")
@console_ns.expect(
console_ns.model(
"InstructionTemplateRequest",
{
"instruction": fields.String(required=True, description="Template instruction"),
"ideal_output": fields.String(description="Expected ideal output"),
},
)
)
@console_ns.expect(console_ns.models[InstructionTemplatePayload.__name__])
@console_ns.response(200, "Template retrieved successfully")
@console_ns.response(400, "Invalid request parameters")
@setup_required
@login_required
@account_initialization_required
def post(self):
parser = reqparse.RequestParser().add_argument("type", type=str, required=True, default=False, location="json")
args = parser.parse_args()
match args["type"]:
args = InstructionTemplatePayload.model_validate(console_ns.payload)
match args.type:
case "prompt":
from core.llm_generator.prompts import INSTRUCTION_GENERATE_TEMPLATE_PROMPT
@ -312,4 +277,4 @@ class InstructionGenerationTemplateApi(Resource):
return {"data": INSTRUCTION_GENERATE_TEMPLATE_CODE}
case _:
raise ValueError(f"Invalid type: {args['type']}")
raise ValueError(f"Invalid type: {args.type}")

View File

@ -1,7 +1,9 @@
import logging
from typing import Literal
from flask_restx import Resource, fields, marshal_with, reqparse
from flask_restx.inputs import int_range
from flask import request
from flask_restx import Resource, fields, marshal_with
from pydantic import BaseModel, Field, field_validator
from sqlalchemy import exists, select
from werkzeug.exceptions import InternalServerError, NotFound
@ -33,6 +35,67 @@ from services.errors.message import MessageNotExistsError, SuggestedQuestionsAft
from services.message_service import MessageService
logger = logging.getLogger(__name__)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class ChatMessagesQuery(BaseModel):
conversation_id: str = Field(..., description="Conversation ID")
first_id: str | None = Field(default=None, description="First message ID for pagination")
limit: int = Field(default=20, ge=1, le=100, description="Number of messages to return (1-100)")
@field_validator("first_id", mode="before")
@classmethod
def empty_to_none(cls, value: str | None) -> str | None:
if value == "":
return None
return value
@field_validator("conversation_id", "first_id")
@classmethod
def validate_uuid(cls, value: str | None) -> str | None:
if value is None:
return value
return uuid_value(value)
class MessageFeedbackPayload(BaseModel):
message_id: str = Field(..., description="Message ID")
rating: Literal["like", "dislike"] | None = Field(default=None, description="Feedback rating")
@field_validator("message_id")
@classmethod
def validate_message_id(cls, value: str) -> str:
return uuid_value(value)
class FeedbackExportQuery(BaseModel):
from_source: Literal["user", "admin"] | None = Field(default=None, description="Filter by feedback source")
rating: Literal["like", "dislike"] | None = Field(default=None, description="Filter by rating")
has_comment: bool | None = Field(default=None, description="Only include feedback with comments")
start_date: str | None = Field(default=None, description="Start date (YYYY-MM-DD)")
end_date: str | None = Field(default=None, description="End date (YYYY-MM-DD)")
format: Literal["csv", "json"] = Field(default="csv", description="Export format")
@field_validator("has_comment", mode="before")
@classmethod
def parse_bool(cls, value: bool | str | None) -> bool | None:
if isinstance(value, bool) or value is None:
return value
lowered = value.lower()
if lowered in {"true", "1", "yes", "on"}:
return True
if lowered in {"false", "0", "no", "off"}:
return False
raise ValueError("has_comment must be a boolean value")
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(ChatMessagesQuery)
reg(MessageFeedbackPayload)
reg(FeedbackExportQuery)
# Register models for flask_restx to avoid dict type issues in Swagger
# Register in dependency order: base models first, then dependent models
@ -157,12 +220,7 @@ class ChatMessageListApi(Resource):
@console_ns.doc("list_chat_messages")
@console_ns.doc(description="Get chat messages for a conversation with pagination")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.parser()
.add_argument("conversation_id", type=str, required=True, location="args", help="Conversation ID")
.add_argument("first_id", type=str, location="args", help="First message ID for pagination")
.add_argument("limit", type=int, location="args", default=20, help="Number of messages to return (1-100)")
)
@console_ns.expect(console_ns.models[ChatMessagesQuery.__name__])
@console_ns.response(200, "Success", message_infinite_scroll_pagination_model)
@console_ns.response(404, "Conversation not found")
@login_required
@ -172,27 +230,21 @@ class ChatMessageListApi(Resource):
@marshal_with(message_infinite_scroll_pagination_model)
@edit_permission_required
def get(self, app_model):
parser = (
reqparse.RequestParser()
.add_argument("conversation_id", required=True, type=uuid_value, location="args")
.add_argument("first_id", type=uuid_value, location="args")
.add_argument("limit", type=int_range(1, 100), required=False, default=20, location="args")
)
args = parser.parse_args()
args = ChatMessagesQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
conversation = (
db.session.query(Conversation)
.where(Conversation.id == args["conversation_id"], Conversation.app_id == app_model.id)
.where(Conversation.id == args.conversation_id, Conversation.app_id == app_model.id)
.first()
)
if not conversation:
raise NotFound("Conversation Not Exists.")
if args["first_id"]:
if args.first_id:
first_message = (
db.session.query(Message)
.where(Message.conversation_id == conversation.id, Message.id == args["first_id"])
.where(Message.conversation_id == conversation.id, Message.id == args.first_id)
.first()
)
@ -207,7 +259,7 @@ class ChatMessageListApi(Resource):
Message.id != first_message.id,
)
.order_by(Message.created_at.desc())
.limit(args["limit"])
.limit(args.limit)
.all()
)
else:
@ -215,12 +267,12 @@ class ChatMessageListApi(Resource):
db.session.query(Message)
.where(Message.conversation_id == conversation.id)
.order_by(Message.created_at.desc())
.limit(args["limit"])
.limit(args.limit)
.all()
)
# Initialize has_more based on whether we have a full page
if len(history_messages) == args["limit"]:
if len(history_messages) == args.limit:
current_page_first_message = history_messages[-1]
# Check if there are more messages before the current page
has_more = db.session.scalar(
@ -238,7 +290,7 @@ class ChatMessageListApi(Resource):
history_messages = list(reversed(history_messages))
return InfiniteScrollPagination(data=history_messages, limit=args["limit"], has_more=has_more)
return InfiniteScrollPagination(data=history_messages, limit=args.limit, has_more=has_more)
@console_ns.route("/apps/<uuid:app_id>/feedbacks")
@ -246,15 +298,7 @@ class MessageFeedbackApi(Resource):
@console_ns.doc("create_message_feedback")
@console_ns.doc(description="Create or update message feedback (like/dislike)")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.model(
"MessageFeedbackRequest",
{
"message_id": fields.String(required=True, description="Message ID"),
"rating": fields.String(enum=["like", "dislike"], description="Feedback rating"),
},
)
)
@console_ns.expect(console_ns.models[MessageFeedbackPayload.__name__])
@console_ns.response(200, "Feedback updated successfully")
@console_ns.response(404, "Message not found")
@console_ns.response(403, "Insufficient permissions")
@ -265,14 +309,9 @@ class MessageFeedbackApi(Resource):
def post(self, app_model):
current_user, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("message_id", required=True, type=uuid_value, location="json")
.add_argument("rating", type=str, choices=["like", "dislike", None], location="json")
)
args = parser.parse_args()
args = MessageFeedbackPayload.model_validate(console_ns.payload)
message_id = str(args["message_id"])
message_id = str(args.message_id)
message = db.session.query(Message).where(Message.id == message_id, Message.app_id == app_model.id).first()
@ -281,18 +320,21 @@ class MessageFeedbackApi(Resource):
feedback = message.admin_feedback
if not args["rating"] and feedback:
if not args.rating and feedback:
db.session.delete(feedback)
elif args["rating"] and feedback:
feedback.rating = args["rating"]
elif not args["rating"] and not feedback:
elif args.rating and feedback:
feedback.rating = args.rating
elif not args.rating and not feedback:
raise ValueError("rating cannot be None when feedback not exists")
else:
rating_value = args.rating
if rating_value is None:
raise ValueError("rating is required to create feedback")
feedback = MessageFeedback(
app_id=app_model.id,
conversation_id=message.conversation_id,
message_id=message.id,
rating=args["rating"],
rating=rating_value,
from_source="admin",
from_account_id=current_user.id,
)
@ -369,6 +411,46 @@ class MessageSuggestedQuestionApi(Resource):
return {"data": questions}
@console_ns.route("/apps/<uuid:app_id>/feedbacks/export")
class MessageFeedbackExportApi(Resource):
@console_ns.doc("export_feedbacks")
@console_ns.doc(description="Export user feedback data for Google Sheets")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(console_ns.models[FeedbackExportQuery.__name__])
@console_ns.response(200, "Feedback data exported successfully")
@console_ns.response(400, "Invalid parameters")
@console_ns.response(500, "Internal server error")
@get_app_model
@setup_required
@login_required
@account_initialization_required
def get(self, app_model):
args = FeedbackExportQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
# Import the service function
from services.feedback_service import FeedbackService
try:
export_data = FeedbackService.export_feedbacks(
app_id=app_model.id,
from_source=args.from_source,
rating=args.rating,
has_comment=args.has_comment,
start_date=args.start_date,
end_date=args.end_date,
format_type=args.format,
)
return export_data
except ValueError as e:
logger.exception("Parameter validation error in feedback export")
return {"error": f"Parameter validation error: {str(e)}"}, 400
except Exception as e:
logger.exception("Error exporting feedback data")
raise InternalServerError(str(e))
@console_ns.route("/apps/<uuid:app_id>/messages/<uuid:message_id>")
class MessageApi(Resource):
@console_ns.doc("get_message")

View File

@ -1,8 +1,9 @@
from decimal import Decimal
import sqlalchemy as sa
from flask import abort, jsonify
from flask_restx import Resource, fields, reqparse
from flask import abort, jsonify, request
from flask_restx import Resource, fields
from pydantic import BaseModel, Field, field_validator
from controllers.console import console_ns
from controllers.console.app.wraps import get_app_model
@ -10,21 +11,37 @@ from controllers.console.wraps import account_initialization_required, setup_req
from core.app.entities.app_invoke_entities import InvokeFrom
from extensions.ext_database import db
from libs.datetime_utils import parse_time_range
from libs.helper import DatetimeString, convert_datetime_to_date
from libs.helper import convert_datetime_to_date
from libs.login import current_account_with_tenant, login_required
from models import AppMode
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class StatisticTimeRangeQuery(BaseModel):
start: str | None = Field(default=None, description="Start date (YYYY-MM-DD HH:MM)")
end: str | None = Field(default=None, description="End date (YYYY-MM-DD HH:MM)")
@field_validator("start", "end", mode="before")
@classmethod
def empty_string_to_none(cls, value: str | None) -> str | None:
if value == "":
return None
return value
console_ns.schema_model(
StatisticTimeRangeQuery.__name__,
StatisticTimeRangeQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
@console_ns.route("/apps/<uuid:app_id>/statistics/daily-messages")
class DailyMessageStatistic(Resource):
@console_ns.doc("get_daily_message_statistics")
@console_ns.doc(description="Get daily message statistics for an application")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.parser()
.add_argument("start", type=str, location="args", help="Start date (YYYY-MM-DD HH:MM)")
.add_argument("end", type=str, location="args", help="End date (YYYY-MM-DD HH:MM)")
)
@console_ns.expect(console_ns.models[StatisticTimeRangeQuery.__name__])
@console_ns.response(
200,
"Daily message statistics retrieved successfully",
@ -37,12 +54,7 @@ class DailyMessageStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("start", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
.add_argument("end", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
)
args = parser.parse_args()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT
@ -57,7 +69,7 @@ WHERE
assert account.timezone is not None
try:
start_datetime_utc, end_datetime_utc = parse_time_range(args["start"], args["end"], account.timezone)
start_datetime_utc, end_datetime_utc = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
@ -81,19 +93,12 @@ WHERE
return jsonify({"data": response_data})
parser = (
reqparse.RequestParser()
.add_argument("start", type=DatetimeString("%Y-%m-%d %H:%M"), location="args", help="Start date (YYYY-MM-DD HH:MM)")
.add_argument("end", type=DatetimeString("%Y-%m-%d %H:%M"), location="args", help="End date (YYYY-MM-DD HH:MM)")
)
@console_ns.route("/apps/<uuid:app_id>/statistics/daily-conversations")
class DailyConversationStatistic(Resource):
@console_ns.doc("get_daily_conversation_statistics")
@console_ns.doc(description="Get daily conversation statistics for an application")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(parser)
@console_ns.expect(console_ns.models[StatisticTimeRangeQuery.__name__])
@console_ns.response(
200,
"Daily conversation statistics retrieved successfully",
@ -106,7 +111,7 @@ class DailyConversationStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = parser.parse_args()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT
@ -121,7 +126,7 @@ WHERE
assert account.timezone is not None
try:
start_datetime_utc, end_datetime_utc = parse_time_range(args["start"], args["end"], account.timezone)
start_datetime_utc, end_datetime_utc = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
@ -149,7 +154,7 @@ class DailyTerminalsStatistic(Resource):
@console_ns.doc("get_daily_terminals_statistics")
@console_ns.doc(description="Get daily terminal/end-user statistics for an application")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(parser)
@console_ns.expect(console_ns.models[StatisticTimeRangeQuery.__name__])
@console_ns.response(
200,
"Daily terminal statistics retrieved successfully",
@ -162,7 +167,7 @@ class DailyTerminalsStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = parser.parse_args()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT
@ -177,7 +182,7 @@ WHERE
assert account.timezone is not None
try:
start_datetime_utc, end_datetime_utc = parse_time_range(args["start"], args["end"], account.timezone)
start_datetime_utc, end_datetime_utc = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
@ -206,7 +211,7 @@ class DailyTokenCostStatistic(Resource):
@console_ns.doc("get_daily_token_cost_statistics")
@console_ns.doc(description="Get daily token cost statistics for an application")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(parser)
@console_ns.expect(console_ns.models[StatisticTimeRangeQuery.__name__])
@console_ns.response(
200,
"Daily token cost statistics retrieved successfully",
@ -219,7 +224,7 @@ class DailyTokenCostStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = parser.parse_args()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT
@ -235,7 +240,7 @@ WHERE
assert account.timezone is not None
try:
start_datetime_utc, end_datetime_utc = parse_time_range(args["start"], args["end"], account.timezone)
start_datetime_utc, end_datetime_utc = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
@ -266,7 +271,7 @@ class AverageSessionInteractionStatistic(Resource):
@console_ns.doc("get_average_session_interaction_statistics")
@console_ns.doc(description="Get average session interaction statistics for an application")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(parser)
@console_ns.expect(console_ns.models[StatisticTimeRangeQuery.__name__])
@console_ns.response(
200,
"Average session interaction statistics retrieved successfully",
@ -279,7 +284,7 @@ class AverageSessionInteractionStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = parser.parse_args()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("c.created_at")
sql_query = f"""SELECT
@ -302,7 +307,7 @@ FROM
assert account.timezone is not None
try:
start_datetime_utc, end_datetime_utc = parse_time_range(args["start"], args["end"], account.timezone)
start_datetime_utc, end_datetime_utc = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
@ -342,7 +347,7 @@ class UserSatisfactionRateStatistic(Resource):
@console_ns.doc("get_user_satisfaction_rate_statistics")
@console_ns.doc(description="Get user satisfaction rate statistics for an application")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(parser)
@console_ns.expect(console_ns.models[StatisticTimeRangeQuery.__name__])
@console_ns.response(
200,
"User satisfaction rate statistics retrieved successfully",
@ -355,7 +360,7 @@ class UserSatisfactionRateStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = parser.parse_args()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("m.created_at")
sql_query = f"""SELECT
@ -374,7 +379,7 @@ WHERE
assert account.timezone is not None
try:
start_datetime_utc, end_datetime_utc = parse_time_range(args["start"], args["end"], account.timezone)
start_datetime_utc, end_datetime_utc = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
@ -408,7 +413,7 @@ class AverageResponseTimeStatistic(Resource):
@console_ns.doc("get_average_response_time_statistics")
@console_ns.doc(description="Get average response time statistics for an application")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(parser)
@console_ns.expect(console_ns.models[StatisticTimeRangeQuery.__name__])
@console_ns.response(
200,
"Average response time statistics retrieved successfully",
@ -421,7 +426,7 @@ class AverageResponseTimeStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = parser.parse_args()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT
@ -436,7 +441,7 @@ WHERE
assert account.timezone is not None
try:
start_datetime_utc, end_datetime_utc = parse_time_range(args["start"], args["end"], account.timezone)
start_datetime_utc, end_datetime_utc = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
@ -465,7 +470,7 @@ class TokensPerSecondStatistic(Resource):
@console_ns.doc("get_tokens_per_second_statistics")
@console_ns.doc(description="Get tokens per second statistics for an application")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(parser)
@console_ns.expect(console_ns.models[StatisticTimeRangeQuery.__name__])
@console_ns.response(
200,
"Tokens per second statistics retrieved successfully",
@ -477,7 +482,7 @@ class TokensPerSecondStatistic(Resource):
@account_initialization_required
def get(self, app_model):
account, _ = current_account_with_tenant()
args = parser.parse_args()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT
@ -495,7 +500,7 @@ WHERE
assert account.timezone is not None
try:
start_datetime_utc, end_datetime_utc = parse_time_range(args["start"], args["end"], account.timezone)
start_datetime_utc, end_datetime_utc = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
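All of the statistics endpoints in this file now share a single StatisticTimeRangeQuery instead of a module-level reqparse parser. A sketch of how such a shared schema can be registered once and referenced from several routes, assuming flask-restx and Pydantic v2; the route and field names are illustrative:

    from flask import Flask, request
    from flask_restx import Api, Namespace, Resource
    from pydantic import BaseModel, Field

    app = Flask(__name__)
    api = Api(app)
    ns = Namespace("stats")
    api.add_namespace(ns)

    class TimeRangeQuery(BaseModel):
        start: str | None = Field(default=None, description="Start date (YYYY-MM-DD HH:MM)")
        end: str | None = Field(default=None, description="End date (YYYY-MM-DD HH:MM)")

    # Register the JSON schema once; every endpoint then references it in
    # @ns.expect instead of rebuilding a reqparse parser per route.
    ns.schema_model(
        TimeRangeQuery.__name__,
        TimeRangeQuery.model_json_schema(ref_template="#/definitions/{model}"),
    )

    @ns.route("/daily-messages")
    class DailyMessages(Resource):
        @ns.expect(ns.models[TimeRangeQuery.__name__])
        def get(self):
            args = TimeRangeQuery.model_validate(request.args.to_dict(flat=True))
            return {"start": args.start, "end": args.end}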

View File

@ -1,10 +1,11 @@
import json
import logging
from collections.abc import Sequence
from typing import cast
from typing import Any
from flask import abort, request
from flask_restx import Resource, fields, inputs, marshal_with, reqparse
from flask_restx import Resource, fields, marshal_with
from pydantic import BaseModel, Field, field_validator
from sqlalchemy.orm import Session
from werkzeug.exceptions import Forbidden, InternalServerError, NotFound
@ -49,6 +50,7 @@ from services.workflow_service import DraftWorkflowDeletionError, WorkflowInUseE
logger = logging.getLogger(__name__)
LISTENING_RETRY_IN = 2000
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
# Register models for flask_restx to avoid dict type issues in Swagger
# Register in dependency order: base models first, then dependent models
@ -107,6 +109,104 @@ if workflow_run_node_execution_model is None:
workflow_run_node_execution_model = console_ns.model("WorkflowRunNodeExecution", workflow_run_node_execution_fields)
class SyncDraftWorkflowPayload(BaseModel):
graph: dict[str, Any]
features: dict[str, Any]
hash: str | None = None
environment_variables: list[dict[str, Any]] = Field(default_factory=list)
conversation_variables: list[dict[str, Any]] = Field(default_factory=list)
class BaseWorkflowRunPayload(BaseModel):
files: list[dict[str, Any]] | None = None
class AdvancedChatWorkflowRunPayload(BaseWorkflowRunPayload):
inputs: dict[str, Any] | None = None
query: str = ""
conversation_id: str | None = None
parent_message_id: str | None = None
@field_validator("conversation_id", "parent_message_id")
@classmethod
def validate_uuid(cls, value: str | None) -> str | None:
if value is None:
return value
return uuid_value(value)
class IterationNodeRunPayload(BaseModel):
inputs: dict[str, Any] | None = None
class LoopNodeRunPayload(BaseModel):
inputs: dict[str, Any] | None = None
class DraftWorkflowRunPayload(BaseWorkflowRunPayload):
inputs: dict[str, Any]
class DraftWorkflowNodeRunPayload(BaseWorkflowRunPayload):
inputs: dict[str, Any]
query: str = ""
class PublishWorkflowPayload(BaseModel):
marked_name: str | None = Field(default=None, max_length=20)
marked_comment: str | None = Field(default=None, max_length=100)
class DefaultBlockConfigQuery(BaseModel):
q: str | None = None
class ConvertToWorkflowPayload(BaseModel):
name: str | None = None
icon_type: str | None = None
icon: str | None = None
icon_background: str | None = None
class WorkflowListQuery(BaseModel):
page: int = Field(default=1, ge=1, le=99999)
limit: int = Field(default=10, ge=1, le=100)
user_id: str | None = None
named_only: bool = False
class WorkflowUpdatePayload(BaseModel):
marked_name: str | None = Field(default=None, max_length=20)
marked_comment: str | None = Field(default=None, max_length=100)
class DraftWorkflowTriggerRunPayload(BaseModel):
node_id: str
class DraftWorkflowTriggerRunAllPayload(BaseModel):
node_ids: list[str]
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(SyncDraftWorkflowPayload)
reg(AdvancedChatWorkflowRunPayload)
reg(IterationNodeRunPayload)
reg(LoopNodeRunPayload)
reg(DraftWorkflowRunPayload)
reg(DraftWorkflowNodeRunPayload)
reg(PublishWorkflowPayload)
reg(DefaultBlockConfigQuery)
reg(ConvertToWorkflowPayload)
reg(WorkflowListQuery)
reg(WorkflowUpdatePayload)
reg(DraftWorkflowTriggerRunPayload)
reg(DraftWorkflowTriggerRunAllPayload)
# TODO(QuantumGhost): Refactor existing node run API to handle file parameter parsing
# at the controller level rather than in the workflow logic. This would improve separation
# of concerns and make the code more maintainable.
@ -158,18 +258,7 @@ class DraftWorkflowApi(Resource):
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
@console_ns.doc("sync_draft_workflow")
@console_ns.doc(description="Sync draft workflow configuration")
@console_ns.expect(
console_ns.model(
"SyncDraftWorkflowRequest",
{
"graph": fields.Raw(required=True, description="Workflow graph configuration"),
"features": fields.Raw(required=True, description="Workflow features configuration"),
"hash": fields.String(description="Workflow hash for validation"),
"environment_variables": fields.List(fields.Raw, required=True, description="Environment variables"),
"conversation_variables": fields.List(fields.Raw, description="Conversation variables"),
},
)
)
@console_ns.expect(console_ns.models[SyncDraftWorkflowPayload.__name__])
@console_ns.response(
200,
"Draft workflow synced successfully",
@ -193,36 +282,23 @@ class DraftWorkflowApi(Resource):
content_type = request.headers.get("Content-Type", "")
payload_data: dict[str, Any] | None = None
if "application/json" in content_type:
parser = (
reqparse.RequestParser()
.add_argument("graph", type=dict, required=True, nullable=False, location="json")
.add_argument("features", type=dict, required=True, nullable=False, location="json")
.add_argument("hash", type=str, required=False, location="json")
.add_argument("environment_variables", type=list, required=True, location="json")
.add_argument("conversation_variables", type=list, required=False, location="json")
)
args = parser.parse_args()
payload_data = request.get_json(silent=True)
if not isinstance(payload_data, dict):
return {"message": "Invalid JSON data"}, 400
elif "text/plain" in content_type:
try:
data = json.loads(request.data.decode("utf-8"))
if "graph" not in data or "features" not in data:
raise ValueError("graph or features not found in data")
if not isinstance(data.get("graph"), dict) or not isinstance(data.get("features"), dict):
raise ValueError("graph or features is not a dict")
args = {
"graph": data.get("graph"),
"features": data.get("features"),
"hash": data.get("hash"),
"environment_variables": data.get("environment_variables"),
"conversation_variables": data.get("conversation_variables"),
}
payload_data = json.loads(request.data.decode("utf-8"))
except json.JSONDecodeError:
return {"message": "Invalid JSON data"}, 400
if not isinstance(payload_data, dict):
return {"message": "Invalid JSON data"}, 400
else:
abort(415)
args_model = SyncDraftWorkflowPayload.model_validate(payload_data)
args = args_model.model_dump()
workflow_service = WorkflowService()
try:
@ -258,17 +334,7 @@ class AdvancedChatDraftWorkflowRunApi(Resource):
@console_ns.doc("run_advanced_chat_draft_workflow")
@console_ns.doc(description="Run draft workflow for advanced chat application")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.model(
"AdvancedChatWorkflowRunRequest",
{
"query": fields.String(required=True, description="User query"),
"inputs": fields.Raw(description="Input variables"),
"files": fields.List(fields.Raw, description="File uploads"),
"conversation_id": fields.String(description="Conversation ID"),
},
)
)
@console_ns.expect(console_ns.models[AdvancedChatWorkflowRunPayload.__name__])
@console_ns.response(200, "Workflow run started successfully")
@console_ns.response(400, "Invalid request parameters")
@console_ns.response(403, "Permission denied")
@ -283,16 +349,8 @@ class AdvancedChatDraftWorkflowRunApi(Resource):
"""
current_user, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("inputs", type=dict, location="json")
.add_argument("query", type=str, required=True, location="json", default="")
.add_argument("files", type=list, location="json")
.add_argument("conversation_id", type=uuid_value, location="json")
.add_argument("parent_message_id", type=uuid_value, required=False, location="json")
)
args = parser.parse_args()
args_model = AdvancedChatWorkflowRunPayload.model_validate(console_ns.payload or {})
args = args_model.model_dump(exclude_none=True)
external_trace_id = get_external_trace_id(request)
if external_trace_id:
@ -322,15 +380,7 @@ class AdvancedChatDraftRunIterationNodeApi(Resource):
@console_ns.doc("run_advanced_chat_draft_iteration_node")
@console_ns.doc(description="Run draft workflow iteration node for advanced chat")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.expect(
console_ns.model(
"IterationNodeRunRequest",
{
"task_id": fields.String(required=True, description="Task ID"),
"inputs": fields.Raw(description="Input variables"),
},
)
)
@console_ns.expect(console_ns.models[IterationNodeRunPayload.__name__])
@console_ns.response(200, "Iteration node run started successfully")
@console_ns.response(403, "Permission denied")
@console_ns.response(404, "Node not found")
@ -344,8 +394,7 @@ class AdvancedChatDraftRunIterationNodeApi(Resource):
Run draft workflow iteration node
"""
current_user, _ = current_account_with_tenant()
parser = reqparse.RequestParser().add_argument("inputs", type=dict, location="json")
args = parser.parse_args()
args = IterationNodeRunPayload.model_validate(console_ns.payload or {}).model_dump(exclude_none=True)
try:
response = AppGenerateService.generate_single_iteration(
@ -369,15 +418,7 @@ class WorkflowDraftRunIterationNodeApi(Resource):
@console_ns.doc("run_workflow_draft_iteration_node")
@console_ns.doc(description="Run draft workflow iteration node")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.expect(
console_ns.model(
"WorkflowIterationNodeRunRequest",
{
"task_id": fields.String(required=True, description="Task ID"),
"inputs": fields.Raw(description="Input variables"),
},
)
)
@console_ns.expect(console_ns.models[IterationNodeRunPayload.__name__])
@console_ns.response(200, "Workflow iteration node run started successfully")
@console_ns.response(403, "Permission denied")
@console_ns.response(404, "Node not found")
@ -391,8 +432,7 @@ class WorkflowDraftRunIterationNodeApi(Resource):
Run draft workflow iteration node
"""
current_user, _ = current_account_with_tenant()
parser = reqparse.RequestParser().add_argument("inputs", type=dict, location="json")
args = parser.parse_args()
args = IterationNodeRunPayload.model_validate(console_ns.payload or {}).model_dump(exclude_none=True)
try:
response = AppGenerateService.generate_single_iteration(
@ -416,15 +456,7 @@ class AdvancedChatDraftRunLoopNodeApi(Resource):
@console_ns.doc("run_advanced_chat_draft_loop_node")
@console_ns.doc(description="Run draft workflow loop node for advanced chat")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.expect(
console_ns.model(
"LoopNodeRunRequest",
{
"task_id": fields.String(required=True, description="Task ID"),
"inputs": fields.Raw(description="Input variables"),
},
)
)
@console_ns.expect(console_ns.models[LoopNodeRunPayload.__name__])
@console_ns.response(200, "Loop node run started successfully")
@console_ns.response(403, "Permission denied")
@console_ns.response(404, "Node not found")
@ -438,8 +470,7 @@ class AdvancedChatDraftRunLoopNodeApi(Resource):
Run draft workflow loop node
"""
current_user, _ = current_account_with_tenant()
parser = reqparse.RequestParser().add_argument("inputs", type=dict, location="json")
args = parser.parse_args()
args = LoopNodeRunPayload.model_validate(console_ns.payload or {}).model_dump(exclude_none=True)
try:
response = AppGenerateService.generate_single_loop(
@ -463,15 +494,7 @@ class WorkflowDraftRunLoopNodeApi(Resource):
@console_ns.doc("run_workflow_draft_loop_node")
@console_ns.doc(description="Run draft workflow loop node")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.expect(
console_ns.model(
"WorkflowLoopNodeRunRequest",
{
"task_id": fields.String(required=True, description="Task ID"),
"inputs": fields.Raw(description="Input variables"),
},
)
)
@console_ns.expect(console_ns.models[LoopNodeRunPayload.__name__])
@console_ns.response(200, "Workflow loop node run started successfully")
@console_ns.response(403, "Permission denied")
@console_ns.response(404, "Node not found")
@ -485,8 +508,7 @@ class WorkflowDraftRunLoopNodeApi(Resource):
Run draft workflow loop node
"""
current_user, _ = current_account_with_tenant()
parser = reqparse.RequestParser().add_argument("inputs", type=dict, location="json")
args = parser.parse_args()
args = LoopNodeRunPayload.model_validate(console_ns.payload or {}).model_dump(exclude_none=True)
try:
response = AppGenerateService.generate_single_loop(
@ -510,15 +532,7 @@ class DraftWorkflowRunApi(Resource):
@console_ns.doc("run_draft_workflow")
@console_ns.doc(description="Run draft workflow")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.model(
"DraftWorkflowRunRequest",
{
"inputs": fields.Raw(required=True, description="Input variables"),
"files": fields.List(fields.Raw, description="File uploads"),
},
)
)
@console_ns.expect(console_ns.models[DraftWorkflowRunPayload.__name__])
@console_ns.response(200, "Draft workflow run started successfully")
@console_ns.response(403, "Permission denied")
@setup_required
@ -531,12 +545,7 @@ class DraftWorkflowRunApi(Resource):
Run draft workflow
"""
current_user, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("inputs", type=dict, required=True, nullable=False, location="json")
.add_argument("files", type=list, required=False, location="json")
)
args = parser.parse_args()
args = DraftWorkflowRunPayload.model_validate(console_ns.payload or {}).model_dump(exclude_none=True)
external_trace_id = get_external_trace_id(request)
if external_trace_id:
@ -588,14 +597,7 @@ class DraftWorkflowNodeRunApi(Resource):
@console_ns.doc("run_draft_workflow_node")
@console_ns.doc(description="Run draft workflow node")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.expect(
console_ns.model(
"DraftWorkflowNodeRunRequest",
{
"inputs": fields.Raw(description="Input variables"),
},
)
)
@console_ns.expect(console_ns.models[DraftWorkflowNodeRunPayload.__name__])
@console_ns.response(200, "Node run started successfully", workflow_run_node_execution_model)
@console_ns.response(403, "Permission denied")
@console_ns.response(404, "Node not found")
@ -610,15 +612,10 @@ class DraftWorkflowNodeRunApi(Resource):
Run draft workflow node
"""
current_user, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("inputs", type=dict, required=True, nullable=False, location="json")
.add_argument("query", type=str, required=False, location="json", default="")
.add_argument("files", type=list, location="json", default=[])
)
args = parser.parse_args()
args_model = DraftWorkflowNodeRunPayload.model_validate(console_ns.payload or {})
args = args_model.model_dump(exclude_none=True)
user_inputs = args.get("inputs")
user_inputs = args_model.inputs
if user_inputs is None:
raise ValueError("missing inputs")
@ -643,13 +640,6 @@ class DraftWorkflowNodeRunApi(Resource):
return workflow_node_execution
parser_publish = (
reqparse.RequestParser()
.add_argument("marked_name", type=str, required=False, default="", location="json")
.add_argument("marked_comment", type=str, required=False, default="", location="json")
)
@console_ns.route("/apps/<uuid:app_id>/workflows/publish")
class PublishedWorkflowApi(Resource):
@console_ns.doc("get_published_workflow")
@ -674,7 +664,7 @@ class PublishedWorkflowApi(Resource):
# return workflow, if not found, return None
return workflow
@console_ns.expect(parser_publish)
@console_ns.expect(console_ns.models[PublishWorkflowPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@ -686,13 +676,7 @@ class PublishedWorkflowApi(Resource):
"""
current_user, _ = current_account_with_tenant()
args = parser_publish.parse_args()
# Validate name and comment length
if args.marked_name and len(args.marked_name) > 20:
raise ValueError("Marked name cannot exceed 20 characters")
if args.marked_comment and len(args.marked_comment) > 100:
raise ValueError("Marked comment cannot exceed 100 characters")
args = PublishWorkflowPayload.model_validate(console_ns.payload or {})
workflow_service = WorkflowService()
with Session(db.engine) as session:
@ -741,9 +725,6 @@ class DefaultBlockConfigsApi(Resource):
return workflow_service.get_default_block_configs()
parser_block = reqparse.RequestParser().add_argument("q", type=str, location="args")
@console_ns.route("/apps/<uuid:app_id>/workflows/default-workflow-block-configs/<string:block_type>")
class DefaultBlockConfigApi(Resource):
@console_ns.doc("get_default_block_config")
@ -751,7 +732,7 @@ class DefaultBlockConfigApi(Resource):
@console_ns.doc(params={"app_id": "Application ID", "block_type": "Block type"})
@console_ns.response(200, "Default block configuration retrieved successfully")
@console_ns.response(404, "Block type not found")
@console_ns.expect(parser_block)
@console_ns.expect(console_ns.models[DefaultBlockConfigQuery.__name__])
@setup_required
@login_required
@account_initialization_required
@ -761,14 +742,12 @@ class DefaultBlockConfigApi(Resource):
"""
Get default block config
"""
args = parser_block.parse_args()
q = args.get("q")
args = DefaultBlockConfigQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
filters = None
if q:
if args.q:
try:
filters = json.loads(args.get("q", ""))
filters = json.loads(args.q)
except json.JSONDecodeError:
raise ValueError("Invalid filters")
@ -777,18 +756,9 @@ class DefaultBlockConfigApi(Resource):
return workflow_service.get_default_block_config(node_type=block_type, filters=filters)
parser_convert = (
reqparse.RequestParser()
.add_argument("name", type=str, required=False, nullable=True, location="json")
.add_argument("icon_type", type=str, required=False, nullable=True, location="json")
.add_argument("icon", type=str, required=False, nullable=True, location="json")
.add_argument("icon_background", type=str, required=False, nullable=True, location="json")
)
@console_ns.route("/apps/<uuid:app_id>/convert-to-workflow")
class ConvertToWorkflowApi(Resource):
@console_ns.expect(parser_convert)
@console_ns.expect(console_ns.models[ConvertToWorkflowPayload.__name__])
@console_ns.doc("convert_to_workflow")
@console_ns.doc(description="Convert application to workflow mode")
@console_ns.doc(params={"app_id": "Application ID"})
@ -808,10 +778,8 @@ class ConvertToWorkflowApi(Resource):
"""
current_user, _ = current_account_with_tenant()
if request.data:
args = parser_convert.parse_args()
else:
args = {}
payload = console_ns.payload or {}
args = ConvertToWorkflowPayload.model_validate(payload).model_dump(exclude_none=True)
# convert to workflow mode
workflow_service = WorkflowService()
@ -823,18 +791,9 @@ class ConvertToWorkflowApi(Resource):
}
parser_workflows = (
reqparse.RequestParser()
.add_argument("page", type=inputs.int_range(1, 99999), required=False, default=1, location="args")
.add_argument("limit", type=inputs.int_range(1, 100), required=False, default=10, location="args")
.add_argument("user_id", type=str, required=False, location="args")
.add_argument("named_only", type=inputs.boolean, required=False, default=False, location="args")
)
@console_ns.route("/apps/<uuid:app_id>/workflows")
class PublishedAllWorkflowApi(Resource):
@console_ns.expect(parser_workflows)
@console_ns.expect(console_ns.models[WorkflowListQuery.__name__])
@console_ns.doc("get_all_published_workflows")
@console_ns.doc(description="Get all published workflows for an application")
@console_ns.doc(params={"app_id": "Application ID"})
@ -851,16 +810,15 @@ class PublishedAllWorkflowApi(Resource):
"""
current_user, _ = current_account_with_tenant()
args = parser_workflows.parse_args()
page = args["page"]
limit = args["limit"]
user_id = args.get("user_id")
named_only = args.get("named_only", False)
args = WorkflowListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
page = args.page
limit = args.limit
user_id = args.user_id
named_only = args.named_only
if user_id:
if user_id != current_user.id:
raise Forbidden()
user_id = cast(str, user_id)
workflow_service = WorkflowService()
with Session(db.engine) as session:
@ -886,15 +844,7 @@ class WorkflowByIdApi(Resource):
@console_ns.doc("update_workflow_by_id")
@console_ns.doc(description="Update workflow by ID")
@console_ns.doc(params={"app_id": "Application ID", "workflow_id": "Workflow ID"})
@console_ns.expect(
console_ns.model(
"UpdateWorkflowRequest",
{
"environment_variables": fields.List(fields.Raw, description="Environment variables"),
"conversation_variables": fields.List(fields.Raw, description="Conversation variables"),
},
)
)
@console_ns.expect(console_ns.models[WorkflowUpdatePayload.__name__])
@console_ns.response(200, "Workflow updated successfully", workflow_model)
@console_ns.response(404, "Workflow not found")
@console_ns.response(403, "Permission denied")
@@ -909,25 +859,14 @@
Update workflow attributes
"""
current_user, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("marked_name", type=str, required=False, location="json")
.add_argument("marked_comment", type=str, required=False, location="json")
)
args = parser.parse_args()
# Validate name and comment length
if args.marked_name and len(args.marked_name) > 20:
raise ValueError("Marked name cannot exceed 20 characters")
if args.marked_comment and len(args.marked_comment) > 100:
raise ValueError("Marked comment cannot exceed 100 characters")
args = WorkflowUpdatePayload.model_validate(console_ns.payload or {})
# Prepare update data
update_data = {}
if args.get("marked_name") is not None:
update_data["marked_name"] = args["marked_name"]
if args.get("marked_comment") is not None:
update_data["marked_comment"] = args["marked_comment"]
if args.marked_name is not None:
update_data["marked_name"] = args.marked_name
if args.marked_comment is not None:
update_data["marked_comment"] = args.marked_comment
if not update_data:
return {"message": "No valid fields to update"}, 400
@@ -1040,11 +979,8 @@ class DraftWorkflowTriggerRunApi(Resource):
Poll for trigger events and execute full workflow when event arrives
"""
current_user, _ = current_account_with_tenant()
parser = reqparse.RequestParser().add_argument(
"node_id", type=str, required=True, location="json", nullable=False
)
args = parser.parse_args()
node_id = args["node_id"]
args = DraftWorkflowTriggerRunPayload.model_validate(console_ns.payload or {})
node_id = args.node_id
workflow_service = WorkflowService()
draft_workflow = workflow_service.get_draft_workflow(app_model)
if not draft_workflow:
@@ -1172,14 +1108,7 @@ class DraftWorkflowTriggerRunAllApi(Resource):
@console_ns.doc("draft_workflow_trigger_run_all")
@console_ns.doc(description="Full workflow debug when the start node is a trigger")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.expect(
console_ns.model(
"DraftWorkflowTriggerRunAllRequest",
{
"node_ids": fields.List(fields.String, required=True, description="Node IDs"),
},
)
)
@console_ns.expect(console_ns.models[DraftWorkflowTriggerRunAllPayload.__name__])
@console_ns.response(200, "Workflow executed successfully")
@console_ns.response(403, "Permission denied")
@console_ns.response(500, "Internal server error")
@@ -1194,11 +1123,8 @@ class DraftWorkflowTriggerRunAllApi(Resource):
"""
current_user, _ = current_account_with_tenant()
parser = reqparse.RequestParser().add_argument(
"node_ids", type=list, required=True, location="json", nullable=False
)
args = parser.parse_args()
node_ids = args["node_ids"]
args = DraftWorkflowTriggerRunAllPayload.model_validate(console_ns.payload or {})
node_ids = args.node_ids
workflow_service = WorkflowService()
draft_workflow = workflow_service.get_draft_workflow(app_model)
if not draft_workflow:
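Note: DraftWorkflowTriggerRunPayload and DraftWorkflowTriggerRunAllPayload are not defined in the hunks shown; judging by the fields read off them, they are presumably along these lines (a sketch, not the PR's actual definitions):

    from pydantic import BaseModel

    class DraftWorkflowTriggerRunPayload(BaseModel):
        node_id: str  # replaces the required "node_id" json argument

    class DraftWorkflowTriggerRunAllPayload(BaseModel):
        node_ids: list[str]  # replaces the required "node_ids" json argument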

View File

@@ -1,6 +1,9 @@
from datetime import datetime
from dateutil.parser import isoparse
from flask_restx import Resource, marshal_with, reqparse
from flask_restx.inputs import int_range
from flask import request
from flask_restx import Resource, marshal_with
from pydantic import BaseModel, Field, field_validator
from sqlalchemy.orm import Session
from controllers.console import console_ns
@@ -14,6 +17,48 @@ from models import App
from models.model import AppMode
from services.workflow_app_service import WorkflowAppService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class WorkflowAppLogQuery(BaseModel):
keyword: str | None = Field(default=None, description="Search keyword for filtering logs")
status: WorkflowExecutionStatus | None = Field(
default=None, description="Execution status filter (succeeded, failed, stopped, partial-succeeded)"
)
created_at__before: datetime | None = Field(default=None, description="Filter logs created before this timestamp")
created_at__after: datetime | None = Field(default=None, description="Filter logs created after this timestamp")
created_by_end_user_session_id: str | None = Field(default=None, description="Filter by end user session ID")
created_by_account: str | None = Field(default=None, description="Filter by account")
detail: bool = Field(default=False, description="Whether to return detailed logs")
page: int = Field(default=1, ge=1, le=99999, description="Page number (1-99999)")
limit: int = Field(default=20, ge=1, le=100, description="Number of items per page (1-100)")
@field_validator("created_at__before", "created_at__after", mode="before")
@classmethod
def parse_datetime(cls, value: str | None) -> datetime | None:
if value in (None, ""):
return None
return isoparse(value) # type: ignore
@field_validator("detail", mode="before")
@classmethod
def parse_bool(cls, value: bool | str | None) -> bool:
if isinstance(value, bool):
return value
if value is None:
return False
lowered = value.lower()
if lowered in {"1", "true", "yes", "on"}:
return True
if lowered in {"0", "false", "no", "off"}:
return False
raise ValueError("Invalid boolean value for detail")
console_ns.schema_model(
WorkflowAppLogQuery.__name__, WorkflowAppLogQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
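Note: query strings arrive as flat text, so the two mode="before" validators above do the coercion that reqparse type= callables used to handle. Exercising the class exactly as defined above:

    query = WorkflowAppLogQuery.model_validate(
        {"detail": "true", "page": "2", "created_at__after": "2025-12-01T00:00:00Z"}
    )
    assert query.detail is True and query.page == 2
    assert query.created_at__after is not None  # parsed by isoparse
    # {"detail": "maybe"} would raise "Invalid boolean value for detail"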
# Register model for flask_restx to avoid dict type issues in Swagger
workflow_app_log_pagination_model = build_workflow_app_log_pagination_model(console_ns)
@@ -23,19 +68,7 @@ class WorkflowAppLogApi(Resource):
@console_ns.doc("get_workflow_app_logs")
@console_ns.doc(description="Get workflow application execution logs")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.doc(
params={
"keyword": "Search keyword for filtering logs",
"status": "Filter by execution status (succeeded, failed, stopped, partial-succeeded)",
"created_at__before": "Filter logs created before this timestamp",
"created_at__after": "Filter logs created after this timestamp",
"created_by_end_user_session_id": "Filter by end user session ID",
"created_by_account": "Filter by account",
"detail": "Whether to return detailed logs",
"page": "Page number (1-99999)",
"limit": "Number of items per page (1-100)",
}
)
@console_ns.expect(console_ns.models[WorkflowAppLogQuery.__name__])
@console_ns.response(200, "Workflow app logs retrieved successfully", workflow_app_log_pagination_model)
@setup_required
@login_required
@@ -46,44 +79,7 @@
"""
Get workflow app logs
"""
parser = (
reqparse.RequestParser()
.add_argument("keyword", type=str, location="args")
.add_argument(
"status", type=str, choices=["succeeded", "failed", "stopped", "partial-succeeded"], location="args"
)
.add_argument(
"created_at__before", type=str, location="args", help="Filter logs created before this timestamp"
)
.add_argument(
"created_at__after", type=str, location="args", help="Filter logs created after this timestamp"
)
.add_argument(
"created_by_end_user_session_id",
type=str,
location="args",
required=False,
default=None,
)
.add_argument(
"created_by_account",
type=str,
location="args",
required=False,
default=None,
)
.add_argument("detail", type=bool, location="args", required=False, default=False)
.add_argument("page", type=int_range(1, 99999), default=1, location="args")
.add_argument("limit", type=int_range(1, 100), default=20, location="args")
)
args = parser.parse_args()
args.status = WorkflowExecutionStatus(args.status) if args.status else None
if args.created_at__before:
args.created_at__before = isoparse(args.created_at__before)
if args.created_at__after:
args.created_at__after = isoparse(args.created_at__after)
args = WorkflowAppLogQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
# get paginate workflow app logs
workflow_app_service = WorkflowAppService()

View File

@@ -1,7 +1,8 @@
from typing import cast
from typing import Literal, cast
from flask_restx import Resource, fields, marshal_with, reqparse
from flask_restx.inputs import int_range
from flask import request
from flask_restx import Resource, fields, marshal_with
from pydantic import BaseModel, Field, field_validator
from controllers.console import console_ns
from controllers.console.app.wraps import get_app_model
@@ -92,70 +93,51 @@ workflow_run_node_execution_list_model = console_ns.model(
"WorkflowRunNodeExecutionList", workflow_run_node_execution_list_fields_copy
)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
def _parse_workflow_run_list_args():
    """
    Parse common arguments for workflow run list endpoints.

    Returns:
        Parsed arguments containing last_id, limit, status, and triggered_from filters
    """
    parser = (
        reqparse.RequestParser()
        .add_argument("last_id", type=uuid_value, location="args")
        .add_argument("limit", type=int_range(1, 100), required=False, default=20, location="args")
        .add_argument(
            "status",
            type=str,
            choices=WORKFLOW_RUN_STATUS_CHOICES,
            location="args",
            required=False,
        )
        .add_argument(
            "triggered_from",
            type=str,
            choices=["debugging", "app-run"],
            location="args",
            required=False,
            help="Filter by trigger source: debugging or app-run",
        )
    )
    return parser.parse_args()
def _parse_workflow_run_count_args():
    """
    Parse common arguments for workflow run count endpoints.

    Returns:
        Parsed arguments containing status, time_range, and triggered_from filters
    """
    parser = (
        reqparse.RequestParser()
        .add_argument(
            "status",
            type=str,
            choices=WORKFLOW_RUN_STATUS_CHOICES,
            location="args",
            required=False,
        )
        .add_argument(
            "time_range",
            type=time_duration,
            location="args",
            required=False,
            help="Time range filter (e.g., 7d, 4h, 30m, 30s)",
        )
        .add_argument(
            "triggered_from",
            type=str,
            choices=["debugging", "app-run"],
            location="args",
            required=False,
            help="Filter by trigger source: debugging or app-run",
        )
    )
    return parser.parse_args()
class WorkflowRunListQuery(BaseModel):
    last_id: str | None = Field(default=None, description="Last run ID for pagination")
    limit: int = Field(default=20, ge=1, le=100, description="Number of items per page (1-100)")
    status: Literal["running", "succeeded", "failed", "stopped", "partial-succeeded"] | None = Field(
        default=None, description="Workflow run status filter"
    )
    triggered_from: Literal["debugging", "app-run"] | None = Field(
        default=None, description="Filter by trigger source: debugging or app-run"
    )

    @field_validator("last_id")
    @classmethod
    def validate_last_id(cls, value: str | None) -> str | None:
        if value is None:
            return value
        return uuid_value(value)
class WorkflowRunCountQuery(BaseModel):
    status: Literal["running", "succeeded", "failed", "stopped", "partial-succeeded"] | None = Field(
        default=None, description="Workflow run status filter"
    )
    time_range: str | None = Field(default=None, description="Time range filter (e.g., 7d, 4h, 30m, 30s)")
    triggered_from: Literal["debugging", "app-run"] | None = Field(
        default=None, description="Filter by trigger source: debugging or app-run"
    )

    @field_validator("time_range")
    @classmethod
    def validate_time_range(cls, value: str | None) -> str | None:
        if value is None:
            return value
        return time_duration(value)
console_ns.schema_model(
    WorkflowRunListQuery.__name__, WorkflowRunListQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
console_ns.schema_model(
    WorkflowRunCountQuery.__name__,
    WorkflowRunCountQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
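Note: both models keep the legacy coercers (uuid_value, time_duration) by calling them inside field_validator hooks, so validation semantics match the old parsers exactly. The same wrapping works for any reqparse type= callable; a generic sketch with a made-up helper:

    from pydantic import BaseModel, field_validator

    def legacy_positive_int(value: str) -> int:
        # hypothetical stand-in for a reqparse type= helper
        number = int(value)
        if number <= 0:
            raise ValueError("must be positive")
        return number

    class ExampleQuery(BaseModel):
        count: int | None = None

        @field_validator("count", mode="before")
        @classmethod
        def coerce_count(cls, value: str | int | None) -> int | None:
            return None if value is None else legacy_positive_int(str(value))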
@console_ns.route("/apps/<uuid:app_id>/advanced-chat/workflow-runs")
@@ -170,6 +152,7 @@ class AdvancedChatAppWorkflowRunListApi(Resource):
@console_ns.doc(
params={"triggered_from": "Filter by trigger source (optional): debugging or app-run. Default: debugging"}
)
@console_ns.expect(console_ns.models[WorkflowRunListQuery.__name__])
@console_ns.response(200, "Workflow runs retrieved successfully", advanced_chat_workflow_run_pagination_model)
@setup_required
@login_required
@@ -180,12 +163,13 @@
"""
Get advanced chat app workflow run list
"""
args = _parse_workflow_run_list_args()
args_model = WorkflowRunListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
args = args_model.model_dump(exclude_none=True)
# Default to DEBUGGING if not specified
triggered_from = (
WorkflowRunTriggeredFrom(args.get("triggered_from"))
if args.get("triggered_from")
WorkflowRunTriggeredFrom(args_model.triggered_from)
if args_model.triggered_from
else WorkflowRunTriggeredFrom.DEBUGGING
)
@@ -217,6 +201,7 @@ class AdvancedChatAppWorkflowRunCountApi(Resource):
params={"triggered_from": "Filter by trigger source (optional): debugging or app-run. Default: debugging"}
)
@console_ns.response(200, "Workflow runs count retrieved successfully", workflow_run_count_model)
@console_ns.expect(console_ns.models[WorkflowRunCountQuery.__name__])
@setup_required
@login_required
@account_initialization_required
@@ -226,12 +211,13 @@ class AdvancedChatAppWorkflowRunCountApi(Resource):
"""
Get advanced chat workflow runs count statistics
"""
args = _parse_workflow_run_count_args()
args_model = WorkflowRunCountQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
args = args_model.model_dump(exclude_none=True)
# Default to DEBUGGING if not specified
triggered_from = (
WorkflowRunTriggeredFrom(args.get("triggered_from"))
if args.get("triggered_from")
WorkflowRunTriggeredFrom(args_model.triggered_from)
if args_model.triggered_from
else WorkflowRunTriggeredFrom.DEBUGGING
)
@@ -259,6 +245,7 @@ class WorkflowRunListApi(Resource):
params={"triggered_from": "Filter by trigger source (optional): debugging or app-run. Default: debugging"}
)
@console_ns.response(200, "Workflow runs retrieved successfully", workflow_run_pagination_model)
@console_ns.expect(console_ns.models[WorkflowRunListQuery.__name__])
@setup_required
@login_required
@account_initialization_required
@@ -268,12 +255,13 @@ class WorkflowRunListApi(Resource):
"""
Get workflow run list
"""
args = _parse_workflow_run_list_args()
args_model = WorkflowRunListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
args = args_model.model_dump(exclude_none=True)
# Default to DEBUGGING for workflow if not specified (backward compatibility)
triggered_from = (
WorkflowRunTriggeredFrom(args.get("triggered_from"))
if args.get("triggered_from")
WorkflowRunTriggeredFrom(args_model.triggered_from)
if args_model.triggered_from
else WorkflowRunTriggeredFrom.DEBUGGING
)
@@ -305,6 +293,7 @@ class WorkflowRunCountApi(Resource):
params={"triggered_from": "Filter by trigger source (optional): debugging or app-run. Default: debugging"}
)
@console_ns.response(200, "Workflow runs count retrieved successfully", workflow_run_count_model)
@console_ns.expect(console_ns.models[WorkflowRunCountQuery.__name__])
@setup_required
@login_required
@account_initialization_required
@@ -314,12 +303,13 @@ class WorkflowRunCountApi(Resource):
"""
Get workflow runs count statistics
"""
args = _parse_workflow_run_count_args()
args_model = WorkflowRunCountQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
args = args_model.model_dump(exclude_none=True)
# Default to DEBUGGING for workflow if not specified (backward compatibility)
triggered_from = (
WorkflowRunTriggeredFrom(args.get("triggered_from"))
if args.get("triggered_from")
WorkflowRunTriggeredFrom(args_model.triggered_from)
if args_model.triggered_from
else WorkflowRunTriggeredFrom.DEBUGGING
)

View File

@@ -1,5 +1,6 @@
from flask import abort, jsonify
from flask_restx import Resource, reqparse
from flask import abort, jsonify, request
from flask_restx import Resource
from pydantic import BaseModel, Field, field_validator
from sqlalchemy.orm import sessionmaker
from controllers.console import console_ns
@@ -7,12 +8,31 @@ from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from extensions.ext_database import db
from libs.datetime_utils import parse_time_range
from libs.helper import DatetimeString
from libs.login import current_account_with_tenant, login_required
from models.enums import WorkflowRunTriggeredFrom
from models.model import AppMode
from repositories.factory import DifyAPIRepositoryFactory
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class WorkflowStatisticQuery(BaseModel):
start: str | None = Field(default=None, description="Start date and time (YYYY-MM-DD HH:MM)")
end: str | None = Field(default=None, description="End date and time (YYYY-MM-DD HH:MM)")
@field_validator("start", "end", mode="before")
@classmethod
def blank_to_none(cls, value: str | None) -> str | None:
if value == "":
return None
return value
console_ns.schema_model(
WorkflowStatisticQuery.__name__,
WorkflowStatisticQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
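Note: blank_to_none matters because an empty query parameter (?start=&end=...) reaches the model as "", which would otherwise be handed to parse_time_range as a real value. Checking the class as defined above:

    q = WorkflowStatisticQuery.model_validate({"start": "", "end": "2025-12-01 00:00"})
    assert q.start is None
    assert q.end == "2025-12-01 00:00"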
@console_ns.route("/apps/<uuid:app_id>/workflow/statistics/daily-conversations")
class WorkflowDailyRunsStatistic(Resource):
@@ -24,9 +44,7 @@ class WorkflowDailyRunsStatistic(Resource):
@console_ns.doc("get_workflow_daily_runs_statistic")
@console_ns.doc(description="Get workflow daily runs statistics")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.doc(
params={"start": "Start date and time (YYYY-MM-DD HH:MM)", "end": "End date and time (YYYY-MM-DD HH:MM)"}
)
@console_ns.expect(console_ns.models[WorkflowStatisticQuery.__name__])
@console_ns.response(200, "Daily runs statistics retrieved successfully")
@get_app_model
@setup_required
@@ -35,17 +53,12 @@ class WorkflowDailyRunsStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("start", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
.add_argument("end", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
)
args = parser.parse_args()
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
assert account.timezone is not None
try:
start_date, end_date = parse_time_range(args["start"], args["end"], account.timezone)
start_date, end_date = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
@@ -71,9 +84,7 @@ class WorkflowDailyTerminalsStatistic(Resource):
@console_ns.doc("get_workflow_daily_terminals_statistic")
@console_ns.doc(description="Get workflow daily terminals statistics")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.doc(
params={"start": "Start date and time (YYYY-MM-DD HH:MM)", "end": "End date and time (YYYY-MM-DD HH:MM)"}
)
@console_ns.expect(console_ns.models[WorkflowStatisticQuery.__name__])
@console_ns.response(200, "Daily terminals statistics retrieved successfully")
@get_app_model
@setup_required
@@ -82,17 +93,12 @@ class WorkflowDailyTerminalsStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("start", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
.add_argument("end", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
)
args = parser.parse_args()
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
assert account.timezone is not None
try:
start_date, end_date = parse_time_range(args["start"], args["end"], account.timezone)
start_date, end_date = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
@@ -118,9 +124,7 @@ class WorkflowDailyTokenCostStatistic(Resource):
@console_ns.doc("get_workflow_daily_token_cost_statistic")
@console_ns.doc(description="Get workflow daily token cost statistics")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.doc(
params={"start": "Start date and time (YYYY-MM-DD HH:MM)", "end": "End date and time (YYYY-MM-DD HH:MM)"}
)
@console_ns.expect(console_ns.models[WorkflowStatisticQuery.__name__])
@console_ns.response(200, "Daily token cost statistics retrieved successfully")
@get_app_model
@setup_required
@@ -129,17 +133,12 @@ class WorkflowDailyTokenCostStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("start", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
.add_argument("end", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
)
args = parser.parse_args()
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
assert account.timezone is not None
try:
start_date, end_date = parse_time_range(args["start"], args["end"], account.timezone)
start_date, end_date = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))
@@ -165,9 +164,7 @@ class WorkflowAverageAppInteractionStatistic(Resource):
@console_ns.doc("get_workflow_average_app_interaction_statistic")
@console_ns.doc(description="Get workflow average app interaction statistics")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.doc(
params={"start": "Start date and time (YYYY-MM-DD HH:MM)", "end": "End date and time (YYYY-MM-DD HH:MM)"}
)
@console_ns.expect(console_ns.models[WorkflowStatisticQuery.__name__])
@console_ns.response(200, "Average app interaction statistics retrieved successfully")
@setup_required
@login_required
@@ -176,17 +173,12 @@ class WorkflowAverageAppInteractionStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("start", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
.add_argument("end", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
)
args = parser.parse_args()
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
assert account.timezone is not None
try:
start_date, end_date = parse_time_range(args["start"], args["end"], account.timezone)
start_date, end_date = parse_time_range(args.start, args.end, account.timezone)
except ValueError as e:
abort(400, description=str(e))

View File

@@ -1,6 +1,8 @@
import logging
from flask_restx import Resource, marshal_with, reqparse
from flask import request
from flask_restx import Resource, marshal_with
from pydantic import BaseModel
from sqlalchemy import select
from sqlalchemy.orm import Session
from werkzeug.exceptions import NotFound
@@ -18,16 +20,30 @@ from ..app.wraps import get_app_model
from ..wraps import account_initialization_required, edit_permission_required, setup_required
logger = logging.getLogger(__name__)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
parser = reqparse.RequestParser().add_argument("node_id", type=str, required=True, help="Node ID is required")
class Parser(BaseModel):
node_id: str
class ParserEnable(BaseModel):
trigger_id: str
enable_trigger: bool
console_ns.schema_model(Parser.__name__, Parser.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
console_ns.schema_model(
ParserEnable.__name__, ParserEnable.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
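Note: Parser validates GET query args while ParserEnable validates a JSON body; registering both via schema_model is what makes the console_ns.models[...] lookups below work. A quick sanity check of the two models as defined above:

    args = Parser.model_validate({"node_id": "start-node"})
    body = ParserEnable.model_validate({"trigger_id": "abc", "enable_trigger": True})
    # a missing trigger_id or a non-boolean enable_trigger raises ValidationError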
@console_ns.route("/apps/<uuid:app_id>/workflows/triggers/webhook")
class WebhookTriggerApi(Resource):
"""Webhook Trigger API"""
@console_ns.expect(parser)
@console_ns.expect(console_ns.models[Parser.__name__])
@setup_required
@login_required
@account_initialization_required
@@ -35,9 +51,9 @@ class WebhookTriggerApi(Resource):
@marshal_with(webhook_trigger_fields)
def get(self, app_model: App):
"""Get webhook trigger for a node"""
args = parser.parse_args()
args = Parser.model_validate(request.args.to_dict(flat=True)) # type: ignore
node_id = str(args["node_id"])
node_id = args.node_id
with Session(db.engine) as session:
# Get webhook trigger for this app and node
@@ -96,16 +112,9 @@ class AppTriggersApi(Resource):
return {"data": triggers}
parser_enable = (
reqparse.RequestParser()
.add_argument("trigger_id", type=str, required=True, nullable=False, location="json")
.add_argument("enable_trigger", type=bool, required=True, nullable=False, location="json")
)
@console_ns.route("/apps/<uuid:app_id>/trigger-enable")
class AppTriggerEnableApi(Resource):
@console_ns.expect(parser_enable)
@console_ns.expect(console_ns.models[ParserEnable.__name__], validate=True)
@setup_required
@login_required
@account_initialization_required
@@ -114,12 +123,11 @@ class AppTriggerEnableApi(Resource):
@marshal_with(trigger_fields)
def post(self, app_model: App):
"""Update app trigger (enable/disable)"""
args = parser_enable.parse_args()
args = ParserEnable.model_validate(console_ns.payload)
assert current_user.current_tenant_id is not None
trigger_id = args["trigger_id"]
trigger_id = args.trigger_id
with Session(db.engine) as session:
# Find the trigger using select
trigger = session.execute(
@@ -134,7 +142,7 @@ class AppTriggerEnableApi(Resource):
raise NotFound("Trigger not found")
# Update status based on enable_trigger boolean
trigger.status = AppTriggerStatus.ENABLED if args["enable_trigger"] else AppTriggerStatus.DISABLED
trigger.status = AppTriggerStatus.ENABLED if args.enable_trigger else AppTriggerStatus.DISABLED
session.commit()
session.refresh(trigger)

View File

@@ -58,7 +58,7 @@ class VersionApi(Resource):
response = httpx.get(
check_update_url,
params={"current_version": args["current_version"]},
timeout=httpx.Timeout(connect=3, read=10),
timeout=httpx.Timeout(timeout=10.0, connect=3.0),
)
except Exception as error:
logger.warning("Check update version error: %s.", str(error))
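Note: the old call was invalid to begin with: httpx.Timeout requires either a default or all four of connect/read/write/pool, so Timeout(connect=3, read=10) raises ValueError at call time. The replacement keeps the intent, a 3 s connect budget with 10 s for everything else:

    import httpx

    timeout = httpx.Timeout(timeout=10.0, connect=3.0)  # 10 s default, 3 s connect
    # httpx.Timeout(connect=3, read=10)  # ValueError: write/pool unspecified, no default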

View File

@@ -1,8 +1,10 @@
from datetime import datetime
from typing import Literal
import pytz
from flask import request
from flask_restx import Resource, fields, marshal_with, reqparse
from flask_restx import Resource, fields, marshal_with
from pydantic import BaseModel, Field, field_validator, model_validator
from sqlalchemy import select
from sqlalchemy.orm import Session
@@ -42,20 +44,160 @@ from services.account_service import AccountService
from services.billing_service import BillingService
from services.errors.account import CurrentPasswordIncorrectError as ServiceCurrentPasswordIncorrectError
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
def _init_parser():
parser = reqparse.RequestParser()
if dify_config.EDITION == "CLOUD":
parser.add_argument("invitation_code", type=str, location="json")
parser.add_argument("interface_language", type=supported_language, required=True, location="json").add_argument(
"timezone", type=timezone, required=True, location="json"
)
return parser
class AccountInitPayload(BaseModel):
interface_language: str
timezone: str
invitation_code: str | None = None
@field_validator("interface_language")
@classmethod
def validate_language(cls, value: str) -> str:
return supported_language(value)
@field_validator("timezone")
@classmethod
def validate_timezone(cls, value: str) -> str:
return timezone(value)
class AccountNamePayload(BaseModel):
name: str = Field(min_length=3, max_length=30)
class AccountAvatarPayload(BaseModel):
avatar: str
class AccountInterfaceLanguagePayload(BaseModel):
interface_language: str
@field_validator("interface_language")
@classmethod
def validate_language(cls, value: str) -> str:
return supported_language(value)
class AccountInterfaceThemePayload(BaseModel):
interface_theme: Literal["light", "dark"]
class AccountTimezonePayload(BaseModel):
timezone: str
@field_validator("timezone")
@classmethod
def validate_timezone(cls, value: str) -> str:
return timezone(value)
class AccountPasswordPayload(BaseModel):
password: str | None = None
new_password: str
repeat_new_password: str
@model_validator(mode="after")
def check_passwords_match(self) -> "AccountPasswordPayload":
if self.new_password != self.repeat_new_password:
raise RepeatPasswordNotMatchError()
return self
class AccountDeletePayload(BaseModel):
token: str
code: str
class AccountDeletionFeedbackPayload(BaseModel):
email: str
feedback: str
@field_validator("email")
@classmethod
def validate_email(cls, value: str) -> str:
return email(value)
class EducationActivatePayload(BaseModel):
token: str
institution: str
role: str
class EducationAutocompleteQuery(BaseModel):
keywords: str
page: int = 0
limit: int = 20
class ChangeEmailSendPayload(BaseModel):
email: str
language: str | None = None
phase: str | None = None
token: str | None = None
@field_validator("email")
@classmethod
def validate_email(cls, value: str) -> str:
return email(value)
class ChangeEmailValidityPayload(BaseModel):
email: str
code: str
token: str
@field_validator("email")
@classmethod
def validate_email(cls, value: str) -> str:
return email(value)
class ChangeEmailResetPayload(BaseModel):
new_email: str
token: str
@field_validator("new_email")
@classmethod
def validate_email(cls, value: str) -> str:
return email(value)
class CheckEmailUniquePayload(BaseModel):
email: str
@field_validator("email")
@classmethod
def validate_email(cls, value: str) -> str:
return email(value)
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(AccountInitPayload)
reg(AccountNamePayload)
reg(AccountAvatarPayload)
reg(AccountInterfaceLanguagePayload)
reg(AccountInterfaceThemePayload)
reg(AccountTimezonePayload)
reg(AccountPasswordPayload)
reg(AccountDeletePayload)
reg(AccountDeletionFeedbackPayload)
reg(EducationActivatePayload)
reg(EducationAutocompleteQuery)
reg(ChangeEmailSendPayload)
reg(ChangeEmailValidityPayload)
reg(ChangeEmailResetPayload)
reg(CheckEmailUniquePayload)
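Note: with the cross-field check now inside AccountPasswordPayload, the mismatch surfaces during validation instead of as an explicit branch in the handler. Because the model_validator raises a non-ValueError, it propagates out of model_validate unchanged:

    payload = {"new_password": "secret123", "repeat_new_password": "secret124"}
    AccountPasswordPayload.model_validate(payload)  # raises RepeatPasswordNotMatchError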
@console_ns.route("/account/init")
class AccountInitApi(Resource):
@console_ns.expect(_init_parser())
@console_ns.expect(console_ns.models[AccountInitPayload.__name__])
@setup_required
@login_required
def post(self):
@@ -64,17 +206,18 @@ class AccountInitApi(Resource):
if account.status == "active":
raise AccountAlreadyInitedError()
args = _init_parser().parse_args()
payload = console_ns.payload or {}
args = AccountInitPayload.model_validate(payload)
if dify_config.EDITION == "CLOUD":
if not args["invitation_code"]:
if not args.invitation_code:
raise ValueError("invitation_code is required")
# check invitation code
invitation_code = (
db.session.query(InvitationCode)
.where(
InvitationCode.code == args["invitation_code"],
InvitationCode.code == args.invitation_code,
InvitationCode.status == "unused",
)
.first()
@@ -88,8 +231,8 @@ class AccountInitApi(Resource):
invitation_code.used_by_tenant_id = account.current_tenant_id
invitation_code.used_by_account_id = account.id
account.interface_language = args["interface_language"]
account.timezone = args["timezone"]
account.interface_language = args.interface_language
account.timezone = args.timezone
account.interface_theme = "light"
account.status = "active"
account.initialized_at = naive_utc_now()
@@ -110,137 +253,104 @@ class AccountProfileApi(Resource):
return current_user
parser_name = reqparse.RequestParser().add_argument("name", type=str, required=True, location="json")
@console_ns.route("/account/name")
class AccountNameApi(Resource):
@console_ns.expect(parser_name)
@console_ns.expect(console_ns.models[AccountNamePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@marshal_with(account_fields)
def post(self):
current_user, _ = current_account_with_tenant()
args = parser_name.parse_args()
# Validate account name length
if len(args["name"]) < 3 or len(args["name"]) > 30:
raise ValueError("Account name must be between 3 and 30 characters.")
updated_account = AccountService.update_account(current_user, name=args["name"])
payload = console_ns.payload or {}
args = AccountNamePayload.model_validate(payload)
updated_account = AccountService.update_account(current_user, name=args.name)
return updated_account
parser_avatar = reqparse.RequestParser().add_argument("avatar", type=str, required=True, location="json")
@console_ns.route("/account/avatar")
class AccountAvatarApi(Resource):
@console_ns.expect(parser_avatar)
@console_ns.expect(console_ns.models[AccountAvatarPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@marshal_with(account_fields)
def post(self):
current_user, _ = current_account_with_tenant()
args = parser_avatar.parse_args()
payload = console_ns.payload or {}
args = AccountAvatarPayload.model_validate(payload)
updated_account = AccountService.update_account(current_user, avatar=args["avatar"])
updated_account = AccountService.update_account(current_user, avatar=args.avatar)
return updated_account
parser_interface = reqparse.RequestParser().add_argument(
"interface_language", type=supported_language, required=True, location="json"
)
@console_ns.route("/account/interface-language")
class AccountInterfaceLanguageApi(Resource):
@console_ns.expect(parser_interface)
@console_ns.expect(console_ns.models[AccountInterfaceLanguagePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@marshal_with(account_fields)
def post(self):
current_user, _ = current_account_with_tenant()
args = parser_interface.parse_args()
payload = console_ns.payload or {}
args = AccountInterfaceLanguagePayload.model_validate(payload)
updated_account = AccountService.update_account(current_user, interface_language=args["interface_language"])
updated_account = AccountService.update_account(current_user, interface_language=args.interface_language)
return updated_account
parser_theme = reqparse.RequestParser().add_argument(
"interface_theme", type=str, choices=["light", "dark"], required=True, location="json"
)
@console_ns.route("/account/interface-theme")
class AccountInterfaceThemeApi(Resource):
@console_ns.expect(parser_theme)
@console_ns.expect(console_ns.models[AccountInterfaceThemePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@marshal_with(account_fields)
def post(self):
current_user, _ = current_account_with_tenant()
args = parser_theme.parse_args()
payload = console_ns.payload or {}
args = AccountInterfaceThemePayload.model_validate(payload)
updated_account = AccountService.update_account(current_user, interface_theme=args["interface_theme"])
updated_account = AccountService.update_account(current_user, interface_theme=args.interface_theme)
return updated_account
parser_timezone = reqparse.RequestParser().add_argument("timezone", type=str, required=True, location="json")
@console_ns.route("/account/timezone")
class AccountTimezoneApi(Resource):
@console_ns.expect(parser_timezone)
@console_ns.expect(console_ns.models[AccountTimezonePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@marshal_with(account_fields)
def post(self):
current_user, _ = current_account_with_tenant()
args = parser_timezone.parse_args()
payload = console_ns.payload or {}
args = AccountTimezonePayload.model_validate(payload)
# Validate timezone string, e.g. America/New_York, Asia/Shanghai
if args["timezone"] not in pytz.all_timezones:
raise ValueError("Invalid timezone string.")
updated_account = AccountService.update_account(current_user, timezone=args["timezone"])
updated_account = AccountService.update_account(current_user, timezone=args.timezone)
return updated_account
parser_pw = (
reqparse.RequestParser()
.add_argument("password", type=str, required=False, location="json")
.add_argument("new_password", type=str, required=True, location="json")
.add_argument("repeat_new_password", type=str, required=True, location="json")
)
@console_ns.route("/account/password")
class AccountPasswordApi(Resource):
@console_ns.expect(parser_pw)
@console_ns.expect(console_ns.models[AccountPasswordPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@marshal_with(account_fields)
def post(self):
current_user, _ = current_account_with_tenant()
args = parser_pw.parse_args()
if args["new_password"] != args["repeat_new_password"]:
raise RepeatPasswordNotMatchError()
payload = console_ns.payload or {}
args = AccountPasswordPayload.model_validate(payload)
try:
AccountService.update_account_password(current_user, args["password"], args["new_password"])
AccountService.update_account_password(current_user, args.password, args.new_password)
except ServiceCurrentPasswordIncorrectError:
raise CurrentPasswordIncorrectError()
@@ -316,25 +426,19 @@ class AccountDeleteVerifyApi(Resource):
return {"result": "success", "data": token}
parser_delete = (
reqparse.RequestParser()
.add_argument("token", type=str, required=True, location="json")
.add_argument("code", type=str, required=True, location="json")
)
@console_ns.route("/account/delete")
class AccountDeleteApi(Resource):
@console_ns.expect(parser_delete)
@console_ns.expect(console_ns.models[AccountDeletePayload.__name__])
@setup_required
@login_required
@account_initialization_required
def post(self):
account, _ = current_account_with_tenant()
args = parser_delete.parse_args()
payload = console_ns.payload or {}
args = AccountDeletePayload.model_validate(payload)
if not AccountService.verify_account_deletion_code(args["token"], args["code"]):
if not AccountService.verify_account_deletion_code(args.token, args.code):
raise InvalidAccountDeletionCodeError()
AccountService.delete_account(account)
@@ -342,21 +446,15 @@ class AccountDeleteApi(Resource):
return {"result": "success"}
parser_feedback = (
reqparse.RequestParser()
.add_argument("email", type=str, required=True, location="json")
.add_argument("feedback", type=str, required=True, location="json")
)
@console_ns.route("/account/delete/feedback")
class AccountDeleteUpdateFeedbackApi(Resource):
@console_ns.expect(parser_feedback)
@console_ns.expect(console_ns.models[AccountDeletionFeedbackPayload.__name__])
@setup_required
def post(self):
args = parser_feedback.parse_args()
payload = console_ns.payload or {}
args = AccountDeletionFeedbackPayload.model_validate(payload)
BillingService.update_account_deletion_feedback(args["email"], args["feedback"])
BillingService.update_account_deletion_feedback(args.email, args.feedback)
return {"result": "success"}
@@ -379,14 +477,6 @@ class EducationVerifyApi(Resource):
return BillingService.EducationIdentity.verify(account.id, account.email)
parser_edu = (
reqparse.RequestParser()
.add_argument("token", type=str, required=True, location="json")
.add_argument("institution", type=str, required=True, location="json")
.add_argument("role", type=str, required=True, location="json")
)
@console_ns.route("/account/education")
class EducationApi(Resource):
status_fields = {
@@ -396,7 +486,7 @@ class EducationApi(Resource):
"allow_refresh": fields.Boolean,
}
@console_ns.expect(parser_edu)
@console_ns.expect(console_ns.models[EducationActivatePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@@ -405,9 +495,10 @@ class EducationApi(Resource):
def post(self):
account, _ = current_account_with_tenant()
args = parser_edu.parse_args()
payload = console_ns.payload or {}
args = EducationActivatePayload.model_validate(payload)
return BillingService.EducationIdentity.activate(account, args["token"], args["institution"], args["role"])
return BillingService.EducationIdentity.activate(account, args.token, args.institution, args.role)
@setup_required
@login_required
@@ -425,14 +516,6 @@ class EducationApi(Resource):
return res
parser_autocomplete = (
reqparse.RequestParser()
.add_argument("keywords", type=str, required=True, location="args")
.add_argument("page", type=int, required=False, location="args", default=0)
.add_argument("limit", type=int, required=False, location="args", default=20)
)
@console_ns.route("/account/education/autocomplete")
class EducationAutoCompleteApi(Resource):
data_fields = {
@@ -441,7 +524,7 @@ class EducationAutoCompleteApi(Resource):
"has_next": fields.Boolean,
}
@console_ns.expect(parser_autocomplete)
@console_ns.expect(console_ns.models[EducationAutocompleteQuery.__name__])
@setup_required
@login_required
@account_initialization_required
@@ -449,46 +532,39 @@ class EducationAutoCompleteApi(Resource):
@cloud_edition_billing_enabled
@marshal_with(data_fields)
def get(self):
args = parser_autocomplete.parse_args()
payload = request.args.to_dict(flat=True) # type: ignore
args = EducationAutocompleteQuery.model_validate(payload)
return BillingService.EducationIdentity.autocomplete(args["keywords"], args["page"], args["limit"])
parser_change_email = (
reqparse.RequestParser()
.add_argument("email", type=email, required=True, location="json")
.add_argument("language", type=str, required=False, location="json")
.add_argument("phase", type=str, required=False, location="json")
.add_argument("token", type=str, required=False, location="json")
)
return BillingService.EducationIdentity.autocomplete(args.keywords, args.page, args.limit)
@console_ns.route("/account/change-email")
class ChangeEmailSendEmailApi(Resource):
@console_ns.expect(parser_change_email)
@console_ns.expect(console_ns.models[ChangeEmailSendPayload.__name__])
@enable_change_email
@setup_required
@login_required
@account_initialization_required
def post(self):
current_user, _ = current_account_with_tenant()
args = parser_change_email.parse_args()
payload = console_ns.payload or {}
args = ChangeEmailSendPayload.model_validate(payload)
ip_address = extract_remote_ip(request)
if AccountService.is_email_send_ip_limit(ip_address):
raise EmailSendIpLimitError()
if args["language"] is not None and args["language"] == "zh-Hans":
if args.language is not None and args.language == "zh-Hans":
language = "zh-Hans"
else:
language = "en-US"
account = None
user_email = args["email"]
if args["phase"] is not None and args["phase"] == "new_email":
if args["token"] is None:
user_email = args.email
if args.phase is not None and args.phase == "new_email":
if args.token is None:
raise InvalidTokenError()
reset_data = AccountService.get_change_email_data(args["token"])
reset_data = AccountService.get_change_email_data(args.token)
if reset_data is None:
raise InvalidTokenError()
user_email = reset_data.get("email", "")
@@ -497,118 +573,103 @@ class ChangeEmailSendEmailApi(Resource):
raise InvalidEmailError()
else:
with Session(db.engine) as session:
account = session.execute(select(Account).filter_by(email=args["email"])).scalar_one_or_none()
account = session.execute(select(Account).filter_by(email=args.email)).scalar_one_or_none()
if account is None:
raise AccountNotFound()
token = AccountService.send_change_email_email(
account=account, email=args["email"], old_email=user_email, language=language, phase=args["phase"]
account=account, email=args.email, old_email=user_email, language=language, phase=args.phase
)
return {"result": "success", "data": token}
parser_validity = (
reqparse.RequestParser()
.add_argument("email", type=email, required=True, location="json")
.add_argument("code", type=str, required=True, location="json")
.add_argument("token", type=str, required=True, nullable=False, location="json")
)
@console_ns.route("/account/change-email/validity")
class ChangeEmailCheckApi(Resource):
@console_ns.expect(parser_validity)
@console_ns.expect(console_ns.models[ChangeEmailValidityPayload.__name__])
@enable_change_email
@setup_required
@login_required
@account_initialization_required
def post(self):
args = parser_validity.parse_args()
payload = console_ns.payload or {}
args = ChangeEmailValidityPayload.model_validate(payload)
user_email = args["email"]
user_email = args.email
is_change_email_error_rate_limit = AccountService.is_change_email_error_rate_limit(args["email"])
is_change_email_error_rate_limit = AccountService.is_change_email_error_rate_limit(args.email)
if is_change_email_error_rate_limit:
raise EmailChangeLimitError()
token_data = AccountService.get_change_email_data(args["token"])
token_data = AccountService.get_change_email_data(args.token)
if token_data is None:
raise InvalidTokenError()
if user_email != token_data.get("email"):
raise InvalidEmailError()
if args["code"] != token_data.get("code"):
AccountService.add_change_email_error_rate_limit(args["email"])
if args.code != token_data.get("code"):
AccountService.add_change_email_error_rate_limit(args.email)
raise EmailCodeError()
# Verified, revoke the first token
AccountService.revoke_change_email_token(args["token"])
AccountService.revoke_change_email_token(args.token)
# Refresh token data by generating a new token
_, new_token = AccountService.generate_change_email_token(
user_email, code=args["code"], old_email=token_data.get("old_email"), additional_data={}
user_email, code=args.code, old_email=token_data.get("old_email"), additional_data={}
)
AccountService.reset_change_email_error_rate_limit(args["email"])
AccountService.reset_change_email_error_rate_limit(args.email)
return {"is_valid": True, "email": token_data.get("email"), "token": new_token}
parser_reset = (
reqparse.RequestParser()
.add_argument("new_email", type=email, required=True, location="json")
.add_argument("token", type=str, required=True, nullable=False, location="json")
)
@console_ns.route("/account/change-email/reset")
class ChangeEmailResetApi(Resource):
@console_ns.expect(parser_reset)
@console_ns.expect(console_ns.models[ChangeEmailResetPayload.__name__])
@enable_change_email
@setup_required
@login_required
@account_initialization_required
@marshal_with(account_fields)
def post(self):
args = parser_reset.parse_args()
payload = console_ns.payload or {}
args = ChangeEmailResetPayload.model_validate(payload)
if AccountService.is_account_in_freeze(args["new_email"]):
if AccountService.is_account_in_freeze(args.new_email):
raise AccountInFreezeError()
if not AccountService.check_email_unique(args["new_email"]):
if not AccountService.check_email_unique(args.new_email):
raise EmailAlreadyInUseError()
reset_data = AccountService.get_change_email_data(args["token"])
reset_data = AccountService.get_change_email_data(args.token)
if not reset_data:
raise InvalidTokenError()
AccountService.revoke_change_email_token(args["token"])
AccountService.revoke_change_email_token(args.token)
old_email = reset_data.get("old_email", "")
current_user, _ = current_account_with_tenant()
if current_user.email != old_email:
raise AccountNotFound()
updated_account = AccountService.update_account_email(current_user, email=args["new_email"])
updated_account = AccountService.update_account_email(current_user, email=args.new_email)
AccountService.send_change_email_completed_notify_email(
email=args["new_email"],
email=args.new_email,
)
return updated_account
parser_check = reqparse.RequestParser().add_argument("email", type=email, required=True, location="json")
@console_ns.route("/account/change-email/check-email-unique")
class CheckEmailUnique(Resource):
@console_ns.expect(parser_check)
@console_ns.expect(console_ns.models[CheckEmailUniquePayload.__name__])
@setup_required
def post(self):
args = parser_check.parse_args()
if AccountService.is_account_in_freeze(args["email"]):
payload = console_ns.payload or {}
args = CheckEmailUniquePayload.model_validate(payload)
if AccountService.is_account_in_freeze(args.email):
raise AccountInFreezeError()
if not AccountService.check_email_unique(args["email"]):
if not AccountService.check_email_unique(args.email):
raise EmailAlreadyInUseError()
return {"result": "success"}

View File

@@ -1,4 +1,8 @@
from flask_restx import Resource, fields, reqparse
from typing import Any
from flask import request
from flask_restx import Resource, fields
from pydantic import BaseModel, Field
from controllers.console import console_ns
from controllers.console.wraps import account_initialization_required, is_admin_or_owner_required, setup_required
@@ -7,21 +11,49 @@ from core.plugin.impl.exc import PluginPermissionDeniedError
from libs.login import current_account_with_tenant, login_required
from services.plugin.endpoint_service import EndpointService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class EndpointCreatePayload(BaseModel):
plugin_unique_identifier: str
settings: dict[str, Any]
name: str = Field(min_length=1)
class EndpointIdPayload(BaseModel):
endpoint_id: str
class EndpointUpdatePayload(EndpointIdPayload):
settings: dict[str, Any]
name: str = Field(min_length=1)
class EndpointListQuery(BaseModel):
page: int = Field(ge=1)
page_size: int = Field(gt=0)
class EndpointListForPluginQuery(EndpointListQuery):
plugin_id: str
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(EndpointCreatePayload)
reg(EndpointIdPayload)
reg(EndpointUpdatePayload)
reg(EndpointListQuery)
reg(EndpointListForPluginQuery)
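Note: page: int = Field(ge=1) and page_size: int = Field(gt=0) are required (no default) and range-checked, slightly stricter than the old reqparse version, which accepted any int. For example:

    EndpointListQuery.model_validate({"page": "1", "page_size": "20"})  # ok, coerced to ints
    EndpointListQuery.model_validate({"page": "0", "page_size": "20"})  # ValidationError: page must be >= 1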
@console_ns.route("/workspaces/current/endpoints/create")
class EndpointCreateApi(Resource):
@console_ns.doc("create_endpoint")
@console_ns.doc(description="Create a new plugin endpoint")
@console_ns.expect(
console_ns.model(
"EndpointCreateRequest",
{
"plugin_unique_identifier": fields.String(required=True, description="Plugin unique identifier"),
"settings": fields.Raw(required=True, description="Endpoint settings"),
"name": fields.String(required=True, description="Endpoint name"),
},
)
)
@console_ns.expect(console_ns.models[EndpointCreatePayload.__name__])
@console_ns.response(
200,
"Endpoint created successfully",
@@ -35,26 +67,16 @@ class EndpointCreateApi(Resource):
def post(self):
user, tenant_id = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("plugin_unique_identifier", type=str, required=True)
.add_argument("settings", type=dict, required=True)
.add_argument("name", type=str, required=True)
)
args = parser.parse_args()
plugin_unique_identifier = args["plugin_unique_identifier"]
settings = args["settings"]
name = args["name"]
args = EndpointCreatePayload.model_validate(console_ns.payload)
try:
return {
"success": EndpointService.create_endpoint(
tenant_id=tenant_id,
user_id=user.id,
plugin_unique_identifier=plugin_unique_identifier,
name=name,
settings=settings,
plugin_unique_identifier=args.plugin_unique_identifier,
name=args.name,
settings=args.settings,
)
}
except PluginPermissionDeniedError as e:
@@ -65,11 +87,7 @@ class EndpointCreateApi(Resource):
class EndpointListApi(Resource):
@console_ns.doc("list_endpoints")
@console_ns.doc(description="List plugin endpoints with pagination")
@console_ns.expect(
console_ns.parser()
.add_argument("page", type=int, required=True, location="args", help="Page number")
.add_argument("page_size", type=int, required=True, location="args", help="Page size")
)
@console_ns.expect(console_ns.models[EndpointListQuery.__name__])
@console_ns.response(
200,
"Success",
@@ -83,15 +101,10 @@ class EndpointListApi(Resource):
def get(self):
user, tenant_id = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("page", type=int, required=True, location="args")
.add_argument("page_size", type=int, required=True, location="args")
)
args = parser.parse_args()
args = EndpointListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
page = args["page"]
page_size = args["page_size"]
page = args.page
page_size = args.page_size
return jsonable_encoder(
{
@@ -109,12 +122,7 @@ class EndpointListApi(Resource):
class EndpointListForSinglePluginApi(Resource):
@console_ns.doc("list_plugin_endpoints")
@console_ns.doc(description="List endpoints for a specific plugin")
@console_ns.expect(
console_ns.parser()
.add_argument("page", type=int, required=True, location="args", help="Page number")
.add_argument("page_size", type=int, required=True, location="args", help="Page size")
.add_argument("plugin_id", type=str, required=True, location="args", help="Plugin ID")
)
@console_ns.expect(console_ns.models[EndpointListForPluginQuery.__name__])
@console_ns.response(
200,
"Success",
@@ -128,17 +136,11 @@ class EndpointListForSinglePluginApi(Resource):
def get(self):
user, tenant_id = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("page", type=int, required=True, location="args")
.add_argument("page_size", type=int, required=True, location="args")
.add_argument("plugin_id", type=str, required=True, location="args")
)
args = parser.parse_args()
args = EndpointListForPluginQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
page = args["page"]
page_size = args["page_size"]
plugin_id = args["plugin_id"]
page = args.page
page_size = args.page_size
plugin_id = args.plugin_id
return jsonable_encoder(
{
@@ -157,11 +159,7 @@ class EndpointListForSinglePluginApi(Resource):
class EndpointDeleteApi(Resource):
@console_ns.doc("delete_endpoint")
@console_ns.doc(description="Delete a plugin endpoint")
@console_ns.expect(
console_ns.model(
"EndpointDeleteRequest", {"endpoint_id": fields.String(required=True, description="Endpoint ID")}
)
)
@console_ns.expect(console_ns.models[EndpointIdPayload.__name__])
@console_ns.response(
200,
"Endpoint deleted successfully",
@@ -175,13 +173,12 @@ class EndpointDeleteApi(Resource):
def post(self):
user, tenant_id = current_account_with_tenant()
parser = reqparse.RequestParser().add_argument("endpoint_id", type=str, required=True)
args = parser.parse_args()
endpoint_id = args["endpoint_id"]
args = EndpointIdPayload.model_validate(console_ns.payload)
return {
"success": EndpointService.delete_endpoint(tenant_id=tenant_id, user_id=user.id, endpoint_id=endpoint_id)
"success": EndpointService.delete_endpoint(
tenant_id=tenant_id, user_id=user.id, endpoint_id=args.endpoint_id
)
}
@@ -189,16 +186,7 @@ class EndpointUpdateApi(Resource):
class EndpointUpdateApi(Resource):
@console_ns.doc("update_endpoint")
@console_ns.doc(description="Update a plugin endpoint")
@console_ns.expect(
console_ns.model(
"EndpointUpdateRequest",
{
"endpoint_id": fields.String(required=True, description="Endpoint ID"),
"settings": fields.Raw(required=True, description="Updated settings"),
"name": fields.String(required=True, description="Updated name"),
},
)
)
@console_ns.expect(console_ns.models[EndpointUpdatePayload.__name__])
@console_ns.response(
200,
"Endpoint updated successfully",
@@ -212,25 +200,15 @@ class EndpointUpdateApi(Resource):
def post(self):
user, tenant_id = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("endpoint_id", type=str, required=True)
.add_argument("settings", type=dict, required=True)
.add_argument("name", type=str, required=True)
)
args = parser.parse_args()
endpoint_id = args["endpoint_id"]
settings = args["settings"]
name = args["name"]
args = EndpointUpdatePayload.model_validate(console_ns.payload)
return {
"success": EndpointService.update_endpoint(
tenant_id=tenant_id,
user_id=user.id,
endpoint_id=endpoint_id,
name=name,
settings=settings,
endpoint_id=args.endpoint_id,
name=args.name,
settings=args.settings,
)
}
@@ -239,11 +217,7 @@ class EndpointUpdateApi(Resource):
class EndpointEnableApi(Resource):
@console_ns.doc("enable_endpoint")
@console_ns.doc(description="Enable a plugin endpoint")
@console_ns.expect(
console_ns.model(
"EndpointEnableRequest", {"endpoint_id": fields.String(required=True, description="Endpoint ID")}
)
)
@console_ns.expect(console_ns.models[EndpointIdPayload.__name__])
@console_ns.response(
200,
"Endpoint enabled successfully",
@@ -257,13 +231,12 @@ class EndpointEnableApi(Resource):
def post(self):
user, tenant_id = current_account_with_tenant()
parser = reqparse.RequestParser().add_argument("endpoint_id", type=str, required=True)
args = parser.parse_args()
endpoint_id = args["endpoint_id"]
args = EndpointIdPayload.model_validate(console_ns.payload)
return {
"success": EndpointService.enable_endpoint(tenant_id=tenant_id, user_id=user.id, endpoint_id=endpoint_id)
"success": EndpointService.enable_endpoint(
tenant_id=tenant_id, user_id=user.id, endpoint_id=args.endpoint_id
)
}
@@ -271,11 +244,7 @@ class EndpointEnableApi(Resource):
class EndpointDisableApi(Resource):
@console_ns.doc("disable_endpoint")
@console_ns.doc(description="Disable a plugin endpoint")
@console_ns.expect(
console_ns.model(
"EndpointDisableRequest", {"endpoint_id": fields.String(required=True, description="Endpoint ID")}
)
)
@console_ns.expect(console_ns.models[EndpointIdPayload.__name__])
@console_ns.response(
200,
"Endpoint disabled successfully",
@@ -289,11 +258,10 @@ class EndpointDisableApi(Resource):
def post(self):
user, tenant_id = current_account_with_tenant()
parser = reqparse.RequestParser().add_argument("endpoint_id", type=str, required=True)
args = parser.parse_args()
endpoint_id = args["endpoint_id"]
args = EndpointIdPayload.model_validate(console_ns.payload)
return {
"success": EndpointService.disable_endpoint(tenant_id=tenant_id, user_id=user.id, endpoint_id=endpoint_id)
"success": EndpointService.disable_endpoint(
tenant_id=tenant_id, user_id=user.id, endpoint_id=args.endpoint_id
)
}

View File

@@ -1,7 +1,8 @@
from urllib import parse
from flask import abort, request
from flask_restx import Resource, marshal_with, reqparse
from flask_restx import Resource, marshal_with
from pydantic import BaseModel, Field
import services
from configs import dify_config
@@ -31,6 +32,42 @@ from services.account_service import AccountService, RegisterService, TenantService
from services.errors.account import AccountAlreadyInTenantError
from services.feature_service import FeatureService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class MemberInvitePayload(BaseModel):
emails: list[str] = Field(default_factory=list)
role: TenantAccountRole
language: str | None = None
class MemberRoleUpdatePayload(BaseModel):
role: str
class OwnerTransferEmailPayload(BaseModel):
language: str | None = None
class OwnerTransferCheckPayload(BaseModel):
code: str
token: str
class OwnerTransferPayload(BaseModel):
token: str
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(MemberInvitePayload)
reg(MemberRoleUpdatePayload)
reg(OwnerTransferEmailPayload)
reg(OwnerTransferCheckPayload)
reg(OwnerTransferPayload)
@console_ns.route("/workspaces/current/members")
class MemberListApi(Resource):
@ -48,29 +85,22 @@ class MemberListApi(Resource):
return {"result": "success", "accounts": members}, 200
parser_invite = (
reqparse.RequestParser()
.add_argument("emails", type=list, required=True, location="json")
.add_argument("role", type=str, required=True, default="admin", location="json")
.add_argument("language", type=str, required=False, location="json")
)
@console_ns.route("/workspaces/current/members/invite-email")
class MemberInviteEmailApi(Resource):
"""Invite a new member by email."""
@console_ns.expect(parser_invite)
@console_ns.expect(console_ns.models[MemberInvitePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_resource_check("members")
def post(self):
args = parser_invite.parse_args()
payload = console_ns.payload or {}
args = MemberInvitePayload.model_validate(payload)
invitee_emails = args["emails"]
invitee_role = args["role"]
interface_language = args["language"]
invitee_emails = args.emails
invitee_role = args.role
interface_language = args.language
if not TenantAccountRole.is_non_owner_role(invitee_role):
return {"code": "invalid-role", "message": "Invalid role"}, 400
current_user, _ = current_account_with_tenant()
@ -146,20 +176,18 @@ class MemberCancelInviteApi(Resource):
}, 200
parser_update = reqparse.RequestParser().add_argument("role", type=str, required=True, location="json")
@console_ns.route("/workspaces/current/members/<uuid:member_id>/update-role")
class MemberUpdateRoleApi(Resource):
"""Update member role."""
@console_ns.expect(parser_update)
@console_ns.expect(console_ns.models[MemberRoleUpdatePayload.__name__])
@setup_required
@login_required
@account_initialization_required
def put(self, member_id):
args = parser_update.parse_args()
new_role = args["role"]
payload = console_ns.payload or {}
args = MemberRoleUpdatePayload.model_validate(payload)
new_role = args.role
if not TenantAccountRole.is_valid_role(new_role):
return {"code": "invalid-role", "message": "Invalid role"}, 400
@ -197,20 +225,18 @@ class DatasetOperatorMemberListApi(Resource):
return {"result": "success", "accounts": members}, 200
parser_send = reqparse.RequestParser().add_argument("language", type=str, required=False, location="json")
@console_ns.route("/workspaces/current/members/send-owner-transfer-confirm-email")
class SendOwnerTransferEmailApi(Resource):
"""Send owner transfer email."""
@console_ns.expect(parser_send)
@console_ns.expect(console_ns.models[OwnerTransferEmailPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@is_allow_transfer_owner
def post(self):
args = parser_send.parse_args()
payload = console_ns.payload or {}
args = OwnerTransferEmailPayload.model_validate(payload)
ip_address = extract_remote_ip(request)
if AccountService.is_email_send_ip_limit(ip_address):
raise EmailSendIpLimitError()
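A note on the console_ns.payload or {} guard used throughout this file: depending on the Flask/flask-restx versions, payload can come back None for an absent or null JSON body, and Pydantic refuses to validate None, so the empty dict lets all-optional payloads such as OwnerTransferEmailPayload pass. A small sketch of the two cases:

from pydantic import BaseModel, ValidationError

class OwnerTransferEmailPayload(BaseModel):
    language: str | None = None

payload = None  # what an absent/null JSON body may yield (assumption)
args = OwnerTransferEmailPayload.model_validate(payload or {})
assert args.language is None  # an all-optional model accepts an empty dict

try:
    OwnerTransferEmailPayload.model_validate(None)
except ValidationError:
    pass  # without the "or {}" guard, a missing body would fail here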
@ -221,7 +247,7 @@ class SendOwnerTransferEmailApi(Resource):
if not TenantService.is_owner(current_user, current_user.current_tenant):
raise NotOwnerError()
if args["language"] is not None and args["language"] == "zh-Hans":
if args.language is not None and args.language == "zh-Hans":
language = "zh-Hans"
else:
language = "en-US"
@ -238,22 +264,16 @@ class SendOwnerTransferEmailApi(Resource):
return {"result": "success", "data": token}
parser_owner = (
reqparse.RequestParser()
.add_argument("code", type=str, required=True, location="json")
.add_argument("token", type=str, required=True, nullable=False, location="json")
)
@console_ns.route("/workspaces/current/members/owner-transfer-check")
class OwnerTransferCheckApi(Resource):
@console_ns.expect(parser_owner)
@console_ns.expect(console_ns.models[OwnerTransferCheckPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@is_allow_transfer_owner
def post(self):
args = parser_owner.parse_args()
payload = console_ns.payload or {}
args = OwnerTransferCheckPayload.model_validate(payload)
# check if the current user is the owner of the workspace
current_user, _ = current_account_with_tenant()
if not current_user.current_tenant:
@ -267,41 +287,37 @@ class OwnerTransferCheckApi(Resource):
if is_owner_transfer_error_rate_limit:
raise OwnerTransferLimitError()
token_data = AccountService.get_owner_transfer_data(args["token"])
token_data = AccountService.get_owner_transfer_data(args.token)
if token_data is None:
raise InvalidTokenError()
if user_email != token_data.get("email"):
raise InvalidEmailError()
if args["code"] != token_data.get("code"):
if args.code != token_data.get("code"):
AccountService.add_owner_transfer_error_rate_limit(user_email)
raise EmailCodeError()
# Verified, revoke the first token
AccountService.revoke_owner_transfer_token(args["token"])
AccountService.revoke_owner_transfer_token(args.token)
# Refresh token data by generating a new token
_, new_token = AccountService.generate_owner_transfer_token(user_email, code=args["code"], additional_data={})
_, new_token = AccountService.generate_owner_transfer_token(user_email, code=args.code, additional_data={})
AccountService.reset_owner_transfer_error_rate_limit(user_email)
return {"is_valid": True, "email": token_data.get("email"), "token": new_token}
parser_owner_transfer = reqparse.RequestParser().add_argument(
"token", type=str, required=True, nullable=False, location="json"
)
@console_ns.route("/workspaces/current/members/<uuid:member_id>/owner-transfer")
class OwnerTransfer(Resource):
@console_ns.expect(parser_owner_transfer)
@console_ns.expect(console_ns.models[OwnerTransferPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@is_allow_transfer_owner
def post(self, member_id):
args = parser_owner_transfer.parse_args()
payload = console_ns.payload or {}
args = OwnerTransferPayload.model_validate(payload)
# check if the current user is the owner of the workspace
current_user, _ = current_account_with_tenant()
@ -313,14 +329,14 @@ class OwnerTransfer(Resource):
if current_user.id == str(member_id):
raise CannotTransferOwnerToSelfError()
transfer_token_data = AccountService.get_owner_transfer_data(args["token"])
transfer_token_data = AccountService.get_owner_transfer_data(args.token)
if not transfer_token_data:
raise InvalidTokenError()
if transfer_token_data.get("email") != current_user.email:
raise InvalidEmailError()
AccountService.revoke_owner_transfer_token(args["token"])
AccountService.revoke_owner_transfer_token(args.token)
member = db.session.get(Account, str(member_id))
if not member:

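Typing the invite payload's role as TenantAccountRole (instead of the old type=str argument) means Pydantic screens and coerces the value before the handler's is_non_owner_role check even runs. A hedged sketch with a stand-in enum:

from enum import Enum
from pydantic import BaseModel, Field, ValidationError

class TenantAccountRole(str, Enum):  # stand-in for the project's enum
    ADMIN = "admin"
    EDITOR = "editor"
    NORMAL = "normal"

class MemberInvitePayload(BaseModel):
    emails: list[str] = Field(default_factory=list)
    role: TenantAccountRole
    language: str | None = None

ok = MemberInvitePayload.model_validate({"emails": ["a@example.com"], "role": "admin"})
assert ok.role is TenantAccountRole.ADMIN  # string coerced to an enum member

try:
    MemberInvitePayload.model_validate({"role": "superuser"})
except ValidationError:
    pass  # unknown roles never reach the handler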

@ -1,31 +1,97 @@
import io
from typing import Any, Literal
from flask import send_file
from flask_restx import Resource, reqparse
from flask import request, send_file
from flask_restx import Resource
from pydantic import BaseModel, Field, field_validator
from controllers.console import console_ns
from controllers.console.wraps import account_initialization_required, is_admin_or_owner_required, setup_required
from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.utils.encoders import jsonable_encoder
from libs.helper import StrLen, uuid_value
from libs.helper import uuid_value
from libs.login import current_account_with_tenant, login_required
from services.billing_service import BillingService
from services.model_provider_service import ModelProviderService
parser_model = reqparse.RequestParser().add_argument(
"model_type",
type=str,
required=False,
nullable=True,
choices=[mt.value for mt in ModelType],
location="args",
)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class ParserModelList(BaseModel):
model_type: ModelType | None = None
class ParserCredentialId(BaseModel):
credential_id: str | None = None
@field_validator("credential_id")
@classmethod
def validate_optional_credential_id(cls, value: str | None) -> str | None:
if value is None:
return value
return uuid_value(value)
class ParserCredentialCreate(BaseModel):
credentials: dict[str, Any]
name: str | None = Field(default=None, max_length=30)
class ParserCredentialUpdate(BaseModel):
credential_id: str
credentials: dict[str, Any]
name: str | None = Field(default=None, max_length=30)
@field_validator("credential_id")
@classmethod
def validate_update_credential_id(cls, value: str) -> str:
return uuid_value(value)
class ParserCredentialDelete(BaseModel):
credential_id: str
@field_validator("credential_id")
@classmethod
def validate_delete_credential_id(cls, value: str) -> str:
return uuid_value(value)
class ParserCredentialSwitch(BaseModel):
credential_id: str
@field_validator("credential_id")
@classmethod
def validate_switch_credential_id(cls, value: str) -> str:
return uuid_value(value)
class ParserCredentialValidate(BaseModel):
credentials: dict[str, Any]
class ParserPreferredProviderType(BaseModel):
preferred_provider_type: Literal["system", "custom"]
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(ParserModelList)
reg(ParserCredentialId)
reg(ParserCredentialCreate)
reg(ParserCredentialUpdate)
reg(ParserCredentialDelete)
reg(ParserCredentialSwitch)
reg(ParserCredentialValidate)
reg(ParserPreferredProviderType)
@console_ns.route("/workspaces/current/model-providers")
class ModelProviderListApi(Resource):
@console_ns.expect(parser_model)
@console_ns.expect(console_ns.models[ParserModelList.__name__])
@setup_required
@login_required
@account_initialization_required
@ -33,38 +99,18 @@ class ModelProviderListApi(Resource):
_, current_tenant_id = current_account_with_tenant()
tenant_id = current_tenant_id
args = parser_model.parse_args()
payload = request.args.to_dict(flat=True) # type: ignore
args = ParserModelList.model_validate(payload)
model_provider_service = ModelProviderService()
provider_list = model_provider_service.get_provider_list(tenant_id=tenant_id, model_type=args.get("model_type"))
provider_list = model_provider_service.get_provider_list(tenant_id=tenant_id, model_type=args.model_type)
return jsonable_encoder({"data": provider_list})
parser_cred = reqparse.RequestParser().add_argument(
"credential_id", type=uuid_value, required=False, nullable=True, location="args"
)
parser_post_cred = (
reqparse.RequestParser()
.add_argument("credentials", type=dict, required=True, nullable=False, location="json")
.add_argument("name", type=StrLen(30), required=False, nullable=True, location="json")
)
parser_put_cred = (
reqparse.RequestParser()
.add_argument("credential_id", type=uuid_value, required=True, nullable=False, location="json")
.add_argument("credentials", type=dict, required=True, nullable=False, location="json")
.add_argument("name", type=StrLen(30), required=False, nullable=True, location="json")
)
parser_delete_cred = reqparse.RequestParser().add_argument(
"credential_id", type=uuid_value, required=True, nullable=False, location="json"
)
@console_ns.route("/workspaces/current/model-providers/<path:provider>/credentials")
class ModelProviderCredentialApi(Resource):
@console_ns.expect(parser_cred)
@console_ns.expect(console_ns.models[ParserCredentialId.__name__])
@setup_required
@login_required
@account_initialization_required
@ -72,23 +118,25 @@ class ModelProviderCredentialApi(Resource):
_, current_tenant_id = current_account_with_tenant()
tenant_id = current_tenant_id
# if credential_id is not provided, return current used credential
args = parser_cred.parse_args()
payload = request.args.to_dict(flat=True) # type: ignore
args = ParserCredentialId.model_validate(payload)
model_provider_service = ModelProviderService()
credentials = model_provider_service.get_provider_credential(
tenant_id=tenant_id, provider=provider, credential_id=args.get("credential_id")
tenant_id=tenant_id, provider=provider, credential_id=args.credential_id
)
return {"credentials": credentials}
@console_ns.expect(parser_post_cred)
@console_ns.expect(console_ns.models[ParserCredentialCreate.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@account_initialization_required
def post(self, provider: str):
_, current_tenant_id = current_account_with_tenant()
args = parser_post_cred.parse_args()
payload = console_ns.payload or {}
args = ParserCredentialCreate.model_validate(payload)
model_provider_service = ModelProviderService()
@ -96,15 +144,15 @@ class ModelProviderCredentialApi(Resource):
model_provider_service.create_provider_credential(
tenant_id=current_tenant_id,
provider=provider,
credentials=args["credentials"],
credential_name=args["name"],
credentials=args.credentials,
credential_name=args.name,
)
except CredentialsValidateFailedError as ex:
raise ValueError(str(ex))
return {"result": "success"}, 201
@console_ns.expect(parser_put_cred)
@console_ns.expect(console_ns.models[ParserCredentialUpdate.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@ -112,7 +160,8 @@ class ModelProviderCredentialApi(Resource):
def put(self, provider: str):
_, current_tenant_id = current_account_with_tenant()
args = parser_put_cred.parse_args()
payload = console_ns.payload or {}
args = ParserCredentialUpdate.model_validate(payload)
model_provider_service = ModelProviderService()
@ -120,71 +169,64 @@ class ModelProviderCredentialApi(Resource):
model_provider_service.update_provider_credential(
tenant_id=current_tenant_id,
provider=provider,
credentials=args["credentials"],
credential_id=args["credential_id"],
credential_name=args["name"],
credentials=args.credentials,
credential_id=args.credential_id,
credential_name=args.name,
)
except CredentialsValidateFailedError as ex:
raise ValueError(str(ex))
return {"result": "success"}
@console_ns.expect(parser_delete_cred)
@console_ns.expect(console_ns.models[ParserCredentialDelete.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@account_initialization_required
def delete(self, provider: str):
_, current_tenant_id = current_account_with_tenant()
args = parser_delete_cred.parse_args()
payload = console_ns.payload or {}
args = ParserCredentialDelete.model_validate(payload)
model_provider_service = ModelProviderService()
model_provider_service.remove_provider_credential(
tenant_id=current_tenant_id, provider=provider, credential_id=args["credential_id"]
tenant_id=current_tenant_id, provider=provider, credential_id=args.credential_id
)
return {"result": "success"}, 204
parser_switch = reqparse.RequestParser().add_argument(
"credential_id", type=str, required=True, nullable=False, location="json"
)
@console_ns.route("/workspaces/current/model-providers/<path:provider>/credentials/switch")
class ModelProviderCredentialSwitchApi(Resource):
@console_ns.expect(parser_switch)
@console_ns.expect(console_ns.models[ParserCredentialSwitch.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@account_initialization_required
def post(self, provider: str):
_, current_tenant_id = current_account_with_tenant()
args = parser_switch.parse_args()
payload = console_ns.payload or {}
args = ParserCredentialSwitch.model_validate(payload)
service = ModelProviderService()
service.switch_active_provider_credential(
tenant_id=current_tenant_id,
provider=provider,
credential_id=args["credential_id"],
credential_id=args.credential_id,
)
return {"result": "success"}
parser_validate = reqparse.RequestParser().add_argument(
"credentials", type=dict, required=True, nullable=False, location="json"
)
@console_ns.route("/workspaces/current/model-providers/<path:provider>/credentials/validate")
class ModelProviderValidateApi(Resource):
@console_ns.expect(parser_validate)
@console_ns.expect(console_ns.models[ParserCredentialValidate.__name__])
@setup_required
@login_required
@account_initialization_required
def post(self, provider: str):
_, current_tenant_id = current_account_with_tenant()
args = parser_validate.parse_args()
payload = console_ns.payload or {}
args = ParserCredentialValidate.model_validate(payload)
tenant_id = current_tenant_id
@ -195,7 +237,7 @@ class ModelProviderValidateApi(Resource):
try:
model_provider_service.validate_provider_credentials(
tenant_id=tenant_id, provider=provider, credentials=args["credentials"]
tenant_id=tenant_id, provider=provider, credentials=args.credentials
)
except CredentialsValidateFailedError as ex:
result = False
@ -228,19 +270,9 @@ class ModelProviderIconApi(Resource):
return send_file(io.BytesIO(icon), mimetype=mimetype)
parser_preferred = reqparse.RequestParser().add_argument(
"preferred_provider_type",
type=str,
required=True,
nullable=False,
choices=["system", "custom"],
location="json",
)
@console_ns.route("/workspaces/current/model-providers/<path:provider>/preferred-provider-type")
class PreferredProviderTypeUpdateApi(Resource):
@console_ns.expect(parser_preferred)
@console_ns.expect(console_ns.models[ParserPreferredProviderType.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@ -250,11 +282,12 @@ class PreferredProviderTypeUpdateApi(Resource):
tenant_id = current_tenant_id
args = parser_preferred.parse_args()
payload = console_ns.payload or {}
args = ParserPreferredProviderType.model_validate(payload)
model_provider_service = ModelProviderService()
model_provider_service.switch_preferred_provider(
tenant_id=tenant_id, provider=provider, preferred_provider_type=args["preferred_provider_type"]
tenant_id=tenant_id, provider=provider, preferred_provider_type=args.preferred_provider_type
)
return {"result": "success"}

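Every credential model in this file funnels credential_id through the project's uuid_value helper via a field_validator, reproducing what reqparse did with type=uuid_value. A self-contained sketch with a local stand-in for that helper:

import uuid
from pydantic import BaseModel, ValidationError, field_validator

def uuid_value(value: str) -> str:
    # stand-in for libs.helper.uuid_value: canonicalize or raise ValueError
    return str(uuid.UUID(value))

class ParserCredentialId(BaseModel):
    credential_id: str | None = None

    @field_validator("credential_id")
    @classmethod
    def validate_optional_credential_id(cls, value: str | None) -> str | None:
        if value is None:
            return value
        return uuid_value(value)

assert ParserCredentialId().credential_id is None  # optional field stays None
ok = ParserCredentialId(credential_id="123e4567-e89b-12d3-a456-426614174000")

try:
    ParserCredentialId(credential_id="not-a-uuid")
except ValidationError:
    pass  # the ValueError from uuid_value surfaces as a validation error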

@ -1,52 +1,144 @@
import logging
from typing import Any, cast
from flask_restx import Resource, reqparse
from flask import request
from flask_restx import Resource
from pydantic import BaseModel, Field, field_validator
from controllers.console import console_ns
from controllers.console.wraps import account_initialization_required, is_admin_or_owner_required, setup_required
from core.model_runtime.entities.model_entities import ModelType
from core.model_runtime.errors.validate import CredentialsValidateFailedError
from core.model_runtime.utils.encoders import jsonable_encoder
from libs.helper import StrLen, uuid_value
from libs.helper import uuid_value
from libs.login import current_account_with_tenant, login_required
from services.model_load_balancing_service import ModelLoadBalancingService
from services.model_provider_service import ModelProviderService
logger = logging.getLogger(__name__)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
parser_get_default = reqparse.RequestParser().add_argument(
"model_type",
type=str,
required=True,
nullable=False,
choices=[mt.value for mt in ModelType],
location="args",
)
parser_post_default = reqparse.RequestParser().add_argument(
"model_settings", type=list, required=True, nullable=False, location="json"
)
class ParserGetDefault(BaseModel):
model_type: ModelType
class ParserPostDefault(BaseModel):
class Inner(BaseModel):
model_type: ModelType
model: str | None = None
provider: str | None = None
model_settings: list[Inner]
class ParserDeleteModels(BaseModel):
model: str
model_type: ModelType
class LoadBalancingPayload(BaseModel):
configs: list[dict[str, Any]] | None = None
enabled: bool | None = None
class ParserPostModels(BaseModel):
model: str
model_type: ModelType
load_balancing: LoadBalancingPayload | None = None
config_from: str | None = None
credential_id: str | None = None
@field_validator("credential_id")
@classmethod
def validate_credential_id(cls, value: str | None) -> str | None:
if value is None:
return value
return uuid_value(value)
class ParserGetCredentials(BaseModel):
model: str
model_type: ModelType
config_from: str | None = None
credential_id: str | None = None
@field_validator("credential_id")
@classmethod
def validate_get_credential_id(cls, value: str | None) -> str | None:
if value is None:
return value
return uuid_value(value)
class ParserCredentialBase(BaseModel):
model: str
model_type: ModelType
class ParserCreateCredential(ParserCredentialBase):
name: str | None = Field(default=None, max_length=30)
credentials: dict[str, Any]
class ParserUpdateCredential(ParserCredentialBase):
credential_id: str
credentials: dict[str, Any]
name: str | None = Field(default=None, max_length=30)
@field_validator("credential_id")
@classmethod
def validate_update_credential_id(cls, value: str) -> str:
return uuid_value(value)
class ParserDeleteCredential(ParserCredentialBase):
credential_id: str
@field_validator("credential_id")
@classmethod
def validate_delete_credential_id(cls, value: str) -> str:
return uuid_value(value)
class ParserParameter(BaseModel):
model: str
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(ParserGetDefault)
reg(ParserPostDefault)
reg(ParserDeleteModels)
reg(ParserPostModels)
reg(ParserGetCredentials)
reg(ParserCreateCredential)
reg(ParserUpdateCredential)
reg(ParserDeleteCredential)
reg(ParserParameter)
@console_ns.route("/workspaces/current/default-model")
class DefaultModelApi(Resource):
@console_ns.expect(parser_get_default)
@console_ns.expect(console_ns.models[ParserGetDefault.__name__])
@setup_required
@login_required
@account_initialization_required
def get(self):
_, tenant_id = current_account_with_tenant()
args = parser_get_default.parse_args()
args = ParserGetDefault.model_validate(request.args.to_dict(flat=True)) # type: ignore
model_provider_service = ModelProviderService()
default_model_entity = model_provider_service.get_default_model_of_model_type(
tenant_id=tenant_id, model_type=args["model_type"]
tenant_id=tenant_id, model_type=args.model_type
)
return jsonable_encoder({"data": default_model_entity})
@console_ns.expect(parser_post_default)
@console_ns.expect(console_ns.models[ParserPostDefault.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@ -54,66 +146,31 @@ class DefaultModelApi(Resource):
def post(self):
_, tenant_id = current_account_with_tenant()
args = parser_post_default.parse_args()
args = ParserPostDefault.model_validate(console_ns.payload)
model_provider_service = ModelProviderService()
model_settings = args["model_settings"]
model_settings = args.model_settings
for model_setting in model_settings:
if "model_type" not in model_setting or model_setting["model_type"] not in [mt.value for mt in ModelType]:
raise ValueError("invalid model type")
if "provider" not in model_setting:
if model_setting.provider is None:
continue
if "model" not in model_setting:
raise ValueError("invalid model")
try:
model_provider_service.update_default_model_of_model_type(
tenant_id=tenant_id,
model_type=model_setting["model_type"],
provider=model_setting["provider"],
model=model_setting["model"],
model_type=model_setting.model_type,
provider=model_setting.provider,
model=cast(str, model_setting.model),
)
except Exception as ex:
logger.exception(
"Failed to update default model, model type: %s, model: %s",
model_setting["model_type"],
model_setting.get("model"),
model_setting.model_type,
model_setting.model,
)
raise ex
return {"result": "success"}
parser_post_models = (
reqparse.RequestParser()
.add_argument("model", type=str, required=True, nullable=False, location="json")
.add_argument(
"model_type",
type=str,
required=True,
nullable=False,
choices=[mt.value for mt in ModelType],
location="json",
)
.add_argument("load_balancing", type=dict, required=False, nullable=True, location="json")
.add_argument("config_from", type=str, required=False, nullable=True, location="json")
.add_argument("credential_id", type=uuid_value, required=False, nullable=True, location="json")
)
parser_delete_models = (
reqparse.RequestParser()
.add_argument("model", type=str, required=True, nullable=False, location="json")
.add_argument(
"model_type",
type=str,
required=True,
nullable=False,
choices=[mt.value for mt in ModelType],
location="json",
)
)
@console_ns.route("/workspaces/current/model-providers/<path:provider>/models")
class ModelProviderModelApi(Resource):
@setup_required
@ -127,7 +184,7 @@ class ModelProviderModelApi(Resource):
return jsonable_encoder({"data": models})
@console_ns.expect(parser_post_models)
@console_ns.expect(console_ns.models[ParserPostModels.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@ -135,45 +192,45 @@ class ModelProviderModelApi(Resource):
def post(self, provider: str):
# To save the model's load balance configs
_, tenant_id = current_account_with_tenant()
args = parser_post_models.parse_args()
args = ParserPostModels.model_validate(console_ns.payload)
if args.get("config_from", "") == "custom-model":
if not args.get("credential_id"):
if args.config_from == "custom-model":
if not args.credential_id:
raise ValueError("credential_id is required when configuring a custom-model")
service = ModelProviderService()
service.switch_active_custom_model_credential(
tenant_id=tenant_id,
provider=provider,
model_type=args["model_type"],
model=args["model"],
credential_id=args["credential_id"],
model_type=args.model_type,
model=args.model,
credential_id=args.credential_id,
)
model_load_balancing_service = ModelLoadBalancingService()
if "load_balancing" in args and args["load_balancing"] and "configs" in args["load_balancing"]:
if args.load_balancing and args.load_balancing.configs:
# save load balancing configs
model_load_balancing_service.update_load_balancing_configs(
tenant_id=tenant_id,
provider=provider,
model=args["model"],
model_type=args["model_type"],
configs=args["load_balancing"]["configs"],
config_from=args.get("config_from", ""),
model=args.model,
model_type=args.model_type,
configs=args.load_balancing.configs,
config_from=args.config_from or "",
)
if args.get("load_balancing", {}).get("enabled"):
if args.load_balancing.enabled:
model_load_balancing_service.enable_model_load_balancing(
tenant_id=tenant_id, provider=provider, model=args["model"], model_type=args["model_type"]
tenant_id=tenant_id, provider=provider, model=args.model, model_type=args.model_type
)
else:
model_load_balancing_service.disable_model_load_balancing(
tenant_id=tenant_id, provider=provider, model=args["model"], model_type=args["model_type"]
tenant_id=tenant_id, provider=provider, model=args.model, model_type=args.model_type
)
return {"result": "success"}, 200
@console_ns.expect(parser_delete_models)
@console_ns.expect(console_ns.models[ParserDeleteModels.__name__], validate=True)
@setup_required
@login_required
@is_admin_or_owner_required
@ -181,113 +238,53 @@ class ModelProviderModelApi(Resource):
def delete(self, provider: str):
_, tenant_id = current_account_with_tenant()
args = parser_delete_models.parse_args()
args = ParserDeleteModels.model_validate(console_ns.payload)
model_provider_service = ModelProviderService()
model_provider_service.remove_model(
tenant_id=tenant_id, provider=provider, model=args["model"], model_type=args["model_type"]
tenant_id=tenant_id, provider=provider, model=args.model, model_type=args.model_type
)
return {"result": "success"}, 204
parser_get_credentials = (
reqparse.RequestParser()
.add_argument("model", type=str, required=True, nullable=False, location="args")
.add_argument(
"model_type",
type=str,
required=True,
nullable=False,
choices=[mt.value for mt in ModelType],
location="args",
)
.add_argument("config_from", type=str, required=False, nullable=True, location="args")
.add_argument("credential_id", type=uuid_value, required=False, nullable=True, location="args")
)
parser_post_cred = (
reqparse.RequestParser()
.add_argument("model", type=str, required=True, nullable=False, location="json")
.add_argument(
"model_type",
type=str,
required=True,
nullable=False,
choices=[mt.value for mt in ModelType],
location="json",
)
.add_argument("name", type=StrLen(30), required=False, nullable=True, location="json")
.add_argument("credentials", type=dict, required=True, nullable=False, location="json")
)
parser_put_cred = (
reqparse.RequestParser()
.add_argument("model", type=str, required=True, nullable=False, location="json")
.add_argument(
"model_type",
type=str,
required=True,
nullable=False,
choices=[mt.value for mt in ModelType],
location="json",
)
.add_argument("credential_id", type=uuid_value, required=True, nullable=False, location="json")
.add_argument("credentials", type=dict, required=True, nullable=False, location="json")
.add_argument("name", type=StrLen(30), required=False, nullable=True, location="json")
)
parser_delete_cred = (
reqparse.RequestParser()
.add_argument("model", type=str, required=True, nullable=False, location="json")
.add_argument(
"model_type",
type=str,
required=True,
nullable=False,
choices=[mt.value for mt in ModelType],
location="json",
)
.add_argument("credential_id", type=uuid_value, required=True, nullable=False, location="json")
)
@console_ns.route("/workspaces/current/model-providers/<path:provider>/models/credentials")
class ModelProviderModelCredentialApi(Resource):
@console_ns.expect(parser_get_credentials)
@console_ns.expect(console_ns.models[ParserGetCredentials.__name__])
@setup_required
@login_required
@account_initialization_required
def get(self, provider: str):
_, tenant_id = current_account_with_tenant()
args = parser_get_credentials.parse_args()
args = ParserGetCredentials.model_validate(request.args.to_dict(flat=True)) # type: ignore
model_provider_service = ModelProviderService()
current_credential = model_provider_service.get_model_credential(
tenant_id=tenant_id,
provider=provider,
model_type=args["model_type"],
model=args["model"],
credential_id=args.get("credential_id"),
model_type=args.model_type,
model=args.model,
credential_id=args.credential_id,
)
model_load_balancing_service = ModelLoadBalancingService()
is_load_balancing_enabled, load_balancing_configs = model_load_balancing_service.get_load_balancing_configs(
tenant_id=tenant_id,
provider=provider,
model=args["model"],
model_type=args["model_type"],
config_from=args.get("config_from", ""),
model=args.model,
model_type=args.model_type,
config_from=args.config_from or "",
)
if args.get("config_from", "") == "predefined-model":
if args.config_from == "predefined-model":
available_credentials = model_provider_service.provider_manager.get_provider_available_credentials(
tenant_id=tenant_id, provider_name=provider
)
else:
model_type = ModelType.value_of(args["model_type"]).to_origin_model_type()
model_type = args.model_type
available_credentials = model_provider_service.provider_manager.get_provider_model_available_credentials(
tenant_id=tenant_id, provider_name=provider, model_type=model_type, model_name=args["model"]
tenant_id=tenant_id, provider_name=provider, model_type=model_type, model_name=args.model
)
return jsonable_encoder(
@ -304,7 +301,7 @@ class ModelProviderModelCredentialApi(Resource):
}
)
@console_ns.expect(parser_post_cred)
@console_ns.expect(console_ns.models[ParserCreateCredential.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@ -312,7 +309,7 @@ class ModelProviderModelCredentialApi(Resource):
def post(self, provider: str):
_, tenant_id = current_account_with_tenant()
args = parser_post_cred.parse_args()
args = ParserCreateCredential.model_validate(console_ns.payload)
model_provider_service = ModelProviderService()
@ -320,30 +317,30 @@ class ModelProviderModelCredentialApi(Resource):
model_provider_service.create_model_credential(
tenant_id=tenant_id,
provider=provider,
model=args["model"],
model_type=args["model_type"],
credentials=args["credentials"],
credential_name=args["name"],
model=args.model,
model_type=args.model_type,
credentials=args.credentials,
credential_name=args.name,
)
except CredentialsValidateFailedError as ex:
logger.exception(
"Failed to save model credentials, tenant_id: %s, model: %s, model_type: %s",
tenant_id,
args.get("model"),
args.get("model_type"),
args.model,
args.model_type,
)
raise ValueError(str(ex))
return {"result": "success"}, 201
@console_ns.expect(parser_put_cred)
@console_ns.expect(console_ns.models[ParserUpdateCredential.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@account_initialization_required
def put(self, provider: str):
_, current_tenant_id = current_account_with_tenant()
args = parser_put_cred.parse_args()
args = ParserUpdateCredential.model_validate(console_ns.payload)
model_provider_service = ModelProviderService()
@ -351,106 +348,87 @@ class ModelProviderModelCredentialApi(Resource):
model_provider_service.update_model_credential(
tenant_id=current_tenant_id,
provider=provider,
model_type=args["model_type"],
model=args["model"],
credentials=args["credentials"],
credential_id=args["credential_id"],
credential_name=args["name"],
model_type=args.model_type,
model=args.model,
credentials=args.credentials,
credential_id=args.credential_id,
credential_name=args.name,
)
except CredentialsValidateFailedError as ex:
raise ValueError(str(ex))
return {"result": "success"}
@console_ns.expect(parser_delete_cred)
@console_ns.expect(console_ns.models[ParserDeleteCredential.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@account_initialization_required
def delete(self, provider: str):
_, current_tenant_id = current_account_with_tenant()
args = parser_delete_cred.parse_args()
args = ParserDeleteCredential.model_validate(console_ns.payload)
model_provider_service = ModelProviderService()
model_provider_service.remove_model_credential(
tenant_id=current_tenant_id,
provider=provider,
model_type=args["model_type"],
model=args["model"],
credential_id=args["credential_id"],
model_type=args.model_type,
model=args.model,
credential_id=args.credential_id,
)
return {"result": "success"}, 204
parser_switch = (
reqparse.RequestParser()
.add_argument("model", type=str, required=True, nullable=False, location="json")
.add_argument(
"model_type",
type=str,
required=True,
nullable=False,
choices=[mt.value for mt in ModelType],
location="json",
)
.add_argument("credential_id", type=str, required=True, nullable=False, location="json")
class ParserSwitch(BaseModel):
model: str
model_type: ModelType
credential_id: str
console_ns.schema_model(
ParserSwitch.__name__, ParserSwitch.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
@console_ns.route("/workspaces/current/model-providers/<path:provider>/models/credentials/switch")
class ModelProviderModelCredentialSwitchApi(Resource):
@console_ns.expect(parser_switch)
@console_ns.expect(console_ns.models[ParserSwitch.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@account_initialization_required
def post(self, provider: str):
_, current_tenant_id = current_account_with_tenant()
args = parser_switch.parse_args()
args = ParserSwitch.model_validate(console_ns.payload)
service = ModelProviderService()
service.add_model_credential_to_model_list(
tenant_id=current_tenant_id,
provider=provider,
model_type=args["model_type"],
model=args["model"],
credential_id=args["credential_id"],
model_type=args.model_type,
model=args.model,
credential_id=args.credential_id,
)
return {"result": "success"}
parser_model_enable_disable = (
reqparse.RequestParser()
.add_argument("model", type=str, required=True, nullable=False, location="json")
.add_argument(
"model_type",
type=str,
required=True,
nullable=False,
choices=[mt.value for mt in ModelType],
location="json",
)
)
@console_ns.route(
"/workspaces/current/model-providers/<path:provider>/models/enable", endpoint="model-provider-model-enable"
)
class ModelProviderModelEnableApi(Resource):
@console_ns.expect(parser_model_enable_disable)
@console_ns.expect(console_ns.models[ParserDeleteModels.__name__])
@setup_required
@login_required
@account_initialization_required
def patch(self, provider: str):
_, tenant_id = current_account_with_tenant()
args = parser_model_enable_disable.parse_args()
args = ParserDeleteModels.model_validate(console_ns.payload)
model_provider_service = ModelProviderService()
model_provider_service.enable_model(
tenant_id=tenant_id, provider=provider, model=args["model"], model_type=args["model_type"]
tenant_id=tenant_id, provider=provider, model=args.model, model_type=args.model_type
)
return {"result": "success"}
@ -460,48 +438,43 @@ class ModelProviderModelEnableApi(Resource):
"/workspaces/current/model-providers/<path:provider>/models/disable", endpoint="model-provider-model-disable"
)
class ModelProviderModelDisableApi(Resource):
@console_ns.expect(parser_model_enable_disable)
@console_ns.expect(console_ns.models[ParserDeleteModels.__name__])
@setup_required
@login_required
@account_initialization_required
def patch(self, provider: str):
_, tenant_id = current_account_with_tenant()
args = parser_model_enable_disable.parse_args()
args = ParserDeleteModels.model_validate(console_ns.payload)
model_provider_service = ModelProviderService()
model_provider_service.disable_model(
tenant_id=tenant_id, provider=provider, model=args["model"], model_type=args["model_type"]
tenant_id=tenant_id, provider=provider, model=args.model, model_type=args.model_type
)
return {"result": "success"}
parser_validate = (
reqparse.RequestParser()
.add_argument("model", type=str, required=True, nullable=False, location="json")
.add_argument(
"model_type",
type=str,
required=True,
nullable=False,
choices=[mt.value for mt in ModelType],
location="json",
)
.add_argument("credentials", type=dict, required=True, nullable=False, location="json")
class ParserValidate(BaseModel):
model: str
model_type: ModelType
credentials: dict
console_ns.schema_model(
ParserValidate.__name__, ParserValidate.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
@console_ns.route("/workspaces/current/model-providers/<path:provider>/models/credentials/validate")
class ModelProviderModelValidateApi(Resource):
@console_ns.expect(parser_validate)
@console_ns.expect(console_ns.models[ParserValidate.__name__])
@setup_required
@login_required
@account_initialization_required
def post(self, provider: str):
_, tenant_id = current_account_with_tenant()
args = parser_validate.parse_args()
args = ParserValidate.model_validate(console_ns.payload)
model_provider_service = ModelProviderService()
@ -512,9 +485,9 @@ class ModelProviderModelValidateApi(Resource):
model_provider_service.validate_model_credentials(
tenant_id=tenant_id,
provider=provider,
model=args["model"],
model_type=args["model_type"],
credentials=args["credentials"],
model=args.model,
model_type=args.model_type,
credentials=args.credentials,
)
except CredentialsValidateFailedError as ex:
result = False
@ -528,24 +501,19 @@ class ModelProviderModelValidateApi(Resource):
return response
parser_parameter = reqparse.RequestParser().add_argument(
"model", type=str, required=True, nullable=False, location="args"
)
@console_ns.route("/workspaces/current/model-providers/<path:provider>/models/parameter-rules")
class ModelProviderModelParameterRuleApi(Resource):
@console_ns.expect(parser_parameter)
@console_ns.expect(console_ns.models[ParserParameter.__name__])
@setup_required
@login_required
@account_initialization_required
def get(self, provider: str):
args = parser_parameter.parse_args()
args = ParserParameter.model_validate(request.args.to_dict(flat=True)) # type: ignore
_, tenant_id = current_account_with_tenant()
model_provider_service = ModelProviderService()
parameter_rules = model_provider_service.get_model_parameter_rules(
tenant_id=tenant_id, provider=provider, model=args["model"]
tenant_id=tenant_id, provider=provider, model=args.model
)
return jsonable_encoder({"data": parameter_rules})

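ParserPostDefault nests a list of Inner models, which is what lets the handler above drop the old dict-key checks ("model_type" not in model_setting, and so on) in favor of typed attributes. A hedged sketch of the shape, with a stand-in ModelType:

from enum import Enum
from pydantic import BaseModel, ConfigDict

class ModelType(str, Enum):  # stand-in for the project's ModelType
    LLM = "llm"
    TTS = "tts"

class ParserPostDefault(BaseModel):
    # fields starting with "model_" trip Pydantic's protected-namespace
    # warning; disabling it here is an assumption about the real models
    model_config = ConfigDict(protected_namespaces=())

    class Inner(BaseModel):
        model_config = ConfigDict(protected_namespaces=())
        model_type: ModelType
        model: str | None = None
        provider: str | None = None

    model_settings: list[Inner]

args = ParserPostDefault.model_validate(
    {"model_settings": [{"model_type": "llm", "provider": "openai", "model": "gpt-4"}]}
)
for setting in args.model_settings:
    if setting.provider is None:
        continue  # entries without a provider are skipped, as in the handler
    print(setting.model_type, setting.provider, setting.model)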

@ -1,7 +1,9 @@
import io
from typing import Literal
from flask import request, send_file
from flask_restx import Resource, reqparse
from flask_restx import Resource
from pydantic import BaseModel, Field
from werkzeug.exceptions import Forbidden
from configs import dify_config
@ -17,6 +19,12 @@ from services.plugin.plugin_parameter_service import PluginParameterService
from services.plugin.plugin_permission_service import PluginPermissionService
from services.plugin.plugin_service import PluginService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
@console_ns.route("/workspaces/current/plugin/debugging-key")
class PluginDebuggingKeyApi(Resource):
@ -37,88 +45,194 @@ class PluginDebuggingKeyApi(Resource):
raise ValueError(e)
parser_list = (
reqparse.RequestParser()
.add_argument("page", type=int, required=False, location="args", default=1)
.add_argument("page_size", type=int, required=False, location="args", default=256)
)
class ParserList(BaseModel):
page: int = Field(default=1)
page_size: int = Field(default=256)
reg(ParserList)
@console_ns.route("/workspaces/current/plugin/list")
class PluginListApi(Resource):
@console_ns.expect(parser_list)
@console_ns.expect(console_ns.models[ParserList.__name__])
@setup_required
@login_required
@account_initialization_required
def get(self):
_, tenant_id = current_account_with_tenant()
args = parser_list.parse_args()
args = ParserList.model_validate(request.args.to_dict(flat=True)) # type: ignore
try:
plugins_with_total = PluginService.list_with_total(tenant_id, args["page"], args["page_size"])
plugins_with_total = PluginService.list_with_total(tenant_id, args.page, args.page_size)
except PluginDaemonClientSideError as e:
raise ValueError(e)
return jsonable_encoder({"plugins": plugins_with_total.list, "total": plugins_with_total.total})
parser_latest = reqparse.RequestParser().add_argument("plugin_ids", type=list, required=True, location="json")
class ParserLatest(BaseModel):
plugin_ids: list[str]
class ParserIcon(BaseModel):
tenant_id: str
filename: str
class ParserAsset(BaseModel):
plugin_unique_identifier: str
file_name: str
class ParserGithubUpload(BaseModel):
repo: str
version: str
package: str
class ParserPluginIdentifiers(BaseModel):
plugin_unique_identifiers: list[str]
class ParserGithubInstall(BaseModel):
plugin_unique_identifier: str
repo: str
version: str
package: str
class ParserPluginIdentifierQuery(BaseModel):
plugin_unique_identifier: str
class ParserTasks(BaseModel):
page: int
page_size: int
class ParserMarketplaceUpgrade(BaseModel):
original_plugin_unique_identifier: str
new_plugin_unique_identifier: str
class ParserGithubUpgrade(BaseModel):
original_plugin_unique_identifier: str
new_plugin_unique_identifier: str
repo: str
version: str
package: str
class ParserUninstall(BaseModel):
plugin_installation_id: str
class ParserPermissionChange(BaseModel):
install_permission: TenantPluginPermission.InstallPermission
debug_permission: TenantPluginPermission.DebugPermission
class ParserDynamicOptions(BaseModel):
plugin_id: str
provider: str
action: str
parameter: str
credential_id: str | None = None
provider_type: Literal["tool", "trigger"]
class PluginPermissionSettingsPayload(BaseModel):
install_permission: TenantPluginPermission.InstallPermission = TenantPluginPermission.InstallPermission.EVERYONE
debug_permission: TenantPluginPermission.DebugPermission = TenantPluginPermission.DebugPermission.EVERYONE
class PluginAutoUpgradeSettingsPayload(BaseModel):
strategy_setting: TenantPluginAutoUpgradeStrategy.StrategySetting = (
TenantPluginAutoUpgradeStrategy.StrategySetting.FIX_ONLY
)
upgrade_time_of_day: int = 0
upgrade_mode: TenantPluginAutoUpgradeStrategy.UpgradeMode = TenantPluginAutoUpgradeStrategy.UpgradeMode.EXCLUDE
exclude_plugins: list[str] = Field(default_factory=list)
include_plugins: list[str] = Field(default_factory=list)
class ParserPreferencesChange(BaseModel):
permission: PluginPermissionSettingsPayload
auto_upgrade: PluginAutoUpgradeSettingsPayload
class ParserExcludePlugin(BaseModel):
plugin_id: str
class ParserReadme(BaseModel):
plugin_unique_identifier: str
language: str = Field(default="en-US")
reg(ParserLatest)
reg(ParserIcon)
reg(ParserAsset)
reg(ParserGithubUpload)
reg(ParserPluginIdentifiers)
reg(ParserGithubInstall)
reg(ParserPluginIdentifierQuery)
reg(ParserTasks)
reg(ParserMarketplaceUpgrade)
reg(ParserGithubUpgrade)
reg(ParserUninstall)
reg(ParserPermissionChange)
reg(ParserDynamicOptions)
reg(ParserPreferencesChange)
reg(ParserExcludePlugin)
reg(ParserReadme)
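ParserPreferencesChange composes the two settings payloads, so a single model_validate call checks the whole tree where reqparse could only assert top-level presence. A hedged sketch with stand-in enums (the real values live on TenantPluginPermission and TenantPluginAutoUpgradeStrategy):

from enum import Enum
from pydantic import BaseModel, Field

class InstallPermission(str, Enum):  # stand-in enum values
    EVERYONE = "everyone"
    ADMINS = "admins"

class DebugPermission(str, Enum):
    EVERYONE = "everyone"
    NOBODY = "nobody"

class PluginPermissionSettingsPayload(BaseModel):
    install_permission: InstallPermission = InstallPermission.EVERYONE
    debug_permission: DebugPermission = DebugPermission.EVERYONE

class PluginAutoUpgradeSettingsPayload(BaseModel):
    strategy_setting: str = "fix_only"  # an enum in the real model
    upgrade_time_of_day: int = 0
    exclude_plugins: list[str] = Field(default_factory=list)
    include_plugins: list[str] = Field(default_factory=list)

class ParserPreferencesChange(BaseModel):
    permission: PluginPermissionSettingsPayload
    auto_upgrade: PluginAutoUpgradeSettingsPayload

args = ParserPreferencesChange.model_validate({
    "permission": {"install_permission": "admins"},
    "auto_upgrade": {"exclude_plugins": ["langgenius/example"]},
})
assert args.permission.debug_permission is DebugPermission.EVERYONE  # default filled in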
@console_ns.route("/workspaces/current/plugin/list/latest-versions")
class PluginListLatestVersionsApi(Resource):
@console_ns.expect(parser_latest)
@console_ns.expect(console_ns.models[ParserLatest.__name__])
@setup_required
@login_required
@account_initialization_required
def post(self):
args = parser_latest.parse_args()
args = ParserLatest.model_validate(console_ns.payload)
try:
versions = PluginService.list_latest_versions(args["plugin_ids"])
versions = PluginService.list_latest_versions(args.plugin_ids)
except PluginDaemonClientSideError as e:
raise ValueError(e)
return jsonable_encoder({"versions": versions})
parser_ids = reqparse.RequestParser().add_argument("plugin_ids", type=list, required=True, location="json")
@console_ns.route("/workspaces/current/plugin/list/installations/ids")
class PluginListInstallationsFromIdsApi(Resource):
@console_ns.expect(parser_ids)
@console_ns.expect(console_ns.models[ParserLatest.__name__])
@setup_required
@login_required
@account_initialization_required
def post(self):
_, tenant_id = current_account_with_tenant()
args = parser_ids.parse_args()
args = ParserLatest.model_validate(console_ns.payload)
try:
plugins = PluginService.list_installations_from_ids(tenant_id, args["plugin_ids"])
plugins = PluginService.list_installations_from_ids(tenant_id, args.plugin_ids)
except PluginDaemonClientSideError as e:
raise ValueError(e)
return jsonable_encoder({"plugins": plugins})
parser_icon = (
reqparse.RequestParser()
.add_argument("tenant_id", type=str, required=True, location="args")
.add_argument("filename", type=str, required=True, location="args")
)
@console_ns.route("/workspaces/current/plugin/icon")
class PluginIconApi(Resource):
@console_ns.expect(parser_icon)
@console_ns.expect(console_ns.models[ParserIcon.__name__])
@setup_required
def get(self):
args = parser_icon.parse_args()
args = ParserIcon.model_validate(request.args.to_dict(flat=True)) # type: ignore
try:
icon_bytes, mimetype = PluginService.get_asset(args["tenant_id"], args["filename"])
icon_bytes, mimetype = PluginService.get_asset(args.tenant_id, args.filename)
except PluginDaemonClientSideError as e:
raise ValueError(e)
@ -128,20 +242,16 @@ class PluginIconApi(Resource):
@console_ns.route("/workspaces/current/plugin/asset")
class PluginAssetApi(Resource):
@console_ns.expect(console_ns.models[ParserAsset.__name__])
@setup_required
@login_required
@account_initialization_required
def get(self):
req = (
reqparse.RequestParser()
.add_argument("plugin_unique_identifier", type=str, required=True, location="args")
.add_argument("file_name", type=str, required=True, location="args")
)
args = req.parse_args()
args = ParserAsset.model_validate(request.args.to_dict(flat=True)) # type: ignore
_, tenant_id = current_account_with_tenant()
try:
binary = PluginService.extract_asset(tenant_id, args["plugin_unique_identifier"], args["file_name"])
binary = PluginService.extract_asset(tenant_id, args.plugin_unique_identifier, args.file_name)
return send_file(io.BytesIO(binary), mimetype="application/octet-stream")
except PluginDaemonClientSideError as e:
raise ValueError(e)
@ -171,17 +281,9 @@ class PluginUploadFromPkgApi(Resource):
return jsonable_encoder(response)
parser_github = (
reqparse.RequestParser()
.add_argument("repo", type=str, required=True, location="json")
.add_argument("version", type=str, required=True, location="json")
.add_argument("package", type=str, required=True, location="json")
)
@console_ns.route("/workspaces/current/plugin/upload/github")
class PluginUploadFromGithubApi(Resource):
@console_ns.expect(parser_github)
@console_ns.expect(console_ns.models[ParserGithubUpload.__name__])
@setup_required
@login_required
@account_initialization_required
@ -189,10 +291,10 @@ class PluginUploadFromGithubApi(Resource):
def post(self):
_, tenant_id = current_account_with_tenant()
args = parser_github.parse_args()
args = ParserGithubUpload.model_validate(console_ns.payload)
try:
response = PluginService.upload_pkg_from_github(tenant_id, args["repo"], args["version"], args["package"])
response = PluginService.upload_pkg_from_github(tenant_id, args.repo, args.version, args.package)
except PluginDaemonClientSideError as e:
raise ValueError(e)
@ -223,47 +325,28 @@ class PluginUploadFromBundleApi(Resource):
return jsonable_encoder(response)
parser_pkg = reqparse.RequestParser().add_argument(
"plugin_unique_identifiers", type=list, required=True, location="json"
)
@console_ns.route("/workspaces/current/plugin/install/pkg")
class PluginInstallFromPkgApi(Resource):
@console_ns.expect(parser_pkg)
@console_ns.expect(console_ns.models[ParserPluginIdentifiers.__name__])
@setup_required
@login_required
@account_initialization_required
@plugin_permission_required(install_required=True)
def post(self):
_, tenant_id = current_account_with_tenant()
args = parser_pkg.parse_args()
# check if all plugin_unique_identifiers are valid string
for plugin_unique_identifier in args["plugin_unique_identifiers"]:
if not isinstance(plugin_unique_identifier, str):
raise ValueError("Invalid plugin unique identifier")
args = ParserPluginIdentifiers.model_validate(console_ns.payload)
try:
response = PluginService.install_from_local_pkg(tenant_id, args["plugin_unique_identifiers"])
response = PluginService.install_from_local_pkg(tenant_id, args.plugin_unique_identifiers)
except PluginDaemonClientSideError as e:
raise ValueError(e)
return jsonable_encoder(response)
parser_githubapi = (
reqparse.RequestParser()
.add_argument("repo", type=str, required=True, location="json")
.add_argument("version", type=str, required=True, location="json")
.add_argument("package", type=str, required=True, location="json")
.add_argument("plugin_unique_identifier", type=str, required=True, location="json")
)
@console_ns.route("/workspaces/current/plugin/install/github")
class PluginInstallFromGithubApi(Resource):
@console_ns.expect(parser_githubapi)
@console_ns.expect(console_ns.models[ParserGithubInstall.__name__])
@setup_required
@login_required
@account_initialization_required
@ -271,15 +354,15 @@ class PluginInstallFromGithubApi(Resource):
def post(self):
_, tenant_id = current_account_with_tenant()
args = parser_githubapi.parse_args()
args = ParserGithubInstall.model_validate(console_ns.payload)
try:
response = PluginService.install_from_github(
tenant_id,
args["plugin_unique_identifier"],
args["repo"],
args["version"],
args["package"],
args.plugin_unique_identifier,
args.repo,
args.version,
args.package,
)
except PluginDaemonClientSideError as e:
raise ValueError(e)
@ -287,14 +370,9 @@ class PluginInstallFromGithubApi(Resource):
return jsonable_encoder(response)
parser_marketplace = reqparse.RequestParser().add_argument(
"plugin_unique_identifiers", type=list, required=True, location="json"
)
@console_ns.route("/workspaces/current/plugin/install/marketplace")
class PluginInstallFromMarketplaceApi(Resource):
@console_ns.expect(parser_marketplace)
@console_ns.expect(console_ns.models[ParserPluginIdentifiers.__name__])
@setup_required
@login_required
@account_initialization_required
@ -302,43 +380,33 @@ class PluginInstallFromMarketplaceApi(Resource):
def post(self):
_, tenant_id = current_account_with_tenant()
args = parser_marketplace.parse_args()
# check if all plugin_unique_identifiers are valid string
for plugin_unique_identifier in args["plugin_unique_identifiers"]:
if not isinstance(plugin_unique_identifier, str):
raise ValueError("Invalid plugin unique identifier")
args = ParserPluginIdentifiers.model_validate(console_ns.payload)
try:
response = PluginService.install_from_marketplace_pkg(tenant_id, args["plugin_unique_identifiers"])
response = PluginService.install_from_marketplace_pkg(tenant_id, args.plugin_unique_identifiers)
except PluginDaemonClientSideError as e:
raise ValueError(e)
return jsonable_encoder(response)
parser_pkgapi = reqparse.RequestParser().add_argument(
"plugin_unique_identifier", type=str, required=True, location="args"
)
@console_ns.route("/workspaces/current/plugin/marketplace/pkg")
class PluginFetchMarketplacePkgApi(Resource):
@console_ns.expect(parser_pkgapi)
@console_ns.expect(console_ns.models[ParserPluginIdentifierQuery.__name__])
@setup_required
@login_required
@account_initialization_required
@plugin_permission_required(install_required=True)
def get(self):
_, tenant_id = current_account_with_tenant()
args = parser_pkgapi.parse_args()
args = ParserPluginIdentifierQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
try:
return jsonable_encoder(
{
"manifest": PluginService.fetch_marketplace_pkg(
tenant_id,
args["plugin_unique_identifier"],
args.plugin_unique_identifier,
)
}
)
@ -346,14 +414,9 @@ class PluginFetchMarketplacePkgApi(Resource):
raise ValueError(e)
parser_fetch = reqparse.RequestParser().add_argument(
"plugin_unique_identifier", type=str, required=True, location="args"
)
@console_ns.route("/workspaces/current/plugin/fetch-manifest")
class PluginFetchManifestApi(Resource):
@console_ns.expect(parser_fetch)
@console_ns.expect(console_ns.models[ParserPluginIdentifierQuery.__name__])
@setup_required
@login_required
@account_initialization_required
@ -361,30 +424,19 @@ class PluginFetchManifestApi(Resource):
def get(self):
_, tenant_id = current_account_with_tenant()
args = parser_fetch.parse_args()
args = ParserPluginIdentifierQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
try:
return jsonable_encoder(
{
"manifest": PluginService.fetch_plugin_manifest(
tenant_id, args["plugin_unique_identifier"]
).model_dump()
}
{"manifest": PluginService.fetch_plugin_manifest(tenant_id, args.plugin_unique_identifier).model_dump()}
)
except PluginDaemonClientSideError as e:
raise ValueError(e)
parser_tasks = (
reqparse.RequestParser()
.add_argument("page", type=int, required=True, location="args")
.add_argument("page_size", type=int, required=True, location="args")
)
@console_ns.route("/workspaces/current/plugin/tasks")
class PluginFetchInstallTasksApi(Resource):
@console_ns.expect(parser_tasks)
@console_ns.expect(console_ns.models[ParserTasks.__name__])
@setup_required
@login_required
@account_initialization_required
@ -392,12 +444,10 @@ class PluginFetchInstallTasksApi(Resource):
def get(self):
_, tenant_id = current_account_with_tenant()
args = parser_tasks.parse_args()
args = ParserTasks.model_validate(request.args.to_dict(flat=True)) # type: ignore
try:
return jsonable_encoder(
{"tasks": PluginService.fetch_install_tasks(tenant_id, args["page"], args["page_size"])}
)
return jsonable_encoder({"tasks": PluginService.fetch_install_tasks(tenant_id, args.page, args.page_size)})
except PluginDaemonClientSideError as e:
raise ValueError(e)
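For the GET endpoints in this file, request.args.to_dict(flat=True) hands Pydantic a dict of strings, and the declared int fields perform the coercion that type=int used to do in reqparse; ParserList's defaults cover absent keys while ParserTasks makes them required. A small sketch:

from pydantic import BaseModel, Field

class ParserList(BaseModel):
    page: int = Field(default=1)
    page_size: int = Field(default=256)

args = ParserList.model_validate({"page": "2"})  # e.g. parsed from ?page=2
assert args.page == 2 and args.page_size == 256  # "2" coerced, default kept

class ParserTasks(BaseModel):
    page: int       # required: an absent key now fails validation
    page_size: int  # instead of silently defaulting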
@ -462,16 +512,9 @@ class PluginDeleteInstallTaskItemApi(Resource):
raise ValueError(e)
parser_marketplace_api = (
reqparse.RequestParser()
.add_argument("original_plugin_unique_identifier", type=str, required=True, location="json")
.add_argument("new_plugin_unique_identifier", type=str, required=True, location="json")
)
@console_ns.route("/workspaces/current/plugin/upgrade/marketplace")
class PluginUpgradeFromMarketplaceApi(Resource):
@console_ns.expect(parser_marketplace_api)
@console_ns.expect(console_ns.models[ParserMarketplaceUpgrade.__name__])
@setup_required
@login_required
@account_initialization_required
@ -479,31 +522,21 @@ class PluginUpgradeFromMarketplaceApi(Resource):
def post(self):
_, tenant_id = current_account_with_tenant()
args = parser_marketplace_api.parse_args()
args = ParserMarketplaceUpgrade.model_validate(console_ns.payload)
try:
return jsonable_encoder(
PluginService.upgrade_plugin_with_marketplace(
tenant_id, args["original_plugin_unique_identifier"], args["new_plugin_unique_identifier"]
tenant_id, args.original_plugin_unique_identifier, args.new_plugin_unique_identifier
)
)
except PluginDaemonClientSideError as e:
raise ValueError(e)
parser_github_post = (
reqparse.RequestParser()
.add_argument("original_plugin_unique_identifier", type=str, required=True, location="json")
.add_argument("new_plugin_unique_identifier", type=str, required=True, location="json")
.add_argument("repo", type=str, required=True, location="json")
.add_argument("version", type=str, required=True, location="json")
.add_argument("package", type=str, required=True, location="json")
)
@console_ns.route("/workspaces/current/plugin/upgrade/github")
class PluginUpgradeFromGithubApi(Resource):
@console_ns.expect(parser_github_post)
@console_ns.expect(console_ns.models[ParserGithubUpgrade.__name__])
@setup_required
@login_required
@account_initialization_required
@ -511,56 +544,44 @@ class PluginUpgradeFromGithubApi(Resource):
def post(self):
_, tenant_id = current_account_with_tenant()
args = parser_github_post.parse_args()
args = ParserGithubUpgrade.model_validate(console_ns.payload)
try:
return jsonable_encoder(
PluginService.upgrade_plugin_with_github(
tenant_id,
args["original_plugin_unique_identifier"],
args["new_plugin_unique_identifier"],
args["repo"],
args["version"],
args["package"],
args.original_plugin_unique_identifier,
args.new_plugin_unique_identifier,
args.repo,
args.version,
args.package,
)
)
except PluginDaemonClientSideError as e:
raise ValueError(e)
parser_uninstall = reqparse.RequestParser().add_argument(
"plugin_installation_id", type=str, required=True, location="json"
)
@console_ns.route("/workspaces/current/plugin/uninstall")
class PluginUninstallApi(Resource):
@console_ns.expect(parser_uninstall)
@console_ns.expect(console_ns.models[ParserUninstall.__name__])
@setup_required
@login_required
@account_initialization_required
@plugin_permission_required(install_required=True)
def post(self):
args = parser_uninstall.parse_args()
args = ParserUninstall.model_validate(console_ns.payload)
_, tenant_id = current_account_with_tenant()
try:
return {"success": PluginService.uninstall(tenant_id, args["plugin_installation_id"])}
return {"success": PluginService.uninstall(tenant_id, args.plugin_installation_id)}
except PluginDaemonClientSideError as e:
raise ValueError(e)
parser_change_post = (
reqparse.RequestParser()
.add_argument("install_permission", type=str, required=True, location="json")
.add_argument("debug_permission", type=str, required=True, location="json")
)
@console_ns.route("/workspaces/current/plugin/permission/change")
class PluginChangePermissionApi(Resource):
@console_ns.expect(parser_change_post)
@console_ns.expect(console_ns.models[ParserPermissionChange.__name__])
@setup_required
@login_required
@account_initialization_required
@ -570,14 +591,15 @@ class PluginChangePermissionApi(Resource):
if not user.is_admin_or_owner:
raise Forbidden()
args = parser_change_post.parse_args()
install_permission = TenantPluginPermission.InstallPermission(args["install_permission"])
debug_permission = TenantPluginPermission.DebugPermission(args["debug_permission"])
args = ParserPermissionChange.model_validate(console_ns.payload)
tenant_id = current_tenant_id
return {"success": PluginPermissionService.change_permission(tenant_id, install_permission, debug_permission)}
return {
"success": PluginPermissionService.change_permission(
tenant_id, args.install_permission, args.debug_permission
)
}
@console_ns.route("/workspaces/current/plugin/permission/fetch")
@ -605,20 +627,9 @@ class PluginFetchPermissionApi(Resource):
)
parser_dynamic = (
reqparse.RequestParser()
.add_argument("plugin_id", type=str, required=True, location="args")
.add_argument("provider", type=str, required=True, location="args")
.add_argument("action", type=str, required=True, location="args")
.add_argument("parameter", type=str, required=True, location="args")
.add_argument("credential_id", type=str, required=False, location="args")
.add_argument("provider_type", type=str, required=True, location="args")
)
@console_ns.route("/workspaces/current/plugin/parameters/dynamic-options")
class PluginFetchDynamicSelectOptionsApi(Resource):
@console_ns.expect(parser_dynamic)
@console_ns.expect(console_ns.models[ParserDynamicOptions.__name__])
@setup_required
@login_required
@is_admin_or_owner_required
@ -627,18 +638,18 @@ class PluginFetchDynamicSelectOptionsApi(Resource):
current_user, tenant_id = current_account_with_tenant()
user_id = current_user.id
args = parser_dynamic.parse_args()
args = ParserDynamicOptions.model_validate(request.args.to_dict(flat=True)) # type: ignore
try:
options = PluginParameterService.get_dynamic_select_options(
tenant_id=tenant_id,
user_id=user_id,
plugin_id=args["plugin_id"],
provider=args["provider"],
action=args["action"],
parameter=args["parameter"],
credential_id=args["credential_id"],
provider_type=args["provider_type"],
plugin_id=args.plugin_id,
provider=args.provider,
action=args.action,
parameter=args.parameter,
credential_id=args.credential_id,
provider_type=args.provider_type,
)
except PluginDaemonClientSideError as e:
raise ValueError(e)
@ -646,16 +657,9 @@ class PluginFetchDynamicSelectOptionsApi(Resource):
return jsonable_encoder({"options": options})
parser_change = (
reqparse.RequestParser()
.add_argument("permission", type=dict, required=True, location="json")
.add_argument("auto_upgrade", type=dict, required=True, location="json")
)
@console_ns.route("/workspaces/current/plugin/preferences/change")
class PluginChangePreferencesApi(Resource):
@console_ns.expect(parser_change)
@console_ns.expect(console_ns.models[ParserPreferencesChange.__name__])
@setup_required
@login_required
@account_initialization_required
@ -664,22 +668,20 @@ class PluginChangePreferencesApi(Resource):
if not user.is_admin_or_owner:
raise Forbidden()
args = parser_change.parse_args()
args = ParserPreferencesChange.model_validate(console_ns.payload)
permission = args["permission"]
permission = args.permission
install_permission = TenantPluginPermission.InstallPermission(permission.get("install_permission", "everyone"))
debug_permission = TenantPluginPermission.DebugPermission(permission.get("debug_permission", "everyone"))
install_permission = permission.install_permission
debug_permission = permission.debug_permission
auto_upgrade = args["auto_upgrade"]
auto_upgrade = args.auto_upgrade
strategy_setting = TenantPluginAutoUpgradeStrategy.StrategySetting(
auto_upgrade.get("strategy_setting", "fix_only")
)
upgrade_time_of_day = auto_upgrade.get("upgrade_time_of_day", 0)
upgrade_mode = TenantPluginAutoUpgradeStrategy.UpgradeMode(auto_upgrade.get("upgrade_mode", "exclude"))
exclude_plugins = auto_upgrade.get("exclude_plugins", [])
include_plugins = auto_upgrade.get("include_plugins", [])
strategy_setting = auto_upgrade.strategy_setting
upgrade_time_of_day = auto_upgrade.upgrade_time_of_day
upgrade_mode = auto_upgrade.upgrade_mode
exclude_plugins = auto_upgrade.exclude_plugins
include_plugins = auto_upgrade.include_plugins
# set permission
set_permission_result = PluginPermissionService.change_permission(
@ -744,12 +746,9 @@ class PluginFetchPreferencesApi(Resource):
return jsonable_encoder({"permission": permission_dict, "auto_upgrade": auto_upgrade_dict})
parser_exclude = reqparse.RequestParser().add_argument("plugin_id", type=str, required=True, location="json")
@console_ns.route("/workspaces/current/plugin/preferences/autoupgrade/exclude")
class PluginAutoUpgradeExcludePluginApi(Resource):
@console_ns.expect(parser_exclude)
@console_ns.expect(console_ns.models[ParserExcludePlugin.__name__])
@setup_required
@login_required
@account_initialization_required
@ -757,28 +756,20 @@ class PluginAutoUpgradeExcludePluginApi(Resource):
# exclude one single plugin
_, tenant_id = current_account_with_tenant()
args = parser_exclude.parse_args()
args = ParserExcludePlugin.model_validate(console_ns.payload)
return jsonable_encoder({"success": PluginAutoUpgradeService.exclude_plugin(tenant_id, args["plugin_id"])})
return jsonable_encoder({"success": PluginAutoUpgradeService.exclude_plugin(tenant_id, args.plugin_id)})
@console_ns.route("/workspaces/current/plugin/readme")
class PluginReadmeApi(Resource):
@console_ns.expect(console_ns.models[ParserReadme.__name__])
@setup_required
@login_required
@account_initialization_required
def get(self):
_, tenant_id = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("plugin_unique_identifier", type=str, required=True, location="args")
.add_argument("language", type=str, required=False, location="args")
)
args = parser.parse_args()
args = ParserReadme.model_validate(request.args.to_dict(flat=True)) # type: ignore
return jsonable_encoder(
{
"readme": PluginService.fetch_plugin_readme(
tenant_id, args["plugin_unique_identifier"], args.get("language", "en-US")
)
}
{"readme": PluginService.fetch_plugin_readme(tenant_id, args.plugin_unique_identifier, args.language)}
)
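Taken together, the hunks in this file migrate request parsing from `reqparse.RequestParser` to Pydantic models registered on the flask-restx namespace. A minimal sketch of the pattern, assuming a `console_ns` namespace like the one above (endpoint and model names are illustrative):

```python
from flask import request
from flask_restx import Namespace, Resource
from pydantic import BaseModel

console_ns = Namespace("console")  # stand-in for the project's namespace

class PluginIdentifierQuery(BaseModel):
    plugin_unique_identifier: str

# Register the JSON schema so @expect can reference it by class name.
console_ns.schema_model(
    PluginIdentifierQuery.__name__,
    PluginIdentifierQuery.model_json_schema(ref_template="#/definitions/{model}"),
)

@console_ns.route("/example")
class ExampleApi(Resource):
    @console_ns.expect(console_ns.models[PluginIdentifierQuery.__name__])
    def get(self):
        # Query strings arrive as a flat dict of strings; Pydantic coerces types.
        args = PluginIdentifierQuery.model_validate(request.args.to_dict(flat=True))
        return {"plugin_unique_identifier": args.plugin_unique_identifier}
```

JSON bodies follow the same shape but are read from `console_ns.payload` instead of `request.args`, as the hunks above show.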

View File

@ -1,7 +1,8 @@
import logging
from flask import request
from flask_restx import Resource, fields, inputs, marshal, marshal_with, reqparse
from flask_restx import Resource, fields, marshal, marshal_with
from pydantic import BaseModel, Field
from sqlalchemy import select
from werkzeug.exceptions import Unauthorized
@ -32,8 +33,36 @@ from services.file_service import FileService
from services.workspace_service import WorkspaceService
logger = logging.getLogger(__name__)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class WorkspaceListQuery(BaseModel):
page: int = Field(default=1, ge=1, le=99999)
limit: int = Field(default=20, ge=1, le=100)
class SwitchWorkspacePayload(BaseModel):
tenant_id: str
class WorkspaceCustomConfigPayload(BaseModel):
remove_webapp_brand: bool | None = None
replace_webapp_logo: str | None = None
class WorkspaceInfoPayload(BaseModel):
name: str
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(WorkspaceListQuery)
reg(SwitchWorkspacePayload)
reg(WorkspaceCustomConfigPayload)
reg(WorkspaceInfoPayload)
provider_fields = {
"provider_name": fields.String,
"provider_type": fields.String,
@ -95,18 +124,15 @@ class TenantListApi(Resource):
@console_ns.route("/all-workspaces")
class WorkspaceListApi(Resource):
@console_ns.expect(console_ns.models[WorkspaceListQuery.__name__])
@setup_required
@admin_required
def get(self):
parser = (
reqparse.RequestParser()
.add_argument("page", type=inputs.int_range(1, 99999), required=False, default=1, location="args")
.add_argument("limit", type=inputs.int_range(1, 100), required=False, default=20, location="args")
)
args = parser.parse_args()
payload = request.args.to_dict(flat=True) # type: ignore
args = WorkspaceListQuery.model_validate(payload)
stmt = select(Tenant).order_by(Tenant.created_at.desc())
tenants = db.paginate(select=stmt, page=args["page"], per_page=args["limit"], error_out=False)
tenants = db.paginate(select=stmt, page=args.page, per_page=args.limit, error_out=False)
has_more = False
if tenants.has_next:
@ -115,8 +141,8 @@ class WorkspaceListApi(Resource):
return {
"data": marshal(tenants.items, workspace_fields),
"has_more": has_more,
"limit": args["limit"],
"page": args["page"],
"limit": args.limit,
"page": args.page,
"total": tenants.total,
}, 200
@ -150,26 +176,24 @@ class TenantApi(Resource):
return WorkspaceService.get_tenant_info(tenant), 200
parser_switch = reqparse.RequestParser().add_argument("tenant_id", type=str, required=True, location="json")
@console_ns.route("/workspaces/switch")
class SwitchWorkspaceApi(Resource):
@console_ns.expect(parser_switch)
@console_ns.expect(console_ns.models[SwitchWorkspacePayload.__name__])
@setup_required
@login_required
@account_initialization_required
def post(self):
current_user, _ = current_account_with_tenant()
args = parser_switch.parse_args()
payload = console_ns.payload or {}
args = SwitchWorkspacePayload.model_validate(payload)
# check if tenant_id is valid, 403 if not
try:
TenantService.switch_tenant(current_user, args["tenant_id"])
TenantService.switch_tenant(current_user, args.tenant_id)
except Exception:
raise AccountNotLinkTenantError("Account not link tenant")
new_tenant = db.session.query(Tenant).get(args["tenant_id"]) # Get new tenant
new_tenant = db.session.query(Tenant).get(args.tenant_id) # Get new tenant
if new_tenant is None:
raise ValueError("Tenant not found")
@ -178,24 +202,21 @@ class SwitchWorkspaceApi(Resource):
@console_ns.route("/workspaces/custom-config")
class CustomConfigWorkspaceApi(Resource):
@console_ns.expect(console_ns.models[WorkspaceCustomConfigPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@cloud_edition_billing_resource_check("workspace_custom")
def post(self):
_, current_tenant_id = current_account_with_tenant()
parser = (
reqparse.RequestParser()
.add_argument("remove_webapp_brand", type=bool, location="json")
.add_argument("replace_webapp_logo", type=str, location="json")
)
args = parser.parse_args()
payload = console_ns.payload or {}
args = WorkspaceCustomConfigPayload.model_validate(payload)
tenant = db.get_or_404(Tenant, current_tenant_id)
custom_config_dict = {
"remove_webapp_brand": args["remove_webapp_brand"],
"replace_webapp_logo": args["replace_webapp_logo"]
if args["replace_webapp_logo"] is not None
"remove_webapp_brand": args.remove_webapp_brand,
"replace_webapp_logo": args.replace_webapp_logo
if args.replace_webapp_logo is not None
else tenant.custom_config_dict.get("replace_webapp_logo"),
}
@ -245,24 +266,22 @@ class WebappLogoWorkspaceApi(Resource):
return {"id": upload_file.id}, 201
parser_info = reqparse.RequestParser().add_argument("name", type=str, required=True, location="json")
@console_ns.route("/workspaces/info")
class WorkspaceInfoApi(Resource):
@console_ns.expect(parser_info)
@console_ns.expect(console_ns.models[WorkspaceInfoPayload.__name__])
@setup_required
@login_required
@account_initialization_required
# Change workspace name
def post(self):
_, current_tenant_id = current_account_with_tenant()
args = parser_info.parse_args()
payload = console_ns.payload or {}
args = WorkspaceInfoPayload.model_validate(payload)
if not current_tenant_id:
raise ValueError("No current tenant")
tenant = db.get_or_404(Tenant, current_tenant_id)
tenant.name = args["name"]
tenant.name = args.name
db.session.commit()
return {"result": "success", "tenant": marshal(WorkspaceService.get_tenant_info(tenant), tenant_fields)}

View File

@ -316,18 +316,16 @@ def validate_and_get_api_token(scope: str | None = None):
ApiToken.type == scope,
)
.values(last_used_at=current_time)
.returning(ApiToken)
)
stmt = select(ApiToken).where(ApiToken.token == auth_token, ApiToken.type == scope)
result = session.execute(update_stmt)
api_token = result.scalar_one_or_none()
api_token = session.scalar(stmt)
if hasattr(result, "rowcount") and result.rowcount > 0:
session.commit()
if not api_token:
stmt = select(ApiToken).where(ApiToken.token == auth_token, ApiToken.type == scope)
api_token = session.scalar(stmt)
if not api_token:
raise Unauthorized("Access token is invalid")
else:
session.commit()
raise Unauthorized("Access token is invalid")
return api_token
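The hunk above replaces the `UPDATE ... RETURNING` flow with a plain `SELECT` followed by an explicit commit, which also works on backends such as MySQL that lack `RETURNING`. A simplified sketch of the resulting logic (the `last_used_at` update shown here is an assumption about the surrounding code):

```python
from sqlalchemy import select

stmt = select(ApiToken).where(ApiToken.token == auth_token, ApiToken.type == scope)
api_token = session.scalar(stmt)
if not api_token:
    raise Unauthorized("Access token is invalid")
api_token.last_used_at = current_time  # assumed: timestamp set on the ORM object
session.commit()
```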

View File

@ -1,7 +1,7 @@
import logging
import time
from flask import jsonify
from flask import jsonify, request
from werkzeug.exceptions import NotFound, RequestEntityTooLarge
from controllers.trigger import bp
@ -28,8 +28,14 @@ def _prepare_webhook_execution(webhook_id: str, is_debug: bool = False):
webhook_data = WebhookService.extract_and_validate_webhook_data(webhook_trigger, node_config)
return webhook_trigger, workflow, node_config, webhook_data, None
except ValueError as e:
# Fall back to raw extraction for error reporting
webhook_data = WebhookService.extract_webhook_data(webhook_trigger)
# Provide minimal context for error reporting without risking another parse failure
webhook_data = {
"method": request.method,
"headers": dict(request.headers),
"query_params": dict(request.args),
"body": {},
"files": {},
}
return webhook_trigger, workflow, node_config, webhook_data, str(e)

View File

@ -62,7 +62,8 @@ from core.app.task_pipeline.message_cycle_manager import MessageCycleManager
from core.base.tts import AppGeneratorTTSPublisher, AudioTrunk
from core.model_runtime.entities.llm_entities import LLMUsage
from core.model_runtime.utils.encoders import jsonable_encoder
from core.ops.ops_trace_manager import TraceQueueManager
from core.ops.entities.trace_entity import TraceTaskName
from core.ops.ops_trace_manager import TraceQueueManager, TraceTask
from core.workflow.enums import WorkflowExecutionStatus
from core.workflow.nodes import NodeType
from core.workflow.repositories.draft_variable_repository import DraftVariableSaverFactory
@ -72,7 +73,7 @@ from extensions.ext_database import db
from libs.datetime_utils import naive_utc_now
from models import Account, Conversation, EndUser, Message, MessageFile
from models.enums import CreatorUserRole
from models.workflow import Workflow
from models.workflow import Workflow, WorkflowNodeExecutionModel
logger = logging.getLogger(__name__)
@ -580,7 +581,7 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
with self._database_session() as session:
# Save message
self._save_message(session=session, graph_runtime_state=resolved_state)
self._save_message(session=session, graph_runtime_state=resolved_state, trace_manager=trace_manager)
yield workflow_finish_resp
elif event.stopped_by in (
@ -590,7 +591,7 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
# When hitting input-moderation or annotation-reply, the workflow will not start
with self._database_session() as session:
# Save message
self._save_message(session=session)
self._save_message(session=session, trace_manager=trace_manager)
yield self._message_end_to_stream_response()
@ -599,6 +600,7 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
event: QueueAdvancedChatMessageEndEvent,
*,
graph_runtime_state: GraphRuntimeState | None = None,
trace_manager: TraceQueueManager | None = None,
**kwargs,
) -> Generator[StreamResponse, None, None]:
"""Handle advanced chat message end events."""
@ -616,7 +618,7 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
# Save message
with self._database_session() as session:
self._save_message(session=session, graph_runtime_state=resolved_state)
self._save_message(session=session, graph_runtime_state=resolved_state, trace_manager=trace_manager)
yield self._message_end_to_stream_response()
@ -770,7 +772,13 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
if self._conversation_name_generate_thread:
self._conversation_name_generate_thread.join()
def _save_message(self, *, session: Session, graph_runtime_state: GraphRuntimeState | None = None):
def _save_message(
self,
*,
session: Session,
graph_runtime_state: GraphRuntimeState | None = None,
trace_manager: TraceQueueManager | None = None,
):
message = self._get_message(session=session)
# If there are assistant files, remove markdown image links from answer
@ -809,6 +817,14 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
metadata = self._task_state.metadata.model_dump()
message.message_metadata = json.dumps(jsonable_encoder(metadata))
# Extract model provider and model_id from workflow node executions for tracing
if message.workflow_run_id:
model_info = self._extract_model_info_from_workflow(session, message.workflow_run_id)
if model_info:
message.model_provider = model_info.get("provider")
message.model_id = model_info.get("model")
message_files = [
MessageFile(
message_id=message.id,
@ -826,6 +842,68 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
]
session.add_all(message_files)
# Trigger MESSAGE_TRACE for tracing integrations
if trace_manager:
trace_manager.add_trace_task(
TraceTask(
TraceTaskName.MESSAGE_TRACE, conversation_id=self._conversation_id, message_id=self._message_id
)
)
def _extract_model_info_from_workflow(self, session: Session, workflow_run_id: str) -> dict[str, str] | None:
"""
Extract model provider and model_id from workflow node executions.
Returns dict with 'provider' and 'model' keys, or None if not found.
"""
try:
# Query workflow node executions for LLM or Agent nodes
stmt = (
select(WorkflowNodeExecutionModel)
.where(WorkflowNodeExecutionModel.workflow_run_id == workflow_run_id)
.where(WorkflowNodeExecutionModel.node_type.in_(["llm", "agent"]))
.order_by(WorkflowNodeExecutionModel.created_at.desc())
.limit(1)
)
node_execution = session.scalar(stmt)
if not node_execution:
return None
# Try to extract from execution_metadata for agent nodes
if node_execution.execution_metadata:
try:
metadata = json.loads(node_execution.execution_metadata)
agent_log = metadata.get("agent_log", [])
# Look for the first agent thought with provider info
for log_entry in agent_log:
entry_metadata = log_entry.get("metadata", {})
provider_str = entry_metadata.get("provider")
if provider_str:
# Parse format like "langgenius/deepseek/deepseek"
parts = provider_str.split("/")
if len(parts) >= 3:
return {"provider": parts[1], "model": parts[2]}
elif len(parts) == 2:
return {"provider": parts[0], "model": parts[1]}
except (json.JSONDecodeError, KeyError, AttributeError) as e:
logger.debug("Failed to parse execution_metadata: %s", e)
# Try to extract from process_data for llm nodes
if node_execution.process_data:
try:
process_data = json.loads(node_execution.process_data)
provider = process_data.get("model_provider")
model = process_data.get("model_name")
if provider and model:
return {"provider": provider, "model": model}
except (json.JSONDecodeError, KeyError) as e:
logger.debug("Failed to parse process_data: %s", e)
return None
except Exception as e:
logger.warning("Failed to extract model info from workflow: %s", e)
return None
def _seed_graph_runtime_state_from_queue_manager(self) -> None:
"""Bootstrap the cached runtime state from the queue manager when present."""
candidate = self._base_task_pipeline.queue_manager.graph_runtime_state

View File

@ -1,3 +1,4 @@
import logging
import time
from collections.abc import Mapping, Sequence
from dataclasses import dataclass
@ -55,6 +56,7 @@ from models import Account, EndUser
from services.variable_truncator import BaseTruncator, DummyVariableTruncator, VariableTruncator
NodeExecutionId = NewType("NodeExecutionId", str)
logger = logging.getLogger(__name__)
@dataclass(slots=True)
@ -289,26 +291,30 @@ class WorkflowResponseConverter:
),
)
if event.node_type == NodeType.TOOL:
response.data.extras["icon"] = ToolManager.get_tool_icon(
tenant_id=self._application_generate_entity.app_config.tenant_id,
provider_type=ToolProviderType(event.provider_type),
provider_id=event.provider_id,
)
elif event.node_type == NodeType.DATASOURCE:
manager = PluginDatasourceManager()
provider_entity = manager.fetch_datasource_provider(
self._application_generate_entity.app_config.tenant_id,
event.provider_id,
)
response.data.extras["icon"] = provider_entity.declaration.identity.generate_datasource_icon_url(
self._application_generate_entity.app_config.tenant_id
)
elif event.node_type == NodeType.TRIGGER_PLUGIN:
response.data.extras["icon"] = TriggerManager.get_trigger_plugin_icon(
self._application_generate_entity.app_config.tenant_id,
event.provider_id,
)
try:
if event.node_type == NodeType.TOOL:
response.data.extras["icon"] = ToolManager.get_tool_icon(
tenant_id=self._application_generate_entity.app_config.tenant_id,
provider_type=ToolProviderType(event.provider_type),
provider_id=event.provider_id,
)
elif event.node_type == NodeType.DATASOURCE:
manager = PluginDatasourceManager()
provider_entity = manager.fetch_datasource_provider(
self._application_generate_entity.app_config.tenant_id,
event.provider_id,
)
response.data.extras["icon"] = provider_entity.declaration.identity.generate_datasource_icon_url(
self._application_generate_entity.app_config.tenant_id
)
elif event.node_type == NodeType.TRIGGER_PLUGIN:
response.data.extras["icon"] = TriggerManager.get_trigger_plugin_icon(
self._application_generate_entity.app_config.tenant_id,
event.provider_id,
)
except Exception:
# Metadata fetch may fail, e.g. when the plugin daemon is down or the plugin is uninstalled.
logger.warning("failed to fetch icon for %s", event.provider_id)
return response

View File

@ -258,6 +258,10 @@ class WorkflowAppGenerateTaskPipeline(GraphRuntimeStateSupport):
run_id = self._extract_workflow_run_id(runtime_state)
self._workflow_execution_id = run_id
with self._database_session() as session:
self._save_workflow_app_log(session=session, workflow_run_id=self._workflow_execution_id)
start_resp = self._workflow_response_converter.workflow_start_to_stream_response(
task_id=self._application_generate_entity.task_id,
workflow_run_id=run_id,
@ -414,9 +418,6 @@ class WorkflowAppGenerateTaskPipeline(GraphRuntimeStateSupport):
graph_runtime_state=validated_state,
)
with self._database_session() as session:
self._save_workflow_app_log(session=session, workflow_run_id=self._workflow_execution_id)
yield workflow_finish_resp
def _handle_workflow_partial_success_event(
@ -437,10 +438,6 @@ class WorkflowAppGenerateTaskPipeline(GraphRuntimeStateSupport):
graph_runtime_state=validated_state,
exceptions_count=event.exceptions_count,
)
with self._database_session() as session:
self._save_workflow_app_log(session=session, workflow_run_id=self._workflow_execution_id)
yield workflow_finish_resp
def _handle_workflow_failed_and_stop_events(
@ -471,10 +468,6 @@ class WorkflowAppGenerateTaskPipeline(GraphRuntimeStateSupport):
error=error,
exceptions_count=exceptions_count,
)
with self._database_session() as session:
self._save_workflow_app_log(session=session, workflow_run_id=self._workflow_execution_id)
yield workflow_finish_resp
def _handle_text_chunk_event(
@ -655,7 +648,6 @@ class WorkflowAppGenerateTaskPipeline(GraphRuntimeStateSupport):
)
session.add(workflow_app_log)
session.commit()
def _text_chunk_to_stream_response(
self, text: str, from_variable_selector: list[str] | None = None

View File

@ -4,15 +4,15 @@ from typing import TYPE_CHECKING, Any, Optional
from pydantic import BaseModel, ConfigDict, Field, ValidationInfo, field_validator
if TYPE_CHECKING:
from core.ops.ops_trace_manager import TraceQueueManager
from constants import UUID_NIL
from core.app.app_config.entities import EasyUIBasedAppConfig, WorkflowUIBasedAppConfig
from core.entities.provider_configuration import ProviderModelBundle
from core.file import File, FileUploadConfig
from core.model_runtime.entities.model_entities import AIModelEntity
if TYPE_CHECKING:
from core.ops.ops_trace_manager import TraceQueueManager
class InvokeFrom(StrEnum):
"""
@ -275,10 +275,8 @@ class RagPipelineGenerateEntity(WorkflowAppGenerateEntity):
start_node_id: str | None = None
# Import TraceQueueManager at runtime to resolve forward references
from core.ops.ops_trace_manager import TraceQueueManager
# Rebuild models that use forward references
AppGenerateEntity.model_rebuild()
EasyUIBasedAppGenerateEntity.model_rebuild()
ConversationAppGenerateEntity.model_rebuild()

View File

@ -40,6 +40,9 @@ class EasyUITaskState(TaskState):
"""
llm_result: LLMResult
first_token_time: float | None = None
last_token_time: float | None = None
is_streaming_response: bool = False
class WorkflowTaskState(TaskState):

View File

@ -118,6 +118,7 @@ class PauseStatePersistenceLayer(GraphEngineLayer):
workflow_run_id=workflow_run_id,
state_owner_user_id=self._state_owner_user_id,
state=state.dumps(),
pause_reasons=event.reasons,
)
def on_graph_end(self, error: Exception | None) -> None:

View File

@ -332,6 +332,12 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline):
if not self._task_state.llm_result.prompt_messages:
self._task_state.llm_result.prompt_messages = chunk.prompt_messages
# Track streaming response times
if self._task_state.first_token_time is None:
self._task_state.first_token_time = time.perf_counter()
self._task_state.is_streaming_response = True
self._task_state.last_token_time = time.perf_counter()
# handle output moderation chunk
should_direct_answer = self._handle_output_moderation_chunk(cast(str, delta_text))
if should_direct_answer:
@ -398,6 +404,18 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline):
message.total_price = usage.total_price
message.currency = usage.currency
self._task_state.llm_result.usage.latency = message.provider_response_latency
# Add streaming metrics to usage if available
if self._task_state.is_streaming_response and self._task_state.first_token_time:
start_time = self.start_at
first_token_time = self._task_state.first_token_time
last_token_time = self._task_state.last_token_time or first_token_time
usage.time_to_first_token = round(first_token_time - start_time, 3)
usage.time_to_generate = round(last_token_time - first_token_time, 3)
# Update metadata with the complete usage info
self._task_state.metadata.usage = usage
message.message_metadata = self._task_state.metadata.model_dump_json()
if trace_manager:
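Read in isolation, the metric arithmetic above follows a simple pattern: capture `perf_counter()` at the first streamed chunk, keep updating it on every chunk, and derive both metrics at the end. A self-contained sketch (names are illustrative, not the pipeline's API):

```python
import time

def stream_with_timing(stream):
    """Wrap any token iterator and report TTFT and generation time."""
    start = time.perf_counter()
    first_token_time = last_token_time = None
    for chunk in stream:
        now = time.perf_counter()
        if first_token_time is None:
            first_token_time = now  # first token observed
        last_token_time = now
        yield chunk
    if first_token_time is not None:
        print("time_to_first_token:", round(first_token_time - start, 3))
        print("time_to_generate:", round(last_token_time - first_token_time, 3))
```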

View File

@ -29,6 +29,7 @@ class SimpleModelProviderEntity(BaseModel):
provider: str
label: I18nObject
icon_small: I18nObject | None = None
icon_small_dark: I18nObject | None = None
icon_large: I18nObject | None = None
supported_model_types: list[ModelType]
@ -42,6 +43,7 @@ class SimpleModelProviderEntity(BaseModel):
provider=provider_entity.provider,
label=provider_entity.label,
icon_small=provider_entity.icon_small,
icon_small_dark=provider_entity.icon_small_dark,
icon_large=provider_entity.icon_large,
supported_model_types=provider_entity.supported_model_types,
)

View File

@ -152,10 +152,5 @@ class CodeExecutor:
raise CodeExecutionError(f"Unsupported language {language}")
runner, preload = template_transformer.transform_caller(code, inputs)
try:
response = cls.execute_code(language, preload, runner)
except CodeExecutionError as e:
raise e
response = cls.execute_code(language, preload, runner)
return template_transformer.transform_response(response)

View File

@ -1,308 +0,0 @@
## Custom Integration of Pre-defined Models
### Introduction
After completing the vendor integration, the next step is to connect the vendor's models. To illustrate the entire process, we will use Xinference as an example of a complete vendor integration.
It is important to note that for custom models, each model connection requires a complete vendor credential.
Unlike pre-defined models, a custom vendor integration always includes the following two parameters, which do not need to be defined in the vendor YAML file.
![](images/index/image-3.png)
As mentioned earlier, vendors do not need to implement validate_provider_credential. The runtime will automatically call the corresponding model layer's validate_credentials to validate the credentials based on the model type and name selected by the user.
### Writing the Vendor YAML
First, we need to identify the types of models supported by the vendor we are integrating.
Currently supported model types are as follows:
- `llm` Text Generation Models
- `text_embedding` Text Embedding Models
- `rerank` Rerank Models
- `speech2text` Speech-to-Text
- `tts` Text-to-Speech
- `moderation` Moderation
Xinference supports LLM, Text Embedding, and Rerank, so we will start by writing `xinference.yaml`.
```yaml
provider: xinference #Define the vendor identifier
label: # Vendor display name, supports both en_US (English) and zh_Hans (Simplified Chinese). If zh_Hans is not set, it will use en_US by default.
en_US: Xorbits Inference
icon_small: # Small icon, refer to other vendors' icons stored in the _assets directory within the vendor implementation directory; follows the same language policy as the label
en_US: icon_s_en.svg
icon_large: # Large icon
en_US: icon_l_en.svg
help: # Help information
title:
en_US: How to deploy Xinference
zh_Hans: 如何部署 Xinference
url:
en_US: https://github.com/xorbitsai/inference
supported_model_types: # Supported model types. Xinference supports LLM, Text Embedding, and Rerank
- llm
- text-embedding
- rerank
configurate_methods: # Since Xinference is a locally deployed vendor with no predefined models, users need to deploy whatever models they need according to Xinference documentation. Thus, it only supports custom models.
- customizable-model
provider_credential_schema:
credential_form_schemas:
```
Then, we need to determine what credentials are required to define a model in Xinference.
- Since Xinference supports three different types of models, we need a `model_type` credential field to denote the model type. It can be defined as follows:
```yaml
provider_credential_schema:
credential_form_schemas:
- variable: model_type
type: select
label:
en_US: Model type
zh_Hans: 模型类型
required: true
options:
- value: text-generation
label:
en_US: Language Model
zh_Hans: 语言模型
- value: embeddings
label:
en_US: Text Embedding
- value: reranking
label:
en_US: Rerank
```
- Next, each model has its own model_name, so we need to define that here:
```yaml
- variable: model_name
type: text-input
label:
en_US: Model name
zh_Hans: 模型名称
required: true
placeholder:
zh_Hans: 填写模型名称
en_US: Input model name
```
- Specify the Xinference local deployment address:
```yaml
- variable: server_url
label:
zh_Hans: 服务器 URL
en_US: Server url
type: text-input
required: true
placeholder:
zh_Hans: 在此输入 Xinference 的服务器地址,如 https://example.com/xxx
en_US: Enter the url of your Xinference, for example https://example.com/xxx
```
- Each model has a unique model_uid, so we also need to define that here:
```yaml
- variable: model_uid
label:
zh_Hans: 模型 UID
en_US: Model uid
type: text-input
required: true
placeholder:
zh_Hans: 在此输入您的 Model UID
en_US: Enter the model uid
```
Now, we have completed the basic definition of the vendor.
### Writing the Model Code
Next, let's take the `llm` type as an example and write `xinference.llm.llm.py`.
In `llm.py`, create a Xinference LLM class. We name it `XinferenceAILargeLanguageModel` (the name can be arbitrary), inheriting from the `__base.large_language_model.LargeLanguageModel` base class, and implement the following methods:
- LLM Invocation
Implement the core method for LLM invocation, supporting both stream and synchronous responses.
```python
def _invoke(self, model: str, credentials: dict,
prompt_messages: list[PromptMessage], model_parameters: dict,
tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
stream: bool = True, user: Optional[str] = None) \
-> Union[LLMResult, Generator]:
"""
Invoke large language model
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param model_parameters: model parameters
:param tools: tools for tool usage
:param stop: stop words
:param stream: is the response a stream
:param user: unique user id
:return: full response or stream response chunk generator result
"""
```
When implementing, make sure to use two separate functions for the synchronous and streaming responses. This is important because Python treats any function containing the `yield` keyword as a generator function, forcing it to return a `Generator` type. Here's an example (note that it uses simplified parameters; in a real implementation, use the parameter list defined above):
```python
def _invoke(self, stream: bool, **kwargs) \
-> Union[LLMResult, Generator]:
if stream:
return self._handle_stream_response(**kwargs)
return self._handle_sync_response(**kwargs)
def _handle_stream_response(self, **kwargs) -> Generator:
for chunk in response:
yield chunk
def _handle_sync_response(self, **kwargs) -> LLMResult:
return LLMResult(**response)
```
- Pre-compute Input Tokens
If the model does not provide an interface for pre-computing tokens, you can return 0 directly.
```python
def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],tools: Optional[list[PromptMessageTool]] = None) -> int:
"""
Get number of tokens for given prompt messages
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param tools: tools for tool usage
:return: token count
"""
```
Sometimes you may not want to simply return 0. In that case, you can use `self._get_num_tokens_by_gpt2(text: str)` to get an estimated token count; make sure the environment variable `PLUGIN_BASED_TOKEN_COUNTING_ENABLED` is set to `true`. This method is provided by the `AIModel` base class and uses GPT-2's tokenizer for the calculation. Note, however, that it is only a substitute and may not be fully accurate.
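A minimal sketch of that fallback; joining the message contents into one string is an illustrative simplification, not the canonical implementation:

```python
def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
                   tools: Optional[list[PromptMessageTool]] = None) -> int:
    # Approximate: concatenate plain-text contents and count with the GPT-2 tokenizer.
    text = "\n".join(str(m.content) for m in prompt_messages if m.content)
    return self._get_num_tokens_by_gpt2(text)
```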
- Model Credentials Validation
Similar to vendor credentials validation, this method validates individual model credentials.
```python
def validate_credentials(self, model: str, credentials: dict) -> None:
"""
Validate model credentials
:param model: model name
:param credentials: model credentials
:return: None
"""
```
- Model Parameter Schema
Unlike predefined models, a custom model has no YAML file declaring which parameters it supports, so we need to generate the model parameter schema dynamically.
For instance, Xinference supports `max_tokens`, `temperature`, and `top_p` parameters.
However, some vendors may support different parameters for different models. For example, the `OpenLLM` vendor supports `top_k`, but not all models provided by this vendor support `top_k`. Let's say model A supports `top_k` but model B does not. In such cases, we need to dynamically generate the model parameter schema, as illustrated below:
```python
def get_customizable_model_schema(self, model: str, credentials: dict) -> Optional[AIModelEntity]:
"""
used to define customizable model schema
"""
rules = [
ParameterRule(
name='temperature', type=ParameterType.FLOAT,
use_template='temperature',
label=I18nObject(
zh_Hans='温度', en_US='Temperature'
)
),
ParameterRule(
name='top_p', type=ParameterType.FLOAT,
use_template='top_p',
label=I18nObject(
zh_Hans='Top P', en_US='Top P'
)
),
ParameterRule(
name='max_tokens', type=ParameterType.INT,
use_template='max_tokens',
min=1,
default=512,
label=I18nObject(
zh_Hans='最大生成长度', en_US='Max Tokens'
)
)
]
# if model is A, add top_k to rules
if model == 'A':
rules.append(
ParameterRule(
name='top_k', type=ParameterType.INT,
use_template='top_k',
min=1,
default=50,
label=I18nObject(
zh_Hans='Top K', en_US='Top K'
)
)
)
"""
some NOT IMPORTANT code here
"""
entity = AIModelEntity(
model=model,
label=I18nObject(
en_US=model
),
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_type=model_type,
model_properties={
ModelPropertyKey.MODE: ModelType.LLM,
},
parameter_rules=rules
)
return entity
```
- Exception Error Mapping
When a model invocation error occurs, it should be mapped to the runtime's specified `InvokeError` type, enabling Dify to handle different errors appropriately.
Runtime Errors:
- `InvokeConnectionError` Connection error during invocation
- `InvokeServerUnavailableError` Service provider unavailable
- `InvokeRateLimitError` Rate limit reached
- `InvokeAuthorizationError` Authorization failure
- `InvokeBadRequestError` Invalid request parameters
```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
"""
Map model invoke error to unified error
The key is the error type thrown to the caller
The value is the error type thrown by the model,
which needs to be converted into a unified error type for the caller.
:return: Invoke error mapping
"""
```
For interface method details, see: [Interfaces](./interfaces.md). For specific implementations, refer to: [llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py).

Binary files not shown (10 deleted images; sizes: 230, 205, 370, 113, 109, 70, 75, 541, 44, and 262 KiB)
View File

@ -1,701 +0,0 @@
# Interface Methods
This section describes the interface methods and parameter explanations that need to be implemented by providers and various model types.
## Provider
Inherit the `__base.model_provider.ModelProvider` base class and implement the following interfaces:
```python
def validate_provider_credentials(self, credentials: dict) -> None:
"""
Validate provider credentials
You can choose any validate_credentials method of model type or implement validate method by yourself,
such as: get model list api
if validate failed, raise exception
:param credentials: provider credentials, credentials form defined in `provider_credential_schema`.
"""
```
- `credentials` (object) Credential information
The parameters of credential information are defined by the `provider_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
If verification fails, throw the `errors.validate.CredentialsValidateFailedError` error.
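For illustration, a hedged sketch of such a check; `MyProviderClient` and `list_models` are invented stand-ins for whatever cheap authenticated call the provider offers (for example, a model-list API as suggested above):

```python
def validate_provider_credentials(self, credentials: dict) -> None:
    try:
        client = MyProviderClient(api_key=credentials["api_key"])  # hypothetical client
        client.list_models()  # any request that fails on bad credentials
    except Exception as e:
        raise CredentialsValidateFailedError(str(e))
```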
## Model
Models are divided into 5 different types, each inheriting from different base classes and requiring the implementation of different methods.
All models need to uniformly implement the following 2 methods:
- Model Credential Verification
Similar to provider credential verification, this step involves verification for an individual model.
```python
def validate_credentials(self, model: str, credentials: dict) -> None:
"""
Validate model credentials
:param model: model name
:param credentials: model credentials
:return:
"""
```
Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
If verification fails, throw the `errors.validate.CredentialsValidateFailedError` error.
- Invocation Error Mapping Table
When there is an exception in model invocation, it needs to be mapped to the `InvokeError` type specified by Runtime. This facilitates Dify's ability to handle different errors with appropriate follow-up actions.
Runtime Errors:
- `InvokeConnectionError` Invocation connection error
- `InvokeServerUnavailableError` Invocation service provider unavailable
- `InvokeRateLimitError` Invocation reached rate limit
- `InvokeAuthorizationError` Invocation authorization failure
- `InvokeBadRequestError` Invocation parameter error
```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
"""
Map model invoke error to unified error
The key is the error type thrown to the caller
The value is the error type thrown by the model,
which needs to be converted into a unified error type for the caller.
:return: Invoke error mapping
"""
```
You can refer to OpenAI's `_invoke_error_mapping` for an example.
### LLM
Inherit the `__base.large_language_model.LargeLanguageModel` base class and implement the following interfaces:
- LLM Invocation
Implement the core method for LLM invocation, which can support both streaming and synchronous returns.
```python
def _invoke(self, model: str, credentials: dict,
prompt_messages: list[PromptMessage], model_parameters: dict,
tools: Optional[list[PromptMessageTool]] = None, stop: Optional[List[str]] = None,
stream: bool = True, user: Optional[str] = None) \
-> Union[LLMResult, Generator]:
"""
Invoke large language model
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param model_parameters: model parameters
:param tools: tools for tool calling
:param stop: stop words
:param stream: is stream response
:param user: unique user id
:return: full response or stream response chunk generator result
"""
```
- Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
- `prompt_messages` (array\[[PromptMessage](#PromptMessage)\]) List of prompts
If the model is of the `Completion` type, the list only needs to include one [UserPromptMessage](#UserPromptMessage) element;
If the model is of the `Chat` type, it requires a list of elements such as [SystemPromptMessage](#SystemPromptMessage), [UserPromptMessage](#UserPromptMessage), [AssistantPromptMessage](#AssistantPromptMessage), [ToolPromptMessage](#ToolPromptMessage) depending on the message.
- `model_parameters` (object) Model parameters
The model parameters are defined by the `parameter_rules` in the model's YAML configuration.
- `tools` (array\[[PromptMessageTool](#PromptMessageTool)\]) [optional] List of tools, equivalent to the `function` in `function calling`.
That is, the tool list for tool calling.
- `stop` (array[string]) [optional] Stop sequences
The model output will stop before the string defined by the stop sequence.
- `stream` (bool) Whether to output in a streaming manner, default is True
Streaming output returns Generator\[[LLMResultChunk](#LLMResultChunk)\], non-streaming output returns [LLMResult](#LLMResult).
- `user` (string) [optional] Unique identifier of the user
This can help the provider monitor and detect abusive behavior.
- Returns
Streaming output returns Generator\[[LLMResultChunk](#LLMResultChunk)\], non-streaming output returns [LLMResult](#LLMResult).
- Pre-calculating Input Tokens
If the model does not provide a pre-calculated tokens interface, you can directly return 0.
```python
def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
tools: Optional[list[PromptMessageTool]] = None) -> int:
"""
Get number of tokens for given prompt messages
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param tools: tools for tool calling
:return:
"""
```
For parameter explanations, refer to the above section on `LLM Invocation`.
- Fetch Custom Model Schema [Optional]
```python
def get_customizable_model_schema(self, model: str, credentials: dict) -> Optional[AIModelEntity]:
"""
Get customizable model schema
:param model: model name
:param credentials: model credentials
:return: model schema
"""
```
When the provider supports adding custom LLMs, this method can be implemented so that custom models can fetch their model schema. The default implementation returns None.
### TextEmbedding
Inherit the `__base.text_embedding_model.TextEmbeddingModel` base class and implement the following interfaces:
- Embedding Invocation
```python
def _invoke(self, model: str, credentials: dict,
texts: list[str], user: Optional[str] = None) \
-> TextEmbeddingResult:
"""
Invoke text embedding model
:param model: model name
:param credentials: model credentials
:param texts: texts to embed
:param user: unique user id
:return: embeddings result
"""
```
- Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
- `texts` (array[string]) List of texts, capable of batch processing
- `user` (string) [optional] Unique identifier of the user
This can help the provider monitor and detect abusive behavior.
- Returns:
[TextEmbeddingResult](#TextEmbeddingResult) entity.
- Pre-calculating Tokens
```python
def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int:
"""
Get number of tokens for given prompt messages
:param model: model name
:param credentials: model credentials
:param texts: texts to embed
:return:
"""
```
For parameter explanations, refer to the above section on `Embedding Invocation`.
### Rerank
Inherit the `__base.rerank_model.RerankModel` base class and implement the following interfaces:
- Rerank Invocation
```python
def _invoke(self, model: str, credentials: dict,
query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None,
user: Optional[str] = None) \
-> RerankResult:
"""
Invoke rerank model
:param model: model name
:param credentials: model credentials
:param query: search query
:param docs: docs for reranking
:param score_threshold: score threshold
:param top_n: top n
:param user: unique user id
:return: rerank result
"""
```
- Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
- `query` (string) Query request content
- `docs` (array[string]) List of segments to be reranked
- `score_threshold` (float) [optional] Score threshold
- `top_n` (int) [optional] Select the top n segments
- `user` (string) [optional] Unique identifier of the user
This can help the provider monitor and detect abusive behavior.
- Returns:
[RerankResult](#RerankResult) entity.
### Speech2text
Inherit the `__base.speech2text_model.Speech2TextModel` base class and implement the following interfaces:
- Invocation
```python
def _invoke(self, model: str, credentials: dict, file: IO[bytes], user: Optional[str] = None) -> str:
"""
Invoke speech-to-text model
:param model: model name
:param credentials: model credentials
:param file: audio file
:param user: unique user id
:return: text for given audio file
"""
```
- Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
- `file` (File) File stream
- `user` (string) [optional] Unique identifier of the user
This can help the provider monitor and detect abusive behavior.
- Returns:
The string after speech-to-text conversion.
### Text2speech
Inherit the `__base.text2speech_model.Text2SpeechModel` base class and implement the following interfaces:
- Invocation
```python
def _invoke(self, model: str, credentials: dict, content_text: str, streaming: bool, user: Optional[str] = None):
"""
Invoke text-to-speech model
:param model: model name
:param credentials: model credentials
:param content_text: text content to be translated
:param streaming: output is streaming
:param user: unique user id
:return: translated audio file
"""
```
- Parameters
- `model` (string) Model name
- `credentials` (object) Credential information
The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
- `content_text` (string) The text content that needs to be converted
- `streaming` (bool) Whether to stream output
- `user` (string) [optional] Unique identifier of the user
This can help the provider monitor and detect abusive behavior.
- Returns
Text converted speech stream.
### Moderation
Inherit the `__base.moderation_model.ModerationModel` base class and implement the following interfaces:
- Invocation
```python
def _invoke(self, model: str, credentials: dict,
text: str, user: Optional[str] = None) \
-> bool:
"""
Invoke moderation model
:param model: model name
:param credentials: model credentials
:param text: text to moderate
:param user: unique user id
:return: false if text is safe, true otherwise
"""
```
- Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The parameters of credential information are defined by either the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file. Inputs such as `api_key` are included.
- `text` (string) Text content
- `user` (string) [optional] Unique identifier of the user
This can help the provider monitor and detect abusive behavior.
- Returns:
False indicates that the input text is safe, True indicates otherwise.
## Entities
### PromptMessageRole
Message role
```python
class PromptMessageRole(Enum):
"""
Enum class for prompt message.
"""
SYSTEM = "system"
USER = "user"
ASSISTANT = "assistant"
TOOL = "tool"
```
### PromptMessageContentType
Message content types, divided into text and image.
```python
class PromptMessageContentType(Enum):
"""
Enum class for prompt message content type.
"""
TEXT = 'text'
IMAGE = 'image'
```
### PromptMessageContent
Message content base class, used only for parameter declaration; it cannot be instantiated directly.
```python
class PromptMessageContent(BaseModel):
"""
Model class for prompt message content.
"""
type: PromptMessageContentType
data: str
```
Currently, two types are supported: text and image, and a single message may combine text with multiple images.
Initialize `TextPromptMessageContent` and `ImagePromptMessageContent` separately for each input (see the combined example after `ImagePromptMessageContent` below).
### TextPromptMessageContent
```python
class TextPromptMessageContent(PromptMessageContent):
"""
Model class for text prompt message content.
"""
type: PromptMessageContentType = PromptMessageContentType.TEXT
```
If inputting a combination of text and images, the text needs to be constructed into this entity as part of the `content` list.
### ImagePromptMessageContent
```python
class ImagePromptMessageContent(PromptMessageContent):
"""
Model class for image prompt message content.
"""
class DETAIL(Enum):
LOW = 'low'
HIGH = 'high'
type: PromptMessageContentType = PromptMessageContentType.IMAGE
detail: DETAIL = DETAIL.LOW # Resolution
```
If inputting a combination of text and images, the images need to be constructed into this entity as part of the `content` list.
`data` can be either a `url` or a `base64` encoded string of the image.
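Combining the two content types above, a hedged sketch of a multimodal user message (the URL is a placeholder, and image support by the target model is assumed):

```python
message = UserPromptMessage(
    content=[
        TextPromptMessageContent(data="Describe this picture."),
        ImagePromptMessageContent(
            data="https://example.com/cat.png",  # or a base64-encoded image string
            detail=ImagePromptMessageContent.DETAIL.HIGH,
        ),
    ]
)
```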
### PromptMessage
The base class for all Role message bodies, used only for parameter declaration; it cannot be instantiated directly.
```python
class PromptMessage(BaseModel):
"""
Model class for prompt message.
"""
role: PromptMessageRole
content: Optional[str | list[PromptMessageContent]] = None # Supports two types: string and content list. The content list is designed to meet the needs of multimodal inputs. For more details, see the PromptMessageContent explanation.
name: Optional[str] = None
```
### UserPromptMessage
UserMessage message body, representing a user's message.
```python
class UserPromptMessage(PromptMessage):
"""
Model class for user prompt message.
"""
role: PromptMessageRole = PromptMessageRole.USER
```
### AssistantPromptMessage
Represents a message returned by the model, typically used for few-shot examples or for passing in chat history.
```python
class AssistantPromptMessage(PromptMessage):
"""
Model class for assistant prompt message.
"""
class ToolCall(BaseModel):
"""
Model class for assistant prompt message tool call.
"""
class ToolCallFunction(BaseModel):
"""
Model class for assistant prompt message tool call function.
"""
name: str # tool name
arguments: str # tool arguments
id: str # Tool ID, effective only in OpenAI tool calls. It's the unique ID for tool invocation and the same tool can be called multiple times.
type: str # default: function
function: ToolCallFunction # tool call information
role: PromptMessageRole = PromptMessageRole.ASSISTANT
tool_calls: list[ToolCall] = [] # The result of tool invocation in response from the model (returned only when tools are input and the model deems it necessary to invoke a tool).
```
Where `tool_calls` are the list of `tool calls` returned by the model after invoking the model with the `tools` input.
### SystemPromptMessage
Represents system messages, usually used for setting system commands given to the model.
```python
class SystemPromptMessage(PromptMessage):
"""
Model class for system prompt message.
"""
role: PromptMessageRole = PromptMessageRole.SYSTEM
```
### ToolPromptMessage
Represents tool messages, used for conveying the results of a tool execution to the model for the next step of processing.
```python
class ToolPromptMessage(PromptMessage):
"""
Model class for tool prompt message.
"""
role: PromptMessageRole = PromptMessageRole.TOOL
tool_call_id: str # Tool invocation ID. If OpenAI-style tool calls are not supported, the tool name can be passed instead.
```
The base class's `content` takes in the results of tool execution.
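A hedged sketch of the full tool-calling round trip with these entities; the tool name, arguments, and IDs are invented for illustration:

```python
# 1. The model asks to call a tool (as returned in AssistantPromptMessage.tool_calls).
tool_call = AssistantPromptMessage.ToolCall(
    id="call_0",  # invented id
    type="function",
    function=AssistantPromptMessage.ToolCall.ToolCallFunction(
        name="get_weather", arguments='{"city": "Berlin"}'
    ),
)
assistant_message = AssistantPromptMessage(content="", tool_calls=[tool_call])

# 2. The caller executes the tool and feeds the result back for the next model step.
tool_message = ToolPromptMessage(
    content='{"temperature_c": 4}',  # tool execution result
    tool_call_id=tool_call.id,
)
```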
### PromptMessageTool
```python
class PromptMessageTool(BaseModel):
"""
Model class for prompt message tool.
"""
name: str
description: str
parameters: dict
```
______________________________________________________________________
### LLMResult
```python
class LLMResult(BaseModel):
"""
Model class for llm result.
"""
model: str # Actual model used
prompt_messages: list[PromptMessage] # prompt messages
message: AssistantPromptMessage # response message
usage: LLMUsage # usage info
system_fingerprint: Optional[str] = None # request fingerprint, refer to OpenAI definition
```
### LLMResultChunkDelta
In streaming returns, each iteration contains the `delta` entity.
```python
class LLMResultChunkDelta(BaseModel):
"""
Model class for llm result chunk delta.
"""
index: int
message: AssistantPromptMessage # response message
usage: Optional[LLMUsage] = None # usage info
finish_reason: Optional[str] = None # finish reason, returned only by the last chunk
```
### LLMResultChunk
Each iteration entity in streaming returns.
```python
class LLMResultChunk(BaseModel):
"""
Model class for llm result chunk.
"""
model: str # Actual model used
prompt_messages: list[PromptMessage] # prompt messages
system_fingerprint: Optional[str] = None # request fingerprint, refer to OpenAI definition
delta: LLMResultChunkDelta
```
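Putting the two chunk entities together, a streaming response can be consumed roughly as follows (a sketch; `response` stands in for the `Generator` returned by a streaming invocation):
```python
full_text = ''
for chunk in response:  # each iteration yields an LLMResultChunk
    full_text += chunk.delta.message.content or ''
    if chunk.delta.finish_reason is not None:  # only the last chunk sets this
        usage = chunk.delta.usage              # usage also arrives on the last chunk
print(full_text)
```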
### LLMUsage
```python
class LLMUsage(ModelUsage):
"""
Model class for LLM usage.
"""
prompt_tokens: int # Tokens used for prompt
prompt_unit_price: Decimal # Unit price for prompt
prompt_price_unit: Decimal # Price unit for prompt, i.e., the unit price based on how many tokens
prompt_price: Decimal # Cost for prompt
completion_tokens: int # Tokens used for response
completion_unit_price: Decimal # Unit price for response
completion_price_unit: Decimal # Price unit for response, i.e., the unit price based on how many tokens
completion_price: Decimal # Cost for response
total_tokens: int # Total number of tokens used
total_price: Decimal # Total cost
currency: str # Currency unit
latency: float # Request latency (s)
```
______________________________________________________________________
### TextEmbeddingResult
```python
class TextEmbeddingResult(BaseModel):
"""
Model class for text embedding result.
"""
model: str # Actual model used
embeddings: list[list[float]] # List of embedding vectors, corresponding to the input texts list
usage: EmbeddingUsage # Usage information
```
### EmbeddingUsage
```python
class EmbeddingUsage(ModelUsage):
"""
Model class for embedding usage.
"""
tokens: int # Number of tokens used
total_tokens: int # Total number of tokens used
unit_price: Decimal # Unit price
price_unit: Decimal # Price unit, i.e., the unit price based on how many tokens
total_price: Decimal # Total cost
currency: str # Currency unit
latency: float # Request latency (s)
```
______________________________________________________________________
### RerankResult
```python
class RerankResult(BaseModel):
"""
Model class for rerank result.
"""
model: str # Actual model used
docs: list[RerankDocument] # Reranked document list
```
### RerankDocument
```python
class RerankDocument(BaseModel):
"""
Model class for rerank document.
"""
index: int # original index
text: str
score: float
```


@@ -1,176 +0,0 @@
## Predefined Model Integration
After completing the vendor integration, the next step is to integrate the models from the vendor.
First, we need to determine the type of model to be integrated and create the corresponding model type `module` under the respective vendor's directory.
Currently supported model types are:
- `llm` Text Generation Model
- `text_embedding` Text Embedding Model
- `rerank` Rerank Model
- `speech2text` Speech-to-Text
- `tts` Text-to-Speech
- `moderation` Moderation
Continuing with `Anthropic` as an example, `Anthropic` only supports LLM, so create a `module` named `llm` under `model_providers.anthropic`.
For predefined models, we first need to create a YAML file named after the model under the `llm` `module`, such as `claude-2.1.yaml`.
### Prepare Model YAML
```yaml
model: claude-2.1 # Model identifier
# Display name of the model, which can be set to en_US English or zh_Hans Chinese. If zh_Hans is not set, it will default to en_US.
# This can also be omitted, in which case the model identifier will be used as the label
label:
en_US: claude-2.1
model_type: llm # Model type, claude-2.1 is an LLM
features: # Supported features, agent-thought supports Agent reasoning, vision supports image understanding
- agent-thought
model_properties: # Model properties
mode: chat # LLM mode, complete for text completion models, chat for conversation models
context_size: 200000 # Maximum context size
parameter_rules: # Parameter rules for the model call; only LLM requires this
- name: temperature # Parameter variable name
# Five default configuration templates are provided: temperature/top_p/max_tokens/presence_penalty/frequency_penalty
# The template variable name can be set directly in use_template, which will use the default configuration in entities.defaults.PARAMETER_RULE_TEMPLATE
# Additional configuration parameters will override the default configuration if set
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label: # Display name of the parameter
zh_Hans: 取样数量
en_US: Top k
type: int # Parameter type, supports float/int/string/boolean
help: # Help information, describing the parameter's function
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false # Whether the parameter is mandatory; can be omitted
- name: max_tokens_to_sample
use_template: max_tokens
default: 4096 # Default value of the parameter
min: 1 # Minimum value of the parameter, applicable to float/int only
max: 4096 # Maximum value of the parameter, applicable to float/int only
pricing: # Pricing information
input: '8.00' # Input unit price, i.e., prompt price
output: '24.00' # Output unit price, i.e., response content price
unit: '0.000001' # Price unit, meaning the above prices are per 1M tokens
currency: USD # Price currency
```
It is recommended to prepare all model configurations before starting the implementation of the model code.
You can also refer to the YAML configuration information under the corresponding model type directories of other vendors in the `model_providers` directory. For the complete YAML rules, refer to: [Schema](schema.md#aimodelentity).
### Implement the Model Call Code
Next, create a Python file named `llm.py` under the `llm` `module` to write the implementation code.
Create an Anthropic LLM class named `AnthropicLargeLanguageModel` (or any other name), inheriting from the `__base.large_language_model.LargeLanguageModel` base class, and implement the following methods:
- LLM Call
Implement the core method for calling the LLM, supporting both streaming and synchronous responses.
```python
def _invoke(self, model: str, credentials: dict,
prompt_messages: list[PromptMessage], model_parameters: dict,
tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
stream: bool = True, user: Optional[str] = None) \
-> Union[LLMResult, Generator]:
"""
Invoke large language model
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param model_parameters: model parameters
:param tools: tools for tool calling
:param stop: stop words
:param stream: is stream response
:param user: unique user id
:return: full response or stream response chunk generator result
"""
```
Be sure to use two separate functions for returning data: one for synchronous returns and one for streaming returns. Because Python treats any function containing the `yield` keyword as a generator function, fixing its return type to `Generator`, synchronous and streaming returns must be implemented separately, as shown below (the example uses simplified parameters; an actual implementation should follow the parameter list above):
```python
def _invoke(self, stream: bool, **kwargs) \
-> Union[LLMResult, Generator]:
if stream:
return self._handle_stream_response(**kwargs)
return self._handle_sync_response(**kwargs)
def _handle_stream_response(self, **kwargs) -> Generator:
for chunk in response:
yield chunk
def _handle_sync_response(self, **kwargs) -> LLMResult:
return LLMResult(**response)
```
- Pre-compute Input Tokens
If the model does not provide an interface to precompute tokens, return 0 directly.
```python
def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
tools: Optional[list[PromptMessageTool]] = None) -> int:
"""
Get number of tokens for given prompt messages
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param tools: tools for tool calling
:return:
"""
```
- Validate Model Credentials
Similar to vendor credential validation, but specific to a single model.
```python
def validate_credentials(self, model: str, credentials: dict) -> None:
"""
Validate model credentials
:param model: model name
:param credentials: model credentials
:return:
"""
```
- Map Invoke Errors
When a model call fails, map it to a specific `InvokeError` type as required by Runtime, allowing Dify to handle different errors accordingly.
Runtime Errors:
- `InvokeConnectionError` Connection error
- `InvokeServerUnavailableError` Service provider unavailable
- `InvokeRateLimitError` Rate limit reached
- `InvokeAuthorizationError` Authorization failed
- `InvokeBadRequestError` Parameter error
```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
"""
Map model invoke error to unified error
The key is the error type thrown to the caller
The value is the error type thrown by the model,
which needs to be converted into a unified error type for the caller.
:return: Invoke error mapping
"""
```
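A concrete mapping for Anthropic might look like the sketch below; the exception names are assumptions based on the `anthropic` SDK and should be checked against the SDK version you use:
```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
    import anthropic  # SDK exception classes; names assumed, verify per version
    return {
        InvokeConnectionError: [anthropic.APIConnectionError],
        InvokeServerUnavailableError: [anthropic.InternalServerError],
        InvokeRateLimitError: [anthropic.RateLimitError],
        InvokeAuthorizationError: [anthropic.AuthenticationError,
                                   anthropic.PermissionDeniedError],
        InvokeBadRequestError: [anthropic.BadRequestError],
    }
```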
For interface method explanations, see: [Interfaces](./interfaces.md). For detailed implementation, refer to: [llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py).


@@ -1,266 +0,0 @@
## Adding a New Provider
Providers support three types of model configuration methods:
- `predefined-model` Predefined model
This indicates that users only need to configure the unified provider credentials to use the predefined models under the provider.
- `customizable-model` Customizable model
Users need to add credential configurations for each model.
- `fetch-from-remote` Fetch from remote
This is consistent with the `predefined-model` configuration method. Only unified provider credentials need to be configured, and models are obtained from the provider through credential information.
These three configuration methods **can coexist**, meaning a provider can support `predefined-model` + `customizable-model` or `predefined-model` + `fetch-from-remote`, etc. In other words, configuring the unified provider credentials allows the use of predefined and remotely fetched models, and if new models are added, they can be used in addition to the custom models.
## Getting Started
Adding a new provider starts with determining the English identifier of the provider, such as `anthropic`, and using this identifier to create a `module` in `model_providers`.
Under this `module`, we first need to prepare the provider's YAML configuration.
### Preparing Provider YAML
Here, using `Anthropic` as an example, we preset the provider's basic information, supported model types, configuration methods, and credential rules.
```YAML
provider: anthropic # Provider identifier
label: # Provider display name, can be set in en_US English and zh_Hans Chinese, zh_Hans will default to en_US if not set.
en_US: Anthropic
icon_small: # Small provider icon, stored in the _assets directory under the corresponding provider implementation directory, same language strategy as label
en_US: icon_s_en.png
icon_large: # Large provider icon, stored in the _assets directory under the corresponding provider implementation directory, same language strategy as label
en_US: icon_l_en.png
supported_model_types: # Supported model types, Anthropic only supports LLM
- llm
configurate_methods: # Supported configuration methods, Anthropic only supports predefined models
- predefined-model
provider_credential_schema: # Provider credential rules, as Anthropic only supports predefined models, unified provider credential rules need to be defined
credential_form_schemas: # List of credential form items
- variable: anthropic_api_key # Credential parameter variable name
label: # Display name
en_US: API Key
type: secret-input # Form type, here secret-input represents an encrypted information input box, showing masked information when editing.
required: true # Whether required
placeholder: # Placeholder information
zh_Hans: Enter your API Key here
en_US: Enter your API Key
- variable: anthropic_api_url
label:
en_US: API URL
type: text-input # Form type, here text-input represents a text input box
required: false
placeholder:
zh_Hans: Enter your API URL here
en_US: Enter your API URL
```
You can also refer to the YAML configuration information under other provider directories in `model_providers`. The complete YAML rules are available at: [Schema](schema.md#provider).
### Implementing Provider Code
Providers need to inherit the `__base.model_provider.ModelProvider` base class and implement the `validate_provider_credentials` method for unified provider credential verification. For reference, see [AnthropicProvider](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/anthropic.py).
> If the provider is of the `customizable-model` type, there is no need to implement the `validate_provider_credentials` method.
```python
def validate_provider_credentials(self, credentials: dict) -> None:
"""
Validate provider credentials
You can choose any validate_credentials method of model type or implement validate method by yourself,
such as: get model list api
if validate failed, raise exception
:param credentials: provider credentials, credentials form defined in `provider_credential_schema`.
"""
```
You can also stub out `validate_provider_credentials` initially and reuse the model credential validation method once it has been implemented.
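A sketch of that reuse, probing with one cheap predefined model once the LLM class's `validate_credentials` exists (`get_model_instance` is assumed to come from the base class; the probe model is an arbitrary choice):
```python
class AnthropicProvider(ModelProvider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        try:
            # Delegate to the model-level check using any predefined model.
            model_instance = self.get_model_instance(ModelType.LLM)
            model_instance.validate_credentials(model='claude-2.1', credentials=credentials)
        except CredentialsValidateFailedError:
            raise
        except Exception as ex:
            raise CredentialsValidateFailedError(str(ex))
```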
______________________________________________________________________
### Adding Models
After the provider integration is complete, the next step is to integrate models under the provider.
First, we need to determine the type of the model to be integrated and create a `module` for the corresponding model type in the provider's directory.
The currently supported model types are as follows:
- `llm` Text generation model
- `text_embedding` Text Embedding model
- `rerank` Rerank model
- `speech2text` Speech to text
- `tts` Text to speech
- `moderation` Moderation
Continuing with `Anthropic` as an example, since `Anthropic` only supports LLM, we create a `module` named `llm` in `model_providers.anthropic`.
For predefined models, we first need to create a YAML file named after the model, such as `claude-2.1.yaml`, under the `llm` `module`.
#### Preparing Model YAML
```yaml
model: claude-2.1 # Model identifier
# Model display name, can be set in en_US English and zh_Hans Chinese, zh_Hans will default to en_US if not set.
# Alternatively, if the label is not set, use the model identifier content.
label:
en_US: claude-2.1
model_type: llm # Model type, claude-2.1 is an LLM
features: # Supported features, agent-thought for Agent reasoning, vision for image understanding
- agent-thought
model_properties: # Model properties
mode: chat # LLM mode, complete for text completion model, chat for dialogue model
context_size: 200000 # Maximum supported context size
parameter_rules: # Model invocation parameter rules, only required for LLM
- name: temperature # Invocation parameter variable name
# Default preset with 5 variable content configuration templates: temperature/top_p/max_tokens/presence_penalty/frequency_penalty
# Directly set the template variable name in use_template, which will use the default configuration in entities.defaults.PARAMETER_RULE_TEMPLATE
# If additional configuration parameters are set, they will override the default configuration
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label: # Invocation parameter display name
zh_Hans: Sampling quantity
en_US: Top k
type: int # Parameter type, supports float/int/string/boolean
help: # Help information, describing the role of the parameter
zh_Hans: Only sample from the top K options for each subsequent token.
en_US: Only sample from the top K options for each subsequent token.
required: false # Whether required, can be left unset
- name: max_tokens_to_sample
use_template: max_tokens
default: 4096 # Default parameter value
min: 1 # Minimum parameter value, only applicable for float/int
max: 4096 # Maximum parameter value, only applicable for float/int
pricing: # Pricing information
input: '8.00' # Input price, i.e., Prompt price
output: '24.00' # Output price, i.e., returned content price
unit: '0.000001' # Pricing unit, i.e., the above prices are per 1M tokens
currency: USD # Currency
```
It is recommended to prepare all model configurations before starting the implementation of the model code.
Similarly, you can also refer to the YAML configuration information for corresponding model types of other providers in the `model_providers` directory. The complete YAML rules can be found at: [Schema](schema.md#AIModel).
#### Implementing Model Invocation Code
Next, you need to create a python file named `llm.py` under the `llm` `module` to write the implementation code.
In `llm.py`, create an Anthropic LLM class, which we name `AnthropicLargeLanguageModel` (arbitrarily), inheriting the `__base.large_language_model.LargeLanguageModel` base class, and implement the following methods:
- LLM Invocation
Implement the core method for LLM invocation, which can support both streaming and synchronous returns.
```python
def _invoke(self, model: str, credentials: dict,
prompt_messages: list[PromptMessage], model_parameters: dict,
tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
stream: bool = True, user: Optional[str] = None) \
-> Union[LLMResult, Generator]:
"""
Invoke large language model
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param model_parameters: model parameters
:param tools: tools for tool calling
:param stop: stop words
:param stream: is stream response
:param user: unique user id
:return: full response or stream response chunk generator result
"""
```
- Pre-calculating Input Tokens
If the model does not provide a pre-calculated tokens interface, you can directly return 0.
```python
def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
tools: Optional[list[PromptMessageTool]] = None) -> int:
"""
Get number of tokens for given prompt messages
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param tools: tools for tool calling
:return:
"""
```
- Model Credential Verification
Similar to provider credential verification, this step involves verification for an individual model.
```python
def validate_credentials(self, model: str, credentials: dict) -> None:
"""
Validate model credentials
:param model: model name
:param credentials: model credentials
:return:
"""
```
- Invocation Error Mapping Table
When there is an exception in model invocation, it needs to be mapped to the `InvokeError` type specified by Runtime. This facilitates Dify's ability to handle different errors with appropriate follow-up actions.
Runtime Errors:
- `InvokeConnectionError` Invocation connection error
- `InvokeServerUnavailableError` Invocation service provider unavailable
- `InvokeRateLimitError` Invocation reached rate limit
- `InvokeAuthorizationError` Invocation authorization failure
- `InvokeBadRequestError` Invocation parameter error
```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
"""
Map model invoke error to unified error
The key is the error type thrown to the caller
The value is the error type thrown by the model,
which needs to be converted into a unified error type for the caller.
:return: Invoke error mapping
"""
```
For details on the interface methods, see: [Interfaces](interfaces.md). For specific implementations, refer to: [llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py).
### Testing
To ensure the availability of integrated providers/models, each method written needs corresponding integration test code in the `tests` directory.
Continuing with `Anthropic` as an example:
Before writing test code, you need to first add the necessary credential environment variables for the test provider in `.env.example`, such as: `ANTHROPIC_API_KEY`.
Before running the tests, copy `.env.example` to `.env`, then execute them.
#### Writing Test Code
Create a `module` with the same name as the provider, `anthropic`, in the `tests` directory, and within it create `test_provider.py` plus test `.py` files for the corresponding model types, as shown below:
```shell
.
├── __init__.py
├── anthropic
│   ├── __init__.py
│   ├── test_llm.py # LLM Testing
│   └── test_provider.py # Provider Testing
```
Write test code covering all of the cases implemented above, and submit the code once the tests pass.
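A minimal `test_provider.py` might look like the following sketch; the import paths are assumptions and should be adjusted to the repository layout:
```python
import os

import pytest

from model_providers.anthropic.anthropic import AnthropicProvider  # hypothetical path
from errors.validate import CredentialsValidateFailedError         # hypothetical path


def test_validate_provider_credentials():
    provider = AnthropicProvider()

    # An obviously invalid key should fail validation...
    with pytest.raises(CredentialsValidateFailedError):
        provider.validate_provider_credentials(credentials={'anthropic_api_key': 'invalid'})

    # ...while the key loaded from .env should pass.
    provider.validate_provider_credentials(
        credentials={'anthropic_api_key': os.environ.get('ANTHROPIC_API_KEY')}
    )
```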


@@ -1,208 +0,0 @@
# Configuration Rules
- Provider rules are based on the [Provider](#Provider) entity.
- Model rules are based on the [AIModelEntity](#AIModelEntity) entity.
> All entities mentioned below are based on `Pydantic BaseModel` and can be found in the `entities` module.
### Provider
- `provider` (string) Provider identifier, e.g., `openai`
- `label` (object) Provider display name, i18n, with `en_US` English and `zh_Hans` Chinese language settings
- `zh_Hans` (string) [optional] Chinese label name, if `zh_Hans` is not set, `en_US` will be used by default.
- `en_US` (string) English label name
- `description` (object) Provider description, i18n
- `zh_Hans` (string) [optional] Chinese description
- `en_US` (string) English description
- `icon_small` (string) [optional] Small provider ICON, stored in the `_assets` directory under the corresponding provider implementation directory, with the same language strategy as `label`
- `zh_Hans` (string) Chinese ICON
- `en_US` (string) English ICON
- `icon_large` (string) [optional] Large provider ICON, stored in the `_assets` directory under the corresponding provider implementation directory, with the same language strategy as `label`
- `zh_Hans` (string) Chinese ICON
- `en_US` (string) English ICON
- `background` (string) [optional] Background color value, e.g., #FFFFFF, if empty, the default frontend color value will be displayed.
- `help` (object) [optional] help information
- `title` (object) help title, i18n
- `zh_Hans` (string) [optional] Chinese title
- `en_US` (string) English title
- `url` (object) help link, i18n
- `zh_Hans` (string) [optional] Chinese link
- `en_US` (string) English link
- `supported_model_types` (array\[[ModelType](#ModelType)\]) Supported model types
- `configurate_methods` (array\[[ConfigurateMethod](#ConfigurateMethod)\]) Configuration methods
- `provider_credential_schema` ([ProviderCredentialSchema](#ProviderCredentialSchema)) Provider credential specification
- `model_credential_schema` ([ModelCredentialSchema](#ModelCredentialSchema)) Model credential specification
### AIModelEntity
- `model` (string) Model identifier, e.g., `gpt-3.5-turbo`
- `label` (object) [optional] Model display name, i18n, with `en_US` English and `zh_Hans` Chinese language settings
- `zh_Hans` (string) [optional] Chinese label name
- `en_US` (string) English label name
- `model_type` ([ModelType](#ModelType)) Model type
- `features` (array\[[ModelFeature](#ModelFeature)\]) [optional] Supported feature list
- `model_properties` (object) Model properties
- `mode` ([LLMMode](#LLMMode)) Mode (available for model type `llm`)
- `context_size` (int) Context size (available for model types `llm`, `text-embedding`)
- `max_chunks` (int) Maximum number of chunks (available for model types `text-embedding`, `moderation`)
- `file_upload_limit` (int) Maximum file upload limit, in MB (available for model type `speech2text`)
- `supported_file_extensions` (string) Supported file extension formats, e.g., mp3, mp4 (available for model type `speech2text`)
- `default_voice` (string) Default voice, e.g., alloy, echo, fable, onyx, nova, shimmer (available for model type `tts`)
- `voices` (list) List of available voices (available for model type `tts`)
- `mode` (string) Voice model (available for model type `tts`)
- `name` (string) Voice model display name (available for model type `tts`)
- `language` (string) Languages supported by the voice model (available for model type `tts`)
- `word_limit` (int) Word limit per conversion, defaults to paragraph-wise splitting (available for model type `tts`)
- `audio_type` (string) Supported audio file extensions, e.g., mp3, wav (available for model type `tts`)
- `max_workers` (int) Number of concurrent workers for text-to-audio conversion (available for model type `tts`)
- `max_characters_per_chunk` (int) Maximum characters per chunk (available for model type `moderation`)
- `parameter_rules` (array\[[ParameterRule](#ParameterRule)\]) [optional] Model invocation parameter rules
- `pricing` ([PriceConfig](#PriceConfig)) [optional] Pricing information
- `deprecated` (bool) Whether deprecated. If deprecated, the model will no longer be displayed in the list, but those already configured can continue to be used. Default False.
### ModelType
- `llm` Text generation model
- `text-embedding` Text Embedding model
- `rerank` Rerank model
- `speech2text` Speech to text
- `tts` Text to speech
- `moderation` Moderation
### ConfigurateMethod
- `predefined-model` Predefined model
Indicates that users can use the predefined models under the provider by configuring the unified provider credentials.
- `customizable-model` Customizable model
Users need to add credential configuration for each model.
- `fetch-from-remote` Fetch from remote
Consistent with the `predefined-model` configuration method, only unified provider credentials need to be configured, and models are obtained from the provider through credential information.
### ModelFeature
- `agent-thought` Agent reasoning; generally models above 70B parameters have chain-of-thought capability.
- `vision` Vision, i.e., image understanding.
- `tool-call`
- `multi-tool-call`
- `stream-tool-call`
### FetchFrom
- `predefined-model` Predefined model
- `fetch-from-remote` Remote model
### LLMMode
- `complete` Text completion
- `chat` Dialogue
### ParameterRule
- `name` (string) Actual model invocation parameter name
- `use_template` (string) [optional] Using template
By default, 5 variable content configuration templates are preset:
- `temperature`
- `top_p`
- `frequency_penalty`
- `presence_penalty`
- `max_tokens`
In `use_template`, you can directly set the template variable name, which will use the default configuration in `entities.defaults.PARAMETER_RULE_TEMPLATE`.
No need to set any parameters other than `name` and `use_template`. If additional configuration parameters are set, they will override the default configuration.
Refer to `openai/llm/gpt-3.5-turbo.yaml`.
- `label` (object) [optional] Label, i18n
- `zh_Hans`(string) [optional] Chinese label name
- `en_US` (string) English label name
- `type`(string) [optional] Parameter type
- `int` Integer
- `float` Float
- `string` String
- `boolean` Boolean
- `help` (string) [optional] Help information
- `zh_Hans` (string) [optional] Chinese help information
- `en_US` (string) English help information
- `required` (bool) Required, default False.
- `default`(int/float/string/bool) [optional] Default value
- `min`(int/float) [optional] Minimum value, applicable only to numeric types
- `max`(int/float) [optional] Maximum value, applicable only to numeric types
- `precision`(int) [optional] Precision, number of decimal places to keep, applicable only to numeric types
- `options` (array[string]) [optional] Dropdown option values, applicable only when `type` is `string`, if not set or null, option values are not restricted
### PriceConfig
- `input` (float) Input price, i.e., Prompt price
- `output` (float) Output price, i.e., returned content price
- `unit` (float) Pricing unit, e.g., if the price is measured in 1M tokens, the corresponding token amount for the unit price is `0.000001`.
- `currency` (string) Currency unit
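As a worked example of how these fields combine, the Claude settings shown earlier (`input: '8.00'`, `unit: '0.000001'`) price 1,000 prompt tokens like this (a sketch of the arithmetic, not the runtime's exact rounding):
```python
from decimal import Decimal

prompt_tokens = 1000
unit_price = Decimal('8.00')      # pricing.input
price_unit = Decimal('0.000001')  # pricing.unit: prices are per 1M tokens

prompt_price = prompt_tokens * price_unit * unit_price
print(prompt_price)  # 0.00800000, i.e. $0.008
```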
### ProviderCredentialSchema
- `credential_form_schemas` (array\[[CredentialFormSchema](#CredentialFormSchema)\]) Credential form standard
### ModelCredentialSchema
- `model` (object) Model identifier, variable name defaults to `model`
- `label` (object) Model form item display name
- `en_US` (string) English
- `zh_Hans`(string) [optional] Chinese
- `placeholder` (object) Model prompt content
- `en_US`(string) English
- `zh_Hans`(string) [optional] Chinese
- `credential_form_schemas` (array\[[CredentialFormSchema](#CredentialFormSchema)\]) Credential form standard
### CredentialFormSchema
- `variable` (string) Form item variable name
- `label` (object) Form item label name
- `en_US`(string) English
- `zh_Hans` (string) [optional] Chinese
- `type` ([FormType](#FormType)) Form item type
- `required` (bool) Whether required
- `default`(string) Default value
- `options` (array\[[FormOption](#FormOption)\]) Specific property of form items of type `select` or `radio`, defining dropdown content
- `placeholder`(object) Specific property of form items of type `text-input`, placeholder content
- `en_US`(string) English
- `zh_Hans` (string) [optional] Chinese
- `max_length` (int) Specific property of form items of type `text-input`, defining maximum input length, 0 for no limit.
- `show_on` (array\[[FormShowOnObject](#FormShowOnObject)\]) Displayed when other form item values meet certain conditions, displayed always if empty.
### FormType
- `text-input` Text input component
- `secret-input` Password input component
- `select` Single-choice dropdown
- `radio` Radio component
- `switch` Switch component, only supports `true` and `false` values
### FormOption
- `label` (object) Label
- `en_US`(string) English
- `zh_Hans`(string) [optional] Chinese
- `value` (string) Dropdown option value
- `show_on` (array\[[FormShowOnObject](#FormShowOnObject)\]) Displayed when other form item values meet certain conditions, displayed always if empty.
### FormShowOnObject
- `variable` (string) Variable name of other form items
- `value` (string) Variable value of other form items


@@ -1,304 +0,0 @@
## Customizable Model Integration
### Introduction
After the provider integration is complete, the next step is to integrate the provider's models. To help you understand the whole process, we use `Xinference` as an example and walk through a complete provider integration step by step.
Note that for custom models, every model connection requires a complete set of provider credentials.
Unlike predefined models, a customizable provider integration always carries the following two parameters, which do not need to be defined in the provider YAML.
![Alt text](images/index/image-3.png)
As mentioned earlier, providers do not need to implement `validate_provider_credential`; the Runtime automatically calls the corresponding model layer's `validate_credentials` for verification, based on the model type and model name the user selects here.
### Writing the Provider YAML
First, we need to determine which model types the provider supports.
Currently supported model types are:
- `llm` Text generation model
- `text_embedding` Text Embedding model
- `rerank` Rerank model
- `speech2text` Speech to text
- `tts` Text to speech
- `moderation` Moderation
`Xinference` supports `LLM`, `Text Embedding`, and `Rerank`, so let's start writing `xinference.yaml`.
```yaml
provider: xinference # Provider identifier
label: # Provider display name, can be set in en_US English and zh_Hans Chinese; zh_Hans will default to en_US if not set.
en_US: Xorbits Inference
icon_small: # Small icon; you can refer to other providers' icons, stored in the _assets directory under the corresponding provider implementation directory; same language strategy as label
en_US: icon_s_en.svg
icon_large: # Large icon
en_US: icon_l_en.svg
help: # Help
title:
en_US: How to deploy Xinference
zh_Hans: 如何部署 Xinference
url:
en_US: https://github.com/xorbitsai/inference
supported_model_types: # Supported model types; Xinference supports LLM/Text Embedding/Rerank simultaneously
- llm
- text-embedding
- rerank
configurate_methods: # Xinference is a locally deployed provider with no predefined models; which models to use must be deployed by the user per Xinference's documentation, so only customizable models are supported here
- customizable-model
provider_credential_schema:
credential_form_schemas:
```
Next, we need to think about which credentials are required to define a model in Xinference:
- It supports three different types of models, so we need a `model_type` credential to specify the type of this model. Since there are three types, we write it as follows:
```yaml
provider_credential_schema:
credential_form_schemas:
- variable: model_type
type: select
label:
en_US: Model type
zh_Hans: 模型类型
required: true
options:
- value: text-generation
label:
en_US: Language Model
zh_Hans: 语言模型
- value: embeddings
label:
en_US: Text Embedding
- value: reranking
label:
en_US: Rerank
```
- Every model has its own name, `model_name`, so it needs to be defined here:
```yaml
- variable: model_name
type: text-input
label:
en_US: Model name
zh_Hans: 模型名称
required: true
placeholder:
zh_Hans: 填写模型名称
en_US: Input model name
```
- The address of the locally deployed Xinference server must be provided:
```yaml
- variable: server_url
label:
zh_Hans: 服务器 URL
en_US: Server url
type: text-input
required: true
placeholder:
zh_Hans: 在此输入 Xinference 的服务器地址,如 https://example.com/xxx
en_US: Enter the url of your Xinference, for example https://example.com/xxx
```
- Every model has a unique model_uid, so it needs to be defined here:
```yaml
- variable: model_uid
label:
zh_Hans: 模型 UID
en_US: Model uid
type: text-input
required: true
placeholder:
zh_Hans: 在此输入您的 Model UID
en_US: Enter the model uid
```
With that, the basic provider definition is complete.
### Writing the Model Code
Next, taking the `llm` type as an example, we write `xinference.llm.llm.py`.
In `llm.py`, create a Xinference LLM class, which we name `XinferenceAILargeLanguageModel` (the name is arbitrary), inheriting from the `__base.large_language_model.LargeLanguageModel` base class, and implement the following methods:
- LLM Invocation
Implement the core method for LLM invocation, supporting both streaming and synchronous returns.
```python
def _invoke(self, model: str, credentials: dict,
prompt_messages: list[PromptMessage], model_parameters: dict,
tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
stream: bool = True, user: Optional[str] = None) \
-> Union[LLMResult, Generator]:
"""
Invoke large language model
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param model_parameters: model parameters
:param tools: tools for tool calling
:param stop: stop words
:param stream: is stream response
:param user: unique user id
:return: full response or stream response chunk generator result
"""
```
Be sure to use two separate functions for returning data: one for synchronous returns and one for streaming returns. Because Python treats any function containing the `yield` keyword as a generator function, fixing its return type to `Generator`, synchronous and streaming returns must be implemented separately, as shown below (the example uses simplified parameters; an actual implementation should follow the parameter list above):
```python
def _invoke(self, stream: bool, **kwargs) \
-> Union[LLMResult, Generator]:
if stream:
return self._handle_stream_response(**kwargs)
return self._handle_sync_response(**kwargs)
def _handle_stream_response(self, **kwargs) -> Generator:
for chunk in response:
yield chunk
def _handle_sync_response(self, **kwargs) -> LLMResult:
return LLMResult(**response)
```
- Pre-calculating Input Tokens
If the model does not provide a pre-calculated tokens interface, you can simply return 0.
```python
def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
tools: Optional[list[PromptMessageTool]] = None) -> int:
"""
Get number of tokens for given prompt messages
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param tools: tools for tool calling
:return:
"""
```
Sometimes you may not want to simply return 0, so you can use `self._get_num_tokens_by_gpt2(text: str)` to get an estimated token count (make sure the environment variable `PLUGIN_BASED_TOKEN_COUNTING_ENABLED` is set to `true`). This method lives in the `AIModel` base class and uses the GPT-2 tokenizer for the calculation; it is only a fallback and is not fully accurate.
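A sketch of that fallback inside `get_num_tokens` (joining message contents into one string is a simplification):
```python
def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
                   tools: Optional[list[PromptMessageTool]] = None) -> int:
    # Approximate with the GPT-2 tokenizer from the AIModel base class; this is
    # only a stand-in for the model's real tokenizer.
    text = ' '.join(m.content for m in prompt_messages if isinstance(m.content, str))
    return self._get_num_tokens_by_gpt2(text)
```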
- Model Credential Validation
Similar to provider credential validation, this validates an individual model.
```python
def validate_credentials(self, model: str, credentials: dict) -> None:
"""
Validate model credentials
:param model: model name
:param credentials: model credentials
:return:
"""
```
- Model Parameter Schema
Unlike with predefined models, since the parameters a model supports are not defined in a YAML file, we need to generate the model parameter schema dynamically.
For example, Xinference supports the three model parameters `max_tokens`, `temperature`, and `top_p`.
However, some providers support different parameters for different models. For example, the provider `OpenLLM` supports `top_k`, but not every model it serves supports `top_k`. Say model A supports `top_k` while model B does not; then we need to generate the model parameter schema dynamically, as shown below:
```python
def get_customizable_model_schema(self, model: str, credentials: dict) -> Optional[AIModelEntity]:
"""
used to define customizable model schema
"""
rules = [
ParameterRule(
name='temperature', type=ParameterType.FLOAT,
use_template='temperature',
label=I18nObject(
zh_Hans='温度', en_US='Temperature'
)
),
ParameterRule(
name='top_p', type=ParameterType.FLOAT,
use_template='top_p',
label=I18nObject(
zh_Hans='Top P', en_US='Top P'
)
),
ParameterRule(
name='max_tokens', type=ParameterType.INT,
use_template='max_tokens',
min=1,
default=512,
label=I18nObject(
zh_Hans='最大生成长度', en_US='Max Tokens'
)
)
]
# if model is A, add top_k to rules
if model == 'A':
rules.append(
ParameterRule(
name='top_k', type=ParameterType.INT,
use_template='top_k',
min=1,
default=50,
label=I18nObject(
zh_Hans='Top K', en_US='Top K'
)
)
)
"""
some NOT IMPORTANT code here
"""
entity = AIModelEntity(
model=model,
label=I18nObject(
en_US=model
),
fetch_from=FetchFrom.CUSTOMIZABLE_MODEL,
model_type=model_type,
model_properties={
ModelPropertyKey.MODE: ModelType.LLM,
},
parameter_rules=rules
)
return entity
```
- Invocation Error Mapping
When a model invocation raises an exception, it needs to be mapped to the `InvokeError` type specified by the Runtime, so that Dify can handle different errors with appropriate follow-up actions.
Runtime Errors:
- `InvokeConnectionError` Invocation connection error
- `InvokeServerUnavailableError` Invocation service provider unavailable
- `InvokeRateLimitError` Invocation reached rate limit
- `InvokeAuthorizationError` Invocation authorization failure
- `InvokeBadRequestError` Invocation parameter error
```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
"""
Map model invoke error to unified error
The key is the error type thrown to the caller
The value is the error type thrown by the model,
which needs to be converted into a unified error type for the caller.
:return: Invoke error mapping
"""
```
For interface method explanations, see: [Interfaces](./interfaces.md). For a concrete implementation, refer to: [llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py).

10 binary image files deleted (not shown).


@@ -1,744 +0,0 @@
# Interface Methods
This section describes the interface methods and parameters that providers and each model type need to implement.
## Provider
Inherit from the `__base.model_provider.ModelProvider` base class and implement the following interface:
```python
def validate_provider_credentials(self, credentials: dict) -> None:
"""
Validate provider credentials
You can choose any validate_credentials method of model type or implement validate method by yourself,
such as: get model list api
if validate failed, raise exception
:param credentials: provider credentials, credentials form defined in `provider_credential_schema`.
"""
```
- `credentials` (object) Credential information
The credential parameters are defined by the `provider_credential_schema` in the provider's YAML configuration file, and are passed in as, e.g., `api_key`.
If validation fails, raise the `errors.validate.CredentialsValidateFailedError` error.
**Note: predefined-model providers must fully implement this interface; custom-model providers only need the following simple implementation:**
```python
class XinferenceProvider(Provider):
def validate_provider_credentials(self, credentials: dict) -> None:
pass
```
## Model
Models fall into different model types; each type inherits from a different base class and must implement different methods.
### Common Interfaces
All models must implement the following 2 methods:
- Model Credential Validation
Similar to provider credential validation, this validates an individual model.
```python
def validate_credentials(self, model: str, credentials: dict) -> None:
"""
Validate model credentials
:param model: model name
:param credentials: model credentials
:return:
"""
```
Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The credential parameters are defined by the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file, and are passed in as, e.g., `api_key`.
If validation fails, raise the `errors.validate.CredentialsValidateFailedError` error.
- Invocation Error Mapping
When a model invocation raises an exception, it needs to be mapped to the `InvokeError` type specified by the Runtime, so that Dify can handle different errors with appropriate follow-up actions.
Runtime Errors:
- `InvokeConnectionError` Invocation connection error
- `InvokeServerUnavailableError` Invocation service provider unavailable
- `InvokeRateLimitError` Invocation reached rate limit
- `InvokeAuthorizationError` Invocation authorization failure
- `InvokeBadRequestError` Invocation parameter error
```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
"""
Map model invoke error to unified error
The key is the error type thrown to the caller
The value is the error type thrown by the model,
which needs to be converted into a unified error type for the caller.
:return: Invoke error mapping
"""
```
You can also throw the corresponding errors directly and define the mapping as follows, so that subsequent calls can directly raise exceptions such as `InvokeConnectionError`.
```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
return {
InvokeConnectionError: [
InvokeConnectionError
],
InvokeServerUnavailableError: [
InvokeServerUnavailableError
],
InvokeRateLimitError: [
InvokeRateLimitError
],
InvokeAuthorizationError: [
InvokeAuthorizationError
],
InvokeBadRequestError: [
InvokeBadRequestError
],
}
```
You can refer to OpenAI's `_invoke_error_mapping` for an example.
### LLM
Inherit from the `__base.large_language_model.LargeLanguageModel` base class and implement the following interfaces:
- LLM Invocation
Implement the core method for LLM invocation, supporting both streaming and synchronous returns.
```python
def _invoke(self, model: str, credentials: dict,
prompt_messages: list[PromptMessage], model_parameters: dict,
tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
stream: bool = True, user: Optional[str] = None) \
-> Union[LLMResult, Generator]:
"""
Invoke large language model
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param model_parameters: model parameters
:param tools: tools for tool calling
:param stop: stop words
:param stream: is stream response
:param user: unique user id
:return: full response or stream response chunk generator result
"""
```
- Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The credential parameters are defined by the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file, and are passed in as, e.g., `api_key`.
- `prompt_messages` (array\[[PromptMessage](#PromptMessage)\]) Prompt list
If the model is of the `Completion` type, the list only needs to contain one [UserPromptMessage](#UserPromptMessage) element;
if the model is of the `Chat` type, a list of [SystemPromptMessage](#SystemPromptMessage), [UserPromptMessage](#UserPromptMessage), [AssistantPromptMessage](#AssistantPromptMessage), and [ToolPromptMessage](#ToolPromptMessage) elements must be passed in depending on the message.
- `model_parameters` (object) Model parameters
The model parameters are defined by the `parameter_rules` in the model's YAML configuration.
- `tools` (array\[[PromptMessageTool](#PromptMessageTool)\]) [optional] Tool list, equivalent to the `function` in `function calling`,
i.e., the tool list passed in for tool calling.
- `stop` (array[string]) [optional] Stop sequences
The model output will stop before the strings defined by the stop sequences.
- `stream` (bool) Whether to stream the output, default True
Streaming output returns Generator\[[LLMResultChunk](#LLMResultChunk)\]; non-streaming output returns [LLMResult](#LLMResult).
- `user` (string) [optional] A unique identifier for the user
that can help the provider monitor and detect abuse.
- Return
Streaming output returns Generator\[[LLMResultChunk](#LLMResultChunk)\]; non-streaming output returns [LLMResult](#LLMResult).
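Since the return type depends on `stream`, callers typically branch on what comes back; a sketch (`llm`, `credentials`, and `messages` are hypothetical):
```python
from collections.abc import Generator

result = llm._invoke(model='claude-2.1', credentials=credentials,
                     prompt_messages=messages, model_parameters={}, stream=True)
if isinstance(result, Generator):
    for chunk in result:               # one LLMResultChunk per iteration
        print(chunk.delta.message.content, end='')
else:
    print(result.message.content)      # full LLMResult
```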
- Pre-calculating Input Tokens
If the model does not provide a pre-calculated tokens interface, you can simply return 0.
```python
def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
tools: Optional[list[PromptMessageTool]] = None) -> int:
"""
Get number of tokens for given prompt messages
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param tools: tools for tool calling
:return:
"""
```
See `LLM Invocation` above for parameter descriptions.
This interface should choose an appropriate `tokenizer` for the given `model`; if the model does not provide one, the `_get_num_tokens_by_gpt2(text: str)` method in the `AIModel` base class can be used for the calculation.
- Get Customizable Model Schema [optional]
```python
def get_customizable_model_schema(self, model: str, credentials: dict) -> Optional[AIModelEntity]:
"""
Get customizable model schema
:param model: model name
:param credentials: model credentials
:return: model schema
"""
```
When the provider supports adding custom LLMs, this method can be implemented so that custom models can fetch their model schema; it returns None by default.
For most fine-tuned models under the `OpenAI` provider, the base model can be derived from the fine-tuned model name, e.g., `gpt-3.5-turbo-1106`, and the predefined parameter rules of the base model can then be returned; see the concrete implementation in [openai](https://github.com/langgenius/dify/blob/feat/model-runtime/api/core/model_runtime/model_providers/openai/llm/llm.py#L801).
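A sketch of that idea, deriving the base model from a fine-tuned model name and reusing its predefined rules (the name parsing and the `predefined_models` helper are assumptions):
```python
def get_customizable_model_schema(self, model: str, credentials: dict) -> Optional[AIModelEntity]:
    # e.g. 'ft:gpt-3.5-turbo-1106:my-org::abc123' -> 'gpt-3.5-turbo-1106'
    base_model = model.split(':')[1] if model.startswith('ft:') else model
    for entity in self.predefined_models():  # assumed helper listing predefined AIModelEntity
        if entity.model == base_model:
            return entity.copy(update={'model': model})  # pydantic-style copy with new name
    return None
```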
### TextEmbedding
Inherit from the `__base.text_embedding_model.TextEmbeddingModel` base class and implement the following interfaces:
- Embedding Invocation
```python
def _invoke(self, model: str, credentials: dict,
texts: list[str], user: Optional[str] = None) \
-> TextEmbeddingResult:
"""
Invoke large language model
:param model: model name
:param credentials: model credentials
:param texts: texts to embed
:param user: unique user id
:return: embeddings result
"""
```
- Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The credential parameters are defined by the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file, and are passed in as, e.g., `api_key`.
- `texts` (array[string]) Text list, which can be processed in batch
- `user` (string) [optional] A unique identifier for the user
that can help the provider monitor and detect abuse.
- Return:
A [TextEmbeddingResult](#TextEmbeddingResult) entity.
- Pre-calculating Tokens
```python
def get_num_tokens(self, model: str, credentials: dict, texts: list[str]) -> int:
"""
Get number of tokens for given prompt messages
:param model: model name
:param credentials: model credentials
:param texts: texts to embed
:return:
"""
```
See `Embedding Invocation` above for parameter descriptions.
As with `LargeLanguageModel` above, this interface should choose an appropriate `tokenizer` for the given `model`; if the model does not provide one, the `_get_num_tokens_by_gpt2(text: str)` method in the `AIModel` base class can be used for the calculation.
### Rerank
Inherit from the `__base.rerank_model.RerankModel` base class and implement the following interfaces:
- Rerank Invocation
```python
def _invoke(self, model: str, credentials: dict,
query: str, docs: list[str], score_threshold: Optional[float] = None, top_n: Optional[int] = None,
user: Optional[str] = None) \
-> RerankResult:
"""
Invoke rerank model
:param model: model name
:param credentials: model credentials
:param query: search query
:param docs: docs for reranking
:param score_threshold: score threshold
:param top_n: top n
:param user: unique user id
:return: rerank result
"""
```
- Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The credential parameters are defined by the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file, and are passed in as, e.g., `api_key`.
- `query` (string) The query content
- `docs` (array[string]) The list of segments to be reranked
- `score_threshold` (float) [optional] Score threshold
- `top_n` (int) [optional] Take the top n segments
- `user` (string) [optional] A unique identifier for the user
that can help the provider monitor and detect abuse.
- Return:
A [RerankResult](#RerankResult) entity.
### Speech2text
Inherit from the `__base.speech2text_model.Speech2TextModel` base class and implement the following interfaces:
- Invoke
```python
def _invoke(self, model: str, credentials: dict,
file: IO[bytes], user: Optional[str] = None) \
-> str:
"""
Invoke large language model
:param model: model name
:param credentials: model credentials
:param file: audio file
:param user: unique user id
:return: text for given audio file
"""
```
- Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The credential parameters are defined by the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file, and are passed in as, e.g., `api_key`.
- `file` (File) File stream
- `user` (string) [optional] A unique identifier for the user
that can help the provider monitor and detect abuse.
- Return:
The string transcribed from the audio.
### Text2speech
Inherit from the `__base.text2speech_model.Text2SpeechModel` base class and implement the following interfaces:
- Invoke
```python
def _invoke(self, model: str, credentials: dict, content_text: str, streaming: bool, user: Optional[str] = None):
"""
Invoke large language model
:param model: model name
:param credentials: model credentials
:param content_text: text content to be translated
:param streaming: output is streaming
:param user: unique user id
:return: translated audio file
"""
```
- Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The credential parameters are defined by the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file, and are passed in as, e.g., `api_key`.
- `content_text` (string) The text content to be converted
- `streaming` (bool) Whether to stream the output
- `user` (string) [optional] A unique identifier for the user
that can help the provider monitor and detect abuse.
- Return:
The audio stream converted from the text.
### Moderation
Inherit from the `__base.moderation_model.ModerationModel` base class and implement the following interfaces:
- Invoke
```python
def _invoke(self, model: str, credentials: dict,
text: str, user: Optional[str] = None) \
-> bool:
"""
Invoke large language model
:param model: model name
:param credentials: model credentials
:param text: text to moderate
:param user: unique user id
:return: false if text is safe, true otherwise
"""
```
- Parameters:
- `model` (string) Model name
- `credentials` (object) Credential information
The credential parameters are defined by the `provider_credential_schema` or `model_credential_schema` in the provider's YAML configuration file, and are passed in as, e.g., `api_key`.
- `text` (string) The text content
- `user` (string) [optional] A unique identifier for the user
that can help the provider monitor and detect abuse.
- Return:
False if the input text is safe, True otherwise.
## Entities
### PromptMessageRole
Message role
```python
class PromptMessageRole(Enum):
"""
Enum class for prompt message.
"""
SYSTEM = "system"
USER = "user"
ASSISTANT = "assistant"
TOOL = "tool"
```
### PromptMessageContentType
Message content type; either plain text or image.
```python
class PromptMessageContentType(Enum):
"""
Enum class for prompt message content type.
"""
TEXT = 'text'
IMAGE = 'image'
```
### PromptMessageContent
Base class for message content, used only for parameter declaration; it cannot be initialized.
```python
class PromptMessageContent(BaseModel):
"""
Model class for prompt message content.
"""
type: PromptMessageContentType
data: str # content data
```
Currently, text and image types are supported, and text can be passed in together with multiple images.
You need to initialize `TextPromptMessageContent` and `ImagePromptMessageContent` separately and pass them in.
### TextPromptMessageContent
```python
class TextPromptMessageContent(PromptMessageContent):
"""
Model class for text prompt message content.
"""
type: PromptMessageContentType = PromptMessageContentType.TEXT
```
When passing a combination of text and images, the text must be constructed into this entity as part of the `content` list.
### ImagePromptMessageContent
```python
class ImagePromptMessageContent(PromptMessageContent):
"""
Model class for image prompt message content.
"""
class DETAIL(Enum):
LOW = 'low'
HIGH = 'high'
type: PromptMessageContentType = PromptMessageContentType.IMAGE
detail: DETAIL = DETAIL.LOW # Resolution
```
When passing a combination of text and images, the images must be constructed into this entity as part of the `content` list.
`data` can be either a `url` or a `base64`-encoded string of the image.
### PromptMessage
The base class for all Role message bodies, used only for parameter declaration; it cannot be initialized.
```python
class PromptMessage(BaseModel):
"""
Model class for prompt message.
"""
role: PromptMessageRole # message role
content: Optional[str | list[PromptMessageContent]] = None # Supports two types: string and content list. The content list is designed for multimodal input needs; see the PromptMessageContent description for details.
name: Optional[str] = None # name, optional.
```
### UserPromptMessage
UserMessage message body, representing a user's message.
```python
class UserPromptMessage(PromptMessage):
"""
Model class for user prompt message.
"""
role: PromptMessageRole = PromptMessageRole.USER
```
### AssistantPromptMessage
Represents a message returned by the model, typically used for `few-shots` or passing in chat history.
```python
class AssistantPromptMessage(PromptMessage):
"""
Model class for assistant prompt message.
"""
class ToolCall(BaseModel):
"""
Model class for assistant prompt message tool call.
"""
class ToolCallFunction(BaseModel):
"""
Model class for assistant prompt message tool call function.
"""
name: str # tool name
arguments: str # tool arguments
id: str # Tool ID, effective only in OpenAI tool calls; it is the unique ID of a tool invocation, and the same tool can be invoked multiple times
type: str # default: function
function: ToolCallFunction # tool call information
role: PromptMessageRole = PromptMessageRole.ASSISTANT
tool_calls: list[ToolCall] = [] # Tool call results returned by the model (returned only when tools are passed in and the model decides a tool invocation is needed)
```
Here, `tool_calls` is the list of `tool calls` returned by the model after it is invoked with `tools`.
### SystemPromptMessage
Represents a system message, usually used to set the system instructions given to the model.
```python
class SystemPromptMessage(PromptMessage):
"""
Model class for system prompt message.
"""
role: PromptMessageRole = PromptMessageRole.SYSTEM
```
### ToolPromptMessage
Represents a tool message, used to hand the result of a tool execution to the model for the next step of planning.
```python
class ToolPromptMessage(PromptMessage):
"""
Model class for tool prompt message.
"""
role: PromptMessageRole = PromptMessageRole.TOOL
tool_call_id: str # Tool invocation ID. If OpenAI tool calls are not supported, the tool name can be passed in instead
```
The base class's `content` takes the result of the tool execution.
### PromptMessageTool
```python
class PromptMessageTool(BaseModel):
"""
Model class for prompt message tool.
"""
name: str # tool name
description: str # tool description
parameters: dict # tool parameters dict
```
______________________________________________________________________
### LLMResult
```python
class LLMResult(BaseModel):
"""
Model class for llm result.
"""
model: str # Actual model used
prompt_messages: list[PromptMessage] # prompt message list
message: AssistantPromptMessage # response message
usage: LLMUsage # tokens used and cost information
system_fingerprint: Optional[str] = None # request fingerprint; refer to OpenAI's definition of this parameter
```
### LLMResultChunkDelta
The `delta` entity inside each iteration of a streaming response.
```python
class LLMResultChunkDelta(BaseModel):
"""
Model class for llm result chunk delta.
"""
index: int # sequence number
message: AssistantPromptMessage # response message
usage: Optional[LLMUsage] = None # tokens used and cost information, returned only in the last chunk
finish_reason: Optional[str] = None # finish reason, returned only in the last chunk
```
### LLMResultChunk
The entity for each iteration of a streaming response.
```python
class LLMResultChunk(BaseModel):
"""
Model class for llm result chunk.
"""
model: str # Actual model used
prompt_messages: list[PromptMessage] # prompt message list
system_fingerprint: Optional[str] = None # request fingerprint; refer to OpenAI's definition of this parameter
delta: LLMResultChunkDelta # the content that changes in each iteration
```
### LLMUsage
```python
class LLMUsage(ModelUsage):
"""
Model class for llm usage.
"""
prompt_tokens: int # Tokens used for the prompt
prompt_unit_price: Decimal # Unit price for the prompt
prompt_price_unit: Decimal # Price unit for the prompt, i.e., how many tokens the unit price is based on
prompt_price: Decimal # Cost of the prompt
completion_tokens: int # Tokens used for the response
completion_unit_price: Decimal # Unit price for the response
completion_price_unit: Decimal # Price unit for the response, i.e., how many tokens the unit price is based on
completion_price: Decimal # Cost of the response
total_tokens: int # Total number of tokens used
total_price: Decimal # Total cost
currency: str # Currency unit
latency: float # Request latency (s)
```
______________________________________________________________________
### TextEmbeddingResult
```python
class TextEmbeddingResult(BaseModel):
"""
Model class for text embedding result.
"""
model: str # Actual model used
embeddings: list[list[float]] # List of embedding vectors, corresponding to the input texts list
usage: EmbeddingUsage # Usage information
```
### EmbeddingUsage
```python
class EmbeddingUsage(ModelUsage):
"""
Model class for embedding usage.
"""
tokens: int # Number of tokens used
total_tokens: int # Total number of tokens used
unit_price: Decimal # Unit price
price_unit: Decimal # Price unit, i.e., how many tokens the unit price is based on
total_price: Decimal # Total cost
currency: str # Currency unit
latency: float # Request latency (s)
```
______________________________________________________________________
### RerankResult
```python
class RerankResult(BaseModel):
"""
Model class for rerank result.
"""
model: str # Actual model used
docs: list[RerankDocument] # Reranked segment list
```
### RerankDocument
```python
class RerankDocument(BaseModel):
"""
Model class for rerank document.
"""
index: int # original index
text: str # segment text content
score: float # score
```


@@ -1,172 +0,0 @@
## Predefined Model Integration
After the provider integration is complete, the next step is to integrate the models under the provider.
First, we need to determine the type of the model to be integrated and create the corresponding model type `module` under the provider's directory.
Currently supported model types are:
- `llm` Text generation model
- `text_embedding` Text Embedding model
- `rerank` Rerank model
- `speech2text` Speech to text
- `tts` Text to speech
- `moderation` Moderation
Continuing with `Anthropic` as an example, since `Anthropic` only supports LLM, we create a `module` named `llm` under `model_providers.anthropic`.
For predefined models, we first need to create a YAML file named after the model, such as `claude-2.1.yaml`, under the `llm` `module`.
### Preparing the Model YAML
```yaml
model: claude-2.1 # Model identifier
# Model display name, can be set in en_US English and zh_Hans Chinese; zh_Hans will default to en_US if not set.
# The label can also be omitted, in which case the model identifier is used.
label:
en_US: claude-2.1
model_type: llm # Model type; claude-2.1 is an LLM
features: # Supported features; agent-thought for Agent reasoning, vision for image understanding
- agent-thought
model_properties: # Model properties
mode: chat # LLM mode, complete for text completion models, chat for conversation models
context_size: 200000 # Maximum supported context size
parameter_rules: # Parameter rules for model invocation; only LLM requires this
- name: temperature # Invocation parameter variable name
# Five default configuration templates are preset: temperature/top_p/max_tokens/presence_penalty/frequency_penalty
# The template variable name can be set directly in use_template, which will use the default configuration in entities.defaults.PARAMETER_RULE_TEMPLATE
# Additional configuration parameters, if set, will override the default configuration
use_template: temperature
- name: top_p
use_template: top_p
- name: top_k
label: # Invocation parameter display name
zh_Hans: 取样数量
en_US: Top k
type: int # Parameter type, supports float/int/string/boolean
help: # Help information, describing the parameter's purpose
zh_Hans: 仅从每个后续标记的前 K 个选项中采样。
en_US: Only sample from the top K options for each subsequent token.
required: false # Whether required; can be omitted
- name: max_tokens_to_sample
use_template: max_tokens
default: 4096 # Default parameter value
min: 1 # Minimum parameter value, only applicable to float/int
max: 4096 # Maximum parameter value, only applicable to float/int
pricing: # Pricing information
input: '8.00' # Input unit price, i.e., prompt price
output: '24.00' # Output unit price, i.e., response content price
unit: '0.000001' # Price unit, i.e., the above prices are per 1M tokens
currency: USD # Currency
```
It is recommended to prepare all model configurations before starting the implementation of the model code.
Likewise, you can refer to the YAML configuration information under the corresponding model type directories of other providers in the `model_providers` directory. For the complete YAML rules, see: [Schema](schema.md#aimodelentity).
### Implementing the Model Invocation Code
Next, create a Python file named `llm.py` under the `llm` `module` to write the implementation code.
In `llm.py`, create an Anthropic LLM class, which we name `AnthropicLargeLanguageModel` (the name is arbitrary), inheriting from the `__base.large_language_model.LargeLanguageModel` base class, and implement the following methods:
- LLM Invocation
Implement the core method for LLM invocation, supporting both streaming and synchronous returns.
```python
def _invoke(self, model: str, credentials: dict,
prompt_messages: list[PromptMessage], model_parameters: dict,
tools: Optional[list[PromptMessageTool]] = None, stop: Optional[list[str]] = None,
stream: bool = True, user: Optional[str] = None) \
-> Union[LLMResult, Generator]:
"""
Invoke large language model
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param model_parameters: model parameters
:param tools: tools for tool calling
:param stop: stop words
:param stream: is stream response
:param user: unique user id
:return: full response or stream response chunk generator result
"""
```
Be sure to use two separate functions for returning data: one for synchronous returns and one for streaming returns. Because Python treats any function containing the `yield` keyword as a generator function, fixing its return type to `Generator`, synchronous and streaming returns must be implemented separately, as shown below (the example uses simplified parameters; an actual implementation should follow the parameter list above):
```python
def _invoke(self, stream: bool, **kwargs) \
-> Union[LLMResult, Generator]:
if stream:
return self._handle_stream_response(**kwargs)
return self._handle_sync_response(**kwargs)
def _handle_stream_response(self, **kwargs) -> Generator:
for chunk in response:
yield chunk
def _handle_sync_response(self, **kwargs) -> LLMResult:
return LLMResult(**response)
```
- Pre-calculating Input Tokens
If the model does not provide a pre-calculated tokens interface, you can simply return 0.
```python
def get_num_tokens(self, model: str, credentials: dict, prompt_messages: list[PromptMessage],
tools: Optional[list[PromptMessageTool]] = None) -> int:
"""
Get number of tokens for given prompt messages
:param model: model name
:param credentials: model credentials
:param prompt_messages: prompt messages
:param tools: tools for tool calling
:return:
"""
```
- Model Credential Validation
Similar to provider credential validation, this validates an individual model.
```python
def validate_credentials(self, model: str, credentials: dict) -> None:
"""
Validate model credentials
:param model: model name
:param credentials: model credentials
:return:
"""
```
- Invocation Error Mapping
When a model invocation raises an exception, it needs to be mapped to the `InvokeError` type specified by the Runtime, so that Dify can handle different errors with appropriate follow-up actions.
Runtime Errors:
- `InvokeConnectionError` Invocation connection error
- `InvokeServerUnavailableError` Invocation service provider unavailable
- `InvokeRateLimitError` Invocation reached rate limit
- `InvokeAuthorizationError` Invocation authorization failure
- `InvokeBadRequestError` Invocation parameter error
```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
    """
    Map model invoke error to unified error
    The key is the error type thrown to the caller
    The value is the error type thrown by the model,
    which needs to be converted into a unified error type for the caller.

    :return: Invoke error mapping
    """
```
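A sketch of what this mapping could look like when using the official `anthropic` SDK; the exact exception classes are an assumption about the SDK version in use:

```python
@property
def _invoke_error_mapping(self) -> dict[type[InvokeError], list[type[Exception]]]:
    import anthropic

    return {
        InvokeConnectionError: [anthropic.APIConnectionError, anthropic.APITimeoutError],
        InvokeServerUnavailableError: [anthropic.InternalServerError],
        InvokeRateLimitError: [anthropic.RateLimitError],
        InvokeAuthorizationError: [anthropic.AuthenticationError, anthropic.PermissionDeniedError],
        InvokeBadRequestError: [anthropic.BadRequestError],
    }
```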
The interface methods are documented in [Interfaces](./interfaces.md); for a complete implementation, see [llm.py](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/llm/llm.py).


@ -1,192 +0,0 @@
## Adding a new provider

Providers support three model configuration methods:

- `predefined-model` Predefined models

  The user only needs to configure unified provider credentials to use the provider's predefined models.

- `customizable-model` Customizable models

  The user must add a credential configuration for every model. Xinference, for example, supports both LLM and Text Embedding, but each model has a unique **model_uid**, so connecting both requires configuring a **model_uid** per model.

- `fetch-from-remote` Fetch from remote

  Configured the same way as `predefined-model`: only unified provider credentials are needed, and the models are fetched from the provider using those credentials.

  With OpenAI, for instance, multiple models can be fine-tuned on top of gpt-3.5-turbo, all living under the same **api_key**. When configured as `fetch-from-remote`, the developer only needs to supply the unified **api_key** for DifyRuntime to fetch all of the developer's fine-tuned models and connect them to Dify.

These three methods **can coexist**: a provider may support `predefined-model` + `customizable-model`, `predefined-model` + `fetch-from-remote`, and so on. In other words, configuring the unified provider credentials makes the predefined and remotely fetched models usable, and newly added custom models can be used on top of that.
## Getting started

### Introduction

#### Terminology

- `module`: a `module` is a Python Package, or colloquially a folder, containing an `__init__.py` file along with other `.py` files.

#### Steps

Adding a new provider takes a few steps, listed briefly here for an overview; each one is covered in detail below.

- Create the provider YAML file, following the [ProviderSchema](./schema.md#provider)
- Create the provider code and implement a `class`
- For each supported model type, create the corresponding model-type `module` under the provider `module`, e.g. `llm` or `text_embedding`
- Under each model `module`, create a code file with the same name, e.g. `llm.py`, and implement a `class`
- If there are predefined models, create a YAML file named after each model under the model `module`, e.g. `claude-2.1.yaml`, following the [AIModelEntity](./schema.md#aimodelentity)
- Write test code to make sure everything works.
### Let's begin

To add a new provider, first decide on its English identifier, e.g. `anthropic`, and create a `module` with that name under `model_providers`.
Under this `module`, we first need to prepare the provider's YAML configuration.

#### Prepare the provider YAML

Taking `Anthropic` as the example, we predefine the provider's basic information, supported model types, configuration method, and credential rules.
```YAML
provider: anthropic # Provider identifier
label: # Provider display name; both en_US (English) and zh_Hans (Chinese) can be set; if zh_Hans is not set, en_US is used by default
  en_US: Anthropic
icon_small: # Small provider icon, stored in the _assets directory under the provider's implementation directory; same language strategy as label
  en_US: icon_s_en.png
icon_large: # Large provider icon, stored in the _assets directory under the provider's implementation directory; same language strategy as label
  en_US: icon_l_en.png
supported_model_types: # Supported model types; Anthropic only supports LLM
- llm
configurate_methods: # Supported configuration methods; Anthropic only supports predefined models
- predefined-model
provider_credential_schema: # Provider credential rules; since Anthropic only supports predefined models, unified provider credential rules must be defined
  credential_form_schemas: # List of credential form items
  - variable: anthropic_api_key # Credential parameter variable name
    label: # Display name
      en_US: API Key
    type: secret-input # Form type; secret-input is an encrypted input whose value is masked when edited
    required: true # Whether required
    placeholder: # Placeholder text
      zh_Hans: 在此输入您的 API Key
      en_US: Enter your API Key
  - variable: anthropic_api_url
    label:
      en_US: API URL
    type: text-input # Form type; text-input is a plain text input
    required: false
    placeholder:
      zh_Hans: 在此输入您的 API URL
      en_US: Enter your API URL
```
If the provider being integrated offers customizable models, as `OpenAI` does with fine-tuned models, we also need to add [`model_credential_schema`](./schema.md#modelcredentialschema). Using `OpenAI` as an example:
```yaml
model_credential_schema:
  model: # Fine-tuned model name
    label:
      en_US: Model Name
      zh_Hans: 模型名称
    placeholder:
      en_US: Enter your model name
      zh_Hans: 输入模型名称
  credential_form_schemas:
  - variable: openai_api_key
    label:
      en_US: API Key
    type: secret-input
    required: true
    placeholder:
      zh_Hans: 在此输入您的 API Key
      en_US: Enter your API Key
  - variable: openai_organization
    label:
      zh_Hans: 组织 ID
      en_US: Organization
    type: text-input
    required: false
    placeholder:
      zh_Hans: 在此输入您的组织 ID
      en_US: Enter your Organization ID
  - variable: openai_api_base
    label:
      zh_Hans: API Base
      en_US: API Base
    type: text-input
    required: false
    placeholder:
      zh_Hans: 在此输入您的 API Base
      en_US: Enter your API Base
```
You can also refer to the YAML configurations in other provider directories under `model_providers`. The complete YAML rules are documented in [Schema](schema.md#provider).

#### Implement the provider code

Create a Python file with the same name under `model_providers`, e.g. `anthropic.py`, and implement a `class` that inherits from the `__base.provider.Provider` base class, e.g. `AnthropicProvider`.

##### Customizable-model providers

For a customizable-model provider such as Xinference, this step can be skipped: just create an empty `XinferenceProvider` class with an empty `validate_provider_credentials` method. The method is never actually called; it exists only so the abstract class can be instantiated.
```python
class XinferenceProvider(Provider):
    def validate_provider_credentials(self, credentials: dict) -> None:
        pass
```
##### Predefined-model providers

The provider needs to inherit from the `__base.model_provider.ModelProvider` base class and implement the unified provider credential validation method `validate_provider_credentials`; see [AnthropicProvider](https://github.com/langgenius/dify-runtime/blob/main/lib/model_providers/anthropic/anthropic.py) for reference.
```python
def validate_provider_credentials(self, credentials: dict) -> None:
    """
    Validate provider credentials
    You can choose any validate_credentials method of model type or implement validate method by yourself,
    such as: get model list api

    if validate failed, raise exception

    :param credentials: provider credentials, credentials form defined in `provider_credential_schema`.
    """
```
Of course, you can also stub out `validate_provider_credentials` first and reuse the model credential validation method once it is implemented.
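A sketch of that reuse pattern, assuming the model-level `validate_credentials` described in the predefined-model guide plus the runtime's `ModelType` and `CredentialsValidateFailedError` helpers:

```python
def validate_provider_credentials(self, credentials: dict) -> None:
    try:
        # Delegate to the LLM model's credential validation with a known model name.
        model_instance = self.get_model_instance(ModelType.LLM)
        model_instance.validate_credentials(model="claude-2.1", credentials=credentials)
    except CredentialsValidateFailedError as ex:
        raise ex
```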
#### Adding models

#### [Add a predefined model 👈🏻](./predefined_model_scale_out.md)

For predefined models, the integration only requires defining a YAML and implementing the invocation code.

#### [Add a customizable model 👈🏻](./customizable_model_scale_out.md)

For customizable models, only the invocation code needs to be implemented, though the parameters it must handle can be more complex.
______________________________________________________________________
### Testing

To guarantee that the integrated provider/model works, each method you write needs corresponding integration tests in the `tests` directory.
Again taking `Anthropic` as the example:
Before writing the tests, add the credential environment variables the provider's tests require to `.env.example`, e.g. `ANTHROPIC_API_KEY`.
Before running them, copy `.env.example` to `.env`.

#### Write the test code

Under the `tests` directory, create a `module` named after the provider, `anthropic`, and inside it create `test_provider.py` plus a test file per model type, as shown below:
```shell
.
├── __init__.py
├── anthropic
│   ├── __init__.py
│   ├── test_llm.py # LLM tests
│   └── test_provider.py # provider tests
```
Write tests covering the various cases of the code implemented above, make sure they all pass, and then submit the code.
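A minimal sketch of what `test_provider.py` could look like, assuming the class from this guide (the import paths are illustrative, not canonical):

```python
import os

import pytest

from model_providers.anthropic.anthropic import AnthropicProvider
from model_providers.errors.validate import CredentialsValidateFailedError


def test_validate_provider_credentials():
    provider = AnthropicProvider()

    # Invalid credentials must raise the unified validation error.
    with pytest.raises(CredentialsValidateFailedError):
        provider.validate_provider_credentials(credentials={"anthropic_api_key": "invalid"})

    # Valid credentials from the environment must pass.
    provider.validate_provider_credentials(
        credentials={"anthropic_api_key": os.environ.get("ANTHROPIC_API_KEY")}
    )
```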


@ -1,209 +0,0 @@
# Configuration rules

- Provider rules are based on the [Provider](#Provider) entity.
- Model rules are based on the [AIModelEntity](#AIModelEntity) entity.

> All entities below are based on `Pydantic BaseModel` and can be found in the `entities` module.

### Provider

- `provider` (string) Provider identifier, e.g. `openai`
- `label` (object) Provider display name (i18n); `en_US` English and `zh_Hans` Chinese can be set
  - `zh_Hans` (string) [optional] Chinese label; falls back to `en_US` if not set
  - `en_US` (string) English label
- `description` (object) [optional] Provider description (i18n)
  - `zh_Hans` (string) [optional] Chinese description
  - `en_US` (string) English description
- `icon_small` (string) [optional] Small provider icon, stored in the `_assets` directory under the provider's implementation directory; same language strategy as `label`
  - `zh_Hans` (string) [optional] Chinese icon
  - `en_US` (string) English icon
- `icon_large` (string) [optional] Large provider icon, stored in the `_assets` directory under the provider's implementation directory; same language strategy as `label`
  - `zh_Hans` (string) [optional] Chinese icon
  - `en_US` (string) English icon
- `background` (string) [optional] Background color value, e.g. #FFFFFF; the frontend default color is used when empty
- `help` (object) [optional] Help information
  - `title` (object) Help title (i18n)
    - `zh_Hans` (string) [optional] Chinese title
    - `en_US` (string) English title
  - `url` (object) Help link (i18n)
    - `zh_Hans` (string) [optional] Chinese link
    - `en_US` (string) English link
- `supported_model_types` (array\[[ModelType](#ModelType)\]) Supported model types
- `configurate_methods` (array\[[ConfigurateMethod](#ConfigurateMethod)\]) Configuration methods
- `provider_credential_schema` ([ProviderCredentialSchema](#ProviderCredentialSchema)) Provider credential spec
- `model_credential_schema` ([ModelCredentialSchema](#ModelCredentialSchema)) Model credential spec
### AIModelEntity

- `model` (string) Model identifier, e.g. `gpt-3.5-turbo`
- `label` (object) [optional] Model display name (i18n); `en_US` English and `zh_Hans` Chinese can be set
  - `zh_Hans` (string) [optional] Chinese label
  - `en_US` (string) English label
- `model_type` ([ModelType](#ModelType)) Model type
- `features` (array\[[ModelFeature](#ModelFeature)\]) [optional] List of supported features
- `model_properties` (object) Model properties
  - `mode` ([LLMMode](#LLMMode)) Mode (available for model type `llm`)
  - `context_size` (int) Context size (available for model types `llm` and `text-embedding`)
  - `max_chunks` (int) Maximum number of chunks (available for model types `text-embedding` and `moderation`)
  - `file_upload_limit` (int) Maximum file upload limit, in MB (available for model type `speech2text`)
  - `supported_file_extensions` (string) Supported file extensions, e.g. mp3,mp4 (available for model type `speech2text`)
  - `default_voice` (string) Default voice, one of: alloy,echo,fable,onyx,nova,shimmer (available for model type `tts`)
  - `voices` (list) List of available voices
    - `mode` (string) Voice model (available for model type `tts`)
    - `name` (string) Voice model display name (available for model type `tts`)
    - `language` (string) Languages supported by the voice model (available for model type `tts`)
  - `word_limit` (int) Word limit per conversion; defaults to paragraph-based segmentation (available for model type `tts`)
  - `audio_type` (string) Supported audio file extensions, e.g. mp3,wav (available for model type `tts`)
  - `max_workers` (int) Number of concurrent text-to-audio conversion tasks (available for model type `tts`)
  - `max_characters_per_chunk` (int) Maximum characters per chunk (available for model type `moderation`)
- `parameter_rules` (array\[[ParameterRule](#ParameterRule)\]) [optional] Rules for model invocation parameters
- `pricing` ([PriceConfig](#PriceConfig)) [optional] Pricing information
- `deprecated` (bool) Whether deprecated. A deprecated model is no longer shown in the model list, but already-configured ones keep working. Defaults to False.
### ModelType

- `llm` Text generation model
- `text-embedding` Text embedding model
- `rerank` Rerank model
- `speech2text` Speech-to-text
- `tts` Text-to-speech
- `moderation` Moderation

### ConfigurateMethod

- `predefined-model` Predefined models

  The user only needs to configure unified provider credentials to use the provider's predefined models.

- `customizable-model` Customizable models

  The user must add a credential configuration for each model.

- `fetch-from-remote` Fetch from remote

  Configured the same way as `predefined-model`: only unified provider credentials are needed, and the models are fetched from the provider using those credentials.

### ModelFeature

- `agent-thought` Agent reasoning; models above roughly 70B generally have chain-of-thought capability.
- `vision` Vision, i.e. image understanding.
- `tool-call` Tool calling
- `multi-tool-call` Multiple tool calls
- `stream-tool-call` Streaming tool calls

### FetchFrom

- `predefined-model` Predefined model
- `fetch-from-remote` Remote model

### LLMMode

- `completion` Text completion
- `chat` Chat
### ParameterRule

- `name` (string) Actual parameter name used when invoking the model
- `use_template` (string) [optional] Template to use

  Five templated parameter configurations are predefined:

  - `temperature`
  - `top_p`
  - `frequency_penalty`
  - `presence_penalty`
  - `max_tokens`

  Set the template name in `use_template` to inherit the defaults from `entities.defaults.PARAMETER_RULE_TEMPLATE`;
  no parameters other than `name` and `use_template` need to be set, and any additional settings override the template defaults, as in the sketch after this list.
  See `openai/llm/gpt-3.5-turbo.yaml` for reference.
- `label` (object) [optional] Label (i18n)
  - `zh_Hans` (string) [optional] Chinese label
  - `en_US` (string) English label
- `type` (string) [optional] Parameter type
  - `int` integer
  - `float` floating point
  - `string` string
  - `boolean` boolean
- `help` (string) [optional] Help text
  - `zh_Hans` (string) [optional] Chinese help text
  - `en_US` (string) English help text
- `required` (bool) Whether required; defaults to False.
- `default` (int/float/string/bool) [optional] Default value
- `min` (int/float) [optional] Minimum value; numeric types only
- `max` (int/float) [optional] Maximum value; numeric types only
- `precision` (int) [optional] Precision, i.e. number of decimal places to keep; numeric types only
- `options` (array[string]) [optional] Dropdown option values; only applies when `type` is `string`; unset or null means options are unrestricted
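For example, a rule that inherits the `max_tokens` template and overrides only its bounds could look like this (a sketch consistent with the `claude-2.1` example in the scale-out guide):

```yaml
- name: max_tokens_to_sample
  use_template: max_tokens # label/type/help come from the template
  default: 1024            # overrides the template default
  min: 1
  max: 4096
```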
### PriceConfig

- `input` (float) Input (prompt) unit price
- `output` (float) Output (completion) unit price
- `unit` (float) Price unit; e.g. if pricing is per 1M tokens, the per-token unit is `0.000001`
- `currency` (string) Currency
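For example, with `input: '8.00'`, `unit: '0.000001'`, and `currency: USD`, a 1,000-token prompt costs 1000 × 0.000001 × 8.00 = $0.008.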
### ProviderCredentialSchema

- `credential_form_schemas` (array\[[CredentialFormSchema](#CredentialFormSchema)\]) Credential form spec

### ModelCredentialSchema

- `model` (object) Model identifier; the variable name defaults to `model`
  - `label` (object) Display name of the model form item
    - `en_US` (string) English
    - `zh_Hans` (string) [optional] Chinese
  - `placeholder` (object) Placeholder text for the model field
    - `en_US` (string) English
    - `zh_Hans` (string) [optional] Chinese
- `credential_form_schemas` (array\[[CredentialFormSchema](#CredentialFormSchema)\]) Credential form spec

### CredentialFormSchema

- `variable` (string) Form item variable name
- `label` (object) Form item label
  - `en_US` (string) English
  - `zh_Hans` (string) [optional] Chinese
- `type` ([FormType](#FormType)) Form item type
- `required` (bool) Whether required
- `default` (string) Default value
- `options` (array\[[FormOption](#FormOption)\]) Property specific to `select`/`radio` form items; defines the dropdown contents
- `placeholder` (object) Property specific to `text-input` form items; the form item's placeholder
  - `en_US` (string) English
  - `zh_Hans` (string) [optional] Chinese
- `max_length` (int) Property specific to `text-input` form items; maximum input length, 0 means unlimited.
- `show_on` (array\[[FormShowOnObject](#FormShowOnObject)\]) Show this item only when the other form items' values match the conditions; always shown when empty.
### FormType

- `text-input` Text input component
- `secret-input` Password input component
- `select` Single-select dropdown
- `radio` Radio component
- `switch` Switch component; only supports `true` and `false`

### FormOption

- `label` (object) Label
  - `en_US` (string) English
  - `zh_Hans` (string) [optional] Chinese
- `value` (string) Dropdown option value
- `show_on` (array\[[FormShowOnObject](#FormShowOnObject)\]) Show this option only when the other form items' values match the conditions; always shown when empty.

### FormShowOnObject

- `variable` (string) Variable name of the other form item
- `value` (string) Value of the other form item
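As an illustration of `show_on`, a hypothetical credential form (not from any shipped provider) that reveals an API Key field only while the key-based auth method is selected:

```yaml
credential_form_schemas:
- variable: auth_method
  label:
    en_US: Auth Method
  type: select
  required: true
  options:
  - label:
      en_US: API Key
    value: api_key
  - label:
      en_US: OAuth
    value: oauth
- variable: api_key
  label:
    en_US: API Key
  type: secret-input
  required: true
  show_on: # shown only while auth_method is api_key
  - variable: auth_method
    value: api_key
```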


@ -99,6 +99,7 @@ class SimpleProviderEntity(BaseModel):
provider: str
label: I18nObject
icon_small: I18nObject | None = None
icon_small_dark: I18nObject | None = None
icon_large: I18nObject | None = None
supported_model_types: Sequence[ModelType]
models: list[AIModelEntity] = []
@ -124,7 +125,6 @@ class ProviderEntity(BaseModel):
icon_small: I18nObject | None = None
icon_large: I18nObject | None = None
icon_small_dark: I18nObject | None = None
icon_large_dark: I18nObject | None = None
background: str | None = None
help: ProviderHelpEntity | None = None
supported_model_types: Sequence[ModelType]


@ -300,6 +300,14 @@ class ModelProviderFactory:
file_name = provider_schema.icon_small.zh_Hans
else:
file_name = provider_schema.icon_small.en_US
elif icon_type.lower() == "icon_small_dark":
if not provider_schema.icon_small_dark:
raise ValueError(f"Provider {provider} does not have small dark icon.")
if lang.lower() == "zh_hans":
file_name = provider_schema.icon_small_dark.zh_Hans
else:
file_name = provider_schema.icon_small_dark.en_US
else:
if not provider_schema.icon_large:
raise ValueError(f"Provider {provider} does not have large icon.")


@ -222,6 +222,59 @@ class TencentSpanBuilder:
links=links,
)
@staticmethod
def build_message_llm_span(
trace_info: MessageTraceInfo, trace_id: int, parent_span_id: int, user_id: str
) -> SpanData:
"""Build LLM span for message traces with detailed LLM attributes."""
status = Status(StatusCode.OK)
if trace_info.error:
status = Status(StatusCode.ERROR, trace_info.error)
# Extract model information from `metadata` or `message_data`
trace_metadata = trace_info.metadata or {}
message_data = trace_info.message_data or {}
model_provider = trace_metadata.get("ls_provider") or (
message_data.get("model_provider", "") if isinstance(message_data, dict) else ""
)
model_name = trace_metadata.get("ls_model_name") or (
message_data.get("model_id", "") if isinstance(message_data, dict) else ""
)
inputs_str = str(trace_info.inputs or "")
outputs_str = str(trace_info.outputs or "")
attributes = {
GEN_AI_SESSION_ID: trace_metadata.get("conversation_id", ""),
GEN_AI_USER_ID: str(user_id),
GEN_AI_SPAN_KIND: GenAISpanKind.GENERATION.value,
GEN_AI_FRAMEWORK: "dify",
GEN_AI_MODEL_NAME: str(model_name),
GEN_AI_PROVIDER: str(model_provider),
GEN_AI_USAGE_INPUT_TOKENS: str(trace_info.message_tokens or 0),
GEN_AI_USAGE_OUTPUT_TOKENS: str(trace_info.answer_tokens or 0),
GEN_AI_USAGE_TOTAL_TOKENS: str(trace_info.total_tokens or 0),
GEN_AI_PROMPT: inputs_str,
GEN_AI_COMPLETION: outputs_str,
INPUT_VALUE: inputs_str,
OUTPUT_VALUE: outputs_str,
}
if trace_info.is_streaming_request:
attributes[GEN_AI_IS_STREAMING_REQUEST] = "true"
return SpanData(
trace_id=trace_id,
parent_span_id=parent_span_id,
span_id=TencentTraceUtils.convert_to_span_id(trace_info.message_id, "llm"),
name="GENERATION",
start_time=TencentSpanBuilder._get_time_nanoseconds(trace_info.start_time),
end_time=TencentSpanBuilder._get_time_nanoseconds(trace_info.end_time),
attributes=attributes,
status=status,
)
@staticmethod
def build_tool_span(trace_info: ToolTraceInfo, trace_id: int, parent_span_id: int) -> SpanData:
"""Build tool span."""


@ -107,9 +107,13 @@ class TencentDataTrace(BaseTraceInstance):
links.append(TencentTraceUtils.create_link(trace_info.trace_id))
message_span = TencentSpanBuilder.build_message_span(trace_info, trace_id, str(user_id), links)
self.trace_client.add_span(message_span)
# Add LLM child span with detailed attributes
parent_span_id = TencentTraceUtils.convert_to_span_id(trace_info.message_id, "message")
llm_span = TencentSpanBuilder.build_message_llm_span(trace_info, trace_id, parent_span_id, str(user_id))
self.trace_client.add_span(llm_span)
self._record_message_llm_metrics(trace_info)
# Record trace duration for entry span


@ -1,20 +1,110 @@
import re
from operator import itemgetter
from typing import cast
class JiebaKeywordTableHandler:
def __init__(self):
from core.rag.datasource.keyword.jieba.stopwords import STOPWORDS
tfidf = self._load_tfidf_extractor()
tfidf.stop_words = STOPWORDS # type: ignore[attr-defined]
self._tfidf = tfidf
def _load_tfidf_extractor(self):
"""
Load jieba TFIDF extractor with fallback strategy.
Loading Flow:
┌─────────────────────────────────────────────────────────────────────┐
│ jieba.analyse.default_tfidf │
│ exists? │
└─────────────────────────────────────────────────────────────────────┘
│ │
YES NO
│ │
▼ ▼
┌──────────────────┐ ┌──────────────────────────────────┐
│ Return default │ │ jieba.analyse.TFIDF exists? │
│ TFIDF │ └──────────────────────────────────┘
└──────────────────┘ │ │
YES NO
│ │
│ ▼
│ ┌────────────────────────────┐
│ │ Try import from │
│ │ jieba.analyse.tfidf.TFIDF │
│ └────────────────────────────┘
│ │ │
│ SUCCESS FAILED
│ │ │
▼ ▼ ▼
┌────────────────────────┐ ┌─────────────────┐
│ Instantiate TFIDF() │ │ Build fallback │
│ & cache to default │ │ _SimpleTFIDF │
└────────────────────────┘ └─────────────────┘
"""
import jieba.analyse # type: ignore
tfidf = getattr(jieba.analyse, "default_tfidf", None)
if tfidf is not None:
return tfidf
tfidf_class = getattr(jieba.analyse, "TFIDF", None)
if tfidf_class is None:
try:
from jieba.analyse.tfidf import TFIDF # type: ignore
tfidf_class = TFIDF
except Exception:
tfidf_class = None
if tfidf_class is not None:
tfidf = tfidf_class()
jieba.analyse.default_tfidf = tfidf # type: ignore[attr-defined]
return tfidf
return self._build_fallback_tfidf()
@staticmethod
def _build_fallback_tfidf():
"""Fallback lightweight TFIDF for environments missing jieba's TFIDF."""
import jieba # type: ignore
from core.rag.datasource.keyword.jieba.stopwords import STOPWORDS
jieba.analyse.default_tfidf.stop_words = STOPWORDS # type: ignore
class _SimpleTFIDF:
def __init__(self):
self.stop_words = STOPWORDS
self._lcut = getattr(jieba, "lcut", None)
def extract_tags(self, sentence: str, top_k: int | None = 20, **kwargs):
# Basic frequency-based keyword extraction as a fallback when TF-IDF is unavailable.
top_k = kwargs.pop("topK", top_k)
cut = getattr(jieba, "cut", None)
if self._lcut:
tokens = self._lcut(sentence)
elif callable(cut):
tokens = list(cut(sentence))
else:
tokens = re.findall(r"\w+", sentence)
words = [w for w in tokens if w and w not in self.stop_words]
freq: dict[str, int] = {}
for w in words:
freq[w] = freq.get(w, 0) + 1
sorted_words = sorted(freq.items(), key=itemgetter(1), reverse=True)
if top_k is not None:
sorted_words = sorted_words[:top_k]
return [item[0] for item in sorted_words]
return _SimpleTFIDF()
def extract_keywords(self, text: str, max_keywords_per_chunk: int | None = 10) -> set[str]:
"""Extract keywords with JIEBA tfidf."""
import jieba.analyse # type: ignore
keywords = jieba.analyse.extract_tags(
keywords = self._tfidf.extract_tags(
sentence=text,
topK=max_keywords_per_chunk,
)


@ -58,11 +58,39 @@ class OceanBaseVector(BaseVector):
password=self._config.password,
db_name=self._config.database,
)
self._fields: list[str] = [] # List of fields in the collection
if self._client.check_table_exists(collection_name):
self._load_collection_fields()
self._hybrid_search_enabled = self._check_hybrid_search_support() # Check if hybrid search is supported
def get_type(self) -> str:
return VectorType.OCEANBASE
def _load_collection_fields(self):
"""
Load collection fields from the database table.
This method populates the _fields list with column names from the table.
"""
try:
if self._collection_name in self._client.metadata_obj.tables:
table = self._client.metadata_obj.tables[self._collection_name]
# Store all column names except 'id' (primary key)
self._fields = [column.name for column in table.columns if column.name != "id"]
logger.debug("Loaded fields for collection '%s': %s", self._collection_name, self._fields)
else:
logger.warning("Collection '%s' not found in metadata", self._collection_name)
except Exception as e:
logger.warning("Failed to load collection fields for '%s': %s", self._collection_name, str(e))
def field_exists(self, field: str) -> bool:
"""
Check if a field exists in the collection.
:param field: Field name to check
:return: True if field exists, False otherwise
"""
return field in self._fields
def create(self, texts: list[Document], embeddings: list[list[float]], **kwargs):
self._vec_dim = len(embeddings[0])
self._create_collection()
@ -151,6 +179,7 @@ class OceanBaseVector(BaseVector):
logger.debug("DEBUG: Hybrid search is NOT enabled for '%s'", self._collection_name)
self._client.refresh_metadata([self._collection_name])
self._load_collection_fields()
redis_client.set(collection_exist_cache_key, 1, ex=3600)
def _check_hybrid_search_support(self) -> bool:
@ -177,42 +206,134 @@ class OceanBaseVector(BaseVector):
def add_texts(self, documents: list[Document], embeddings: list[list[float]], **kwargs):
ids = self._get_uuids(documents)
for id, doc, emb in zip(ids, documents, embeddings):
self._client.insert(
table_name=self._collection_name,
data={
"id": id,
"vector": emb,
"text": doc.page_content,
"metadata": doc.metadata,
},
)
try:
self._client.insert(
table_name=self._collection_name,
data={
"id": id,
"vector": emb,
"text": doc.page_content,
"metadata": doc.metadata,
},
)
except Exception as e:
logger.exception(
"Failed to insert document with id '%s' in collection '%s'",
id,
self._collection_name,
)
raise Exception(f"Failed to insert document with id '{id}'") from e
def text_exists(self, id: str) -> bool:
cur = self._client.get(table_name=self._collection_name, ids=id)
return bool(cur.rowcount != 0)
try:
cur = self._client.get(table_name=self._collection_name, ids=id)
return bool(cur.rowcount != 0)
except Exception as e:
logger.exception(
"Failed to check if text exists with id '%s' in collection '%s'",
id,
self._collection_name,
)
raise Exception(f"Failed to check text existence for id '{id}'") from e
def delete_by_ids(self, ids: list[str]):
if not ids:
return
self._client.delete(table_name=self._collection_name, ids=ids)
try:
self._client.delete(table_name=self._collection_name, ids=ids)
logger.debug("Deleted %d documents from collection '%s'", len(ids), self._collection_name)
except Exception as e:
logger.exception(
"Failed to delete %d documents from collection '%s'",
len(ids),
self._collection_name,
)
raise Exception(f"Failed to delete documents from collection '{self._collection_name}'") from e
def get_ids_by_metadata_field(self, key: str, value: str) -> list[str]:
from sqlalchemy import text
try:
import re
cur = self._client.get(
table_name=self._collection_name,
ids=None,
where_clause=[text(f"metadata->>'$.{key}' = '{value}'")],
output_column_name=["id"],
)
return [row[0] for row in cur]
from sqlalchemy import text
# Validate key to prevent injection in JSON path
if not re.match(r"^[a-zA-Z0-9_.]+$", key):
raise ValueError(f"Invalid characters in metadata key: {key}")
# Use parameterized query to prevent SQL injection
sql = text(f"SELECT id FROM `{self._collection_name}` WHERE metadata->>'$.{key}' = :value")
with self._client.engine.connect() as conn:
result = conn.execute(sql, {"value": value})
ids = [row[0] for row in result]
logger.debug(
"Found %d documents with metadata field '%s'='%s' in collection '%s'",
len(ids),
key,
value,
self._collection_name,
)
return ids
except Exception as e:
logger.exception(
"Failed to get IDs by metadata field '%s'='%s' in collection '%s'",
key,
value,
self._collection_name,
)
raise Exception(f"Failed to query documents by metadata field '{key}'") from e
def delete_by_metadata_field(self, key: str, value: str):
ids = self.get_ids_by_metadata_field(key, value)
self.delete_by_ids(ids)
if ids:
self.delete_by_ids(ids)
else:
logger.debug("No documents found to delete with metadata field '%s'='%s'", key, value)
def _process_search_results(
self, results: list[tuple], score_threshold: float = 0.0, score_key: str = "score"
) -> list[Document]:
"""
Common method to process search results
:param results: Search results as list of tuples (text, metadata, score)
:param score_threshold: Score threshold for filtering
:param score_key: Key name for score in metadata
:return: List of documents
"""
docs = []
for row in results:
text, metadata_str, score = row[0], row[1], row[2]
# Parse metadata JSON
try:
metadata = json.loads(metadata_str) if isinstance(metadata_str, str) else metadata_str
except json.JSONDecodeError:
logger.warning("Invalid JSON metadata: %s", metadata_str)
metadata = {}
# Add score to metadata
metadata[score_key] = score
# Filter by score threshold
if score >= score_threshold:
docs.append(Document(page_content=text, metadata=metadata))
return docs
def search_by_full_text(self, query: str, **kwargs: Any) -> list[Document]:
if not self._hybrid_search_enabled:
logger.warning(
"Full-text search is disabled: set OCEANBASE_ENABLE_HYBRID_SEARCH=true (requires OceanBase >= 4.3.5.1)."
)
return []
if not self.field_exists("text"):
logger.warning(
"Full-text search unavailable: collection '%s' missing 'text' field; "
"recreate the collection after enabling OCEANBASE_ENABLE_HYBRID_SEARCH to add fulltext index.",
self._collection_name,
)
return []
try:
@ -220,13 +341,24 @@ class OceanBaseVector(BaseVector):
if not isinstance(top_k, int) or top_k <= 0:
raise ValueError("top_k must be a positive integer")
document_ids_filter = kwargs.get("document_ids_filter")
where_clause = ""
if document_ids_filter:
document_ids = ", ".join(f"'{id}'" for id in document_ids_filter)
where_clause = f" AND metadata->>'$.document_id' IN ({document_ids})"
score_threshold = float(kwargs.get("score_threshold") or 0.0)
full_sql = f"""SELECT metadata, text, MATCH (text) AGAINST (:query) AS score
# Build parameterized query to prevent SQL injection
from sqlalchemy import text
document_ids_filter = kwargs.get("document_ids_filter")
params = {"query": query}
where_clause = ""
if document_ids_filter:
# Create parameterized placeholders for document IDs
placeholders = ", ".join(f":doc_id_{i}" for i in range(len(document_ids_filter)))
where_clause = f" AND metadata->>'$.document_id' IN ({placeholders})"
# Add document IDs to parameters
for i, doc_id in enumerate(document_ids_filter):
params[f"doc_id_{i}"] = doc_id
full_sql = f"""SELECT text, metadata, MATCH (text) AGAINST (:query) AS score
FROM {self._collection_name}
WHERE MATCH (text) AGAINST (:query) > 0
{where_clause}
@ -235,41 +367,45 @@ class OceanBaseVector(BaseVector):
with self._client.engine.connect() as conn:
with conn.begin():
from sqlalchemy import text
result = conn.execute(text(full_sql), {"query": query})
result = conn.execute(text(full_sql), params)
rows = result.fetchall()
docs = []
for row in rows:
metadata_str, _text, score = row
try:
metadata = json.loads(metadata_str)
except json.JSONDecodeError:
logger.warning("Invalid JSON metadata: %s", metadata_str)
metadata = {}
metadata["score"] = score
docs.append(Document(page_content=_text, metadata=metadata))
return docs
return self._process_search_results(rows, score_threshold=score_threshold)
except Exception as e:
logger.warning("Failed to fulltext search: %s.", str(e))
return []
logger.exception(
"Failed to perform full-text search on collection '%s' with query '%s'",
self._collection_name,
query,
)
raise Exception(f"Full-text search failed for collection '{self._collection_name}'") from e
def search_by_vector(self, query_vector: list[float], **kwargs: Any) -> list[Document]:
from sqlalchemy import text
document_ids_filter = kwargs.get("document_ids_filter")
_where_clause = None
if document_ids_filter:
# Validate document IDs to prevent SQL injection
# Document IDs should be alphanumeric with hyphens and underscores
import re
for doc_id in document_ids_filter:
if not isinstance(doc_id, str) or not re.match(r"^[a-zA-Z0-9_-]+$", doc_id):
raise ValueError(f"Invalid document ID format: {doc_id}")
# Safe to use in query after validation
document_ids = ", ".join(f"'{id}'" for id in document_ids_filter)
where_clause = f"metadata->>'$.document_id' in ({document_ids})"
from sqlalchemy import text
_where_clause = [text(where_clause)]
ef_search = kwargs.get("ef_search", self._hnsw_ef_search)
if ef_search != self._hnsw_ef_search:
self._client.set_ob_hnsw_ef_search(ef_search)
self._hnsw_ef_search = ef_search
topk = kwargs.get("top_k", 10)
try:
score_threshold = float(val) if (val := kwargs.get("score_threshold")) is not None else 0.0
except (ValueError, TypeError) as e:
raise ValueError(f"Invalid score_threshold parameter: {e}") from e
try:
cur = self._client.ann_search(
table_name=self._collection_name,
@ -282,21 +418,27 @@ class OceanBaseVector(BaseVector):
where_clause=_where_clause,
)
except Exception as e:
raise Exception("Failed to search by vector. ", e)
docs = []
for _text, metadata, distance in cur:
metadata = json.loads(metadata)
metadata["score"] = 1 - distance / math.sqrt(2)
docs.append(
Document(
page_content=_text,
metadata=metadata,
)
logger.exception(
"Failed to perform vector search on collection '%s'",
self._collection_name,
)
return docs
raise Exception(f"Vector search failed for collection '{self._collection_name}'") from e
# Convert distance to score and prepare results for processing
results = []
for _text, metadata_str, distance in cur:
score = 1 - distance / math.sqrt(2)
results.append((_text, metadata_str, score))
return self._process_search_results(results, score_threshold=score_threshold)
def delete(self):
self._client.drop_table_if_exist(self._collection_name)
try:
self._client.drop_table_if_exist(self._collection_name)
logger.debug("Dropped collection '%s'", self._collection_name)
except Exception as e:
logger.exception("Failed to delete collection '%s'", self._collection_name)
raise Exception(f"Failed to delete collection '{self._collection_name}'") from e
class OceanBaseVectorFactory(AbstractVectorFactory):


@ -54,6 +54,8 @@ class ToolProviderApiEntity(BaseModel):
configuration: MCPConfiguration | None = Field(
default=None, description="The timeout and sse_read_timeout of the MCP tool"
)
# Workflow
workflow_app_id: str | None = Field(default=None, description="The app id of the workflow tool")
@field_validator("tools", mode="before")
@classmethod
@ -87,6 +89,8 @@ class ToolProviderApiEntity(BaseModel):
optional_fields.update(self.optional_field("is_dynamic_registration", self.is_dynamic_registration))
optional_fields.update(self.optional_field("masked_headers", self.masked_headers))
optional_fields.update(self.optional_field("original_headers", self.original_headers))
elif self.type == ToolProviderType.WORKFLOW:
optional_fields.update(self.optional_field("workflow_app_id", self.workflow_app_id))
return {
"id": self.id,
"author": self.author,


@ -1,4 +1,6 @@
from pydantic import BaseModel
from collections.abc import Mapping
from pydantic import BaseModel, Field
from core.tools.entities.tool_entities import ToolParameter
@ -25,3 +27,5 @@ class ApiToolBundle(BaseModel):
icon: str | None = None
# openapi operation
openapi: dict
# output schema
output_schema: Mapping[str, object] = Field(default_factory=dict)


@ -123,6 +123,16 @@ class ApiProviderAuthType(StrEnum):
raise ValueError(f"invalid mode value '{value}', expected one of: {valid}")
class ToolAuthType(StrEnum):
"""
Enum class for tool authentication type.
Determines whether OAuth credentials are workspace-level or end-user-level.
"""
WORKSPACE = "workspace"
END_USER = "end_user"
class ToolInvokeMessage(BaseModel):
class TextMessage(BaseModel):
text: str


@ -3,6 +3,7 @@ from typing import Any
from core.app.app_config.entities import VariableEntity
from core.tools.entities.tool_entities import WorkflowToolParameterConfiguration
from core.workflow.nodes.base.entities import OutputVariableEntity
class WorkflowToolConfigurationUtils:
@ -24,6 +25,31 @@ class WorkflowToolConfigurationUtils:
return [VariableEntity.model_validate(variable) for variable in start_node.get("data", {}).get("variables", [])]
@classmethod
def get_workflow_graph_output(cls, graph: Mapping[str, Any]) -> Sequence[OutputVariableEntity]:
"""
get workflow graph output
"""
nodes = graph.get("nodes", [])
outputs_by_variable: dict[str, OutputVariableEntity] = {}
variable_order: list[str] = []
for node in nodes:
if node.get("data", {}).get("type") != "end":
continue
for output in node.get("data", {}).get("outputs", []):
entity = OutputVariableEntity.model_validate(output)
variable = entity.variable
if variable not in variable_order:
variable_order.append(variable)
# Later end nodes override duplicated variable definitions.
outputs_by_variable[variable] = entity
return [outputs_by_variable[variable] for variable in variable_order]
@classmethod
def check_is_synced(
cls, variables: list[VariableEntity], tool_configurations: list[WorkflowToolParameterConfiguration]


@ -162,6 +162,20 @@ class WorkflowToolProviderController(ToolProviderController):
else:
raise ValueError("variable not found")
# get output schema from workflow
outputs = WorkflowToolConfigurationUtils.get_workflow_graph_output(graph)
reserved_keys = {"json", "text", "files"}
properties = {}
for output in outputs:
if output.variable not in reserved_keys:
properties[output.variable] = {
"type": output.value_type,
"description": "",
}
output_schema = {"type": "object", "properties": properties}
return WorkflowTool(
workflow_as_tool_id=db_provider.id,
entity=ToolEntity(
@ -177,6 +191,7 @@ class WorkflowToolProviderController(ToolProviderController):
llm=db_provider.description,
),
parameters=workflow_tool_parameters,
output_schema=output_schema,
),
runtime=ToolRuntime(
tenant_id=db_provider.tenant_id,


@ -114,6 +114,11 @@ class WorkflowTool(Tool):
for file in files:
yield self.create_file_message(file) # type: ignore
# traverse `outputs` field and create variable messages
for key, value in outputs.items():
if key not in {"text", "json", "files"}:
yield self.create_variable_message(variable_name=key, variable_value=value)
self._latest_usage = self._derive_usage_from_result(data)
yield self.create_text_message(json.dumps(outputs, ensure_ascii=False))
@ -198,7 +203,7 @@ class WorkflowTool(Tool):
Resolve user object in both HTTP and worker contexts.
In HTTP context: dereference the current_user LocalProxy (can return Account or EndUser).
In worker context: load Account from database by user_id (only returns Account, never EndUser).
In worker context: load Account(knowledge pipeline) or EndUser(trigger) from database by user_id.
Returns:
Account | EndUser | None: The resolved user object, or None if resolution fails.
@ -219,24 +224,28 @@ class WorkflowTool(Tool):
logger.warning("Failed to resolve user from request context: %s", e)
return None
def _resolve_user_from_database(self, user_id: str) -> Account | None:
def _resolve_user_from_database(self, user_id: str) -> Account | EndUser | None:
"""
Resolve user from database (worker/Celery context).
"""
user_stmt = select(Account).where(Account.id == user_id)
user = db.session.scalar(user_stmt)
if not user:
return None
tenant_stmt = select(Tenant).where(Tenant.id == self.runtime.tenant_id)
tenant = db.session.scalar(tenant_stmt)
if not tenant:
return None
user.current_tenant = tenant
user_stmt = select(Account).where(Account.id == user_id)
user = db.session.scalar(user_stmt)
if user:
user.current_tenant = tenant
return user
return user
end_user_stmt = select(EndUser).where(EndUser.id == user_id, EndUser.tenant_id == tenant.id)
end_user = db.session.scalar(end_user_stmt)
if end_user:
return end_user
return None
def _get_workflow(self, app_id: str, version: str) -> Workflow:
"""


@ -1,17 +1,11 @@
from ..runtime.graph_runtime_state import GraphRuntimeState
from ..runtime.variable_pool import VariablePool
from .agent import AgentNodeStrategyInit
from .graph_init_params import GraphInitParams
from .workflow_execution import WorkflowExecution
from .workflow_node_execution import WorkflowNodeExecution
from .workflow_pause import WorkflowPauseEntity
__all__ = [
"AgentNodeStrategyInit",
"GraphInitParams",
"GraphRuntimeState",
"VariablePool",
"WorkflowExecution",
"WorkflowNodeExecution",
"WorkflowPauseEntity",
]


@ -1,49 +1,26 @@
from enum import StrEnum, auto
from typing import Annotated, Any, ClassVar, TypeAlias
from typing import Annotated, Literal, TypeAlias
from pydantic import BaseModel, Discriminator, Tag
from pydantic import BaseModel, Field
class _PauseReasonType(StrEnum):
class PauseReasonType(StrEnum):
HUMAN_INPUT_REQUIRED = auto()
SCHEDULED_PAUSE = auto()
class _PauseReasonBase(BaseModel):
TYPE: ClassVar[_PauseReasonType]
class HumanInputRequired(BaseModel):
TYPE: Literal[PauseReasonType.HUMAN_INPUT_REQUIRED] = PauseReasonType.HUMAN_INPUT_REQUIRED
form_id: str
# The identifier of the human input node causing the pause.
node_id: str
class HumanInputRequired(_PauseReasonBase):
TYPE = _PauseReasonType.HUMAN_INPUT_REQUIRED
class SchedulingPause(_PauseReasonBase):
TYPE = _PauseReasonType.SCHEDULED_PAUSE
class SchedulingPause(BaseModel):
TYPE: Literal[PauseReasonType.SCHEDULED_PAUSE] = PauseReasonType.SCHEDULED_PAUSE
message: str
def _get_pause_reason_discriminator(v: Any) -> _PauseReasonType | None:
if isinstance(v, _PauseReasonBase):
return v.TYPE
elif isinstance(v, dict):
reason_type_str = v.get("TYPE")
if reason_type_str is None:
return None
try:
reason_type = _PauseReasonType(reason_type_str)
except ValueError:
return None
return reason_type
else:
# return None if the discriminator value isn't found
return None
PauseReason: TypeAlias = Annotated[
(
Annotated[HumanInputRequired, Tag(_PauseReasonType.HUMAN_INPUT_REQUIRED)]
| Annotated[SchedulingPause, Tag(_PauseReasonType.SCHEDULED_PAUSE)]
),
Discriminator(_get_pause_reason_discriminator),
]
PauseReason: TypeAlias = Annotated[HumanInputRequired | SchedulingPause, Field(discriminator="TYPE")]


@ -42,7 +42,7 @@ class GraphExecutionState(BaseModel):
completed: bool = Field(default=False)
aborted: bool = Field(default=False)
paused: bool = Field(default=False)
pause_reason: PauseReason | None = Field(default=None)
pause_reasons: list[PauseReason] = Field(default_factory=list)
error: GraphExecutionErrorState | None = Field(default=None)
exceptions_count: int = Field(default=0)
node_executions: list[NodeExecutionState] = Field(default_factory=list[NodeExecutionState])
@ -107,7 +107,7 @@ class GraphExecution:
completed: bool = False
aborted: bool = False
paused: bool = False
pause_reason: PauseReason | None = None
pause_reasons: list[PauseReason] = field(default_factory=list)
error: Exception | None = None
node_executions: dict[str, NodeExecution] = field(default_factory=dict[str, NodeExecution])
exceptions_count: int = 0
@ -137,10 +137,8 @@ class GraphExecution:
raise RuntimeError("Cannot pause execution that has completed")
if self.aborted:
raise RuntimeError("Cannot pause execution that has been aborted")
if self.paused:
return
self.paused = True
self.pause_reason = reason
self.pause_reasons.append(reason)
def fail(self, error: Exception) -> None:
"""Mark the graph execution as failed."""
@ -195,7 +193,7 @@ class GraphExecution:
completed=self.completed,
aborted=self.aborted,
paused=self.paused,
pause_reason=self.pause_reason,
pause_reasons=self.pause_reasons,
error=_serialize_error(self.error),
exceptions_count=self.exceptions_count,
node_executions=node_states,
@ -221,7 +219,7 @@ class GraphExecution:
self.completed = state.completed
self.aborted = state.aborted
self.paused = state.paused
self.pause_reason = state.pause_reason
self.pause_reasons = state.pause_reasons
self.error = _deserialize_error(state.error)
self.exceptions_count = state.exceptions_count
self.node_executions = {


@ -110,7 +110,13 @@ class EventManager:
"""
with self._lock.write_lock():
self._events.append(event)
self._notify_layers(event)
# NOTE: `_notify_layers` is intentionally called outside the critical section
# to minimize lock contention and avoid blocking other readers or writers.
#
# The public `notify_layers` method also does not use a write lock,
# so protecting `_notify_layers` with a lock here is unnecessary.
self._notify_layers(event)
def _get_new_events(self, start_index: int) -> list[GraphEngineEvent]:
"""


@ -232,7 +232,7 @@ class GraphEngine:
self._graph_execution.start()
else:
self._graph_execution.paused = False
self._graph_execution.pause_reason = None
self._graph_execution.pause_reasons = []
start_event = GraphRunStartedEvent()
self._event_manager.notify_layers(start_event)
@ -246,11 +246,11 @@ class GraphEngine:
# Handle completion
if self._graph_execution.is_paused:
pause_reason = self._graph_execution.pause_reason
assert pause_reason is not None, "pause_reason should not be None when execution is paused."
pause_reasons = self._graph_execution.pause_reasons
assert pause_reasons, "pause_reasons should not be empty when execution is paused."
# Ensure we have a valid PauseReason for the event
paused_event = GraphRunPausedEvent(
reason=pause_reason,
reasons=pause_reasons,
outputs=self._graph_runtime_state.outputs,
)
self._event_manager.notify_layers(paused_event)


@ -45,8 +45,7 @@ class GraphRunAbortedEvent(BaseGraphEvent):
class GraphRunPausedEvent(BaseGraphEvent):
"""Event emitted when a graph run is paused by user command."""
# reason: str | None = Field(default=None, description="reason for pause")
reason: PauseReason = Field(..., description="reason for pause")
reasons: list[PauseReason] = Field(description="reason for pause", default_factory=list)
outputs: dict[str, object] = Field(
default_factory=dict,
description="Outputs available to the client while the run is paused.",


@ -26,7 +26,6 @@ from core.tools.tool_manager import ToolManager
from core.tools.utils.message_transformer import ToolFileMessageTransformer
from core.variables.segments import ArrayFileSegment, StringSegment
from core.workflow.enums import (
ErrorStrategy,
NodeType,
SystemVariableKey,
WorkflowNodeExecutionMetadataKey,
@ -40,7 +39,6 @@ from core.workflow.node_events import (
StreamCompletedEvent,
)
from core.workflow.nodes.agent.entities import AgentNodeData, AgentOldVersionModelFeatures, ParamsAutoGenerated
from core.workflow.nodes.base.entities import BaseNodeData, RetryConfig
from core.workflow.nodes.base.node import Node
from core.workflow.nodes.base.variable_template_parser import VariableTemplateParser
from core.workflow.runtime import VariablePool
@ -66,34 +64,12 @@ if TYPE_CHECKING:
from core.plugin.entities.request import InvokeCredentials
class AgentNode(Node):
class AgentNode(Node[AgentNodeData]):
"""
Agent Node
"""
node_type = NodeType.AGENT
_node_data: AgentNodeData
def init_node_data(self, data: Mapping[str, Any]):
self._node_data = AgentNodeData.model_validate(data)
def _get_error_strategy(self) -> ErrorStrategy | None:
return self._node_data.error_strategy
def _get_retry_config(self) -> RetryConfig:
return self._node_data.retry_config
def _get_title(self) -> str:
return self._node_data.title
def _get_description(self) -> str | None:
return self._node_data.desc
def _get_default_value_dict(self) -> dict[str, Any]:
return self._node_data.default_value_dict
def get_base_node_data(self) -> BaseNodeData:
return self._node_data
@classmethod
def version(cls) -> str:
@ -105,8 +81,8 @@ class AgentNode(Node):
try:
strategy = get_plugin_agent_strategy(
tenant_id=self.tenant_id,
agent_strategy_provider_name=self._node_data.agent_strategy_provider_name,
agent_strategy_name=self._node_data.agent_strategy_name,
agent_strategy_provider_name=self.node_data.agent_strategy_provider_name,
agent_strategy_name=self.node_data.agent_strategy_name,
)
except Exception as e:
yield StreamCompletedEvent(
@ -124,13 +100,13 @@ class AgentNode(Node):
parameters = self._generate_agent_parameters(
agent_parameters=agent_parameters,
variable_pool=self.graph_runtime_state.variable_pool,
node_data=self._node_data,
node_data=self.node_data,
strategy=strategy,
)
parameters_for_log = self._generate_agent_parameters(
agent_parameters=agent_parameters,
variable_pool=self.graph_runtime_state.variable_pool,
node_data=self._node_data,
node_data=self.node_data,
for_log=True,
strategy=strategy,
)
@ -163,7 +139,7 @@ class AgentNode(Node):
messages=message_stream,
tool_info={
"icon": self.agent_strategy_icon,
"agent_strategy": self._node_data.agent_strategy_name,
"agent_strategy": self.node_data.agent_strategy_name,
},
parameters_for_log=parameters_for_log,
user_id=self.user_id,
@ -410,7 +386,7 @@ class AgentNode(Node):
current_plugin = next(
plugin
for plugin in plugins
if f"{plugin.plugin_id}/{plugin.name}" == self._node_data.agent_strategy_provider_name
if f"{plugin.plugin_id}/{plugin.name}" == self.node_data.agent_strategy_provider_name
)
icon = current_plugin.declaration.icon
except StopIteration:


@ -2,48 +2,24 @@ from collections.abc import Mapping, Sequence
from typing import Any
from core.variables import ArrayFileSegment, FileSegment, Segment
from core.workflow.enums import ErrorStrategy, NodeExecutionType, NodeType, WorkflowNodeExecutionStatus
from core.workflow.enums import NodeExecutionType, NodeType, WorkflowNodeExecutionStatus
from core.workflow.node_events import NodeRunResult
from core.workflow.nodes.answer.entities import AnswerNodeData
from core.workflow.nodes.base.entities import BaseNodeData, RetryConfig
from core.workflow.nodes.base.node import Node
from core.workflow.nodes.base.template import Template
from core.workflow.nodes.base.variable_template_parser import VariableTemplateParser
class AnswerNode(Node):
class AnswerNode(Node[AnswerNodeData]):
node_type = NodeType.ANSWER
execution_type = NodeExecutionType.RESPONSE
_node_data: AnswerNodeData
def init_node_data(self, data: Mapping[str, Any]):
self._node_data = AnswerNodeData.model_validate(data)
def _get_error_strategy(self) -> ErrorStrategy | None:
return self._node_data.error_strategy
def _get_retry_config(self) -> RetryConfig:
return self._node_data.retry_config
def _get_title(self) -> str:
return self._node_data.title
def _get_description(self) -> str | None:
return self._node_data.desc
def _get_default_value_dict(self) -> dict[str, Any]:
return self._node_data.default_value_dict
def get_base_node_data(self) -> BaseNodeData:
return self._node_data
@classmethod
def version(cls) -> str:
return "1"
def _run(self) -> NodeRunResult:
segments = self.graph_runtime_state.variable_pool.convert_template(self._node_data.answer)
segments = self.graph_runtime_state.variable_pool.convert_template(self.node_data.answer)
files = self._extract_files_from_segments(segments.value)
return NodeRunResult(
status=WorkflowNodeExecutionStatus.SUCCEEDED,
@ -93,4 +69,4 @@ class AnswerNode(Node):
Returns:
Template instance for this Answer node
"""
return Template.from_answer_template(self._node_data.answer)
return Template.from_answer_template(self.node_data.answer)


@ -5,7 +5,7 @@ from collections.abc import Sequence
from enum import StrEnum
from typing import Any, Union
from pydantic import BaseModel, model_validator
from pydantic import BaseModel, field_validator, model_validator
from core.workflow.enums import ErrorStrategy
@ -35,6 +35,45 @@ class VariableSelector(BaseModel):
value_selector: Sequence[str]
class OutputVariableType(StrEnum):
STRING = "string"
NUMBER = "number"
INTEGER = "integer"
SECRET = "secret"
BOOLEAN = "boolean"
OBJECT = "object"
FILE = "file"
ARRAY = "array"
ARRAY_STRING = "array[string]"
ARRAY_NUMBER = "array[number]"
ARRAY_OBJECT = "array[object]"
ARRAY_BOOLEAN = "array[boolean]"
ARRAY_FILE = "array[file]"
ANY = "any"
ARRAY_ANY = "array[any]"
class OutputVariableEntity(BaseModel):
"""
Output Variable Entity.
"""
variable: str
value_type: OutputVariableType
value_selector: Sequence[str]
@field_validator("value_type", mode="before")
@classmethod
def normalize_value_type(cls, v: Any) -> Any:
"""
Normalize value_type to handle case-insensitive array types.
Converts 'Array[...]' to 'array[...]' for backward compatibility.
"""
if isinstance(v, str) and v.startswith("Array["):
return v.lower()
return v
class DefaultValueType(StrEnum):
STRING = "string"
NUMBER = "number"


@ -1,8 +1,12 @@
import importlib
import logging
import operator
import pkgutil
from abc import abstractmethod
from collections.abc import Generator, Mapping, Sequence
from functools import singledispatchmethod
from typing import Any, ClassVar
from types import MappingProxyType
from typing import Any, ClassVar, Generic, TypeVar, cast, get_args, get_origin
from uuid import uuid4
from core.app.entities.app_invoke_entities import InvokeFrom
@ -49,12 +53,152 @@ from models.enums import UserFrom
from .entities import BaseNodeData, RetryConfig
NodeDataT = TypeVar("NodeDataT", bound=BaseNodeData)
logger = logging.getLogger(__name__)
class Node:
class Node(Generic[NodeDataT]):
node_type: ClassVar["NodeType"]
execution_type: NodeExecutionType = NodeExecutionType.EXECUTABLE
_node_data_type: ClassVar[type[BaseNodeData]] = BaseNodeData
def __init_subclass__(cls, **kwargs: Any) -> None:
"""
Automatically extract and validate the node data type from the generic parameter.
When a subclass is defined as `class MyNode(Node[MyNodeData])`, this method:
1. Inspects `__orig_bases__` to find the `Node[T]` parameterization
2. Extracts `T` (e.g., `MyNodeData`) from the generic argument
3. Validates that `T` is a proper `BaseNodeData` subclass
4. Stores it in `_node_data_type` for automatic hydration in `__init__`
This eliminates the need for subclasses to manually implement boilerplate
accessor methods like `_get_title()`, `_get_error_strategy()`, etc.
How it works:
::
class CodeNode(Node[CodeNodeData]):
│ │
│ └─────────────────────────────────┐
│ │
▼ ▼
┌─────────────────────────────┐ ┌─────────────────────────────────┐
│ __orig_bases__ = ( │ │ CodeNodeData(BaseNodeData) │
│ Node[CodeNodeData], │ │ title: str │
│ ) │ │ desc: str | None │
└──────────────┬──────────────┘ │ ... │
│ └─────────────────────────────────┘
▼ ▲
┌─────────────────────────────┐ │
│ get_origin(base) -> Node │ │
│ get_args(base) -> ( │ │
│ CodeNodeData, │ ──────────────────────┘
│ ) │
└──────────────┬──────────────┘
┌─────────────────────────────┐
│ Validate: │
│ - Is it a type? │
│ - Is it a BaseNodeData │
│ subclass? │
└──────────────┬──────────────┘
┌─────────────────────────────┐
│ cls._node_data_type = │
│ CodeNodeData │
└─────────────────────────────┘
Later, in __init__:
::
config["data"] ──► _hydrate_node_data() ──► _node_data_type.model_validate()
CodeNodeData instance
(stored in self._node_data)
Example:
class CodeNode(Node[CodeNodeData]): # CodeNodeData is auto-extracted
node_type = NodeType.CODE
# No need to implement _get_title, _get_error_strategy, etc.
"""
super().__init_subclass__(**kwargs)
if cls is Node:
return
node_data_type = cls._extract_node_data_type_from_generic()
if node_data_type is None:
raise TypeError(f"{cls.__name__} must inherit from Node[T] with a BaseNodeData subtype")
cls._node_data_type = node_data_type
# Skip base class itself
if cls is Node:
return
# Only register production node implementations defined under core.workflow.nodes.*
# This prevents test helper subclasses from polluting the global registry and
# accidentally overriding real node types (e.g., a test Answer node).
module_name = getattr(cls, "__module__", "")
# Only register concrete subclasses that define node_type and version()
node_type = cls.node_type
version = cls.version()
bucket = Node._registry.setdefault(node_type, {})
if module_name.startswith("core.workflow.nodes."):
# Production node definitions take precedence and may override
bucket[version] = cls # type: ignore[index]
else:
# External/test subclasses may register but must not override production
bucket.setdefault(version, cls) # type: ignore[index]
# Maintain a "latest" pointer preferring numeric versions; fallback to lexicographic
version_keys = [v for v in bucket if v != "latest"]
numeric_pairs: list[tuple[str, int]] = []
for v in version_keys:
numeric_pairs.append((v, int(v)))
if numeric_pairs:
latest_key = max(numeric_pairs, key=operator.itemgetter(1))[0]
else:
latest_key = max(version_keys) if version_keys else version
bucket["latest"] = bucket[latest_key]
@classmethod
def _extract_node_data_type_from_generic(cls) -> type[BaseNodeData] | None:
"""
Extract the node data type from the generic parameter `Node[T]`.
Inspects `__orig_bases__` to find the `Node[T]` parameterization and extracts `T`.
Returns:
The extracted BaseNodeData subtype, or None if not found.
Raises:
TypeError: If the generic argument is invalid (not exactly one argument,
or not a BaseNodeData subtype).
"""
# __orig_bases__ contains the original generic bases before type erasure.
# For `class CodeNode(Node[CodeNodeData])`, this would be `(Node[CodeNodeData],)`.
for base in getattr(cls, "__orig_bases__", ()): # type: ignore[attr-defined]
origin = get_origin(base) # Returns `Node` for `Node[CodeNodeData]`
if origin is Node:
args = get_args(base) # Returns `(CodeNodeData,)` for `Node[CodeNodeData]`
if len(args) != 1:
raise TypeError(f"{cls.__name__} must specify exactly one node data generic argument")
candidate = args[0]
if not isinstance(candidate, type) or not issubclass(candidate, BaseNodeData):
raise TypeError(f"{cls.__name__} must parameterize Node with a BaseNodeData subtype")
return candidate
return None
# Global registry populated via __init_subclass__
_registry: ClassVar[dict["NodeType", dict[str, type["Node"]]]] = {}
def __init__(
self,
@ -63,6 +207,7 @@ class Node:
graph_init_params: "GraphInitParams",
graph_runtime_state: "GraphRuntimeState",
) -> None:
self._graph_init_params = graph_init_params
self.id = id
self.tenant_id = graph_init_params.tenant_id
self.app_id = graph_init_params.app_id
@ -83,8 +228,24 @@ class Node:
self._node_execution_id: str = ""
self._start_at = naive_utc_now()
@abstractmethod
def init_node_data(self, data: Mapping[str, Any]) -> None: ...
raw_node_data = config.get("data") or {}
if not isinstance(raw_node_data, Mapping):
raise ValueError("Node config data must be a mapping.")
self._node_data: NodeDataT = self._hydrate_node_data(raw_node_data)
self.post_init()
def post_init(self) -> None:
"""Optional hook for subclasses requiring extra initialization."""
return
@property
def graph_init_params(self) -> "GraphInitParams":
return self._graph_init_params
def _hydrate_node_data(self, data: Mapping[str, Any]) -> NodeDataT:
return cast(NodeDataT, self._node_data_type.model_validate(data))
@abstractmethod
def _run(self) -> NodeRunResult | Generator[NodeEventBase, None, None]:
@ -114,23 +275,23 @@ class Node:
from core.workflow.nodes.tool.tool_node import ToolNode
if isinstance(self, ToolNode):
start_event.provider_id = getattr(self.get_base_node_data(), "provider_id", "")
start_event.provider_type = getattr(self.get_base_node_data(), "provider_type", "")
start_event.provider_id = getattr(self.node_data, "provider_id", "")
start_event.provider_type = getattr(self.node_data, "provider_type", "")
from core.workflow.nodes.datasource.datasource_node import DatasourceNode
if isinstance(self, DatasourceNode):
plugin_id = getattr(self.get_base_node_data(), "plugin_id", "")
provider_name = getattr(self.get_base_node_data(), "provider_name", "")
plugin_id = getattr(self.node_data, "plugin_id", "")
provider_name = getattr(self.node_data, "provider_name", "")
start_event.provider_id = f"{plugin_id}/{provider_name}"
start_event.provider_type = getattr(self.get_base_node_data(), "provider_type", "")
start_event.provider_type = getattr(self.node_data, "provider_type", "")
from core.workflow.nodes.trigger_plugin.trigger_event_node import TriggerEventNode
if isinstance(self, TriggerEventNode):
start_event.provider_id = getattr(self.get_base_node_data(), "provider_id", "")
start_event.provider_type = getattr(self.get_base_node_data(), "provider_type", "")
start_event.provider_id = getattr(self.node_data, "provider_id", "")
start_event.provider_type = getattr(self.node_data, "provider_type", "")
from typing import cast
@ -139,7 +300,7 @@ class Node:
if isinstance(self, AgentNode):
start_event.agent_strategy = AgentNodeStrategyInit(
name=cast(AgentNodeData, self.get_base_node_data()).agent_strategy_name,
name=cast(AgentNodeData, self.node_data).agent_strategy_name,
icon=self.agent_strategy_icon,
)
@@ -269,42 +430,52 @@ class Node:
# in `api/core/workflow/nodes/__init__.py`.
raise NotImplementedError("subclasses of BaseNode must implement `version` method.")
+@classmethod
+def get_node_type_classes_mapping(cls) -> Mapping["NodeType", Mapping[str, type["Node"]]]:
+"""Return mapping of NodeType -> {version -> Node subclass} using __init_subclass__ registry.
+Import all modules under core.workflow.nodes so subclasses register themselves on import.
+Then we return a readonly view of the registry to avoid accidental mutation.
+"""
+# Import all node modules to ensure they are loaded (thus registered)
+import core.workflow.nodes as _nodes_pkg
+for _, _modname, _ in pkgutil.walk_packages(_nodes_pkg.__path__, _nodes_pkg.__name__ + "."):
+# Avoid importing modules that depend on the registry to prevent circular imports
+# e.g. node_factory imports node_mapping which builds the mapping here.
+if _modname in {
+"core.workflow.nodes.node_factory",
+"core.workflow.nodes.node_mapping",
+}:
+continue
+importlib.import_module(_modname)
+# Return a readonly view so callers can't mutate the registry by accident
+return {nt: MappingProxyType(ver_map) for nt, ver_map in cls._registry.items()}
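The registration side isn't shown in this hunk; below is a plausible sketch of the `__init_subclass__` mechanism the docstring refers to, using plain strings where the real code uses the `NodeType` enum. The guard condition and registration details are assumptions, not confirmed by the diff:

```python
from types import MappingProxyType


class Node:
    # NodeType -> {version -> subclass}; filled in as node modules are imported.
    _registry: dict[str, dict[str, type["Node"]]] = {}
    node_type: str = ""

    def __init_subclass__(cls, **kwargs) -> None:
        super().__init_subclass__(**kwargs)
        # Register only concrete classes that declare a node_type and their own version().
        if cls.node_type and "version" in cls.__dict__:
            Node._registry.setdefault(cls.node_type, {})[cls.version()] = cls

    @classmethod
    def version(cls) -> str:
        raise NotImplementedError

    @classmethod
    def get_node_type_classes_mapping(cls) -> dict[str, MappingProxyType]:
        # Read-only view, mirroring the MappingProxyType wrapping above.
        return {nt: MappingProxyType(vm) for nt, vm in cls._registry.items()}


class CodeNodeV1(Node):
    node_type = "code"

    @classmethod
    def version(cls) -> str:
        return "1"


mapping = Node.get_node_type_classes_mapping()
assert mapping["code"]["1"] is CodeNodeV1
```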
@property
def retry(self) -> bool:
return False
# Abstract methods that subclasses must implement to provide access
# to BaseNodeData properties in a type-safe way
-@abstractmethod
def _get_error_strategy(self) -> ErrorStrategy | None:
"""Get the error strategy for this node."""
-...
+return self._node_data.error_strategy
-@abstractmethod
def _get_retry_config(self) -> RetryConfig:
"""Get the retry configuration for this node."""
-...
+return self._node_data.retry_config
-@abstractmethod
def _get_title(self) -> str:
"""Get the node title."""
-...
+return self._node_data.title
-@abstractmethod
def _get_description(self) -> str | None:
"""Get the node description."""
-...
+return self._node_data.desc
-@abstractmethod
def _get_default_value_dict(self) -> dict[str, Any]:
"""Get the default values dictionary for this node."""
-...
-@abstractmethod
-def get_base_node_data(self) -> BaseNodeData:
-"""Get the BaseNodeData object for this node."""
-...
+return self._node_data.default_value_dict
# Public interface properties that delegate to abstract methods
@property
@@ -332,6 +503,11 @@ class Node:
"""Get the default values dictionary for this node."""
return self._get_default_value_dict()
+@property
+def node_data(self) -> NodeDataT:
+"""Typed access to this node's configuration data."""
+return self._node_data
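The payoff of the generic parameter: call sites get the concrete data type without a cast (compare the `cast(AgentNodeData, ...)` hunk above). A reduced sketch, with a simplified constructor that is not the repository's actual signature:

```python
from typing import Generic, TypeVar

from pydantic import BaseModel


class BaseNodeData(BaseModel):
    title: str = ""


NodeDataT = TypeVar("NodeDataT", bound=BaseNodeData)


class Node(Generic[NodeDataT]):
    def __init__(self, node_data: NodeDataT) -> None:
        self._node_data = node_data

    @property
    def node_data(self) -> NodeDataT:
        """Typed access to this node's configuration data."""
        return self._node_data


class AgentNodeData(BaseNodeData):
    agent_strategy_name: str = "react"


class AgentNode(Node[AgentNodeData]):
    pass


node = AgentNode(AgentNodeData(title="Agent"))
# mypy/pyright see node.node_data as AgentNodeData, so no cast is needed.
print(node.node_data.agent_strategy_name)
```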
def _convert_node_run_result_to_graph_node_event(self, result: NodeRunResult) -> GraphNodeEventBase:
match result.status:
case WorkflowNodeExecutionStatus.FAILED:
@@ -426,7 +602,7 @@ class Node:
id=self._node_execution_id,
node_id=self._node_id,
node_type=self.node_type,
-node_title=self.get_base_node_data().title,
+node_title=self.node_data.title,
start_at=event.start_at,
inputs=event.inputs,
metadata=event.metadata,
@@ -439,7 +615,7 @@ class Node:
id=self._node_execution_id,
node_id=self._node_id,
node_type=self.node_type,
-node_title=self.get_base_node_data().title,
+node_title=self.node_data.title,
index=event.index,
pre_loop_output=event.pre_loop_output,
)
@@ -450,7 +626,7 @@ class Node:
id=self._node_execution_id,
node_id=self._node_id,
node_type=self.node_type,
-node_title=self.get_base_node_data().title,
+node_title=self.node_data.title,
start_at=event.start_at,
inputs=event.inputs,
outputs=event.outputs,
@@ -464,7 +640,7 @@ class Node:
id=self._node_execution_id,
node_id=self._node_id,
node_type=self.node_type,
-node_title=self.get_base_node_data().title,
+node_title=self.node_data.title,
start_at=event.start_at,
inputs=event.inputs,
outputs=event.outputs,
@@ -479,7 +655,7 @@ class Node:
id=self._node_execution_id,
node_id=self._node_id,
node_type=self.node_type,
-node_title=self.get_base_node_data().title,
+node_title=self.node_data.title,
start_at=event.start_at,
inputs=event.inputs,
metadata=event.metadata,
@@ -492,7 +668,7 @@ class Node:
id=self._node_execution_id,
node_id=self._node_id,
node_type=self.node_type,
-node_title=self.get_base_node_data().title,
+node_title=self.node_data.title,
index=event.index,
pre_iteration_output=event.pre_iteration_output,
)
@@ -503,7 +679,7 @@ class Node:
id=self._node_execution_id,
node_id=self._node_id,
node_type=self.node_type,
-node_title=self.get_base_node_data().title,
+node_title=self.node_data.title,
start_at=event.start_at,
inputs=event.inputs,
outputs=event.outputs,
@@ -517,7 +693,7 @@ class Node:
id=self._node_execution_id,
node_id=self._node_id,
node_type=self.node_type,
-node_title=self.get_base_node_data().title,
+node_title=self.node_data.title,
start_at=event.start_at,
inputs=event.inputs,
outputs=event.outputs,

View File

@@ -9,9 +9,8 @@ from core.helper.code_executor.javascript.javascript_code_provider import Javasc
from core.helper.code_executor.python3.python3_code_provider import Python3CodeProvider
from core.variables.segments import ArrayFileSegment
from core.variables.types import SegmentType
-from core.workflow.enums import ErrorStrategy, NodeType, WorkflowNodeExecutionStatus
+from core.workflow.enums import NodeType, WorkflowNodeExecutionStatus
from core.workflow.node_events import NodeRunResult
-from core.workflow.nodes.base.entities import BaseNodeData, RetryConfig
from core.workflow.nodes.base.node import Node
from core.workflow.nodes.code.entities import CodeNodeData
@@ -22,32 +21,9 @@ from .exc import (
)
-class CodeNode(Node):
+class CodeNode(Node[CodeNodeData]):
node_type = NodeType.CODE
-_node_data: CodeNodeData
-def init_node_data(self, data: Mapping[str, Any]):
-self._node_data = CodeNodeData.model_validate(data)
-def _get_error_strategy(self) -> ErrorStrategy | None:
-return self._node_data.error_strategy
-def _get_retry_config(self) -> RetryConfig:
-return self._node_data.retry_config
-def _get_title(self) -> str:
-return self._node_data.title
-def _get_description(self) -> str | None:
-return self._node_data.desc
-def _get_default_value_dict(self) -> dict[str, Any]:
-return self._node_data.default_value_dict
-def get_base_node_data(self) -> BaseNodeData:
-return self._node_data
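With the base class carrying the getters, a migrated subclass shrinks to its declaration plus `_run`. A self-contained sketch of the resulting shape, with stub types standing in for the real `Node` and `CodeNodeData`; `_node_data_type` is the same assumption as in the earlier sketch:

```python
from collections.abc import Mapping
from typing import Any, Generic, TypeVar

from pydantic import BaseModel


class BaseNodeData(BaseModel):
    title: str = ""
    desc: str | None = None


NodeDataT = TypeVar("NodeDataT", bound=BaseNodeData)


class Node(Generic[NodeDataT]):
    _node_data_type: type[BaseNodeData]

    def __init__(self, data: Mapping[str, Any]) -> None:
        self._node_data = self._node_data_type.model_validate(data)

    # The per-subclass boilerplate deleted above now lives here once.
    def _get_title(self) -> str:
        return self._node_data.title

    def _get_description(self) -> str | None:
        return self._node_data.desc


class CodeNodeData(BaseNodeData):
    code: str = ""
    code_language: str = "python3"


class CodeNode(Node[CodeNodeData]):
    # Entire subclass contribution: bind the data type. Everything else is inherited.
    _node_data_type = CodeNodeData


node = CodeNode({"title": "Run code", "code": "print(1)"})
assert node._get_title() == "Run code"
```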
@classmethod
def get_default_config(cls, filters: Mapping[str, object] | None = None) -> Mapping[str, object]:
"""
@@ -70,12 +46,12 @@ class CodeNode(Node):
def _run(self) -> NodeRunResult:
# Get code language
-code_language = self._node_data.code_language
-code = self._node_data.code
+code_language = self.node_data.code_language
+code = self.node_data.code
# Get variables
variables = {}
-for variable_selector in self._node_data.variables:
+for variable_selector in self.node_data.variables:
variable_name = variable_selector.variable
variable = self.graph_runtime_state.variable_pool.get(variable_selector.value_selector)
if isinstance(variable, ArrayFileSegment):
@@ -91,7 +67,7 @@ class CodeNode(Node):
)
# Transform result
-result = self._transform_result(result=result, output_schema=self._node_data.outputs)
+result = self._transform_result(result=result, output_schema=self.node_data.outputs)
except (CodeExecutionError, CodeNodeError) as e:
return NodeRunResult(
status=WorkflowNodeExecutionStatus.FAILED, inputs=variables, error=str(e), error_type=type(e).__name__
@@ -428,7 +404,7 @@ class CodeNode(Node):
@property
def retry(self) -> bool:
-return self._node_data.retry_config.retry_enabled
+return self.node_data.retry_config.retry_enabled
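For reference, how the retry flag now flows from raw config to `CodeNode.retry`. Only the `retry_config.retry_enabled` access is attested by the diff; the other field names below are assumptions for illustration:

```python
from pydantic import BaseModel, Field


class RetryConfig(BaseModel):
    retry_enabled: bool = False
    max_retries: int = 3  # assumed field, for illustration only


class CodeNodeData(BaseModel):
    title: str = ""
    retry_config: RetryConfig = Field(default_factory=RetryConfig)


data = CodeNodeData.model_validate({"retry_config": {"retry_enabled": True}})
# This is the value the retry property above now returns.
assert data.retry_config.retry_enabled is True
```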
@staticmethod
def _convert_boolean_to_int(value: bool | int | float | None) -> int | float | None:

View File

@@ -20,9 +20,8 @@ from core.plugin.impl.exc import PluginDaemonClientSideError
from core.variables.segments import ArrayAnySegment
from core.variables.variables import ArrayAnyVariable
from core.workflow.entities.workflow_node_execution import WorkflowNodeExecutionStatus
-from core.workflow.enums import ErrorStrategy, NodeExecutionType, NodeType, SystemVariableKey
+from core.workflow.enums import NodeExecutionType, NodeType, SystemVariableKey
from core.workflow.node_events import NodeRunResult, StreamChunkEvent, StreamCompletedEvent
-from core.workflow.nodes.base.entities import BaseNodeData, RetryConfig
from core.workflow.nodes.base.node import Node
from core.workflow.nodes.base.variable_template_parser import VariableTemplateParser
from core.workflow.nodes.tool.exc import ToolFileError
@@ -38,42 +37,20 @@ from .entities import DatasourceNodeData
from .exc import DatasourceNodeError, DatasourceParameterError
-class DatasourceNode(Node):
+class DatasourceNode(Node[DatasourceNodeData]):
"""
Datasource Node
"""
-_node_data: DatasourceNodeData
node_type = NodeType.DATASOURCE
execution_type = NodeExecutionType.ROOT
-def init_node_data(self, data: Mapping[str, Any]) -> None:
-self._node_data = DatasourceNodeData.model_validate(data)
-def _get_error_strategy(self) -> ErrorStrategy | None:
-return self._node_data.error_strategy
-def _get_retry_config(self) -> RetryConfig:
-return self._node_data.retry_config
-def _get_title(self) -> str:
-return self._node_data.title
-def _get_description(self) -> str | None:
-return self._node_data.desc
-def _get_default_value_dict(self) -> dict[str, Any]:
-return self._node_data.default_value_dict
-def get_base_node_data(self) -> BaseNodeData:
-return self._node_data
def _run(self) -> Generator:
"""
Run the datasource node
"""
-node_data = self._node_data
+node_data = self.node_data
variable_pool = self.graph_runtime_state.variable_pool
datasource_type_segement = variable_pool.get(["sys", SystemVariableKey.DATASOURCE_TYPE])
if not datasource_type_segement:

View File

@@ -25,9 +25,8 @@ from core.file import File, FileTransferMethod, file_manager
from core.helper import ssrf_proxy
from core.variables import ArrayFileSegment
from core.variables.segments import ArrayStringSegment, FileSegment
-from core.workflow.enums import ErrorStrategy, NodeType, WorkflowNodeExecutionStatus
+from core.workflow.enums import NodeType, WorkflowNodeExecutionStatus
from core.workflow.node_events import NodeRunResult
-from core.workflow.nodes.base.entities import BaseNodeData, RetryConfig
from core.workflow.nodes.base.node import Node
from .entities import DocumentExtractorNodeData
@@ -36,7 +35,7 @@ from .exc import DocumentExtractorError, FileDownloadError, TextExtractionError,
logger = logging.getLogger(__name__)
-class DocumentExtractorNode(Node):
+class DocumentExtractorNode(Node[DocumentExtractorNodeData]):
"""
Extracts text content from various file types.
Supports plain text, PDF, and DOC/DOCX files.
@@ -44,35 +43,12 @@ class DocumentExtractorNode(Node):
node_type = NodeType.DOCUMENT_EXTRACTOR
-_node_data: DocumentExtractorNodeData
-def init_node_data(self, data: Mapping[str, Any]):
-self._node_data = DocumentExtractorNodeData.model_validate(data)
-def _get_error_strategy(self) -> ErrorStrategy | None:
-return self._node_data.error_strategy
-def _get_retry_config(self) -> RetryConfig:
-return self._node_data.retry_config
-def _get_title(self) -> str:
-return self._node_data.title
-def _get_description(self) -> str | None:
-return self._node_data.desc
-def _get_default_value_dict(self) -> dict[str, Any]:
-return self._node_data.default_value_dict
-def get_base_node_data(self) -> BaseNodeData:
-return self._node_data
@classmethod
def version(cls) -> str:
return "1"
def _run(self):
-variable_selector = self._node_data.variable_selector
+variable_selector = self.node_data.variable_selector
variable = self.graph_runtime_state.variable_pool.get(variable_selector)
if variable is None:

Some files were not shown because too many files have changed in this diff.