Compare commits


566 Commits

Author SHA1 Message Date
d34c95bf8e feat: convert components to dynamic imports for improved performance 2025-07-17 15:32:42 +08:00
1df1ffa2ec fix(apps): add translation and document title for Apps component 2025-07-17 14:08:00 +08:00
947dbd8854 chore(apps): move app list components to components folder 2025-07-17 14:00:29 +08:00
ce619287b3 feat(app-publisher): add relative time formatting for timestamps 2025-07-16 18:34:03 +08:00
cd1ec65286 feat: convert components to dynamic imports for improved performance 2025-07-16 16:51:55 +08:00
bdb9f29948 feat(app): support custom max_active_requests per app (#22073) 2025-07-16 15:31:19 +08:00
66cc1b4308 feat(variable-list): add drag-and-drop functionality for variables in code node (#22127) 2025-07-16 15:24:19 +08:00
d52fb18457 feat: auto-fill MCP server description with app description #22443 (#22477) 2025-07-16 15:03:33 +08:00
4a2169bd5f Chore/update gh template (#22480) 2025-07-16 14:22:51 +08:00
2c9ee54a16 fix aliyun trace session_id (#22468) 2025-07-16 13:56:44 +08:00
aef67ed7ec fix: add background color for chat bubble in light and dark themes (#22472) 2025-07-16 13:36:51 +08:00
ddfd8c8525 feat(api): add UUIDv7 implementation in SQL and Python (#22058)
This PR introduces UUIDv7 implementations in both Python and SQL to establish the foundation for migrating from UUIDv4 to UUIDv7 as proposed in #19754.

The ID generation algorithm of existing models is not changed; new models should use UUIDv7 for ID generation.

Close #19754.
2025-07-16 13:07:08 +08:00
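
For orientation, here is a minimal Python sketch of the UUIDv7 layout this commit describes (48-bit millisecond timestamp followed by version, variant, and random bits per RFC 9562); it is illustrative only, not the PR's actual implementation:

```python
import os
import time
import uuid

def uuidv7() -> uuid.UUID:
    # 48-bit Unix timestamp in milliseconds occupies the top bits.
    ts_ms = int(time.time() * 1000) & ((1 << 48) - 1)
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF            # 12 random bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)   # 62 random bits
    value = (ts_ms << 80) | (0x7 << 76) | (rand_a << 64) | (0b10 << 62) | rand_b
    return uuid.UUID(int=value)

print(uuidv7())  # later calls sort after earlier ones
```

Because the timestamp occupies the most significant bits, later IDs sort after earlier ones, which is what makes UUIDv7 attractive for primary keys.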
2c1ab4879f refactor(api): Separate SegmentType for Integer/Float to Enable Pydantic Serialization (#22025)

This PR addresses serialization issues in the VariablePool model by separating the `value_type` tags for `IntegerSegment`/`FloatSegment` and `IntegerVariable`/`FloatVariable`. Previously, both Integer and Float types shared the same `SegmentType.NUMBER` tag, causing conflicts during serialization.

Key changes:
- Introduce distinct `value_type` tags for Integer and Float segments/variables
- Add `VariableUnion` and `SegmentUnion` types for proper type discrimination
- Leverage Pydantic's discriminated union feature for seamless serialization/deserialization
- Enable accurate serialization of data structures containing these types

Closes #22024.
2025-07-16 12:31:37 +08:00
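
As a rough sketch of the discriminated-union approach described above (simplified stand-ins, not Dify's actual classes), distinct `value_type` tags let Pydantic pick the right subclass during deserialization:

```python
from typing import Annotated, Literal, Union
from pydantic import BaseModel, Field

class IntegerSegment(BaseModel):
    value_type: Literal["integer"] = "integer"
    value: int

class FloatSegment(BaseModel):
    value_type: Literal["float"] = "float"
    value: float

# Pydantic selects the subclass by inspecting the "value_type" tag.
SegmentUnion = Annotated[Union[IntegerSegment, FloatSegment], Field(discriminator="value_type")]

class VariablePool(BaseModel):
    segments: list[SegmentUnion]

pool = VariablePool.model_validate({"segments": [{"value_type": "float", "value": 1.5}]})
assert isinstance(pool.segments[0], FloatSegment)
```

With a shared tag, both subclasses would claim the same payload and round-tripping would be ambiguous; distinct tags make serialization and deserialization lossless.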
229b4d621e Improve Tooltip UX by enabling delay by default (#21383) 2025-07-16 11:26:54 +08:00
0dee41c074 fix: When var value changed, PromptEditor should be reset (#22219) 2025-07-16 11:22:54 +08:00
bf542233a9 minor fix: using Pydantic model_validate instead of deprecated parse_obj (#22239)
Signed-off-by: neatguycoding <15627489+NeatGuyCoding@users.noreply.github.com>
2025-07-16 10:57:08 +08:00
38106074b4 test: add comprehensive unit tests for console authentication and authorization decorators (#22439) 2025-07-16 10:07:01 +08:00
1f4b3591ae adding tooltip for bindingCount (#22450)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: crazywoola <427733928@qq.com>
2025-07-16 09:59:42 +08:00
7bf3d2c8bf fix(api): Fix potential thread leak in MCP BaseSession (#22169)
The `BaseSession` class in the `core/mcp/session` package uses `ThreadPoolExecutor` 
to run the receive loop but fails to properly clean up the executor and receiver 
future, leading to potential thread leaks.

This PR addresses this issue by:
- Initializing `_executor` and `_receiver_future` attributes to `None` for proper cleanup checks
- Adding graceful shutdown with a 5-second timeout in the `__exit__` method
- Ensuring the ThreadPoolExecutor is properly shut down to prevent resource leaks

This fix prevents memory leaks and hanging threads in long-running scenarios where 
multiple MCP sessions are created and destroyed.

Signed-off-by: neatguycoding <15627489+NeatGuyCoding@users.noreply.github.com>
Co-authored-by: QuantumGhost <obelisk.reg+git@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-16 00:01:44 +08:00
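
A minimal sketch of the cleanup pattern the PR describes, assuming a stripped-down session class (the attribute names follow the PR text; everything else is illustrative):

```python
from concurrent.futures import Future, ThreadPoolExecutor
from typing import Optional

class BaseSession:
    def __init__(self) -> None:
        # Initialize to None so __exit__ can check what actually started.
        self._executor: Optional[ThreadPoolExecutor] = None
        self._receiver_future: Optional[Future] = None

    def __enter__(self) -> "BaseSession":
        self._executor = ThreadPoolExecutor(max_workers=1)
        self._receiver_future = self._executor.submit(self._receive_loop)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb) -> None:
        # Wait briefly for the receive loop, then shut the executor down
        # so its worker thread is not leaked.
        if self._receiver_future is not None:
            try:
                self._receiver_future.result(timeout=5)
            except Exception:
                pass
        if self._executor is not None:
            self._executor.shutdown(wait=False)

    def _receive_loop(self) -> None:
        ...  # read messages until the session closes
```

Without the shutdown, each session leaves a live worker thread behind, which is exactly the leak the commit targets in long-running processes.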
da53bf511f chore: add SQLALCHEMY_POOL_USE_LIFO option and missing SQLALCHEMY_POOL_PRE_PING env default value. (#22371) 2025-07-15 19:46:48 +08:00
7388fd1ec6 fix: Disable question editing in chat history (#22438) 2025-07-15 19:41:51 +08:00
b803eeb528 fix: Update condition items to support variable type acquisition (#22414) 2025-07-15 19:38:13 +08:00
14f79ee652 fix: create api workflow run repository error (#22422) 2025-07-15 16:12:02 +08:00
df89629e04 fix: conversation statistics including data from debugger (#22412)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-07-15 15:45:45 +08:00
d427088ab5 fix: remove PickerPanel padding (#22419) 2025-07-15 15:37:13 +08:00
32c541a9ed fix: generate deterministic operationId for root endpoints without one (#19888) 2025-07-15 14:19:55 +08:00
7e666dc3b1 fix(prompt-editor): show error warning for destructive env and conv var (#21802) 2025-07-15 14:10:50 +08:00
5247c19498 fix: code result included "error" field (#22392) 2025-07-15 13:55:00 +08:00
9823edd3a2 fix workflow node iterator. (#21008)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-15 10:55:49 +08:00
88537991d6 fix: Metadata filtering with Manual option in Agent mode does not take effect when specifying input variables. (#20362) 2025-07-15 10:47:20 +08:00
8e910d8c59 fix(plugin): introduce response_type parameter in plugin list API to enable paginated response support (#22251) 2025-07-15 10:10:37 +08:00
a0b32b6027 feat(config-modal): add space to underscore conversion in variable name input of start node (#22284) 2025-07-15 10:00:19 +08:00
bf7b2c339b tablestore vector support more method (#22225)
Co-authored-by: xiaozhiqing.xzq <xiaozhiqing.xzq@alibaba-inc.com>
2025-07-15 09:58:48 +08:00
a1dfe6d402 chore: bump nextjs to 15.3 (#22262) 2025-07-15 09:35:17 +08:00
d2a3e8b9b1 Provides a set of Kubernetes manifests supporting version 1.6.0 (#22287) 2025-07-15 09:34:17 +08:00
ebb88bbe0b improve opik workflow_trace span name to node name (#22356) 2025-07-15 09:33:06 +08:00
b690a9d839 fix: aliyun trace title&description (#22347) 2025-07-14 17:14:24 +08:00
9d9423808e Update README.md (#22351) 2025-07-14 17:13:50 +08:00
3e96c0c468 fix: close session before doing long latency operation (#22306) 2025-07-14 15:16:10 +08:00
6eb155ae69 feat(api/repo): Allow configuring repository implementation (#21458)
Signed-off-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: QuantumGhost <obelisk.reg+git@gmail.com>
2025-07-14 14:54:38 +08:00
b27c540379 Fix: Remove height and overflow style settings (#22327) 2025-07-14 13:57:53 +08:00
8b1f428ead Chore: Replace lodash/noop with lodash-es/noop (#22331) 2025-07-14 13:57:26 +08:00
1d54ffcf89 fix: error parsing object type parameters for code node (#22230) 2025-07-14 10:37:26 +08:00
d9eb5554b3 fix: prevent trigger form submit action when press 'enter' (#22313) 2025-07-14 09:59:20 +08:00
da94bdeb54 Update README.md (#22305) 2025-07-14 09:37:17 +08:00
27e5e2745b test: add comprehensive unit tests for login decorator (#22294) 2025-07-14 09:34:13 +08:00
1b26f9a4c6 fixing Enum part in backend and making it the same as front end (#22296) 2025-07-14 09:34:04 +08:00
df886259bd test(web): add password regexp test case (#22308) 2025-07-14 09:32:34 +08:00
016ff0feae fix(ui): prevent var icon hidden when only one var in list of start node (#22290) 2025-07-13 20:10:15 +08:00
aa6cad5f1d fix: tool's model selector and app selector not work (#22291) 2025-07-13 20:04:29 +08:00
e7388779a1 chore: bump ruff to 0.12.x (#22259) 2025-07-12 20:00:54 +08:00
6c233e05a9 minor fix: wrong and (#22242) 2025-07-12 19:59:07 +08:00
9f013f7644 Add unit test for account service (#22278) 2025-07-12 19:58:42 +08:00
253d8e5a5f test: add comprehensive unit tests for PassportService with exception handling optimization (#22268) 2025-07-12 19:56:20 +08:00
7f5087c6db reject whitespace characters in password regexp (#22232) 2025-07-11 19:18:18 +08:00
817071e448 fix: iteration itemType support conversation var (#22220) (#22236) 2025-07-11 19:16:30 +08:00
f193e9764e fix: Optimize the workspace panel width calculation (#22195) 2025-07-11 19:12:12 +08:00
5f9628e027 feat(workflow): add drag-and-drop support for variable list items for start node (#22150) 2025-07-11 18:53:29 +08:00
76d21743fd fix(web): Optimize AppInfo Component Layout (#22212) (#22218) 2025-07-11 17:54:09 +08:00
2d3c5b3b7c fix(emoji-picker): Adjust the style of the emoji picker (#22161) (#22231) 2025-07-11 17:52:16 +08:00
1d85979a74 chore: extract last run common logic (#22214) 2025-07-11 16:41:25 +08:00
2a85f28963 fix: Fixed the problem of plugin installation failure caused by incons… (#22156) 2025-07-11 15:18:42 +08:00
fe4e2f7921 feat: support var in suggested questions (#17340)
Co-authored-by: crazywoola <427733928@qq.com>
2025-07-11 15:07:32 +08:00
9a9ec0c99b feat: Add Audio configuration setting to app configuration UI (#21957) 2025-07-11 14:04:42 +08:00
d5624ba671 fix: resolve Docker file URL networking issue for plugins (#21334) (#21382)
Co-authored-by: crazywoola <427733928@qq.com>
2025-07-11 12:11:59 +08:00
c805238471 fix: adjust layout styles for header and dataset update (#22182) 2025-07-11 11:17:28 +08:00
e576b989b8 feat(tool): add support for API key authentication via query parameter (#21656) 2025-07-11 10:39:20 +08:00
f929bfb94c minor fix: remove duplicates, fix typo, and add restriction for get mcp server (#22170)
Signed-off-by: neatguycoding <15627489+NeatGuyCoding@users.noreply.github.com>
2025-07-11 09:40:17 +08:00
f4df80e093 fix(custom_tool): omit optional parameters instead of setting them to None (#22171) 2025-07-10 20:56:45 +08:00
390e4cc0bf chore(version): bump to 1.6.0 (#22136) 2025-07-10 17:49:32 +08:00
11f9a897e8 chore: fix schema editor can not hover item (#22155) 2025-07-10 17:33:11 +08:00
0e793a660d fix: add the default value to the dark icon (#22149) 2025-07-10 17:13:48 +08:00
7b2cab5767 feat: support ping method for MCP server (#22144) 2025-07-10 16:14:46 +08:00
c51b4290dc fix: mcp server card button display (#22141) 2025-07-10 16:14:18 +08:00
94a13d7d62 feat: add support for dark icons in provider and tool entities (#22081) 2025-07-10 14:43:31 +08:00
edf5fd28c9 update workflow events logs. (#19871)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-07-10 14:21:34 +08:00
b834131f50 chore: translate i18n files (#22132)
Co-authored-by: iamjoel <2120155+iamjoel@users.noreply.github.com>
2025-07-10 14:19:26 +08:00
5375d9bb27 feat: the frontend part of mcp (#22131)
Co-authored-by: jZonG <jzongcode@gmail.com>
Co-authored-by: Novice <novice12185727@gmail.com>
Co-authored-by: nite-knite <nkCoding@gmail.com>
Co-authored-by: Hanqing Zhao <sherry9277@gmail.com>
2025-07-10 14:14:02 +08:00
535fff62f3 feat: add MCP support (#20716)
Co-authored-by: QuantumGhost <obelisk.reg+git@gmail.com>
2025-07-10 14:01:34 +08:00
18b58424ec Fix: Resolve issue with json_output (#22053) 2025-07-10 13:34:06 +08:00
10858ea1dc Chore: rm useless import and vars (#22108) 2025-07-10 11:47:43 +08:00
6f8c7a66c8 feat: add redis fallback mechanism #21043 (#21044)
Co-authored-by: tech <cto@sb>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-10 10:19:58 +08:00
a371390d6c optimize: batch embedding and qdrant write_consistency_factor parameter (#21776)
Co-authored-by: hobo.l <hobo.l@binance.com>
2025-07-10 10:16:59 +08:00
a316766ad7 chore: Update theme vars (#22113) 2025-07-10 10:11:31 +08:00
a9cc19f530 feat(question-classifier): add drag-and-drop sorting for topics list (#22066)
Co-authored-by: crazywoola <427733928@qq.com>
2025-07-10 10:03:11 +08:00
881a151d30 test: add comprehensive unit tests for encrypter module (#22102) 2025-07-10 10:01:15 +08:00
785c4caa67 fix: allow update plugin install settings (#22111) 2025-07-10 09:58:48 +08:00
4403bc67a1 fix(Drawer): add overflow hidden to ensure copy button is always clickable (#21992) (#22103)
Co-authored-by: wangheyang <wangheyang@corp.netease.com>
2025-07-10 09:20:02 +08:00
b237113311 Update clean_document_task.py (#22090) 2025-07-10 09:18:50 +08:00
4cb50f1809 feat(libs): Introduce extract_tenant_id (#22086)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-07-09 17:45:56 +08:00
1885426421 feat: Allow changing SSL verify in HTTP Node (#22052)
Co-authored-by: crazywoola <427733928@qq.com>
2025-07-09 15:53:24 +08:00
89b52471fb Optimize the memory usage of Tencent Vector Database (#22079)
Co-authored-by: wlleiiwang <wlleiiwang@tencent.com>
2025-07-09 15:53:06 +08:00
3643ed1014 Feat: description field for env variables (#21556) 2025-07-09 15:18:23 +08:00
e39236186d feat: introduce new env ALLOW_UNSAFE_DATA_SCHEME to allow rendering data uri scheme (#21321) 2025-07-09 10:12:40 +08:00
521488f926 Remove two unused files (#22022) 2025-07-09 09:28:26 +08:00
d61ea5a2de test: add comprehensive unit tests for UrlSigner (#22030) 2025-07-08 21:22:37 +08:00
816210d744 Expose LLM usage in workflows (#21766)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2025-07-08 21:18:00 +08:00
f925869f61 fix(variable): ensure unique variable names in var-list (#22038) 2025-07-08 15:41:27 +08:00
f62b59a805 don't add search params when opening detail links from marketplace. (#22034) 2025-07-08 15:15:38 +08:00
a4bdeba60d feat(question-classifier): add instanceId to class-item editor (#22002) 2025-07-08 10:04:05 +08:00
5c0cb7f912 test: add unit tests for password validation and hashing (#22003) 2025-07-08 10:00:00 +08:00
2ffbf5435d minor fix: fix duplicate local import of ToolProviderType (#22013)
Signed-off-by: neatguycoding <15627489+NeatGuyCoding@users.noreply.github.com>
2025-07-08 09:49:53 +08:00
71385d594d fix(variables): Improve getNodeUsedVars implementation details (#21987) 2025-07-08 09:33:13 +08:00
53c4912cbb feat: add unit tests and validation for aliyun tracing (#22012)
Signed-off-by: neatguycoding <15627489+NeatGuyCoding@users.noreply.github.com>
2025-07-08 09:32:30 +08:00
1760179093 minor fix: fix a typo for aliyun (#22001)
Signed-off-by: neatguycoding <15627489+NeatGuyCoding@users.noreply.github.com>
2025-07-07 22:04:38 +08:00
aded30b664 fix: resolve dropdown menu visibility issue caused by z-index conflict (#22000) 2025-07-07 21:58:05 +08:00
de54f8d0ef Chore: remove unreachable code (#21986) 2025-07-07 21:55:34 +08:00
5b0b64c7e5 fix: document delete image files check file exist (#21991) 2025-07-07 21:53:40 +08:00
b654c852a5 chore(docker): increase NGINX_CLIENT_MAX_BODY_SIZE from 15M to 100M i… (#21995) 2025-07-07 21:51:49 +08:00
c48b32c9e3 ENH(ui): enhance check list (#21932)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-07 14:52:36 +08:00
8f723697ef refactor(graph_engine): Take GraphRuntimeState out of GraphEngine (#21882) 2025-07-07 13:15:18 +08:00
de22648b9f feat: Add support for type="hidden" input elements in Markdown forms (#21922)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-07 10:35:30 +08:00
b9f56852dc fix: resolve JSON.parse precision issue causing 'list index out of ra… (#21253) 2025-07-07 10:05:54 +08:00
108cc3486f fix(agent): show agent run steps, fixes #21718 (#21945)
Co-authored-by: crazywoola <427733928@qq.com>
2025-07-07 09:59:47 +08:00
ac69b8b191 refactor: extract common url validator for config_entity.py (#21934)
Signed-off-by: neatguycoding <15627489+NeatGuyCoding@users.noreply.github.com>
2025-07-07 09:34:13 +08:00
8288145ee4 chore(i18n): fix typos and improve Korean translation (#21955) 2025-07-07 09:33:09 +08:00
51f6095be7 minor fix: translation for pause (#21949)
Signed-off-by: neatguycoding <15627489+NeatGuyCoding@users.noreply.github.com>
2025-07-05 12:45:29 +08:00
a201e9faee feat: Add Aliyun LLM Observability Integration (#21471) 2025-07-04 21:54:33 +08:00
fec6bafcda refactor(web): Restructure the operation buttons layout in the app information component (#21742) (#21818) 2025-07-04 21:53:21 +08:00
2639f950cc minor fix: removes the duplicated handling logic for TracingProviderEnum.ARIZE and TracingProviderEnum.PHOENIX from the OpsTraceProviderConfigMap (#21927)
Signed-off-by: neatguycoding <15627489+NeatGuyCoding@users.noreply.github.com>
2025-07-04 16:46:48 +08:00
6663187eca test: add unit test for api version config (#21919) 2025-07-04 15:33:20 +08:00
13990f31a1 feat: update account menu style (#21916) 2025-07-04 14:52:30 +08:00
de39b737b6 Feat list query (#21907) 2025-07-04 14:18:31 +08:00
a66ed7157e feat: add document pause and resume functionality (#21894) 2025-07-04 14:06:47 +08:00
c9c49200e0 use repair_json to fix json parse error of HTTPRequestNode (#21909)
Co-authored-by: lizb <lizb@sugon.com>
2025-07-04 14:01:17 +08:00
317d287458 fix(loop-variables): validate variable name input (#21888) 2025-07-03 23:30:56 +08:00
a79f37b686 fix: tts tool must choose a voice (#21877) 2025-07-03 17:10:01 +08:00
1c7404099d fix: prevent timeout in file encoding detection for large files (#21453)
Co-authored-by: crazywoola <427733928@qq.com>
2025-07-03 17:06:49 +08:00
ed54bd5121 fix: not search plugin if marketplace enabled (#21880) 2025-07-03 16:43:11 +08:00
06c3deff11 Fix: Add title attribute to edit time text for improved accessibility (#21871) 2025-07-03 16:07:07 +08:00
47954aa284 feat(api): validate and reject external datasets in document update (#21783) 2025-07-03 14:50:53 +08:00
f3c8625fe2 fix: The statistics page cannot display the tokens consumed by agent node (#21861) 2025-07-03 14:40:47 +08:00
ebc4fdc4b2 moving the MessageStatus class from the models.model module to models.enums module (#21867)
Signed-off-by: neatguycoding <15627489+NeatGuyCoding@users.noreply.github.com>
2025-07-03 13:56:23 +08:00
1af3d40c1a feat: Improve Observability with Arize & Phoenix Integration (#19840)
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: Gu <guchenhe@gmail.com>
2025-07-03 13:52:14 +08:00
31eb8548ef fix: Before publish the app, preview the voice of tts, it raise an er… (#21821)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-07-03 10:53:14 +08:00
a45aa1e505 feat(variables): auto replace spaces with underscores in variable name inputs (#21843) 2025-07-03 10:36:38 +08:00
cb0d4a1e15 style(config-var): update styling classes to use design system tokens (#21846) 2025-07-03 10:00:44 +08:00
21e68b9cf1 fix: nodeExtraData might be undefined (#21856) 2025-07-03 09:59:19 +08:00
a3654c8fe9 fix(web): adjust HTTP node method and input layout (#21834) (#21855) 2025-07-03 09:26:38 +08:00
980b0188d2 feat(tests): add structured output parser tests for LLM responses (#21838) 2025-07-03 09:10:04 +08:00
daab648c78 fix: plugin daemon start fail (#21841) 2025-07-03 09:09:02 +08:00
e17b33e004 chore: add message status enum (#21825)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-07-02 21:22:28 +08:00
4e7c9dd2ae feat: Retain llm setting for agent node (#21842) 2025-07-02 20:28:25 +08:00
5487463385 fix: add list contents handling in structured LLM output (#21837) 2025-07-02 19:14:21 +08:00
68f41bbaa8 Fix/workflow use nodes hooks (#21822) 2025-07-02 17:48:23 +08:00
3bfa9767c0 Chore/workflow last run (#21823)
Co-authored-by: Joel <iamjoel007@gmail.com>
2025-07-02 17:48:07 +08:00
8978b9d38b chore(version): Bump plugin daemon version to 0.1.3 (#21835)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-07-02 17:46:18 +08:00
cc89d7b1a5 remove unused config CURRENT_VERSION (#21832)
As the API module's version code was refactored into the pyproject.toml file in "refactor: define the Dify project version in pyproject.toml" (#20910), the deprecated CURRENT_VERSION is no longer used and should be removed.
2025-07-02 17:22:22 +08:00
bb955806e0 chore(version): bump to 1.5.1 (#21808)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-07-02 16:17:40 +08:00
0c39490bb1 chore: put new run var to the top (#21816) 2025-07-02 15:56:22 +08:00
826bf25abf Fix: prevent SQL errors when metadata filter Constant value is None or blank (#21803) 2025-07-02 14:43:01 +08:00
89250a36b7 fix(api): files not returned in the answer node (#21807) 2025-07-02 13:54:10 +08:00
c2e599cd85 fix(api): Fix resetting sys var causing internal server error (#21604)
Also sorts draft variables by their creation time, ensuring a consistent order.
2025-07-02 13:36:35 +08:00
71d6cf1b1d fix: Make the latency and logs of web applications consistent. (#21578)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-07-02 12:04:33 +08:00
86179beaa5 FIX: dollar-sign escaping in preprocessLaTeX code‐block handling (#21796)
Co-authored-by: LinYing <linying@momenta.ai>
2025-07-02 11:32:23 +08:00
f53b177e1f chore: new inspected variable add to top position instead of bottom (#21793) 2025-07-02 11:07:43 +08:00
58dfe2ca03 fix: when config plugin endpoint choose no start form app cause page crashed (#21789) 2025-07-02 10:55:38 +08:00
c4b960cc1a Improve Langfuse trace readability (#21777) 2025-07-02 09:15:24 +08:00
70035aa9a9 fix: notion knowledge datasets can't add new page (#21779) 2025-07-02 09:14:48 +08:00
a82943a83d minor fix: add parameters in error msg of Plugin service returned no options (#21662) 2025-07-01 22:58:59 +08:00
9a4c1fe834 fix: if parameter is not required, continue (#21761)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-07-01 21:25:45 +08:00
b9ff716c18 fix: incorrect api module version in pyproject.toml (#21755)
Co-authored-by: crazywoola <427733928@qq.com>
2025-07-01 17:12:52 +08:00
8516d15a4e fix: handle configure button for notion internal integration (#21412) 2025-07-01 16:58:00 +08:00
4198a533ad fix: Code Interpreter error handling not working (#21736) 2025-07-01 16:16:34 +08:00
a67441689a chore: upgrade package versions for security reason (#21751) 2025-07-01 16:02:04 +08:00
5c11c22302 fix: can not reset system variables (#21750) 2025-07-01 16:00:17 +08:00
1a7ad195f0 refactor: define the Dify project version in pyproject.toml (#20910) 2025-07-01 12:07:24 +08:00
cf2173644e Release db.session connection before workflow new thread long time operation (#21726)
Co-authored-by: 李强04 <liqiang04@gaotu.cn>
2025-07-01 12:05:29 +08:00
1b99e44e99 feat: Retain LLM Configuration Settings When Changing Model (#21247) 2025-07-01 11:32:46 +08:00
b8b9c3a783 fix: set the func.coalesce() second parameter default value #21239 (#21240)
Signed-off-by: YoungLH <974840768@qq.com>
2025-07-01 11:31:14 +08:00
25de39d9c6 Feat: sync input variable names to main() function (#21667) 2025-07-01 10:57:07 +08:00
2455135eaa chore: translate i18n files (#21732)
Co-authored-by: douxc <7553076+douxc@users.noreply.github.com>
2025-07-01 10:53:59 +08:00
dffbdd140c fix: user cannot select 'Customer Service & Operations' category (#21733) 2025-07-01 10:48:11 +08:00
69b6f6f5d2 Fixes issue 21157/20661 extra quote in agent node (#21674)
Co-authored-by: Wang Han <wanghan@zhejianglab.org>
2025-07-01 10:43:46 +08:00
6013d90426 Fix/several bugs fixed in enterprise (#21729) 2025-07-01 10:42:11 +08:00
9ded6f6a40 [fix] #21678 User input of remote file link on the run page form causes conversation/message interface error (#21683)
Co-authored-by: 李强04 <liqiang04@gaotu.cn>
2025-07-01 10:40:39 +08:00
7c76458b18 fix: fix node validity detection (#21709) 2025-07-01 10:32:00 +08:00
eb9edf4908 fix: copy inspect variable value get extra quotes (#21680) 2025-06-30 22:14:29 +08:00
55a6b330ec Add get document detail service api (#21700)
Co-authored-by: lizb <lizb@sugon.com>
2025-06-30 22:13:56 +08:00
96d27d7087 fix: enter and exit full canvas cause nav items missing (#21691) 2025-06-30 22:00:35 +08:00
9588a64487 fix: keep search params in web app url when needs authorize (#21717) 2025-06-30 18:28:31 +08:00
18757d07c9 fix: #21427 correct segment settings when creating documents via API (#21673) 2025-06-29 14:49:32 +08:00
0f23e3d9ab fix(ui): no hover effect in copy button of code node (#21671) 2025-06-29 14:49:10 +08:00
765adabb32 feat: Add autofill with default value in endpoint plugin setting page. (#21669) 2025-06-29 13:09:21 +08:00
37e19de7ab feat(inner-api/workspace): include tenant details in CreateWorkspace response (#21636) 2025-06-27 18:28:03 +08:00
28f5c37211 Add Env 'CELERY_SENTINEL_PASSWORD' for celery connect redis sentinel. (#21198) 2025-06-27 17:37:11 +08:00
38a704743c chore: Add missing svg icon sources (#21627) 2025-06-27 16:56:22 +08:00
87efe45240 feat(plugin): Add API endpoint for invoking LLM with structured output (#21624) 2025-06-27 15:57:44 +08:00
7236c4895b fix: annotation remove functionality Fixes #21448 (#21616)
Co-authored-by: siqiuchen <siqiu.chen2@dentsplysirona.com>
2025-06-27 15:45:24 +08:00
0cb00d5fd2 refactor: move structured output support outside LLM Node (#21565)
Co-authored-by: Novice <novice12185727@gmail.com>
2025-06-27 14:55:31 +08:00
cdb9eecbaf fix: Resolving conflicts caused by tablestore dependency on enum34 (#21605)
Co-authored-by: xiaozhiqing.xzq <xiaozhiqing.xzq@alibaba-inc.com>
2025-06-27 14:17:52 +08:00
ae8653beb0 style: decrease navbar z-index value from 30 to 15, fix style error (#21612) 2025-06-27 13:22:29 +08:00
787ad5ab38 feat: Refactor panel component, add adaptive width observer to optimize panel width management (#21576) 2025-06-27 10:50:33 +08:00
81fc49d78c fix: value_selector will be empty string (#21598) 2025-06-27 10:38:22 +08:00
7d9d670f90 feat: add tag when tag input loses focus (#21555) 2025-06-27 10:36:01 +08:00
fd41645f95 feat: Add display control logic for the variable inspection panel (#21539) 2025-06-27 10:22:39 +08:00
a06af88b26 Feat/api validate model provider (#21582)
Co-authored-by: crazywoola <427733928@qq.com>
2025-06-27 09:59:44 +08:00
33f0457a23 fix: wrong token number when using qa_model and answer is updated. (#21574) 2025-06-27 09:11:41 +08:00
cea6522122 feat: add DYNAMIC_SELECT parameter type for dynamic options in parameter entities (#21425) 2025-06-26 17:44:14 +08:00
ae00ba44db fix: fix create custom modal overlay add tool (#21553) 2025-06-26 16:45:45 +08:00
79fa3c7519 fix(web): optimize the pop logic of the tool selector (#21558) (#21559) 2025-06-26 16:45:01 +08:00
d2814650e6 feat: prevent input of non-numeric values in number input (#21562) 2025-06-26 16:43:26 +08:00
17722f581b feat: add tooltip to workflow run node name (#21564) 2025-06-26 16:42:48 +08:00
ad9eebd02d fix: update retrieval method cache (#21409) 2025-06-26 10:58:21 +08:00
cefb8e4218 chore: Simplify code logic (#21496)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-06-26 10:09:52 +08:00
90aba77471 chore: remove unused code (#21497)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-06-26 10:08:17 +08:00
785d4b3de7 feat: refactor: test_dataset unit tests #21499 (#21502) 2025-06-26 10:07:54 +08:00
6bb82f8ee0 Fix minor comment missing (#21517) 2025-06-26 10:06:49 +08:00
45dc0a43d3 fix: prompt editor insert context (#21526) 2025-06-26 09:58:58 +08:00
1610f62a28 fix: var inspect doc link error (#21515) 2025-06-25 20:21:51 +08:00
d454f09e13 feat: add a magic field in the cancel invite api response (#21505) 2025-06-25 18:37:56 +08:00
00f0b569cc Feat/kb index (#20868)
Co-authored-by: twwu <twwu@dify.ai>
2025-06-25 17:52:59 +08:00
3acaa59885 fix(update_provider_when_message_created): Fix db transaction (#21503)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-25 17:23:45 +08:00
2d5cdbe79c chore(version): Bump to 1.5.0 (#21415)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-25 16:54:44 +08:00
9ad3553d7f fix: app description too long breaks ui (#21491) 2025-06-25 16:41:55 +08:00
8f15341f1e fix(event_handlers): DB dead lock (#21468)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-25 16:20:37 +08:00
31b0cff709 chore: update cover image (#21489) 2025-06-25 16:06:21 +08:00
1dd2607dfd feat(oauth): refactor proxy context (#21483) 2025-06-25 15:10:45 +08:00
164e5481c5 feat(oauth): plugin oauth service (#21480) 2025-06-25 14:14:30 +08:00
a098825fcc Update smtp.py (#21335) 2025-06-25 13:50:35 +08:00
0f5417ab84 feat(web): Contains sys.files in the default template. (#21476) 2025-06-25 13:06:07 +08:00
268da31332 fix(api): adding variable to variable pool recursively while loading draft variables. (#21478)
This PR fixes the issue that `ObjectSegment` values are not recursively added to the draft variable pool while loading draft variables from the database. It also fixes an issue with loading variables whose selector has more than two elements.

Enhances #19735.
Closes #21477.
2025-06-25 12:39:22 +08:00
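
A hedged sketch of the recursive behavior described above, using a plain dict as a stand-in for the draft variable pool (all names are illustrative, not Dify's API): nested members of an object value receive their own pool entries, and selectors can be longer than two elements:

```python
def add_segment(pool: dict, selector: tuple, value) -> None:
    pool[selector] = value
    if isinstance(value, dict):  # stand-in for an ObjectSegment
        for key, child in value.items():
            # Selectors may grow beyond two elements, e.g.
            # ("node_id", "result", "nested_key")
            add_segment(pool, selector + (key,), child)

pool: dict = {}
add_segment(pool, ("node_1", "result"), {"a": 1, "b": {"c": 2}})
assert pool[("node_1", "result", "b", "c")] == 2
```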
94f8e48647 Refactor update dataset (fix #21401) (#21402)
Co-authored-by: Jyong <76649700+JohnJyong@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-06-25 11:44:35 +08:00
819c02f1f5 fix: prompt editor click to insert variable (#21470) 2025-06-25 11:34:58 +08:00
449e16782e fix: update DocumentList to include max height and overflow styles (#21466) 2025-06-25 11:17:31 +08:00
257809bb30 fix: in plugin config panel app selector cannot list all apps (#21457) 2025-06-25 11:14:46 +08:00
8a20134a5b fix: inputs.items is undefined (#21463) 2025-06-25 11:02:27 +08:00
27172b0898 fix (conf/code): Variables not correctly filled can still be referenced (#21451)
Co-authored-by: crazywoola <427733928@qq.com>
2025-06-25 10:27:36 +08:00
5b33d086c6 chore: translate i18n files (#21459)
Co-authored-by: laipz8200 <16485841+laipz8200@users.noreply.github.com>
2025-06-25 10:26:21 +08:00
c7ee0f2a93 fix(knowledge_base): Unchecked metadata name length (#21454)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-25 10:18:20 +08:00
8ea27bc341 feat: persist debug-and-preview panel width in localstorage (#21434) 2025-06-24 20:34:11 +08:00
c8ebad055c fix: text generation app log not show (#21436) 2025-06-24 20:33:10 +08:00
9de552cb42 fix: first message query error (#21444)
Co-authored-by: 刘江波 <jiangbo721@163.com>
2025-06-24 20:32:50 +08:00
501d3b6203 feat(api): Explicitly define version method for all BaseNode subclasses (#21443)
This PR addresses issue #21441 by implementing explicit `version` method definitions for all `BaseNode` subclasses to improve code maintainability.

### Changes

Added explicit `version` method definitions for all `BaseNode` subclasses:

- `QuestionClassifierNode`
- `KnowledgeRetrievalNode` 
- `AgentNode`

Added comprehensive test suite to validate:

1. All subclasses of `BaseNode` have an explicitly defined `version` method
2. All subclasses have the required `_node_type` property
3. The `(node_type, node_version)` combination is unique across all subclasses
2025-06-24 20:27:22 +08:00
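
A sketch of how such a uniqueness test might look; the import path and attribute access are assumptions, and only the `(node_type, version)` idea comes from the PR text:

```python
from core.workflow.nodes.base import BaseNode  # hypothetical import path

def iter_subclasses(cls):
    # Walk the inheritance tree depth-first.
    for sub in cls.__subclasses__():
        yield sub
        yield from iter_subclasses(sub)

def test_node_type_and_version_are_unique() -> None:
    seen = set()
    for node_cls in iter_subclasses(BaseNode):
        # vars() only sees names defined on the class itself, so an
        # inherited version() does not count as "explicitly defined".
        assert "version" in vars(node_cls), f"{node_cls.__name__} lacks its own version()"
        key = (node_cls._node_type, node_cls.version())
        assert key not in seen, f"duplicate (node_type, version): {key}"
        seen.add(key)
```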
f668d89621 fix: last run not exporting var and ts problem (#21424) 2025-06-24 16:48:33 +08:00
d60287621a add dataset info in response (#21413) 2025-06-24 16:07:31 +08:00
197c018c1f fix missing style of conversation variable panel default value textarea (#21422) 2025-06-24 15:58:39 +08:00
45146edb31 fix(document_extractor): xlsx file column int type error (#21408) 2025-06-24 13:42:13 +08:00
973b3854b4 add dataset info in response (#21406) 2025-06-24 11:13:56 +08:00
2467bd738a Improve App Icon Picker: Stable Modal Height & Collapsible Emoji Style Section (#21399) 2025-06-24 11:13:28 +08:00
27baf383dd feat: make RESPECT_XFORWARD_HEADERS_ENABLED configurable in Compose (#21248) 2025-06-24 10:00:51 +08:00
537b3b2b4a Fixes #21366: Add link to the one-click Dify deployment solution on Alibaba Cloud DMS (#21385) 2025-06-24 09:40:58 +08:00
e0de3b97ab Fixes #21357 Correct the wrong link destination (#quick-start) (#21373) 2025-06-24 09:28:12 +08:00
4a59360ad3 chore: translate i18n files (#21395)
Co-authored-by: QuantumGhost <2939865+QuantumGhost@users.noreply.github.com>
2025-06-24 09:25:02 +08:00
1a1bfd4048 feat: last run frontend (#21369)
The frontend part of "feat: Persist Variables for Enhanced Debugging Workflow" (#20699).

Co-authored-by: jZonG <jzongcode@gmail.com>
2025-06-24 09:10:30 +08:00
10b738a296 feat: Persist Variables for Enhanced Debugging Workflow (#20699)
This pull request introduces a feature aimed at improving the debugging experience during workflow editing. With the addition of variable persistence, the system will automatically retain the output variables from previously executed nodes. These persisted variables can then be reused when debugging subsequent nodes, eliminating the need for repetitive manual input.

By streamlining this aspect of the workflow, the feature minimizes user errors and significantly reduces debugging effort, offering a smoother and more efficient experience.

Key highlights of this change:

- Automatic persistence of output variables for executed nodes.
- Reuse of persisted variables to simplify input steps for nodes requiring them (e.g., `code`, `template`, `variable_assigner`).
- Enhanced debugging experience with reduced friction.

Closes #19735.
2025-06-24 09:05:29 +08:00
3113350e51 fix(migrate/tools): Correct parameter name in tool_builtin_providers migration function (#21358) 2025-06-23 14:56:34 +08:00
3537088135 Fixes #21351 (#21354) 2025-06-23 14:01:51 +08:00
43f5b21852 feat: add config for max-tree-depth (#21291) 2025-06-23 13:55:57 +08:00
e56d7547f7 fix: web error (#21340) 2025-06-23 13:54:52 +08:00
ea68c92bbb fix segment display the default (#21317)
Co-authored-by: lizb <lizb@sugon.com>
2025-06-23 13:48:39 +08:00
b348455144 fix: Update segmentation type initialization to support parent-child mode based on document form (#21359) 2025-06-23 13:45:35 +08:00
f2f6e95880 fix: Update the logic for segment type settings to support parent-child mode selection. (#21278) 2025-06-23 10:36:30 +08:00
725a221315 fix: hide marketplace information (#21275) (#21276)
Co-authored-by: Xiaoba Yu <xb1823725853@gmail.com>
2025-06-23 10:23:43 +08:00
a0a89b562c Feature:Refactor batch update document status for #21324 (#21325) 2025-06-23 09:49:13 +08:00
3e7f8bad56 fix: markdown_extractor lost chunks if it starts without a header (#21308) (#21309) 2025-06-21 23:10:00 +08:00
d333aac84a refactor: env check with function (#21310) 2025-06-21 18:33:13 +08:00
3fefb34d44 fix: wrong translation (#21311) 2025-06-21 11:51:52 +08:00
870e73c03b Knowledge base API supports status updates #18147 (#18235) 2025-06-21 11:18:48 +08:00
ef20f694b2 Fixes #21203 Add a rapid deployment solution for deploying to Alibaba Cloud throug… (#21206)
Co-authored-by: herui.gh <>
2025-06-20 20:10:45 +08:00
57f7368a0e fix notion dataset rule not found (#21236) 2025-06-20 20:05:01 +08:00
fa70033438 style(dark): Adjust chat bubble background color (langgenius#20990) (#21264) 2025-06-20 20:04:39 +08:00
77be115f09 critical! insert_explore_app_list_api (#21277) 2025-06-20 20:03:30 +08:00
3f9ced5374 Revert "feat: conversation variable support file array" (#21273) 2025-06-20 19:57:28 +08:00
ba5eebf3a2 feat(mermaid): Rearchitect component for robustness, security, and theming (#21281) 2025-06-20 19:55:58 +08:00
186ac74c1f fix: update auto translations for days of the week (#21279) (#21287)
Co-authored-by: Xiaoba Yu <xb1823725853@gmail.com>
2025-06-20 19:54:43 +08:00
223448af18 Feat: support selecting model in auto generator (#21208) 2025-06-20 13:42:42 +08:00
d34795fc08 bug: fix minor exception msg missing (#21255) 2025-06-20 09:23:41 +08:00
40e8ad419b fix: not permitted schema of markdown link cause page crash (#21258) 2025-06-20 09:23:30 +08:00
6b1ad634f1 fix(workflow_run): sequence_number race. (#21228)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-19 17:53:49 +08:00
8f64327d57 feat: use default access mode when importing dsl (#21231) 2025-06-19 17:14:28 +08:00
1d41b642ca chore: remove redundant code (#21211) 2025-06-19 15:47:08 +08:00
cff039d123 fix: workflow import sync env and conversation variables (#21215) 2025-06-19 15:44:42 +08:00
0cfdb8c043 fix: fix load_balancing_config save error (#21213) 2025-06-19 15:41:36 +08:00
db20f9bb71 Fix/dataset page may redirect to apps if it still getting user info and workspace info (#21221) 2025-06-19 15:40:15 +08:00
2020a31785 fix(plugin/migrations) refactor data migration to use specific provider ID classes. (#21187) 2025-06-19 13:02:39 +08:00
2c04a16eaa Revert "bug: fix sequence number may be duplicated when multi-threads running the same workflow #21047" (#21207) 2025-06-19 12:05:44 +08:00
6325129761 fix wrongly removed reset nodes (#20880)
Co-authored-by: zhuqingchao <zhuqingchao@xiaomi.com>
2025-06-19 11:37:07 +08:00
9a18a98b58 fix keyword search top-k not initial (#21202) 2025-06-19 11:10:41 +08:00
7b9e01aa07 Feat/support sendgrid (#21011)
Co-authored-by: André de Matteo <andre.matteo@accenture.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2025-06-19 10:27:38 +08:00
2bb19f85c6 feat: conversation variable support file array (#21174)
Co-authored-by: kino.lu <kino.lu@vipshop.com>
2025-06-19 10:26:38 +08:00
17fe62cf91 feat: add support for Matrixone database (#20714) 2025-06-19 10:20:12 +08:00
e99861d4fe Add Filter of Get Workflow Logs (#21172)
Co-authored-by: lizb <lizb@sugon.com>
2025-06-19 10:10:16 +08:00
a205ee16b9 feat: improve the organize node operation (#21183) 2025-06-19 10:05:33 +08:00
9835730278 Translation fix (#21194) 2025-06-19 09:36:56 +08:00
8331b63baa add missing func args in apps chat. (#21085)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-06-18 20:42:33 +08:00
2df4699312 fix(echarts): Resolve interaction issues on charts with timelines (#21185) 2025-06-18 20:22:33 +08:00
2eae7503e1 Minor Improvements for File Validation and Configuration Handling #21179 (#21171)
Co-authored-by: tech <cto@sb>
2025-06-18 18:33:28 +08:00
dbae5b0564 fix: workflow shortcuts (#21164) 2025-06-18 16:31:41 +08:00
30cfc9c172 Feat/plugin install scope management (#19963) 2025-06-18 16:25:00 +08:00
420a34a546 Fix: web app auth maybe failed (#21166) 2025-06-18 16:15:41 +08:00
918bb9a2f7 bug: fix sequence number may be duplicated when multi-threads running the same workflow #21047 (#21153) 2025-06-18 16:10:11 +08:00
99acdcdef7 chore: translate i18n files (#21163)
Co-authored-by: douxc <7553076+douxc@users.noreply.github.com>
2025-06-18 16:03:41 +08:00
59fdfc3728 fix: remove redundant PG_USER (#21162) 2025-06-18 16:03:32 +08:00
614c5e087e Feat: add check before install plugin (#20014) 2025-06-18 15:51:23 +08:00
83719cab73 fix: add environment variable POSTGRES_USER (#20786) 2025-06-18 14:44:42 +08:00
9e73e8b9e8 feat: add search endpoint for Firecrawl Integration (#20521)
Co-authored-by: crazywoola <427733928@qq.com>
2025-06-18 14:37:03 +08:00
d4be356ffb fix(api): add support for "image" icon when duplicate app (#20744) (#20761) 2025-06-18 14:35:42 +08:00
47e0f92c0f Fixes #20748 KnowledgeRetrievalNode returns all external documents when reranker disabled even if top-k configured (#20762) 2025-06-18 14:35:12 +08:00
6d033d4064 clean up duplicate validation for dataset_configs (#20775)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-06-18 14:34:58 +08:00
ab290ed968 remove unreachable code for lb model fetch. (#20797)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-06-18 14:33:49 +08:00
879f839d75 refactor(graph_engine): Merge duplicated if block (#20784)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-18 14:33:29 +08:00
15800c6108 feat: Embedded chat window supports userVariables configuration. (#20983) 2025-06-18 14:27:02 +08:00
787a556bd7 add service api ratelimit check (#20878) 2025-06-18 14:05:28 +08:00
ea44b895e2 chore: cancel enforcing uppercase of the text of plugin navigation button on the header bar (#20890) 2025-06-18 14:02:45 +08:00
37f26c412f add healthcheck to oceanbase container (#20989) 2025-06-18 14:00:59 +08:00
1da8027445 feat: Support dropping a DSL file into the browser to create an app (#20706)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-06-18 13:58:57 +08:00
ce3e2e5eb8 Set a default value for the PLUGIN_S3_USE_AWS environment variable in the dify-plugin-daemon. (#21152) 2025-06-18 12:29:14 +08:00
b69f952e3e fix(web): number type prompt variable required validation not effective (#20898) 2025-06-18 11:33:10 +08:00
0784c6295d fix Multiple <think>\n Interface rendering exception (#20977) 2025-06-18 11:31:04 +08:00
45c89bd6de feat: add pagination to notion extractor (#20919) 2025-06-18 11:30:55 +08:00
8ac3bd1768 feat: Add support for hidden attributes to form item types (#20956) 2025-06-18 11:30:30 +08:00
945d1569ee fix(web): fix unique key issue (#20809) (#20810) 2025-06-18 10:04:18 +08:00
61526c027d [Bug] fix misusing ACCESS_TOKEN_EXPIRE_MINUTES in jwt on exp (#21030)
Co-authored-by: tech <cto@sb>
2025-06-18 09:37:49 +08:00
4689e8953e fix: shorten connection timeout to pypi.org for deprecation check for weaviate client (#21131) 2025-06-18 09:25:52 +08:00
4e701b1efd fix(web): enhance API test page experience by adding loading state for test button (#21091) 2025-06-18 09:24:41 +08:00
21c2de2d7e fix(code-editor): optimize the loading style of the CodeEditor component in dark mode (#21116) (#21120) 2025-06-17 17:49:44 +08:00
72a6cde828 chore: translate i18n files (#21053)
Co-authored-by: zxhlyh <16177003+zxhlyh@users.noreply.github.com>
2025-06-17 17:49:01 +08:00
0476937f55 fix(agent_node):Fix spelling errors. (#21094) 2025-06-17 17:48:43 +08:00
fc187d6998 chore: responsive header (#21115) 2025-06-17 17:37:06 +08:00
0dcacdf83d feat: add a flask_context_manager. (#21061)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-17 16:31:29 +08:00
7a2a8a2ffd chore: check input variable key of code/template node is valid (#21057) 2025-06-17 16:27:51 +08:00
2ddc3658a0 fix: remove the x overflow scroll bar of monitoring page (#21059) 2025-06-17 16:26:56 +08:00
6a5236b200 chore: cleanup wrong and unused doc links in i18n translations by applying docLink method usage (#21112) 2025-06-17 16:14:53 +08:00
6c0a91a64f fix: some dark theme display incorrect (#21055) 2025-06-17 16:11:57 +08:00
df6451076b fix: prevent nodes from being unintentionally deleted by pressing the backspace key. (#21023) 2025-06-17 16:11:30 +08:00
19339c3de1 fix: doc error (#21108) 2025-06-17 15:56:45 +08:00
c5acffb034 fix: page loop in datasets and apps if current user is dataset_operator (#21114) 2025-06-17 15:48:58 +08:00
110bb5e5a5 fix: auto redirect to login page if web app needs login (#21096) 2025-06-17 11:33:46 +08:00
d7663159e9 Fix/webapp loop login (#21092) 2025-06-17 10:45:03 +08:00
2440ac43b1 fix: Replace GenericProviderID with ToolProviderID (#21064)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-16 17:35:46 +08:00
41cb39eb5d fix: redirect to apps page if current user has no permission to visit dataset page (#21065) 2025-06-16 16:39:05 +08:00
809a0ab6bf chore: bump version to 1.4.3 (#21045)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-16 15:29:53 +08:00
51b63b2398 chore: rename workflow blocks (#21052) 2025-06-16 14:55:32 +08:00
59b89b9971 fix: update documentation links for various components to support localization (#21048) 2025-06-16 14:13:04 +08:00
ecd8f32cce Feat/add rag dev deploy (#21049)
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
2025-06-16 14:07:11 +08:00
909259da37 fix: delete some dead code using vulture (#20999)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-06-16 12:07:41 +08:00
366ddb05ae test: run vdb test of oceanbase with docker compose in CI tests (#20945) 2025-06-16 11:05:19 +08:00
d587480a3e fix(web): optimize conversation-panel Modal width adjustment logic (#21018) 2025-06-15 09:22:10 +02:00
765189d4f5 fix: correct description for edu coupon (#21020) 2025-06-15 09:21:28 +02:00
f6aa2498a3 document indexing not bound to a Session (#21015)
Co-authored-by: xuhaixing <xuhaixing@itiger.com>
2025-06-14 17:44:35 +02:00
f6641c0f41 docs: conv and user_id (#21004) 2025-06-13 15:07:30 +02:00
f4df759ba6 refactor: generalize method for getting doc link respecting locale and fix error link paths (#20801) 2025-06-13 10:58:43 +02:00
3a628bc671 chore: app info add author_name (#20973) 2025-06-13 10:17:35 +02:00
175571e740 fix(auth): Clear login rate limit after password reset (#20948) 2025-06-13 10:17:12 +02:00
8cb3ed5cc2 feat: add S3_USE_AWS env var to explicitly distinguish AWS S3 usage in plugin-daemon (#20923) 2025-06-13 15:05:55 +08:00
c05e47ebc0 refactor(sqlalchemy_workflow_execution_repository): Use the max function for getting next_sequence_number. (#20966) 2025-06-13 09:42:02 +08:00
b2ac11bc47 fix: markdown button can't send message (#20933) 2025-06-12 08:18:15 +02:00
af83120832 🐛 Fix(Gemini LLM): Support Gemini 0.2.x plugin on agent app (#20794)
Co-authored-by: QuantumGhost <obelisk.reg+git@gmail.com>
2025-06-12 00:49:38 +08:00
1e03c97663 fix(llm_node): missing parameters for structure outputs (#20915)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-11 18:56:07 +08:00
41e3ecc837 fix remote ip header CF-Connecting-IP (#20846) 2025-06-11 16:57:24 +08:00
acb2488fc8 chore(package): Bump version to 1.4.2 (#20897)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-11 16:28:36 +08:00
d6d8cca053 refactor: replace compact response generation with length-prefixed response for backwards invocation api (#20903) 2025-06-11 16:01:50 +08:00
f601093ccc fix: only enterprise version requests app access mode (#20785) 2025-06-11 15:38:51 +08:00
0f3d4d0b6e chore: bump mypy to 1.16 (#20608) 2025-06-11 01:01:33 +08:00
60777bc610 chore: update plugin publish link text (#20873) 2025-06-10 17:34:26 +08:00
21a50e22d2 fix auto metadata filter (#20845) 2025-06-10 10:46:17 +02:00
fc6e2d14a5 fix(web): optimize prompt change logic for LLM nodes (#20841) (#20865) 2025-06-10 09:04:10 +02:00
c439e82038 refactor(api): Decouple ParameterExtractorNode from LLMNode (#20843)
- Extract methods used by `ParameterExtractorNode` from `LLMNode` into a separate file.
- Convert `ParameterExtractorNode` into a subclass of `BaseNode`.
- Refactor code referencing the extracted methods to ensure functionality and clarity.
- Fix the issue that `ParameterExtractorNode` returns an error when executed.
- Fix relevant test cases.

Closes #20840.
2025-06-10 11:47:50 +08:00
a97ff587d2 fix(api): Resolve error encountered when executing QuestionClassifierNode (#20829)
The `QuestionClassifierNode` class extends `LLMNode`, meaning that, per the Liskov Substitution Principle, `QuestionClassifierNodeData` **SHOULD** be compatible in contexts where `LLMNodeData` is expected.

However, the absence of the `structured_output_enabled` attribute violates this principle, causing `QuestionClassifierNode` to fail during execution.

This commit implements a quick and temporary workaround. A proper resolution would involve refactoring to decouple `QuestionClassifierNode` from `LLMNode` to address the underlying design issue.

Fixes #20725.
2025-06-10 00:34:51 +08:00
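
A toy illustration of the substitutability gap this commit describes; the classes are simplified stand-ins, and only the `structured_output_enabled` name comes from the commit text:

```python
class LLMNodeData:
    structured_output_enabled: bool = False  # field the parent node relies on

class QuestionClassifierNodeData(LLMNodeData):
    # Without this inherited (or explicitly defaulted) attribute, any code
    # written against LLMNodeData, such as the check in run() below, raises
    # AttributeError when handed a QuestionClassifierNodeData instance;
    # that is the Liskov violation the commit works around.
    pass

def run(node_data: LLMNodeData) -> None:
    if node_data.structured_output_enabled:
        ...

run(QuestionClassifierNodeData())  # safe once the attribute has a default
```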
91144207e0 refactor(DSL imports): using organization/name/version to fetch DSL dependencies. (#20757) 2025-06-09 19:05:29 +08:00
0720bc7408 Feat/webapp verified sso main (#20494) 2025-06-09 17:19:53 +09:00
ab62a9662c fix: some dark mode display incorrect (#20788) 2025-06-09 16:09:27 +08:00
d6a8af03b4 Fix/add webapp no permission page (#20819) 2025-06-09 15:44:49 +08:00
65c7c01d90 fix: clean up two unreachable code paths (#20773)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-06-07 23:06:46 +08:00
e6e76852d5 Add support for W&B dedicated cloud instances in Weave tracing integration (#20765)
Co-authored-by: crazywoola <427733928@qq.com>
2025-06-07 23:06:23 +08:00
930c4cb609 feat(api): Adjust WorkflowDraftVariable and WorkflowNodeExecutionModel (#20746)
- Add `node_execution_id` column to `WorkflowDraftVariable`, allowing efficient implementation of 
  the "Reset to last run value" feature.
- Add additional index for `WorkflowNodeExecutionModel` to improve the performance of last run lookup.

Closes #20745.
2025-06-06 21:03:59 +08:00
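
A hedged Alembic-style sketch of the two schema changes this commit describes; the table and column names follow the commit text, while the index's column list is a guess:

```python
import sqlalchemy as sa
from alembic import op

def upgrade() -> None:
    # New column linking a draft variable to the node execution that
    # produced it, enabling "Reset to last run value".
    op.add_column(
        "workflow_draft_variables",
        sa.Column("node_execution_id", sa.String(36), nullable=True),
    )
    # Extra index to make the "last run" lookup cheap; the exact
    # columns here are illustrative.
    op.create_index(
        "idx_workflow_node_executions_last_run",
        "workflow_node_executions",
        ["tenant_id", "app_id", "node_id", "created_at"],
    )
```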
0c8447fd0e fix: missing bot name in orchestrate (#20747) 2025-06-06 16:44:36 +08:00
37c3283450 fix: opensearch vector search falls back to keyword search (#20723)
Co-authored-by: wenjun.gu <wenjun.gu@envision-energy.com>
2025-06-06 16:29:15 +08:00
723b69cf8d chore: chart panel ui enhance (#20743) 2025-06-06 16:15:37 +08:00
85859b6723 feat: add browser list (#20717) 2025-06-06 10:53:57 +08:00
c1a13fa553 chore: replace pseudo-random generators with secrets module (#20616) 2025-06-06 10:48:28 +08:00
4f0c9fdf2b chore: remove repeat public api and service api panel (#20715) 2025-06-06 10:44:21 +08:00
4271602cfc fix: opensearch metadata filtering returns empty (#20701)
Co-authored-by: wenjun.gu <wenjun.gu@envision-energy.com>
Co-authored-by: crazywoola <427733928@qq.com>
2025-06-06 09:10:01 +08:00
4f14d7c0ca chore: bump uv to 0.7.x (#20692) 2025-06-06 09:09:31 +08:00
38554c5f3e fix(inner_api/plugin/wraps): refresh user model after creation in get user function (#20704) 2025-06-05 23:36:33 +08:00
138ad6e8b3 fix: opensearch fulltext search with metadata filtering dsl error (#20702)
Co-authored-by: wenjun.gu <wenjun.gu@envision-energy.com>
2025-06-05 23:09:00 +08:00
f76f70f0b6 Fix builtin_providers for tools. (#20697)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-06-05 23:05:50 +08:00
7094680e23 feat: reorder app types (#20685) 2025-06-05 17:02:26 +08:00
59dc7c880e Fix: style of radio checked (#20681) 2025-06-05 15:47:42 +08:00
3fb9b41fe5 A more concise and effective extractor for excel and csv files (#20625)
Co-authored-by: haiyangpengai <xxxx>
2025-06-05 14:59:55 +08:00
0ccf8cb23e fix: agent moderation not working (#20673) 2025-06-05 14:56:41 +08:00
837f769960 fix: update text_to_audio method to send data as JSON (#20663) 2025-06-05 14:33:24 +08:00
3367d4258d chore: translate i18n files (#20664)
Co-authored-by: douxc <7553076+douxc@users.noreply.github.com>
2025-06-05 13:35:40 +08:00
d608be6e7f Add vscode debugger (#20668) 2025-06-05 13:35:32 +08:00
de9c7f2ea4 Update template.zh.mdx-fix document update metadata body param (#20659) 2025-06-05 12:11:11 +08:00
1fbbbb735d fix: the locale format (#20662) (#20665)
Co-authored-by: Xiaoba Yu <xb1823725853@gmail.com>
2025-06-05 11:07:54 +08:00
9915a70d7f Fix/webapp access scope (#20109) 2025-06-05 10:55:17 +08:00
822298f69d Fix 500 error (#20614) 2025-06-05 10:29:13 +08:00
ad2f25875e fix(llm_node): update file variable mapping to use vision configs (#20417) 2025-06-05 09:58:24 +08:00
ad8e79c440 assign dataset indexing_technique to args if not explicitly provided (#20597) 2025-06-05 09:47:57 +08:00
f2dcfc976d feat: allow fill inputs from url params (#20630) 2025-06-05 09:44:41 +08:00
5ccfb1f4ba refactor: Improve model status handling and structured output (#20586)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-04 19:56:54 +08:00
92614765ff Feat/queue monitor (#20647) 2025-06-04 19:56:34 +08:00
4f066454d0 fix(markdown): Ensure abbr: links render correctly in react-markdown v9+ (#20648) 2025-06-04 19:52:12 +08:00
7ae5819c67 feat: plugin storage support volcengine tos (#20613) 2025-06-04 19:46:47 +08:00
2b0f3edcef chore: ensure web code consistency by applying pnpm fix (#20643) 2025-06-04 19:45:29 +08:00
244687c9a7 fix: plugin update red corner mark displayed incorrectly (#20636) 2025-06-04 19:44:47 +08:00
d22c351221 chore: fix some security issues in markdown (#20639) 2025-06-04 15:56:29 +08:00
006496f24e raise error when process_rule is required but missing (#20599) 2025-06-04 14:19:35 +08:00
01d500db14 fix: autocorrect everything in web (#20605)
Signed-off-by: kenwoodjw <blackxin55+@gmail.com>
2025-06-04 14:12:24 +08:00
4ac3600f81 fix: update app tag error (#20618) 2025-06-04 13:55:00 +08:00
6aba223383 fix: adjust sticky header properties in Container component (#20624) 2025-06-04 13:54:30 +08:00
f1c19cda74 fix: unable to upload custom file when the extension is incorrectly inferred from multiple extensions mapped from mime type, using filename extension hints (#20559) 2025-06-04 13:20:57 +08:00
275e86a26c refactor: Removes tenant ID check from rate limit logic (#20585)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-03 18:56:38 +08:00
077d627953 fix: ensure newlines around think tags for proper markdown rendering (#20594) 2025-06-03 18:56:09 +08:00
ca0b268ae5 fix: variable aggregator with group and file raise exception (#20581) 2025-06-03 18:17:34 +08:00
25be7c1ad5 Revert "♻️ refactor(middleware): remove duplicate CSP header assignment" (#20592) 2025-06-03 17:43:48 +08:00
888cd86afd chore: prepare the plugin daemon base url as a yarl URL ahead of time instead of in every invocation (#20541) 2025-06-03 17:01:35 +08:00
157d916154 ♻️ refactor(middleware): remove duplicate CSP header assignment (#20548) 2025-06-03 16:46:57 +08:00
e40e9db39a fixes #19634 (#20545) 2025-06-03 16:38:48 +08:00
36f1b4b222 fix: Ensure model config integrity in retrieval processes (#20576)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-03 16:36:18 +08:00
257bf13fef refactor: Removes unused LLMMode value_of method (#20575)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-03 16:36:10 +08:00
957f5b212e fix: Upgrade Flask-Cors (#20577)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-03 16:35:34 +08:00
72fdafc180 refactor: Replaces direct DB session usage with context managers (#20569)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-06-03 16:16:06 +08:00
db83bfc53a chore: update pnpm version to 10.11.1 (#20573) 2025-06-03 15:21:53 +08:00
744159a079 fix: agent thought replaced by response text (#20571) 2025-06-03 14:54:42 +08:00
d6b30efe2c Fix/dark theme style issues (#20566) 2025-06-03 13:53:24 +08:00
3f7aa38d77 fix: #20560 When elasticsearch is used as the vector database, the Retrieval Test fails to filter the data after setting the Score Threshold, and the score of the recalled results is empty (#20561) 2025-06-03 13:24:26 +08:00
a145c2a8fe fix: ensure proper conversation role alternation for vLLM (#18837) 2025-06-03 12:47:39 +08:00
c29cb503be Fix #20536: Force header in custom tool be string (#20537)
Co-authored-by: Peter Xin <iami@Artemis.local>
2025-06-02 18:09:01 +08:00
8025ad0661 Fixes #20534: Allow $ref in parameter for custom tools (#20535)
Co-authored-by: Peter Xin <iami@Artemis.local>
2025-06-02 18:08:53 +08:00
b4b59148dc check zilliz cloud of full-text search (#20519) 2025-06-02 18:04:13 +08:00
23c9f1b444 fix ts5097 (#20543)
Signed-off-by: kenwoodjw <blackxin55+@gmail.com>
2025-06-02 18:02:50 +08:00
b33f8b47ca nacos config init, and force add ts params. (#20526)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-06-01 10:17:40 +08:00
c26e1929d6 fix(housekeeping): exclude files that are used as app icons or avatar images from being removed (#20532) 2025-05-31 23:27:47 +08:00
e01d975b80 fix: the plugin order is not the same as passed to api in DSL (#20515) 2025-05-30 18:13:00 +08:00
92528360f9 fix: fetch tenant_id in other trace providers besides langfuse (#20495)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 17:15:49 +08:00
1d9c90089c Amend color typo (#20497)
Co-authored-by: Davide Delbianco <davide.delbianco@zucchetti.it>
2025-05-30 15:27:30 +08:00
e303417e04 fix: agent app tool update (#20490) 2025-05-30 14:56:32 +08:00
c8d9f8e2e4 fix: resolve unstable scrolling in workflow debug panel with multiple input fields #19697 (#19698) 2025-05-30 14:54:30 +08:00
51f64797cd Add APIs for Knowledge Base Tag Management and Dataset Binding (#20023)
Co-authored-by: lizb <lizb@sugon.com>
2025-05-30 14:48:00 +08:00
1ea4459d9f update knowledge base api (#20426) 2025-05-30 14:45:30 +08:00
55371e5abf Improve CONVERSATION_TITLE_PROMPT to correctly handle Japanese and input (#20351) 2025-05-30 14:43:51 +08:00
fb12a3033d fix celery job not closed issue (#19268) 2025-05-30 14:42:47 +08:00
a6ea15e63c Refactor/message cycle manage and knowledge retrieval (#20460)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 14:36:44 +08:00
5a991295e0 fix: drop some type fixme (#20344) 2025-05-30 14:10:09 +08:00
9b47f9f786 fix(json-schema-editor): Add container reference for resize observer in CodeEditor; Update language hook and help doc URL in JsonSchemaConfig (#20488) 2025-05-30 13:54:12 +08:00
f65c2fcb1d Refactor/markdown component split (#20177) 2025-05-30 11:31:50 +08:00
156bb8238d fix: some display error in dark mode (#20469) 2025-05-30 11:25:46 +08:00
db488bef51 refactor(api/core/workflow/enums): Rename WORKFLOW_RUN_ID to WORKFLOW_EXECUTION_ID (#20459)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 11:05:08 +08:00
d72d02b970 chore: translate i18n files (#20476)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2025-05-30 10:57:37 +08:00
dd2725be68 fix: import from curl not work for --data (#20471) 2025-05-30 10:52:38 +08:00
8e2d342de6 Feat/15534 support replacing the bot in chat input placeholder with the bots name (#20473) 2025-05-30 10:51:19 +08:00
91eeb2ab76 chore: Colorize new OpenAI LLM versions (#20463)
Co-authored-by: Davide Delbianco <davide.delbianco@zucchetti.it>
2025-05-30 09:24:31 +08:00
f2e0d161a1 fix(ops_trace_manager): Adds app_id to TraceTask initialization (#20461)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 09:00:05 +08:00
2ebf4e767b fix(models): WorkflowRun's total_steps and exceptions_count mismatch with database (#20452)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 07:53:13 +08:00
f7fb10635f refactor(workflow): Rename workflow node execution models (#20458)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 04:56:37 +08:00
32e779eef3 refactor(workflow): Rename NodeRunMetadataKey to WorkflowNodeExecutionMetadataKey (#20457)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 04:47:56 +08:00
482e50aae9 Refactor/remove db from cycle manager (#20455)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 04:34:13 +08:00
cd0a05f114 tests: Removes outdated marketplace download test (#20454)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-30 01:57:55 +08:00
d4408e0f54 fix: handle values in output arrays for CodeNode transformation (#20437) 2025-05-29 17:55:20 +08:00
eee88a8012 chore: improve error logging for requests to plugin daemon (#20328) 2025-05-29 17:12:27 +08:00
0368e1769a fix: wrong env usage in middleware (#20436) 2025-05-29 17:10:49 +08:00
2d4f8f1377 fix: apps/annotation missing 1 required positional argument: 'end_user' (#20428) 2025-05-29 16:10:28 +08:00
8ef91222ea fix: show 'reset brand' button after set branding image (#20420) 2025-05-29 15:13:00 +08:00
808aa4467c docs: Update PR template to emphasize guidelines and issue linking (#20382)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-29 14:32:08 +08:00
b2ab401279 chore: remove agent turn limits (#19930) 2025-05-29 09:51:56 +08:00
9bbd646f40 fix: inner invoke llm token too long (#20391) 2025-05-29 09:49:44 +08:00
57ece83c30 Fix/branding broken (#20375)
Co-authored-by: zxhlyh <jasonapring2015@outlook.com>
2025-05-28 20:06:58 +08:00
c3c67d9608 fix: register user model to current_user in backward invoke. (#20374)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-28 19:56:33 +08:00
f59fb94dae feat(agent_node): ensure that the enum-checking syntax is compatible with Python 3.11. (#20373)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-28 19:56:17 +08:00
00199c41bb fix: workflow plugins list update (#20357) 2025-05-28 17:45:08 +08:00
400ae664bb fix(http): force multipart/form-data even without files (#20322) (#20323) 2025-05-28 17:04:38 +08:00
b39ca7ee31 Fixes some i18n(ko) translations. (#20348) 2025-05-28 16:58:59 +08:00
4250501058 fix: reset password page dark style (#20350) 2025-05-28 16:36:32 +08:00
eaaf551497 fix: Instance <Account> is not bound to a Session (#20347)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-28 16:36:08 +08:00
f233a64eb5 fix(workflow): fetch user failed when workflow run in parallel mode (#20321)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-27 22:41:07 +08:00
2b81b6673f [Observability] Add type check and try-except in otel (#20319) 2025-05-27 21:17:45 +08:00
4c46f04d77 fix: Enhances tenant ID handling in telemetry (#20304)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-27 17:44:40 +08:00
d467c8536b fix: i18n auto run failed (#20302) 2025-05-27 17:29:56 +08:00
abc32edf28 chore: enhance the copywriting of tool (#20294) 2025-05-27 16:40:11 +08:00
047a1b5166 Chore/update img (#20292) 2025-05-27 16:33:43 +08:00
a06fa7374d update img (#20291) 2025-05-27 16:30:04 +08:00
fe01de5667 chore(*): Bump version to 1.4.1 (#20275)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-27 15:32:59 +08:00
275b042998 chore(remove_app_and_related_data_task): Revert _delete_app_workflow_node_executions (#20278)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-27 14:58:08 +08:00
4c4887c5fc feat(qdrant): add replication_factor when creating collection in qdrant (#20133)
Co-authored-by: 刘敏 <min.liu@tongdun.net>
2025-05-27 14:46:04 +08:00
0ebaba98f0 fix: dataset permission check for partial team members (#19249) (#20242)
Co-authored-by: MioINAMIJIMA <m.inamijima@optimaize-consulting.com>
2025-05-27 14:33:11 +08:00
d3bfcd498b fix: Refactor web reader to use readabilipy (#19789)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-27 14:17:32 +08:00
efad1e4790 Modifying the preview of the uploaded avatar will freeze, fix the bug… (#20202)
Co-authored-by: qingguo <qingguo@lexin.com>
2025-05-27 14:04:22 +08:00
9c9d3d7bd0 feat: document extractor chardet encoding (#20269)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-27 13:27:46 +08:00
756f35f480 feat: add pagination for plugin page (#20151) 2025-05-27 12:54:52 +08:00
55503ce771 fix: persist workflow execution status on partial success and failure (#20264)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-27 11:24:08 +08:00
c57726a587 fix[cve]: update qdrant-client from 1.7.3 to 1.9.0 (#20231) 2025-05-27 11:08:26 +08:00
b12c28a984 fix: workflow http node (#20262) 2025-05-27 11:05:38 +08:00
b357eca307 fix: Copy request context and current user in app generators. (#20240)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-27 10:56:23 +08:00
acd4b9a8ac fix: not save workflow_run_id of chatflow message (#20257) 2025-05-27 10:43:22 +08:00
ded4b024f3 Fix dataset card height (#20239) 2025-05-27 09:42:29 +08:00
f21e6e03a3 refactor: Consolidate Flask-Login Authentication Logic (#20235)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-26 18:22:01 +08:00
6f982eb7e4 feat: add author_name for app list card (#16900)
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2025-05-26 18:20:53 +08:00
2cad98f01f fix: #18132 auto_generate name can't work with the DeepSeek LLM model (#18646)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2025-05-26 18:04:52 +08:00
eb26dc3213 fix: Remove the custom comparison function from the memo HOC(#19679) (#20197) 2025-05-26 18:04:25 +08:00
405c4d51f0 feat: clean chunk content after add (#19785) 2025-05-26 17:53:50 +08:00
3e30914e13 feat: add alias for production web start command (#20229) 2025-05-26 16:52:56 +08:00
53aaf91ce4 fix: show two PromptLogModal when click log button (#20115) (#20142) 2025-05-26 16:45:10 +08:00
b9b5d43dc6 fix: add 'floatfmt' when extracting numbers from Excel (#20153) (#20193)
Co-authored-by: wangheyang <wangheyang@corp.netease.com>
Co-authored-by: crazywoola <427733928@qq.com>
2025-05-26 16:41:57 +08:00
38e48c0c40 doc: tiny fix github -> GitHub (#20185)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-05-26 16:33:12 +08:00
84679f1a5b fix: prevent save when default max_iteration has value in agent node (#20211) 2025-05-26 16:28:29 +08:00
4c7351176c fix: resolve Mermaid mindmap generation issue (#20227) 2025-05-26 16:26:56 +08:00
ba7a2fd135 fix: can not show loop detail in one step run (#20215) 2025-05-26 14:23:11 +08:00
3995f55cbc fix: update Line component for dark mode support and improve Empty co… (#20196)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-26 10:42:54 +08:00
cbfc32b11f fix(echarts): Resolve chart flickering and animation loop in Markdown (#20161) 2025-05-25 18:05:41 +08:00
9b1dc1de7a fix: system file upload can't export custom file types (#20122) 2025-05-22 22:29:27 +08:00
db09d18e92 fix: some dark theme not display well (#20121) 2025-05-22 22:29:12 +08:00
210b9ebf56 fix: GitHub stars count sync not working (#20126) 2025-05-22 22:28:43 +08:00
fa80ef90d2 simplify app create experience with collapsed basic app types and imp… (#20007) 2025-05-22 18:02:40 +08:00
6c492e51fa fix: update messages credits (#20092) 2025-05-22 17:39:28 +08:00
7bf00ef25c fix(markdown): improve ECharts rendering for streaming content and da… (#20101)
Co-authored-by: crazywoola <427733928@qq.com>
2025-05-22 16:31:13 +08:00
8fad3036bf set oceanbase ip to 127.0.0.1 to avoid connection failure after restart (#20103) 2025-05-22 16:19:53 +08:00
c939f04b1a Add support for tracking conversation with Opik Tracer (#20063) 2025-05-22 16:11:50 +08:00
916c415b4b feat: add entry point for requesting a plugin (#20026) 2025-05-22 14:15:00 +08:00
9afd7f6c87 chore: Update S3StorageConfig to match boto3 type hints (#20072) 2025-05-22 14:10:14 +08:00
648393cc7b fix: improve tracing provider validation logic in OpsTraceManager (#20042) 2025-05-22 14:08:36 +08:00
6f48af2610 Refactor OpenSearch config to separate use_ssl and verify_certs flags (#20075)
Co-authored-by: he.huang <he.huang1@outlook.com>
Co-authored-by: crazywoola <427733928@qq.com>
2025-05-22 10:14:38 +08:00
adca981eee fix: uninitialized variable error on empty knowledge retrieval (agent) (#20025)
Co-authored-by: crazywoola <427733928@qq.com>
2025-05-22 10:09:07 +08:00
38b1e46241 fix: correct indentation in dataset retrieval model assignment (#20040) 2025-05-22 10:05:24 +08:00
6b3666f826 feat: Split WorkflowCycleManager (#20071)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-22 09:49:25 +08:00
02929b2cce Fix/fix trace provider delete err (#20070) 2025-05-21 23:51:42 +08:00
d31235ca13 feat: Introduce WorkflowExecution Domain Entity and Repository, Replace WorkflowRun Direct Usage, and Unify Stream Response Logic (#20067)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-21 22:01:53 +08:00
7d230acf40 tencent vectordb compatible with version 1.1.3 and below (#20056)
Co-authored-by: wlleiiwang <wlleiiwang@tencent.com>
2025-05-21 20:24:05 +08:00
13dc1c8795 Simplify execution_metadata Handling for WorkflowNodeExecution (#20062)
Currently, `WorkflowNodeExecution.execution_metadata_dict` returns `None` when metadata is absent in the database. This requires all callers to perform `None` checks when processing metadata, leading to more complex caller-side logic.

This pull request updates the `execution_metadata_dict` method to return an empty dictionary instead of `None` when metadata is absent. This change would simplify the caller logic, as it removes the need for explicit `None` checks and provides a more consistent data structure to work with.
2025-05-21 18:38:16 +08:00
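A minimal sketch of the contract change described above, using illustrative names rather than the actual Dify model:

```python
import json
from typing import Any


class WorkflowNodeExecution:  # illustrative stand-in, not the real model
    execution_metadata: str | None = None  # JSON text column in the database

    @property
    def execution_metadata_dict(self) -> dict[str, Any]:
        # Previously this returned None for absent metadata, forcing every
        # caller to guard with an explicit None check. Returning {} lets
        # callers iterate or .get() unconditionally.
        if not self.execution_metadata:
            return {}
        return json.loads(self.execution_metadata)
```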
997b46bfaa Fix/modify translation (#20046) 2025-05-21 16:41:05 +08:00
57bcb616bc fix(sqlalchemy_workflow_node_execution_repository): Missing triggered_from while querying WorkflowNodeExecution (#20044)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-21 16:37:44 +08:00
3196dc2d61 refactor: Use typed SQLAlchemy base model and fix type errors (#19980)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-21 15:38:03 +08:00
ef3569e667 feat: support chatflow start node custom input field hidden (#19678) 2025-05-21 13:52:21 +08:00
627911d4ff feat: Move to node in workflow panel and fix help link hover style (#19998) 2025-05-21 11:29:24 +08:00
2266f7cb6a Feat:Plugin Storage Type Support Aliyun OSS (#20012) 2025-05-21 11:28:47 +08:00
a0ebbaa840 fix: emoji picker in dark mode (#20019) 2025-05-21 11:23:26 +08:00
36b321735e fix create_tracing_app_config error (#19884) (#20004)
Co-authored-by: codly <codly.fun@gmail.com>
2025-05-21 10:45:10 +08:00
75cacc2855 fix: ReactFlow background in dark theme (#20013) 2025-05-21 10:32:07 +08:00
83e71ab27c fix: update IN SERVICE status text in app detail panel (#19993) 2025-05-21 10:16:21 +08:00
d35d854259 Revert "fix: fix duplicate app lose custom image" (#19995) 2025-05-20 18:42:59 +08:00
c58678d84d chore: Reduce the invocation of the plugin interface (#19629)
Co-authored-by: hobo.l <hobo.l@binance.com>
Co-authored-by: crazywoola <427733928@qq.com>
2025-05-20 16:55:21 +08:00
618981f1ae fix: fix duplicate app lose custom image (#19775) 2025-05-20 16:44:51 +08:00
b2ae46b80f fix: search query and refine the logic (#19987) 2025-05-20 16:42:37 +08:00
9ebc58b1a2 feat: Web <video> and <audio> element support src attribute (#19988) 2025-05-20 16:37:31 +08:00
87f9d11d65 fix: ensure Decimal values in metadata are JSON serializable (fixes #19936) (#19955)
Co-authored-by: crazywoola <427733928@qq.com>
2025-05-20 15:38:31 +08:00
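The standard way to make `Decimal` survive `json.dumps` is a custom encoder; the sketch below is an assumption about the shape of the fix, not the actual patch:

```python
import json
from decimal import Decimal
from typing import Any


class MetadataEncoder(json.JSONEncoder):  # hypothetical name, for illustration
    def default(self, o: Any) -> Any:
        if isinstance(o, Decimal):
            return float(o)  # or str(o) if exact precision must be preserved
        return super().default(o)


json.dumps({"score": Decimal("0.87")}, cls=MetadataEncoder)  # -> '{"score": 0.87}'
```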
8cb3b4aef2 fix: multiple retrieve reranking_enabled switch (#19958) 2025-05-20 15:22:03 +08:00
09547b4c8d fix: fix page broken for undefined permission (#19972) 2025-05-20 14:41:10 +08:00
d186daa131 E-300 (#19726)
Signed-off-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: Hash Brown <hi@xzd.me>
Co-authored-by: crazywoola <427733928@qq.com>
Co-authored-by: GareArc <chen4851@purdue.edu>
Co-authored-by: Byron.wang <byron@dify.ai>
Co-authored-by: Joel <iamjoel007@gmail.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: Garfield Dai <dai.hai@foxmail.com>
Co-authored-by: KVOJJJin <jzongcode@gmail.com>
Co-authored-by: Alexi.F <654973939@qq.com>
Co-authored-by: Xiyuan Chen <52963600+GareArc@users.noreply.github.com>
Co-authored-by: kautsar_masuara <61046989+izon-masuara@users.noreply.github.com>
Co-authored-by: achmad-kautsar <achmad.kautsar@insignia.co.id>
Co-authored-by: Xin Zhang <sjhpzx@gmail.com>
Co-authored-by: kelvintsim <83445753+kelvintsim@users.noreply.github.com>
Co-authored-by: zxhlyh <jasonapring2015@outlook.com>
Co-authored-by: Zixuan Cheng <61724187+Theysua@users.noreply.github.com>
2025-05-20 12:07:50 +08:00
6a8ca8296b chore: update redis dependency to version 6.1.0 in api/pyproject.toml (#19885) 2025-05-20 10:45:03 +08:00
7ae529c3b0 Revert "chore: upgrade Redis from v6 to v7 in middlewares" (#19960) 2025-05-20 10:44:53 +08:00
911f9eadd0 fix duplicate app_id set in workflow_draft_variables model. (#19949)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-05-20 10:26:34 +08:00
c9ee60e197 Feat(WaterCrawl error handling): add custom exceptions and error handling (#19948) 2025-05-20 10:25:16 +08:00
4e5789df89 docs: Optimize Response data array object indentation for the /messages interface (#19922) 2025-05-20 10:23:48 +08:00
a18a6f50ab chore: upgrade Redis from v6 to v7 in middlewares (#19935) 2025-05-20 09:36:21 +08:00
276c02f341 feat: Variable click jumps to source node (#13623) 2025-05-19 23:17:18 +08:00
6a9e0b1005 feat(api): Introduce WorkflowDraftVariable Model (#19737)
- Introduce `WorkflowDraftVariable` model and the corresponding migration.
- Implement `EnumText`,  a custom column type for SQLAlchemy designed
  to work seamlessly with enumeration classes based on `StrEnum`.
2025-05-19 22:59:56 +08:00
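As a rough sketch of how a `StrEnum`-backed column type can be built on SQLAlchemy's `TypeDecorator` (the real `EnumText` may differ in naming, validation, and length handling):

```python
from enum import StrEnum

from sqlalchemy import String, TypeDecorator


class EnumText(TypeDecorator):
    """Stores StrEnum members as plain strings and rehydrates them on load.

    Illustrative sketch only; not the implementation from the PR.
    """

    impl = String
    cache_ok = True

    def __init__(self, enum_class: type[StrEnum], length: int = 20):
        super().__init__(length)
        self._enum_class = enum_class

    def process_bind_param(self, value, dialect):
        # Accept an enum member or its raw string value when writing.
        return None if value is None else self._enum_class(value).value

    def process_result_value(self, value, dialect):
        # Convert the stored string back into an enum member when reading.
        return None if value is None else self._enum_class(value)
```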
bbebf9ad3e fix: db_model save to _node_execution_cache (#19911) 2025-05-19 17:17:43 +08:00
11146b6bae fix create_tracing_app_config error (#19884) 2025-05-19 10:09:21 +08:00
499392c6f9 chore: improve some doc (#19881) 2025-05-18 21:42:03 +08:00
a287da9ccd docs: Add text_to_speech left out in the API response (#19862) 2025-05-18 12:59:15 +08:00
c22e640df3 fix(devcontainer): uv sync fail (#19834) 2025-05-18 12:58:45 +08:00
2862631f03 fix: tool node number type constant field dark style (#19818) 2025-05-18 12:58:22 +08:00
749bcc889b fix: nav selector's dark theme (#19869) 2025-05-18 12:51:09 +08:00
6a74c97a0a feat: add debug log for request and response (#19781) (#19783)
Co-authored-by: hashjang <hash@geek.com>
Co-authored-by: QuantumGhost <obelisk.reg+git@gmail.com>
2025-05-17 17:31:09 +08:00
e0e8cd6ca3 feat(DraftWorkflowApi): Requires environment_variables in DraftWorkflowApi (#19849)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-17 13:45:00 +08:00
e7659ecd9d revert https://github.com/langgenius/dify/pull/19497 (19497) (#19807)
Co-authored-by: qingguo <qingguo@lexin.com>
2025-05-17 12:32:27 +08:00
7d0106b220 fix: correct type mismatch in WorkflowService node execution handling (#19846)
Signed-off-by: -LAN- <laipz8200@outlook.com>
2025-05-17 12:31:27 +08:00
df631591f2 fix: upload avatar failed (#19853) 2025-05-17 10:55:12 +08:00
4977bb21ec feat(workflow): domain model for workflow node execution (#19430)
Signed-off-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-05-17 00:56:16 +08:00
aeceb200ec fix: llm parameters radio type dark style (#19833) 2025-05-16 20:45:15 +08:00
a15129a00c fix: fix overflow when bot description too long (#19805) 2025-05-16 14:35:22 +08:00
de2cfd2927 fix: fix metadata condition name overflow (#19812) 2025-05-16 14:33:14 +08:00
8bf9adbc08 Fix: style of app publisher (#19803) 2025-05-16 14:32:30 +08:00
582b721160 Resolve Python Logger library warnings (#19791)
Signed-off-by: Emmanuel Ferdman <emmanuelferdman@gmail.com>
2025-05-16 14:31:54 +08:00
1551 changed files with 75268 additions and 19709 deletions

View File

@ -1,5 +1,4 @@
FROM mcr.microsoft.com/devcontainers/python:3.12
# [Optional] Uncomment this section to install additional OS packages.
# RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
# && apt-get -y install --no-install-recommends <your-package-list-here>
RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \
&& apt-get -y install libgmp-dev libmpfr-dev libmpc-dev

View File

@ -1,12 +1,13 @@
#!/bin/bash
npm add -g pnpm@10.8.0
npm add -g pnpm@10.11.1
cd web && pnpm install
pipx install uv
echo 'alias start-api="cd /workspaces/dify/api && uv run python -m flask run --host 0.0.0.0 --port=5001 --debug"' >> ~/.bashrc
echo 'alias start-worker="cd /workspaces/dify/api && uv run python -m celery -A app.celery worker -P gevent -c 1 --loglevel INFO -Q dataset,generation,mail,ops_trace,app_deletion"' >> ~/.bashrc
echo 'alias start-web="cd /workspaces/dify/web && pnpm dev"' >> ~/.bashrc
echo 'alias start-web-prod="cd /workspaces/dify/web && pnpm build && pnpm start"' >> ~/.bashrc
echo 'alias start-containers="cd /workspaces/dify/docker && docker-compose -f docker-compose.middleware.yaml -p dify --env-file middleware.env up -d"' >> ~/.bashrc
echo 'alias stop-containers="cd /workspaces/dify/docker && docker-compose -f docker-compose.middleware.yaml -p dify --env-file middleware.env down"' >> ~/.bashrc

View File

@ -8,13 +8,13 @@ body:
label: Self Checks
description: "To make sure we get to you in time, please check the following :)"
options:
- label: I have read the [Contributing Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) and [Language Policy](https://github.com/langgenius/dify/issues/1542).
required: true
- label: This is only for bug report, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
required: true
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:)"
- label: I confirm that I am using English to submit this report, otherwise it will be closed.
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
@ -42,20 +42,22 @@ body:
attributes:
label: Steps to reproduce
description: We highly suggest including screenshots and a bug report log. Please use the right markdown syntax for code blocks.
placeholder: Having detailed steps helps us reproduce the bug.
placeholder: Having detailed steps helps us reproduce the bug. If you have logs, please use fenced code blocks (triple backticks ```) to format them.
validations:
required: true
- type: textarea
attributes:
label: ✔️ Expected Behavior
placeholder: What were you expecting?
description: Describe what you expected to happen.
placeholder: What were you expecting? Please do not copy and paste the steps to reproduce here.
validations:
required: false
required: true
- type: textarea
attributes:
label: ❌ Actual Behavior
placeholder: What happened instead?
description: Describe what actually happened.
placeholder: What happened instead? Please do not copy and paste the steps to reproduce here.
validations:
required: false

View File

@ -1,5 +1,11 @@
blank_issues_enabled: false
contact_links:
- name: "\U0001F4A1 Model Providers & Plugins"
url: "https://github.com/langgenius/dify-official-plugins/issues/new/choose"
about: Report issues with official plugins or model providers, you will need to provide the plugin version and other relevant details.
- name: "\U0001F4AC Documentation Issues"
url: "https://github.com/langgenius/dify-docs/issues/new"
about: Report issues with the documentation, such as typos, outdated information, or missing content. Please provide the specific section and details of the issue.
- name: "\U0001F4E7 Discussions"
url: https://github.com/langgenius/dify/discussions/categories/general
about: General discussions and request help from the community
about: General discussions and seek help from the community

View File

@ -1,24 +0,0 @@
name: "📚 Documentation Issue"
description: Report issues in our documentation
labels:
- documentation
body:
- type: checkboxes
attributes:
label: Self Checks
description: "To make sure we get to you in time, please check the following :)"
options:
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true
- label: I confirm that I am using English to submit report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:)"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: textarea
attributes:
label: Provide a description of requested docs changes
placeholder: Briefly describe which document needs to be corrected and why.
validations:
required: true

View File

@ -8,11 +8,11 @@ body:
label: Self Checks
description: "To make sure we get to you in time, please check the following :)"
options:
- label: I have read the [Contributing Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) and [Language Policy](https://github.com/langgenius/dify/issues/1542).
required: true
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:)"
- label: I confirm that I am using English to submit this report, otherwise it will be closed.
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true

View File

@ -1,55 +0,0 @@
name: "🌐 Localization/Translation issue"
description: Report incorrect translations. [please use English :)]
labels:
- translation
body:
- type: checkboxes
attributes:
label: Self Checks
description: "To make sure we get to you in time, please check the following :)"
options:
- label: I have searched for existing issues [search for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
required: true
- label: I confirm that I am using English to submit this report (我已阅读并同意 [Language Policy](https://github.com/langgenius/dify/issues/1542)).
required: true
- label: "[FOR CHINESE USERS] 请务必使用英文提交 Issue否则会被关闭。谢谢:)"
required: true
- label: "Please do not modify this template :) and fill in all the required fields."
required: true
- type: input
attributes:
label: Dify version
description: Hover over system tray icon or look at Settings
validations:
required: true
- type: input
attributes:
label: Utility with translation issue
placeholder: Some area
description: Please input here the utility with the translation issue
validations:
required: true
- type: input
attributes:
label: 🌐 Language affected
placeholder: "German"
validations:
required: true
- type: textarea
attributes:
label: ❌ Actual phrase(s)
placeholder: What is there? Please include a screenshot as that is extremely helpful.
validations:
required: true
- type: textarea
attributes:
label: ✔️ Expected phrase(s)
placeholder: What was expected?
validations:
required: true
- type: textarea
attributes:
label: Why is the current translation wrong
placeholder: Why do you feel this is incorrect?
validations:
required: true

View File

@ -8,7 +8,7 @@ inputs:
uv-version:
description: UV version to set up
required: true
default: '0.6.14'
default: '~=0.7.11'
uv-lockfile:
description: Path to the UV lockfile to restore cache from
required: true

View File

@ -1,25 +1,23 @@
# Summary
> [!IMPORTANT]
>
> 1. Make sure you have read our [contribution guidelines](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md)
> 2. Ensure there is an associated issue and you have been assigned to it
> 3. Use the correct syntax to link this PR: `Fixes #<issue number>`.
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
## Summary
> [!Tip]
> Close issue syntax: `Fixes #<issue number>` or `Resolves #<issue number>`, see [documentation](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) for more details.
<!-- Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change. -->
# Screenshots
## Screenshots
| Before | After |
|--------|-------|
| ... | ... |
# Checklist
> [!IMPORTANT]
> Please review the checklist below before submitting your pull request.
## Checklist
- [ ] This change requires a documentation update, included: [Dify Document](https://github.com/langgenius/dify-docs)
- [x] I understand that this PR may be closed in case there was no previous discussion or issues. (This doesn't apply to typos!)
- [x] I've added a test for each change that was introduced, and I tried as much as possible to make a single atomic change.
- [x] I've updated the documentation accordingly.
- [x] I ran `dev/reformat`(backend) and `cd web && npx lint-staged`(frontend) to appease the lint gods

View File

@ -47,15 +47,17 @@ jobs:
- name: Run Unit tests
run: |
uv run --project api bash dev/pytest/pytest_unit_tests.sh
- name: Coverage Summary
run: |
set -x
# Extract coverage percentage and create a summary
TOTAL_COVERAGE=$(python -c 'import json; print(json.load(open("coverage.json"))["totals"]["percent_covered_display"])')
# Create a detailed coverage summary
echo "### Test Coverage Summary :test_tube:" >> $GITHUB_STEP_SUMMARY
echo "Total Coverage: ${TOTAL_COVERAGE}%" >> $GITHUB_STEP_SUMMARY
echo "\`\`\`" >> $GITHUB_STEP_SUMMARY
uv run --project api coverage report >> $GITHUB_STEP_SUMMARY
echo "\`\`\`" >> $GITHUB_STEP_SUMMARY
uv run --project api coverage report --format=markdown >> $GITHUB_STEP_SUMMARY
- name: Run dify config tests
run: uv run --project api dev/pytest/pytest_config_tests.py
@ -83,9 +85,15 @@ jobs:
compose-file: |
docker/docker-compose.middleware.yaml
services: |
db
redis
sandbox
ssrf_proxy
- name: setup test config
run: |
cp api/tests/integration_tests/.env.example api/tests/integration_tests/.env
- name: Run Workflow
run: uv run --project api bash dev/pytest/pytest_workflow.sh

.github/workflows/deploy-rag-dev.yml vendored Normal file
View File

@ -0,0 +1,28 @@
name: Deploy RAG Dev
permissions:
contents: read
on:
workflow_run:
workflows: ["Build and Push API & Web"]
branches:
- "deploy/rag-dev"
types:
- completed
jobs:
deploy:
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'deploy/rag-dev'
steps:
- name: Deploy to server
uses: appleboy/ssh-action@v0.1.8
with:
host: ${{ secrets.RAG_SSH_HOST }}
username: ${{ secrets.SSH_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
${{ vars.SSH_SCRIPT || secrets.SSH_SCRIPT }}

View File

@ -10,6 +10,7 @@ yq eval '.services["elasticsearch"].ports += ["9200:9200"]' -i docker/docker-com
yq eval '.services.couchbase-server.ports += ["8091-8096:8091-8096"]' -i docker/docker-compose.yaml
yq eval '.services.couchbase-server.ports += ["11210:11210"]' -i docker/docker-compose.yaml
yq eval '.services.tidb.ports += ["4000:4000"]' -i docker/tidb/docker-compose.yaml
yq eval '.services.oceanbase.ports += ["2881:2881"]' -i docker/docker-compose.yaml
yq eval '.services.opengauss.ports += ["6600:6600"]' -i docker/docker-compose.yaml
echo "Ports exposed for sandbox, weaviate, tidb, qdrant, chroma, milvus, pgvector, pgvecto-rs, elasticsearch, couchbase, opengauss"

View File

@ -139,6 +139,7 @@ jobs:
- name: Checkout code
uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: false
- name: Check changed files

View File

@ -31,11 +31,19 @@ jobs:
echo "FILES_CHANGED=false" >> $GITHUB_ENV
fi
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
version: 10
run_install: false
- name: Set up Node.js
if: env.FILES_CHANGED == 'true'
uses: actions/setup-node@v4
with:
node-version: 'lts/*'
cache: pnpm
cache-dependency-path: ./web/package.json
- name: Install dependencies
if: env.FILES_CHANGED == 'true'

View File

@ -31,6 +31,13 @@ jobs:
with:
persist-credentials: false
- name: Free Disk Space
uses: endersonmenezes/free-disk-space@v2
with:
remove_dotnet: true
remove_haskell: true
remove_tool_cache: true
- name: Setup UV and Python
uses: ./.github/actions/setup-uv
with:
@ -59,7 +66,7 @@ jobs:
tidb
tiflash
- name: Set up Vector Stores (Weaviate, Qdrant, PGVector, Milvus, PgVecto-RS, Chroma, MyScale, ElasticSearch, Couchbase)
- name: Set up Vector Stores (Weaviate, Qdrant, PGVector, Milvus, PgVecto-RS, Chroma, MyScale, ElasticSearch, Couchbase, OceanBase)
uses: hoverkraft-tech/compose-action@v2.0.2
with:
compose-file: |
@ -75,8 +82,15 @@ jobs:
pgvector
chroma
elasticsearch
oceanbase
- name: Check TiDB Ready
- name: setup test config
run: |
echo $(pwd)
ls -lah .
cp api/tests/integration_tests/.env.example api/tests/integration_tests/.env
- name: Check VDB Ready (TiDB)
run: uv run --project api python api/tests/integration_tests/vdb/tidb_vector/check_tiflash_ready.py
- name: Test Vector Stores

.gitignore vendored
View File

@ -179,6 +179,7 @@ docker/volumes/pgvecto_rs/data/*
docker/volumes/couchbase/*
docker/volumes/oceanbase/*
docker/volumes/plugin_daemon/*
docker/volumes/matrixone/*
!docker/volumes/oceanbase/init.d
docker/nginx/conf.d/default.conf
@ -192,12 +193,12 @@ sdks/python-client/dist
sdks/python-client/dify_client.egg-info
.vscode/*
!.vscode/launch.json
!.vscode/launch.json.template
!.vscode/README.md
pyrightconfig.json
api/.vscode
.idea/
.vscode
# pnpm
/.pnpm-store
@ -207,3 +208,10 @@ plugins.jsonl
# mise
mise.toml
# Next.js build output
.next/
# AI Assistant
.roo/
api/.env.backup

.vscode/README.md vendored Normal file
View File

@ -0,0 +1,14 @@
# Debugging with VS Code
This `launch.json.template` file provides various debug configurations for the Dify project within VS Code / Cursor. To use these configurations, you should copy the contents of this file into a new file named `launch.json` in the same `.vscode` directory.
## How to Use
1. **Create `launch.json`**: If you don't have one, create a file named `launch.json` inside the `.vscode` directory.
2. **Copy Content**: Copy the entire content from `launch.json.template` into your newly created `launch.json` file.
3. **Select Debug Configuration**: Go to the Run and Debug view in VS Code / Cursor (Ctrl+Shift+D or Cmd+Shift+D).
4. **Start Debugging**: Select the desired configuration from the dropdown menu and click the green play button.
## Tips
- If you need to debug with Edge browser instead of Chrome, modify the `serverReadyAction` configuration in the "Next.js: debug full stack" section, change `"debugWithChrome"` to `"debugWithEdge"` to use Microsoft Edge for debugging.

.vscode/launch.json.template vendored Normal file
View File

@ -0,0 +1,68 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Flask API",
"type": "debugpy",
"request": "launch",
"module": "flask",
"env": {
"FLASK_APP": "app.py",
"FLASK_ENV": "development",
"GEVENT_SUPPORT": "True"
},
"args": [
"run",
"--host=0.0.0.0",
"--port=5001",
"--no-debugger",
"--no-reload"
],
"jinja": true,
"justMyCode": true,
"cwd": "${workspaceFolder}/api",
"python": "${workspaceFolder}/api/.venv/bin/python"
},
{
"name": "Python: Celery Worker (Solo)",
"type": "debugpy",
"request": "launch",
"module": "celery",
"env": {
"GEVENT_SUPPORT": "True"
},
"args": [
"-A",
"app.celery",
"worker",
"-P",
"solo",
"-c",
"1",
"-Q",
"dataset,generation,mail,ops_trace",
"--loglevel",
"INFO"
],
"justMyCode": false,
"cwd": "${workspaceFolder}/api",
"python": "${workspaceFolder}/api/.venv/bin/python"
},
{
"name": "Next.js: debug full stack",
"type": "node",
"request": "launch",
"program": "${workspaceFolder}/web/node_modules/next/dist/bin/next",
"runtimeArgs": ["--inspect"],
"skipFiles": ["<node_internals>/**"],
"serverReadyAction": {
"action": "debugWithChrome",
"killOnServerStop": true,
"pattern": "- Local:.+(https?://.+)",
"uriFormat": "%s",
"webRoot": "${workspaceFolder}/web"
},
"cwd": "${workspaceFolder}/web"
}
]
}

View File

@ -54,7 +54,7 @@
<a href="./README_BN.md"><img alt="README in বাংলা" src="https://img.shields.io/badge/বাংলা-d9d9d9"></a>
</p>
Dify is an open-source LLM app development platform. Its intuitive interface combines agentic AI workflow, RAG pipeline, agent capabilities, model management, observability features, and more, allowing you to quickly move from prototype to production.
Dify is an open-source platform for developing LLM applications. Its intuitive interface combines agentic AI workflows, RAG pipelines, agent capabilities, model management, observability features, and moreallowing you to quickly move from prototype to production.
## Quick start
@ -65,7 +65,7 @@ Dify is an open-source LLM app development platform. Its intuitive interface com
</br>
The easiest way to start the Dify server is through [docker compose](docker/docker-compose.yaml). Before running Dify with the following commands, make sure that [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your machine:
The easiest way to start the Dify server is through [Docker Compose](docker/docker-compose.yaml). Before running Dify with the following commands, make sure that [Docker](https://docs.docker.com/get-docker/) and [Docker Compose](https://docs.docker.com/compose/install/) are installed on your machine:
```bash
cd dify
@ -205,6 +205,7 @@ If you'd like to configure a highly-available setup, there are community-contrib
- [Helm Chart by @magicsong](https://github.com/magicsong/ai-charts)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NEW! YAML files (Supports Dify v1.6.0) by @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Using Terraform for Deployment
@ -226,6 +227,15 @@ Deploy Dify to AWS with [CDK](https://aws.amazon.com/cdk/)
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Using Alibaba Cloud Computing Nest
Quickly deploy Dify to Alibaba cloud with [Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Using Alibaba Cloud Data Management
One-Click deploy Dify to Alibaba Cloud with [Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
@ -235,7 +245,7 @@ At the same time, please consider supporting Dify by sharing it on social media
## Community & contact
- [Github Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
- [GitHub Discussion](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
- [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
- [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
- [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.
@ -252,8 +262,8 @@ At the same time, please consider supporting Dify by sharing it on social media
## Security disclosure
To protect your privacy, please avoid posting security issues on GitHub. Instead, send your questions to security@dify.ai and we will provide you with a more detailed answer.
To protect your privacy, please avoid posting security issues on GitHub. Instead, report issues to security@dify.ai, and our team will respond with a detailed answer.
## License
This repository is available under the [Dify Open Source License](LICENSE), which is essentially Apache 2.0 with a few additional restrictions.
This repository is licensed under the [Dify Open Source License](LICENSE), based on Apache 2.0 with additional conditions.

View File

@ -188,6 +188,7 @@ docker compose up -d
- [رسم بياني Helm من قبل @magicsong](https://github.com/magicsong/ai-charts)
- [ملف YAML من قبل @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [ملف YAML من قبل @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 جديد! ملفات YAML (تدعم Dify v1.6.0) بواسطة @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### استخدام Terraform للتوزيع
@ -209,6 +210,14 @@ docker compose up -d
- [AWS CDK بواسطة @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### استخدام Alibaba Cloud للنشر
[بسرعة نشر Dify إلى سحابة علي بابا مع عش الحوسبة السحابية علي بابا](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### استخدام Alibaba Cloud Data Management للنشر
انشر Dify على علي بابا كلاود بنقرة واحدة باستخدام [Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## المساهمة
لأولئك الذين يرغبون في المساهمة، انظر إلى [دليل المساهمة](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) لدينا.
@ -223,7 +232,7 @@ docker compose up -d
</a>
## المجتمع والاتصال
- [مناقشة Github](https://github.com/langgenius/dify/discussions). الأفضل لـ: مشاركة التعليقات وطرح الأسئلة.
- [مناقشة GitHub](https://github.com/langgenius/dify/discussions). الأفضل لـ: مشاركة التعليقات وطرح الأسئلة.
- [المشكلات على GitHub](https://github.com/langgenius/dify/issues). الأفضل لـ: الأخطاء التي تواجهها في استخدام Dify.AI، واقتراحات الميزات. انظر [دليل المساهمة](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
- [Discord](https://discord.gg/FngNHpbcY7). الأفضل لـ: مشاركة تطبيقاتك والترفيه مع المجتمع.
- [تويتر](https://twitter.com/dify_ai). الأفضل لـ: مشاركة تطبيقاتك والترفيه مع المجتمع.

View File

@ -204,6 +204,8 @@ GitHub-এ ডিফাইকে স্টার দিয়ে রাখুন
- [Helm Chart by @magicsong](https://github.com/magicsong/ai-charts)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 নতুন! YAML ফাইলসমূহ (Dify v1.6.0 সমর্থিত) তৈরি করেছেন @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### টেরাফর্ম ব্যবহার করে ডিপ্লয়
@ -225,6 +227,15 @@ GitHub-এ ডিফাইকে স্টার দিয়ে রাখুন
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Alibaba Cloud ব্যবহার করে ডিপ্লয়
[Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Alibaba Cloud Data Management ব্যবহার করে ডিপ্লয়
[Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## Contributing
যারা কোড অবদান রাখতে চান, তাদের জন্য আমাদের [অবদান নির্দেশিকা] দেখুন (https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md)।
@ -234,7 +245,7 @@ GitHub-এ ডিফাইকে স্টার দিয়ে রাখুন
## কমিউনিটি এবং যোগাযোগ
- [Github Discussion](https://github.com/langgenius/dify/discussions) ফিডব্যাক এবং প্রতিক্রিয়া জানানোর মাধ্যম।
- [GitHub Discussion](https://github.com/langgenius/dify/discussions) ফিডব্যাক এবং প্রতিক্রিয়া জানানোর মাধ্যম।
- [GitHub Issues](https://github.com/langgenius/dify/issues). Dify.AI ব্যবহার করে আপনি যেসব বাগের সম্মুখীন হন এবং ফিচার প্রস্তাবনা। আমাদের [অবদান নির্দেশিকা](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) দেখুন।
- [Discord](https://discord.gg/FngNHpbcY7) আপনার এপ্লিকেশন শেয়ার এবং কমিউনিটি আড্ডার মাধ্যম।
- [X(Twitter)](https://twitter.com/dify_ai) আপনার এপ্লিকেশন শেয়ার এবং কমিউনিটি আড্ডার মাধ্যম।

View File

@ -194,9 +194,9 @@ docker compose up -d
如果您需要自定义配置,请参考 [.env.example](docker/.env.example) 文件中的注释,并更新 `.env` 文件中对应的值。此外,您可能需要根据您的具体部署环境和需求对 `docker-compose.yaml` 文件本身进行调整,例如更改镜像版本、端口映射或卷挂载。完成任何更改后,请重新运行 `docker-compose up -d`。您可以在[此处](https://docs.dify.ai/getting-started/install-self-hosted/environments)找到可用环境变量的完整列表。
#### 使用 Helm Chart 部署
#### 使用 Helm Chart 或 Kubernetes 资源清单（YAML）部署
使用 [Helm Chart](https://helm.sh/) 版本或者 YAML 文件,可以在 Kubernetes 上部署 Dify。
使用 [Helm Chart](https://helm.sh/) 版本或者 Kubernetes 资源清单（YAML），可以在 Kubernetes 上部署 Dify。
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
@ -204,6 +204,10 @@ docker compose up -d
- [YAML 文件 by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NEW! YAML 文件 (支持 Dify v1.6.0) by @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### 使用 Terraform 部署
使用 [terraform](https://www.terraform.io/) 一键将 Dify 部署到云平台
@ -221,6 +225,15 @@ docker compose up -d
##### AWS
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### 使用 阿里云计算巢 部署
使用 [阿里云计算巢](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88) 将 Dify 一键部署到 阿里云
#### 使用 阿里云数据管理（DMS）部署
使用 [阿里云数据管理（DMS）](https://help.aliyun.com/zh/dms/dify-in-invitational-preview) 将 Dify 一键部署到阿里云
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=langgenius/dify&type=Date)](https://star-history.com/#langgenius/dify&Date)
@ -243,7 +256,7 @@ docker compose up -d
我们欢迎您为 Dify 做出贡献,以帮助改善 Dify。包括提交代码、问题、新想法或分享您基于 Dify 创建的有趣且有用的 AI 应用程序。同时,我们也欢迎您在不同的活动、会议和社交媒体上分享 Dify。
- [Github Discussion](https://github.com/langgenius/dify/discussions). 👉:分享您的应用程序并与社区交流。
- [GitHub Discussion](https://github.com/langgenius/dify/discussions). 👉:分享您的应用程序并与社区交流。
- [GitHub Issues](https://github.com/langgenius/dify/issues)。👉:使用 Dify.AI 时遇到的错误和问题,请参阅[贡献指南](CONTRIBUTING.md)。
- [电子邮件支持](mailto:hello@dify.ai?subject=[GitHub]Questions%20About%20Dify)。👉:关于使用 Dify.AI 的问题。
- [Discord](https://discord.gg/FngNHpbcY7)。👉:分享您的应用程序并与社区交流。

View File

@ -203,6 +203,7 @@ Falls Sie eine hochverfügbare Konfiguration einrichten möchten, gibt es von de
- [Helm Chart by @magicsong](https://github.com/magicsong/ai-charts)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NEW! YAML files (Supports Dify v1.6.0) by @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Terraform für die Bereitstellung verwenden
@ -221,6 +222,15 @@ Bereitstellung von Dify auf AWS mit [CDK](https://aws.amazon.com/cdk/)
##### AWS
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Alibaba Cloud
[Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Alibaba Cloud Data Management
Ein-Klick-Bereitstellung von Dify in der Alibaba Cloud mit [Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## Contributing
Falls Sie Code beitragen möchten, lesen Sie bitte unseren [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md). Gleichzeitig bitten wir Sie, Dify zu unterstützen, indem Sie es in den sozialen Medien teilen und auf Veranstaltungen und Konferenzen präsentieren.
@ -230,7 +240,7 @@ Falls Sie Code beitragen möchten, lesen Sie bitte unseren [Contribution Guide](
## Gemeinschaft & Kontakt
* [Github Discussion](https://github.com/langgenius/dify/discussions). Am besten geeignet für: den Austausch von Feedback und das Stellen von Fragen.
* [GitHub Discussion](https://github.com/langgenius/dify/discussions). Am besten geeignet für: den Austausch von Feedback und das Stellen von Fragen.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Am besten für: Fehler, auf die Sie bei der Verwendung von Dify.AI stoßen, und Funktionsvorschläge. Siehe unseren [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Am besten geeignet für: den Austausch von Bewerbungen und den Austausch mit der Community.
* [X(Twitter)](https://twitter.com/dify_ai). Am besten geeignet für: den Austausch von Bewerbungen und den Austausch mit der Community.

View File

@ -203,6 +203,7 @@ Si desea configurar una configuración de alta disponibilidad, la comunidad prop
- [Gráfico Helm por @magicsong](https://github.com/magicsong/ai-charts)
- [Ficheros YAML por @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [Ficheros YAML por @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 ¡NUEVO! Archivos YAML (compatible con Dify v1.6.0) por @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Uso de Terraform para el despliegue
@ -221,6 +222,15 @@ Despliegue Dify en AWS usando [CDK](https://aws.amazon.com/cdk/)
##### AWS
- [AWS CDK por @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Alibaba Cloud
[Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Alibaba Cloud Data Management
Despliega Dify en Alibaba Cloud con un solo clic con [Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## Contribuir
Para aquellos que deseen contribuir con código, consulten nuestra [Guía de contribución](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).

View File

@ -201,6 +201,7 @@ Si vous souhaitez configurer une configuration haute disponibilité, la communau
- [Helm Chart par @magicsong](https://github.com/magicsong/ai-charts)
- [Fichier YAML par @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [Fichier YAML par @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NOUVEAU ! Fichiers YAML (compatible avec Dify v1.6.0) par @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Utilisation de Terraform pour le déploiement
@ -219,6 +220,15 @@ Déployez Dify sur AWS en utilisant [CDK](https://aws.amazon.com/cdk/)
##### AWS
- [AWS CDK par @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Alibaba Cloud
[Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Alibaba Cloud Data Management
Déployez Dify en un clic sur Alibaba Cloud avec [Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## Contribuer
Pour ceux qui souhaitent contribuer du code, consultez notre [Guide de contribution](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).

View File

@ -155,7 +155,7 @@ DifyはオープンソースのLLMアプリケーション開発プラットフ
[こちら](https://dify.ai)のDify Cloudサービスを利用して、セットアップ不要で試すことができます。サンドボックスプランには、200回のGPT-4呼び出しが無料で含まれています。
- **Dify Community Editionのセルフホスティング</br>**
この[スタートガイド](#quick-start)を使用して、ローカル環境でDifyを簡単に実行できます。
この[スタートガイド](#クイックスタート)を使用して、ローカル環境でDifyを簡単に実行できます。
詳しくは[ドキュメント](https://docs.dify.ai)をご覧ください。
- **企業/組織向けのDify</br>**
@ -202,6 +202,7 @@ docker compose up -d
- [Helm Chart by @magicsong](https://github.com/magicsong/ai-charts)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 新着！YAML ファイル（Dify v1.6.0 対応）by @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Terraformを使用したデプロイ
@ -220,6 +221,13 @@ docker compose up -d
##### AWS
- [@KevinZhaoによるAWS CDK](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Alibaba Cloud
[Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Alibaba Cloud Data Management
[Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/) を利用して、DifyをAlibaba Cloudへワンクリックでデプロイできます
## 貢献
コードに貢献したい方は、[Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md)を参照してください。
@ -236,7 +244,7 @@ docker compose up -d
## コミュニティ & お問い合わせ
* [Github Discussion](https://github.com/langgenius/dify/discussions). 主に: フィードバックの共有や質問。
* [GitHub Discussion](https://github.com/langgenius/dify/discussions). 主に: フィードバックの共有や質問。
* [GitHub Issues](https://github.com/langgenius/dify/issues). 主に: Dify.AIを使用する際に発生するエラーや問題については、[貢献ガイド](CONTRIBUTING_JA.md)を参照してください
* [Discord](https://discord.gg/FngNHpbcY7). 主に: アプリケーションの共有やコミュニティとの交流。
* [X(Twitter)](https://twitter.com/dify_ai). 主に: アプリケーションの共有やコミュニティとの交流。

View File

@ -201,6 +201,7 @@ If you'd like to configure a highly-available setup, there are community-contrib
- [Helm Chart by @magicsong](https://github.com/magicsong/ai-charts)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NEW! YAML files (Supports Dify v1.6.0) by @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Terraform atorlugu pilersitsineq
@ -219,6 +220,15 @@ wa'logh nIqHom neH ghun deployment toy'wI' [CDK](https://aws.amazon.com/cdk/) lo
##### AWS
- [AWS CDK qachlot @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Alibaba Cloud
[Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Alibaba Cloud Data Management
[Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## Contributing
For those who'd like to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
@ -235,7 +245,7 @@ At the same time, please consider supporting Dify by sharing it on social media
## Community & Contact
* [Github Discussion](https://github.com/langgenius/dify/discussions
* [GitHub Discussion](https://github.com/langgenius/dify/discussions
). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI, and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).

View File

@ -195,6 +195,7 @@ Dify를 Kubernetes에 배포하고 프리미엄 스케일링 설정을 구성했
- [Helm Chart by @magicsong](https://github.com/magicsong/ai-charts)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NEW! YAML files (Supports Dify v1.6.0) by @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Terraform을 사용한 배포
@ -213,6 +214,15 @@ Dify를 Kubernetes에 배포하고 프리미엄 스케일링 설정을 구성했
##### AWS
- [KevinZhao의 AWS CDK](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Alibaba Cloud
[Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Alibaba Cloud Data Management
[Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)를 통해 원클릭으로 Dify를 Alibaba Cloud에 배포할 수 있습니다
## 기여
코드에 기여하고 싶은 분들은 [기여 가이드](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md)를 참조하세요.
@ -229,7 +239,7 @@ Dify를 Kubernetes에 배포하고 프리미엄 스케일링 설정을 구성했
## 커뮤니티 & 연락처
* [Github 토론](https://github.com/langgenius/dify/discussions). 피드백 공유 및 질문하기에 적합합니다.
* [GitHub 토론](https://github.com/langgenius/dify/discussions). 피드백 공유 및 질문하기에 적합합니다.
* [GitHub 이슈](https://github.com/langgenius/dify/issues). Dify.AI 사용 중 발견한 버그와 기능 제안에 적합합니다. [기여 가이드](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md)를 참조하세요.
* [디스코드](https://discord.gg/FngNHpbcY7). 애플리케이션 공유 및 커뮤니티와 소통하기에 적합합니다.
* [트위터](https://twitter.com/dify_ai). 애플리케이션 공유 및 커뮤니티와 소통하기에 적합합니다.

View File

@ -200,6 +200,7 @@ Se deseja configurar uma instalação de alta disponibilidade, há [Helm Charts]
- [Helm Chart de @magicsong](https://github.com/magicsong/ai-charts)
- [Arquivo YAML por @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [Arquivo YAML por @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NOVO! Arquivos YAML (Compatível com Dify v1.6.0) por @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Usando o Terraform para Implantação
@ -218,6 +219,15 @@ Implante o Dify na AWS usando [CDK](https://aws.amazon.com/cdk/)
##### AWS
- [AWS CDK por @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Alibaba Cloud
[Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Alibaba Cloud Data Management
Implante o Dify na Alibaba Cloud com um clique usando o [Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## Contribuindo
Para aqueles que desejam contribuir com código, veja nosso [Guia de Contribuição](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).

View File

@ -201,6 +201,7 @@ Star Dify on GitHub and be instantly notified of new releases.
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NEW! YAML files (Supports Dify v1.6.0) by @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Uporaba Terraform za uvajanje
@ -219,6 +220,15 @@ Uvedite Dify v AWS z uporabo [CDK](https://aws.amazon.com/cdk/)
##### AWS
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Alibaba Cloud
[Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Alibaba Cloud Data Management
Z enim klikom namestite Dify na Alibaba Cloud z [Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## Prispevam
Za tiste, ki bi radi prispevali kodo, si oglejte naš vodnik za prispevke . Hkrati vas prosimo, da podprete Dify tako, da ga delite na družbenih medijih ter na dogodkih in konferencah.
@ -229,7 +239,7 @@ Za tiste, ki bi radi prispevali kodo, si oglejte naš vodnik za prispevke . Hkra
## Skupnost in stik
* [Github Discussion](https://github.com/langgenius/dify/discussions). Najboljše za: izmenjavo povratnih informacij in postavljanje vprašanj.
* [GitHub Discussion](https://github.com/langgenius/dify/discussions). Najboljše za: izmenjavo povratnih informacij in postavljanje vprašanj.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Najboljše za: hrošče, na katere naletite pri uporabi Dify.AI, in predloge funkcij. Oglejte si naš [vodnik za prispevke](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Najboljše za: deljenje vaših aplikacij in druženje s skupnostjo.
* [X(Twitter)](https://twitter.com/dify_ai). Najboljše za: deljenje vaših aplikacij in druženje s skupnostjo.

View File

@ -194,6 +194,7 @@ Yüksek kullanılabilirliğe sahip bir kurulum yapılandırmak isterseniz, Dify'
- [@BorisPolonsky tarafından Helm Chart](https://github.com/BorisPolonsky/dify-helm)
- [@Winson-030 tarafından YAML dosyası](https://github.com/Winson-030/dify-kubernetes)
- [@wyy-holding tarafından YAML dosyası](https://github.com/wyy-holding/dify-k8s)
- [🚀 YENİ! YAML dosyaları (Dify v1.6.0 destekli) @Zhoneym tarafından](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Dağıtım için Terraform Kullanımı
@ -212,6 +213,15 @@ Dify'ı bulut platformuna tek tıklamayla dağıtın [terraform](https://www.ter
##### AWS
- [AWS CDK tarafından @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Alibaba Cloud
[Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Alibaba Cloud Data Management
Deploy Dify to Alibaba Cloud with one click using [Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## Contributing
For those who want to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
@ -227,7 +237,7 @@ Aynı zamanda, lütfen Dify'ı sosyal medyada, etkinliklerde ve konferanslarda p
## Community & Contact
* [Github Discussions](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Discussions](https://github.com/langgenius/dify/discussions). Best for: sharing feedback and asking questions.
* [GitHub Issues](https://github.com/langgenius/dify/issues). Best for: bugs you encounter using Dify.AI and feature proposals. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
* [Discord](https://discord.gg/FngNHpbcY7). Best for: sharing your applications and hanging out with the community.
* [X(Twitter)](https://twitter.com/dify_ai). Best for: sharing your applications and hanging out with the community.

View File

@ -197,12 +197,13 @@ Dify 的所有功能都提供相應的 API因此您可以輕鬆地將 Dify
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. In addition, depending on your specific deployment environment and requirements, you may need to adjust the `docker-compose.yaml` file itself, for example to change image versions, port mappings, or volume mounts. After making any changes, please re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you want to configure a highly available setup, community-contributed [Helm Charts](https://helm.sh/) and YAML files allow Dify to be deployed on Kubernetes.
If you want to configure a highly available setup, community-contributed [Helm Charts](https://helm.sh/) and Kubernetes resource manifests (YAML) allow Dify to be deployed on Kubernetes.
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NEW! YAML files (Supports Dify v1.6.0) by @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
### Deploying with Terraform
@ -224,6 +225,15 @@ Dify 的所有功能都提供相應的 API因此您可以輕鬆地將 Dify
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Deploying with Alibaba Cloud Computing Nest
[Alibaba Cloud](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Deploying with Alibaba Cloud Data Management (DMS)
Deploy Dify to Alibaba Cloud with one click via [Alibaba Cloud Data Management (DMS)](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## Contributing
For developers who want to contribute code, please see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
@ -233,7 +243,7 @@ Dify 的所有功能都提供相應的 API因此您可以輕鬆地將 Dify
## Community and Contact
- [Github Discussion](https://github.com/langgenius/dify/discussions): Best for sharing feedback and asking questions.
- [GitHub Discussion](https://github.com/langgenius/dify/discussions): Best for sharing feedback and asking questions.
- [GitHub Issues](https://github.com/langgenius/dify/issues): Best for reporting issues you run into with Dify.AI and proposing features. See our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).
- [Discord](https://discord.gg/FngNHpbcY7): Best for sharing your applications and engaging with the community.
- [X(Twitter)](https://twitter.com/dify_ai): Best for sharing your applications and engaging with the community.

View File

@ -196,6 +196,7 @@ Nếu bạn muốn cấu hình một cài đặt có độ sẵn sàng cao, có
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)
- [YAML file by @Winson-030](https://github.com/Winson-030/dify-kubernetes)
- [YAML file by @wyy-holding](https://github.com/wyy-holding/dify-k8s)
- [🚀 NEW! YAML files (Supports Dify v1.6.0) by @Zhoneym](https://github.com/Zhoneym/DifyAI-Kubernetes)
#### Using Terraform for Deployment
@ -214,6 +215,16 @@ Triển khai Dify trên AWS bằng [CDK](https://aws.amazon.com/cdk/)
##### AWS
- [AWS CDK by @KevinZhao](https://github.com/aws-samples/solution-for-deploying-dify-on-aws)
#### Alibaba Cloud
[Alibaba Cloud Computing Nest](https://computenest.console.aliyun.com/service/instance/create/default?type=user&ServiceName=Dify%E7%A4%BE%E5%8C%BA%E7%89%88)
#### Alibaba Cloud Data Management
Deploy Dify to Alibaba Cloud with one click using [Alibaba Cloud Data Management](https://www.alibabacloud.com/help/en/dms/dify-in-invitational-preview/)
## Contributing
For those who want to contribute code, see our [Contribution Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md).

View File

@ -17,6 +17,11 @@ APP_WEB_URL=http://127.0.0.1:3000
# Files URL
FILES_URL=http://127.0.0.1:5001
# INTERNAL_FILES_URL is used for plugin daemon communication within Docker network.
# Set this to the internal Docker service URL for proper plugin file access.
# Example: INTERNAL_FILES_URL=http://api:5001
INTERNAL_FILES_URL=http://127.0.0.1:5001
# The time in seconds after which the signature is rejected
FILES_ACCESS_TIMEOUT=300
@ -137,7 +142,7 @@ WEB_API_CORS_ALLOW_ORIGINS=http://127.0.0.1:3000,*
CONSOLE_CORS_ALLOW_ORIGINS=http://127.0.0.1:3000,*
# Vector database configuration
# support: weaviate, qdrant, milvus, myscale, relyt, pgvecto_rs, pgvector, chroma, opensearch, tidb_vector, couchbase, vikingdb, upstash, lindorm, oceanbase, opengauss, tablestore
# support: weaviate, qdrant, milvus, myscale, relyt, pgvecto_rs, pgvector, chroma, opensearch, tidb_vector, couchbase, vikingdb, upstash, lindorm, oceanbase, opengauss, tablestore, matrixone
VECTOR_STORE=weaviate
# Weaviate configuration
@ -152,6 +157,7 @@ QDRANT_API_KEY=difyai123456
QDRANT_CLIENT_TIMEOUT=20
QDRANT_GRPC_ENABLED=false
QDRANT_GRPC_PORT=6334
QDRANT_REPLICATION_FACTOR=1
#Couchbase configuration
COUCHBASE_CONNECTION_STRING=127.0.0.1
@ -269,6 +275,7 @@ OPENSEARCH_PORT=9200
OPENSEARCH_USER=admin
OPENSEARCH_PASSWORD=admin
OPENSEARCH_SECURE=true
OPENSEARCH_VERIFY_CERTS=true
# Baidu configuration
BAIDU_VECTOR_DB_ENDPOINT=http://127.0.0.1:5287
@ -292,6 +299,13 @@ VIKINGDB_SCHEMA=http
VIKINGDB_CONNECTION_TIMEOUT=30
VIKINGDB_SOCKET_TIMEOUT=30
# Matrixone configuration
MATRIXONE_HOST=127.0.0.1
MATRIXONE_PORT=6001
MATRIXONE_USER=dump
MATRIXONE_PASSWORD=111
MATRIXONE_DATABASE=dify
# Lindorm configuration
LINDORM_URL=http://ld-*******************-proxy-search-pub.lindorm.aliyuncs.com:30070
LINDORM_USERNAME=admin
@ -330,9 +344,11 @@ PROMPT_GENERATION_MAX_TOKENS=512
CODE_GENERATION_MAX_TOKENS=1024
PLUGIN_BASED_TOKEN_COUNTING_ENABLED=false
# Mail configuration, support: resend, smtp
# Mail configuration, support: resend, smtp, sendgrid
MAIL_TYPE=
# If using SendGrid, use the 'from' field for authentication if necessary.
MAIL_DEFAULT_SEND_FROM=no-reply <no-reply@dify.ai>
# resend configuration
RESEND_API_KEY=
RESEND_API_URL=https://api.resend.com
# smtp configuration
@ -342,12 +358,14 @@ SMTP_USERNAME=123
SMTP_PASSWORD=abc
SMTP_USE_TLS=true
SMTP_OPPORTUNISTIC_TLS=false
# SendGrid configuration
SENDGRID_API_KEY=
# Sentry configuration
SENTRY_DSN=
# DEBUG
DEBUG=false
ENABLE_REQUEST_LOGGING=False
SQLALCHEMY_ECHO=false
# Notion import configuration, support public and internal
@ -431,6 +449,19 @@ MAX_VARIABLE_SIZE=204800
# hybrid: Save new data to object storage, read from both object storage and RDBMS
WORKFLOW_NODE_EXECUTION_STORAGE=rdbms
# Repository configuration
# Core workflow execution repository implementation
CORE_WORKFLOW_EXECUTION_REPOSITORY=core.repositories.sqlalchemy_workflow_execution_repository.SQLAlchemyWorkflowExecutionRepository
# Core workflow node execution repository implementation
CORE_WORKFLOW_NODE_EXECUTION_REPOSITORY=core.repositories.sqlalchemy_workflow_node_execution_repository.SQLAlchemyWorkflowNodeExecutionRepository
# API workflow node execution repository implementation
API_WORKFLOW_NODE_EXECUTION_REPOSITORY=repositories.sqlalchemy_api_workflow_node_execution_repository.DifyAPISQLAlchemyWorkflowNodeExecutionRepository
# API workflow run repository implementation
API_WORKFLOW_RUN_REPOSITORY=repositories.sqlalchemy_api_workflow_run_repository.DifyAPISQLAlchemyWorkflowRunRepository
# App configuration
APP_MAX_EXECUTION_TIME=1200
APP_MAX_ACTIVE_REQUESTS=0
@ -488,3 +519,10 @@ OTEL_METRIC_EXPORT_TIMEOUT=30000
# Prevent Clickjacking
ALLOW_EMBED=false
# Dataset queue monitor configuration
QUEUE_MONITOR_THRESHOLD=200
# You can configure multiple addresses, separated by commas, e.g. test1@dify.ai,test2@dify.ai
QUEUE_MONITOR_ALERT_EMAILS=
# Monitor interval in minutes, default is 30 minutes
QUEUE_MONITOR_INTERVAL=30

View File

@ -1,6 +1,4 @@
exclude = [
"migrations/*",
]
exclude = ["migrations/*"]
line-length = 120
[format]
@ -9,14 +7,14 @@ quote-style = "double"
[lint]
preview = false
select = [
"B", # flake8-bugbear rules
"C4", # flake8-comprehensions
"E", # pycodestyle E rules
"F", # pyflakes rules
"FURB", # refurb rules
"I", # isort rules
"N", # pep8-naming
"PT", # flake8-pytest-style rules
"B", # flake8-bugbear rules
"C4", # flake8-comprehensions
"E", # pycodestyle E rules
"F", # pyflakes rules
"FURB", # refurb rules
"I", # isort rules
"N", # pep8-naming
"PT", # flake8-pytest-style rules
"PLC0208", # iteration-over-set
"PLC0414", # useless-import-alias
"PLE0604", # invalid-all-object
@ -24,58 +22,60 @@ select = [
"PLR0402", # manual-from-import
"PLR1711", # useless-return
"PLR1714", # repeated-equality-comparison
"RUF013", # implicit-optional
"RUF019", # unnecessary-key-check
"RUF100", # unused-noqa
"RUF101", # redirected-noqa
"RUF200", # invalid-pyproject-toml
"RUF022", # unsorted-dunder-all
"S506", # unsafe-yaml-load
"SIM", # flake8-simplify rules
"TRY400", # error-instead-of-exception
"TRY401", # verbose-log-message
"UP", # pyupgrade rules
"W191", # tab-indentation
"W605", # invalid-escape-sequence
"RUF013", # implicit-optional
"RUF019", # unnecessary-key-check
"RUF100", # unused-noqa
"RUF101", # redirected-noqa
"RUF200", # invalid-pyproject-toml
"RUF022", # unsorted-dunder-all
"S506", # unsafe-yaml-load
"SIM", # flake8-simplify rules
"TRY400", # error-instead-of-exception
"TRY401", # verbose-log-message
"UP", # pyupgrade rules
"W191", # tab-indentation
"W605", # invalid-escape-sequence
# security related linting rules
# RCE protection (sort of)
"S102", # exec-builtin, disallow use of `exec`
"S307", # suspicious-eval-usage, disallow use of `eval` and `ast.literal_eval`
"S301", # suspicious-pickle-usage, disallow use of `pickle` and its wrappers.
"S302", # suspicious-marshal-usage, disallow use of `marshal` module
"S311", # suspicious-non-cryptographic-random-usage
]
ignore = [
"E402", # module-import-not-at-top-of-file
"E711", # none-comparison
"E712", # true-false-comparison
"E721", # type-comparison
"E722", # bare-except
"F821", # undefined-name
"F841", # unused-variable
"E402", # module-import-not-at-top-of-file
"E711", # none-comparison
"E712", # true-false-comparison
"E721", # type-comparison
"E722", # bare-except
"F821", # undefined-name
"F841", # unused-variable
"FURB113", # repeated-append
"FURB152", # math-constant
"UP007", # non-pep604-annotation
"UP032", # f-string
"UP045", # non-pep604-annotation-optional
"B005", # strip-with-multi-characters
"B006", # mutable-argument-default
"B007", # unused-loop-control-variable
"B026", # star-arg-unpacking-after-keyword-arg
"B903", # class-as-data-structure
"B904", # raise-without-from-inside-except
"B905", # zip-without-explicit-strict
"N806", # non-lowercase-variable-in-function
"N815", # mixed-case-variable-in-class-scope
"PT011", # pytest-raises-too-broad
"SIM102", # collapsible-if
"SIM103", # needless-bool
"SIM105", # suppressible-exception
"SIM107", # return-in-try-except-finally
"SIM108", # if-else-block-instead-of-if-exp
"SIM113", # enumerate-for-loop
"SIM117", # multiple-with-statements
"SIM210", # if-expr-with-true-false
"UP007", # non-pep604-annotation
"UP032", # f-string
"UP045", # non-pep604-annotation-optional
"B005", # strip-with-multi-characters
"B006", # mutable-argument-default
"B007", # unused-loop-control-variable
"B026", # star-arg-unpacking-after-keyword-arg
"B903", # class-as-data-structure
"B904", # raise-without-from-inside-except
"B905", # zip-without-explicit-strict
"N806", # non-lowercase-variable-in-function
"N815", # mixed-case-variable-in-class-scope
"PT011", # pytest-raises-too-broad
"SIM102", # collapsible-if
"SIM103", # needless-bool
"SIM105", # suppressible-exception
"SIM107", # return-in-try-except-finally
"SIM108", # if-else-block-instead-of-if-exp
"SIM113", # enumerate-for-loop
"SIM117", # multiple-with-statements
"SIM210", # if-expr-with-true-false
"UP038", # deprecated and not recommended by Ruff, https://docs.astral.sh/ruff/rules/non-pep604-isinstance/
]
[lint.per-file-ignores]

View File

@ -4,7 +4,7 @@ FROM python:3.12-slim-bookworm AS base
WORKDIR /app/api
# Install uv
ENV UV_VERSION=0.6.14
ENV UV_VERSION=0.7.11
RUN pip install --no-cache-dir uv==${UV_VERSION}

View File

@ -54,6 +54,7 @@ def initialize_extensions(app: DifyApp):
ext_otel,
ext_proxy_fix,
ext_redis,
ext_request_logging,
ext_sentry,
ext_set_secretkey,
ext_storage,
@ -83,6 +84,7 @@ def initialize_extensions(app: DifyApp):
ext_blueprints,
ext_commands,
ext_otel,
ext_request_logging,
]
for ext in extensions:
short_name = ext.__name__.split(".")[-1]

View File

@ -27,7 +27,7 @@ from models.dataset import Dataset, DatasetCollectionBinding, DatasetMetadata, D
from models.dataset import Document as DatasetDocument
from models.model import Account, App, AppAnnotationSetting, AppMode, Conversation, MessageAnnotation
from models.provider import Provider, ProviderModel
from services.account_service import RegisterService, TenantService
from services.account_service import AccountService, RegisterService, TenantService
from services.clear_free_plan_tenant_expired_logs import ClearFreePlanTenantExpiredLogs
from services.plugin.data_migration import PluginDataMigration
from services.plugin.plugin_migration import PluginMigration
@ -68,6 +68,7 @@ def reset_password(email, new_password, password_confirm):
account.password = base64_password_hashed
account.password_salt = base64_salt
db.session.commit()
AccountService.reset_login_error_rate_limit(email)
click.echo(click.style("Password reset successfully.", fg="green"))
@ -280,6 +281,7 @@ def migrate_knowledge_vector_database():
VectorType.ELASTICSEARCH,
VectorType.OPENGAUSS,
VectorType.TABLESTORE,
VectorType.MATRIXONE,
}
lower_collection_vector_types = {
VectorType.ANALYTICDB,
@ -846,6 +848,9 @@ def clear_orphaned_file_records(force: bool):
{"type": "text", "table": "workflow_node_executions", "column": "outputs"},
{"type": "text", "table": "conversations", "column": "introduction"},
{"type": "text", "table": "conversations", "column": "system_instruction"},
{"type": "text", "table": "accounts", "column": "avatar"},
{"type": "text", "table": "apps", "column": "icon"},
{"type": "text", "table": "sites", "column": "icon"},
{"type": "json", "table": "messages", "column": "inputs"},
{"type": "json", "table": "messages", "column": "message"},
]

View File

@ -1,8 +1,11 @@
import logging
from pathlib import Path
from typing import Any
from pydantic.fields import FieldInfo
from pydantic_settings import BaseSettings, PydanticBaseSettingsSource, SettingsConfigDict
from pydantic_settings import BaseSettings, PydanticBaseSettingsSource, SettingsConfigDict, TomlConfigSettingsSource
from libs.file_utils import search_file_upwards
from .deploy import DeploymentConfig
from .enterprise import EnterpriseFeatureConfig
@ -99,4 +102,12 @@ class DifyConfig(
RemoteSettingsSourceFactory(settings_cls),
dotenv_settings,
file_secret_settings,
TomlConfigSettingsSource(
settings_cls=settings_cls,
toml_file=search_file_upwards(
base_dir_path=Path(__file__).parent,
target_file_name="pyproject.toml",
max_search_parent_depth=2,
),
),
)
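The `search_file_upwards` call above locates `pyproject.toml` by walking up from the config package. That helper's body isn't part of this diff; a minimal sketch of what a function with this signature plausibly does (only the name and parameters come from the call site, the body is an assumption):

```python
from pathlib import Path

def search_file_upwards(base_dir_path: Path, target_file_name: str, max_search_parent_depth: int = 2) -> Path:
    # Walk from base_dir_path toward the filesystem root, checking each level.
    current = base_dir_path.resolve()
    for _ in range(max_search_parent_depth + 1):
        candidate = current / target_file_name
        if candidate.is_file():
            return candidate
        current = current.parent
    raise FileNotFoundError(f"{target_file_name} not found within {max_search_parent_depth} levels above {base_dir_path}")
```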

View File

@ -17,6 +17,12 @@ class DeploymentConfig(BaseSettings):
default=False,
)
# Request logging configuration
ENABLE_REQUEST_LOGGING: bool = Field(
description="Enable request and response body logging",
default=False,
)
EDITION: str = Field(
description="Deployment edition of the application (e.g., 'SELF_HOSTED', 'CLOUD')",
default="SELF_HOSTED",

View File

@ -237,6 +237,13 @@ class FileAccessConfig(BaseSettings):
default="",
)
INTERNAL_FILES_URL: str = Field(
description="Internal base URL for file access within Docker network,"
" used for plugin daemon and internal service communication."
" Falls back to FILES_URL if not specified.",
default="",
)
FILES_ACCESS_TIMEOUT: int = Field(
description="Expiration time in seconds for file access URLs",
default=300,
@ -530,6 +537,33 @@ class WorkflowNodeExecutionConfig(BaseSettings):
)
class RepositoryConfig(BaseSettings):
"""
Configuration for repository implementations
"""
CORE_WORKFLOW_EXECUTION_REPOSITORY: str = Field(
description="Repository implementation for WorkflowExecution. Specify as a module path",
default="core.repositories.sqlalchemy_workflow_execution_repository.SQLAlchemyWorkflowExecutionRepository",
)
CORE_WORKFLOW_NODE_EXECUTION_REPOSITORY: str = Field(
description="Repository implementation for WorkflowNodeExecution. Specify as a module path",
default="core.repositories.sqlalchemy_workflow_node_execution_repository.SQLAlchemyWorkflowNodeExecutionRepository",
)
API_WORKFLOW_NODE_EXECUTION_REPOSITORY: str = Field(
description="Service-layer repository implementation for WorkflowNodeExecutionModel operations. "
"Specify as a module path",
default="repositories.sqlalchemy_api_workflow_node_execution_repository.DifyAPISQLAlchemyWorkflowNodeExecutionRepository",
)
API_WORKFLOW_RUN_REPOSITORY: str = Field(
description="Service-layer repository implementation for WorkflowRun operations. Specify as a module path",
default="repositories.sqlalchemy_api_workflow_run_repository.DifyAPISQLAlchemyWorkflowRunRepository",
)
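Each setting in `RepositoryConfig` is a dotted import path rather than an object, so implementations can be swapped via environment variables. A minimal sketch of how such a path resolves to a class at runtime (the loader shown here is illustrative, not Dify's actual factory):

```python
import importlib

def import_from_string(dotted_path: str) -> type:
    # "package.module.ClassName" -> import package.module, then fetch ClassName
    module_path, _, class_name = dotted_path.rpartition(".")
    return getattr(importlib.import_module(module_path), class_name)

# e.g. repo_cls = import_from_string(config.CORE_WORKFLOW_EXECUTION_REPOSITORY)
```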
class AuthConfig(BaseSettings):
"""
Configuration for authentication and OAuth
@ -609,7 +643,7 @@ class MailConfig(BaseSettings):
"""
MAIL_TYPE: Optional[str] = Field(
description="Email service provider type ('smtp' or 'resend'), default to None.",
description="Email service provider type ('smtp' or 'resend' or 'sendGrid), default to None.",
default=None,
)
@ -663,6 +697,11 @@ class MailConfig(BaseSettings):
default=50,
)
SENDGRID_API_KEY: Optional[str] = Field(
description="API key for SendGrid service",
default=None,
)
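For context, a minimal sketch of sending mail through SendGrid with an API key like `SENDGRID_API_KEY`, using the official `sendgrid` package (Dify's actual mail client wiring is not shown in this hunk, so treat the function as illustrative):

```python
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

def send_via_sendgrid(api_key: str, sender: str, to: str, subject: str, html: str) -> int:
    # The sender should correspond to MAIL_DEFAULT_SEND_FROM / a verified sender.
    message = Mail(from_email=sender, to_emails=to, subject=subject, html_content=html)
    response = SendGridAPIClient(api_key).send(message)
    return response.status_code  # SendGrid returns 202 when the mail is accepted
```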
class RagEtlConfig(BaseSettings):
"""
@ -891,6 +930,7 @@ class FeatureConfig(
MultiModalTransferConfig,
PositionConfig,
RagEtlConfig,
RepositoryConfig,
SecurityConfig,
ToolConfig,
UpdateConfig,

View File

@ -2,7 +2,7 @@ import os
from typing import Any, Literal, Optional
from urllib.parse import parse_qsl, quote_plus
from pydantic import Field, NonNegativeInt, PositiveFloat, PositiveInt, computed_field
from pydantic import Field, NonNegativeFloat, NonNegativeInt, PositiveFloat, PositiveInt, computed_field
from pydantic_settings import BaseSettings
from .cache.redis_config import RedisConfig
@ -24,6 +24,7 @@ from .vdb.couchbase_config import CouchbaseConfig
from .vdb.elasticsearch_config import ElasticsearchConfig
from .vdb.huawei_cloud_config import HuaweiCloudConfig
from .vdb.lindorm_config import LindormConfig
from .vdb.matrixone_config import MatrixoneConfig
from .vdb.milvus_config import MilvusConfig
from .vdb.myscale_config import MyScaleConfig
from .vdb.oceanbase_config import OceanBaseVectorConfig
@ -161,6 +162,11 @@ class DatabaseConfig(BaseSettings):
default=3600,
)
SQLALCHEMY_POOL_USE_LIFO: bool = Field(
description="If True, SQLAlchemy will use last-in-first-out way to retrieve connections from pool.",
default=False,
)
SQLALCHEMY_POOL_PRE_PING: bool = Field(
description="If True, enables connection pool pre-ping feature to check connections.",
default=False,
@ -198,6 +204,7 @@ class DatabaseConfig(BaseSettings):
"pool_recycle": self.SQLALCHEMY_POOL_RECYCLE,
"pool_pre_ping": self.SQLALCHEMY_POOL_PRE_PING,
"connect_args": connect_args,
"pool_use_lifo": self.SQLALCHEMY_POOL_USE_LIFO,
}
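`pool_use_lifo` is passed straight through to SQLAlchemy's QueuePool: with LIFO checkout the most recently returned connection is reused first, so surplus connections sit idle long enough for `pool_recycle` to retire them. A minimal standalone sketch of the same flag (the DSN is a placeholder):

```python
from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:pass@localhost/dify",  # placeholder DSN
    pool_size=5,
    pool_recycle=3600,
    pool_pre_ping=True,
    pool_use_lifo=True,  # reuse warm connections first; let the rest age out
)
```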
@ -222,6 +229,10 @@ class CeleryConfig(DatabaseConfig):
default=None,
)
CELERY_SENTINEL_PASSWORD: Optional[str] = Field(
description="Password of the Redis Sentinel master.",
default=None,
)
CELERY_SENTINEL_SOCKET_TIMEOUT: Optional[PositiveFloat] = Field(
description="Timeout for Redis Sentinel socket operations in seconds.",
default=0.1,
@ -256,6 +267,25 @@ class InternalTestConfig(BaseSettings):
)
class DatasetQueueMonitorConfig(BaseSettings):
"""
Configuration settings for Dataset Queue Monitor
"""
QUEUE_MONITOR_THRESHOLD: Optional[NonNegativeInt] = Field(
description="Threshold for dataset queue monitor",
default=200,
)
QUEUE_MONITOR_ALERT_EMAILS: Optional[str] = Field(
description="Emails for dataset queue monitor alert, separated by commas",
default=None,
)
QUEUE_MONITOR_INTERVAL: Optional[NonNegativeFloat] = Field(
description="Interval for dataset queue monitor in minutes",
default=30,
)
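A minimal sketch of how a periodic task might consume these three settings; only the setting names come from this diff, the helpers are assumptions:

```python
def parse_alert_emails(raw: str | None) -> list[str]:
    # QUEUE_MONITOR_ALERT_EMAILS is comma-separated, e.g. "test1@dify.ai,test2@dify.ai"
    return [addr.strip() for addr in (raw or "").split(",") if addr.strip()]

def should_alert(queue_length: int, threshold: int | None) -> bool:
    return bool(threshold) and queue_length > threshold

# A scheduler firing every QUEUE_MONITOR_INTERVAL minutes could then do:
# if should_alert(len(queue), cfg.QUEUE_MONITOR_THRESHOLD):
#     notify(parse_alert_emails(cfg.QUEUE_MONITOR_ALERT_EMAILS))
```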
class MiddlewareConfig(
# place the configs in alphabet order
CeleryConfig,
@ -303,5 +333,7 @@ class MiddlewareConfig(
BaiduVectorDBConfig,
OpenGaussConfig,
TableStoreConfig,
DatasetQueueMonitorConfig,
MatrixoneConfig,
):
pass

View File

@ -1,4 +1,4 @@
from typing import Optional
from typing import Literal, Optional
from pydantic import Field
from pydantic_settings import BaseSettings
@ -34,7 +34,7 @@ class S3StorageConfig(BaseSettings):
default=None,
)
S3_ADDRESS_STYLE: str = Field(
S3_ADDRESS_STYLE: Literal["auto", "virtual", "path"] = Field(
description="S3 addressing style: 'auto', 'path', or 'virtual'",
default="auto",
)
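These three values map directly onto botocore's `addressing_style` option; a minimal sketch with boto3 (bucket-level wiring in Dify is outside this hunk):

```python
import boto3
from botocore.config import Config

# "virtual" -> https://<bucket>.s3.amazonaws.com/<key>
# "path"    -> https://s3.amazonaws.com/<bucket>/<key>
# "auto"    -> botocore picks per bucket name
s3 = boto3.client("s3", config=Config(s3={"addressing_style": "path"}))
```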

View File

@ -0,0 +1,14 @@
from pydantic import BaseModel, Field
class MatrixoneConfig(BaseModel):
"""Matrixone vector database configuration."""
MATRIXONE_HOST: str = Field(default="localhost", description="Host address of the Matrixone server")
MATRIXONE_PORT: int = Field(default=6001, description="Port number of the Matrixone server")
MATRIXONE_USER: str = Field(default="dump", description="Username for authenticating with Matrixone")
MATRIXONE_PASSWORD: str = Field(default="111", description="Password for authenticating with Matrixone")
MATRIXONE_DATABASE: str = Field(default="dify", description="Name of the Matrixone database to connect to")
MATRIXONE_METRIC: str = Field(
default="l2", description="Distance metric type for vector similarity search (cosine or l2)"
)
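Matrixone is MySQL-wire-compatible, which is why the defaults look like a MySQL login (port 6001, user `dump`). A minimal sketch of turning this config into a DSN (the driver choice is an assumption, not taken from this diff):

```python
def matrixone_dsn(cfg: MatrixoneConfig) -> str:
    # With the defaults above: mysql+pymysql://dump:111@localhost:6001/dify
    return (
        f"mysql+pymysql://{cfg.MATRIXONE_USER}:{cfg.MATRIXONE_PASSWORD}"
        f"@{cfg.MATRIXONE_HOST}:{cfg.MATRIXONE_PORT}/{cfg.MATRIXONE_DATABASE}"
    )
```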

View File

@ -33,6 +33,11 @@ class OpenSearchConfig(BaseSettings):
default=False,
)
OPENSEARCH_VERIFY_CERTS: bool = Field(
description="Whether to verify SSL certificates for HTTPS connections (recommended to set True in production)",
default=True,
)
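A minimal sketch of how `OPENSEARCH_SECURE` and `OPENSEARCH_VERIFY_CERTS` typically reach the opensearch-py client (the constructor kwargs are opensearch-py's; the exact wiring is an assumption):

```python
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "127.0.0.1", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,       # OPENSEARCH_SECURE
    verify_certs=True,  # OPENSEARCH_VERIFY_CERTS; keep True in production
)
```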
OPENSEARCH_AUTH_METHOD: AuthMethod = Field(
description="Authentication method for OpenSearch connection (default is 'basic')",
default=AuthMethod.BASIC,

View File

@ -33,3 +33,8 @@ class QdrantConfig(BaseSettings):
description="Port number for gRPC connection to Qdrant server (default is 6334)",
default=6334,
)
QDRANT_REPLICATION_FACTOR: PositiveInt = Field(
description="Replication factor for Qdrant collections (default is 1)",
default=1,
)
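Replication is fixed at collection creation time in Qdrant; a minimal sketch of where this factor would be applied with qdrant-client (collection name and vector size are placeholders):

```python
from qdrant_client import QdrantClient
from qdrant_client.http import models

client = QdrantClient(url="https://your-qdrant-host:6333", api_key="difyai123456")
client.create_collection(
    collection_name="example_collection",
    vectors_config=models.VectorParams(size=1536, distance=models.Distance.COSINE),
    replication_factor=1,  # QDRANT_REPLICATION_FACTOR; values > 1 need a multi-node cluster
)
```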

View File

@ -1,17 +1,13 @@
from pydantic import Field
from pydantic_settings import BaseSettings
from configs.packaging.pyproject import PyProjectConfig, PyProjectTomlConfig
class PackagingInfo(BaseSettings):
class PackagingInfo(PyProjectTomlConfig):
"""
Packaging build information
"""
CURRENT_VERSION: str = Field(
description="Dify version",
default="1.4.0",
)
COMMIT_SHA: str = Field(
description="SHA-1 checksum of the git commit used to build the app",
default="",

View File

@ -0,0 +1,17 @@
from pydantic import BaseModel, Field
from pydantic_settings import BaseSettings
class PyProjectConfig(BaseModel):
version: str = Field(description="Dify version", default="")
class PyProjectTomlConfig(BaseSettings):
"""
configs in api/pyproject.toml
"""
project: PyProjectConfig = Field(
description="configs in the project section of pyproject.toml",
default=PyProjectConfig(),
)
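`PackagingInfo` (previous hunk) now inherits this model, so the version string comes from the `[project]` table of `pyproject.toml` via pydantic-settings. A minimal standalone sketch of the same mapping (names here are illustrative):

```python
from pydantic import BaseModel
from pydantic_settings import BaseSettings, SettingsConfigDict, TomlConfigSettingsSource

class Project(BaseModel):
    version: str = ""

class TomlInfo(BaseSettings):
    model_config = SettingsConfigDict(toml_file="pyproject.toml")
    project: Project = Project()

    @classmethod
    def settings_customise_sources(cls, settings_cls, init_settings, env_settings, dotenv_settings, file_secret_settings):
        # Read only from the TOML file for this sketch.
        return (TomlConfigSettingsSource(settings_cls),)

# TomlInfo().project.version mirrors [project].version in pyproject.toml
```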

View File

@ -60,8 +60,7 @@ class NacosHttpClient:
sign_str = tenant + "+"
if group:
sign_str = sign_str + group + "+"
if sign_str:
sign_str += ts
sign_str += ts # Directly concatenate ts without conditional checks, because the Nacos auth header requires it.
return sign_str
def get_access_token(self, force_refresh=False):
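The `sign_str` built above (`tenant+group+ts`, or `tenant+ts` when no group is set) is what Nacos expects in its auth header. The open-source Nacos SDKs sign it with HMAC-SHA1 over the account's secret key and base64-encode the digest; a hedged sketch of that step (not code from this repo):

```python
import base64
import hashlib
import hmac

def nacos_sign(sign_str: str, secret_key: str) -> str:
    digest = hmac.new(secret_key.encode("utf-8"), sign_str.encode("utf-8"), hashlib.sha1).digest()
    return base64.b64encode(digest).decode("utf-8")
```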

View File

@ -11,10 +11,6 @@ if TYPE_CHECKING:
from core.workflow.entities.variable_pool import VariablePool
tenant_id: ContextVar[str] = ContextVar("tenant_id")
workflow_variable_pool: ContextVar["VariablePool"] = ContextVar("workflow_variable_pool")
"""
To avoid race conditions caused by gunicorn thread recycling, use RecyclableContextVar instead.
"""

View File

@ -56,6 +56,7 @@ from .app import (
conversation,
conversation_variables,
generator,
mcp_server,
message,
model_config,
ops_trace,
@ -63,6 +64,7 @@ from .app import (
statistic,
workflow,
workflow_app_log,
workflow_draft_variable,
workflow_run,
workflow_statistic,
)

View File

@ -56,8 +56,7 @@ class InsertExploreAppListApi(Resource):
parser.add_argument("position", type=int, required=True, nullable=False, location="json")
args = parser.parse_args()
with Session(db.engine) as session:
app = session.execute(select(App).filter(App.id == args["app_id"])).scalar_one_or_none()
app = db.session.execute(select(App).filter(App.id == args["app_id"])).scalar_one_or_none()
if not app:
raise NotFound(f"App '{args['app_id']}' is not found")
@ -78,38 +77,38 @@ class InsertExploreAppListApi(Resource):
select(RecommendedApp).filter(RecommendedApp.app_id == args["app_id"])
).scalar_one_or_none()
if not recommended_app:
recommended_app = RecommendedApp(
app_id=app.id,
description=desc,
copyright=copy_right,
privacy_policy=privacy_policy,
custom_disclaimer=custom_disclaimer,
language=args["language"],
category=args["category"],
position=args["position"],
)
if not recommended_app:
recommended_app = RecommendedApp(
app_id=app.id,
description=desc,
copyright=copy_right,
privacy_policy=privacy_policy,
custom_disclaimer=custom_disclaimer,
language=args["language"],
category=args["category"],
position=args["position"],
)
db.session.add(recommended_app)
db.session.add(recommended_app)
app.is_public = True
db.session.commit()
app.is_public = True
db.session.commit()
return {"result": "success"}, 201
else:
recommended_app.description = desc
recommended_app.copyright = copy_right
recommended_app.privacy_policy = privacy_policy
recommended_app.custom_disclaimer = custom_disclaimer
recommended_app.language = args["language"]
recommended_app.category = args["category"]
recommended_app.position = args["position"]
return {"result": "success"}, 201
else:
recommended_app.description = desc
recommended_app.copyright = copy_right
recommended_app.privacy_policy = privacy_policy
recommended_app.custom_disclaimer = custom_disclaimer
recommended_app.language = args["language"]
recommended_app.category = args["category"]
recommended_app.position = args["position"]
app.is_public = True
app.is_public = True
db.session.commit()
db.session.commit()
return {"result": "success"}, 200
return {"result": "success"}, 200
class InsertExploreAppApi(Resource):

View File

@ -208,7 +208,7 @@ class AnnotationBatchImportApi(Resource):
if len(request.files) > 1:
raise TooManyFilesError()
# check file type
if not file.filename.endswith(".csv"):
if not file.filename or not file.filename.lower().endswith(".csv"):
raise ValueError("Invalid file type. Only CSV files are allowed")
return AppAnnotationService.batch_import_app_annotations(app_id, file)

View File

@ -17,15 +17,13 @@ from controllers.console.wraps import (
)
from core.ops.ops_trace_manager import OpsTraceManager
from extensions.ext_database import db
from fields.app_fields import (
app_detail_fields,
app_detail_fields_with_site,
app_pagination_fields,
)
from fields.app_fields import app_detail_fields, app_detail_fields_with_site, app_pagination_fields
from libs.login import login_required
from models import Account, App
from services.app_dsl_service import AppDslService, ImportMode
from services.app_service import AppService
from services.enterprise.enterprise_service import EnterpriseService
from services.feature_service import FeatureService
ALLOW_CREATE_APP_MODES = ["chat", "agent-chat", "advanced-chat", "workflow", "completion"]
@ -75,7 +73,17 @@ class AppListApi(Resource):
if not app_pagination:
return {"data": [], "total": 0, "page": 1, "limit": 20, "has_more": False}
return marshal(app_pagination, app_pagination_fields)
if FeatureService.get_system_features().webapp_auth.enabled:
app_ids = [str(app.id) for app in app_pagination.items]
res = EnterpriseService.WebAppAuth.batch_get_app_access_mode_by_id(app_ids=app_ids)
if len(res) != len(app_ids):
raise BadRequest("Invalid app id in webapp auth")
for app in app_pagination.items:
if str(app.id) in res:
app.access_mode = res[str(app.id)].access_mode
return marshal(app_pagination, app_pagination_fields), 200
@setup_required
@login_required
@ -119,6 +127,10 @@ class AppApi(Resource):
app_model = app_service.get_app(app_model)
if FeatureService.get_system_features().webapp_auth.enabled:
app_setting = EnterpriseService.WebAppAuth.get_app_access_mode_by_id(app_id=str(app_model.id))
app_model.access_mode = app_setting.access_mode
return app_model
@setup_required
@ -139,6 +151,7 @@ class AppApi(Resource):
parser.add_argument("icon", type=str, location="json")
parser.add_argument("icon_background", type=str, location="json")
parser.add_argument("use_icon_as_answer_icon", type=bool, location="json")
parser.add_argument("max_active_requests", type=int, location="json")
args = parser.parse_args()
app_service = AppService()

View File

@ -17,6 +17,8 @@ from libs.login import login_required
from models import Account
from models.model import App
from services.app_dsl_service import AppDslService, ImportStatus
from services.enterprise.enterprise_service import EnterpriseService
from services.feature_service import FeatureService
class AppImportApi(Resource):
@ -60,7 +62,9 @@ class AppImportApi(Resource):
app_id=args.get("app_id"),
)
session.commit()
if result.app_id and FeatureService.get_system_features().webapp_auth.enabled:
# update the web app setting to private
EnterpriseService.WebAppAuth.update_app_access_mode(result.app_id, "private")
# Return appropriate status code based on result
status = result.status
if status == ImportStatus.FAILED.value:

View File

@ -90,23 +90,11 @@ class ChatMessageTextApi(Resource):
message_id = args.get("message_id", None)
text = args.get("text", None)
if (
app_model.mode in {AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value}
and app_model.workflow
and app_model.workflow.features_dict
):
text_to_speech = app_model.workflow.features_dict.get("text_to_speech")
if text_to_speech is None:
raise ValueError("TTS is not enabled")
voice = args.get("voice") or text_to_speech.get("voice")
else:
try:
if app_model.app_model_config is None:
raise ValueError("AppModelConfig not found")
voice = args.get("voice") or app_model.app_model_config.text_to_speech_dict.get("voice")
except Exception:
voice = None
response = AudioService.transcript_tts(app_model=app_model, text=text, message_id=message_id, voice=voice)
voice = args.get("voice", None)
response = AudioService.transcript_tts(
app_model=app_model, text=text, voice=voice, message_id=message_id, is_draft=True
)
return response
except services.errors.app_model_config.AppModelConfigBrokenError:
logging.exception("App model config broken.")

View File

@ -0,0 +1,119 @@
import json
from enum import StrEnum
from flask_login import current_user
from flask_restful import Resource, marshal_with, reqparse
from werkzeug.exceptions import NotFound
from controllers.console import api
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from extensions.ext_database import db
from fields.app_fields import app_server_fields
from libs.login import login_required
from models.model import AppMCPServer
class AppMCPServerStatus(StrEnum):
ACTIVE = "active"
INACTIVE = "inactive"
class AppMCPServerController(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model
@marshal_with(app_server_fields)
def get(self, app_model):
server = db.session.query(AppMCPServer).filter(AppMCPServer.app_id == app_model.id).first()
return server
@setup_required
@login_required
@account_initialization_required
@get_app_model
@marshal_with(app_server_fields)
def post(self, app_model):
if not current_user.is_editor:
raise NotFound()
parser = reqparse.RequestParser()
parser.add_argument("description", type=str, required=False, location="json")
parser.add_argument("parameters", type=dict, required=True, location="json")
args = parser.parse_args()
description = args.get("description")
if not description:
description = app_model.description or ""
server = AppMCPServer(
name=app_model.name,
description=description,
parameters=json.dumps(args["parameters"], ensure_ascii=False),
status=AppMCPServerStatus.ACTIVE,
app_id=app_model.id,
tenant_id=current_user.current_tenant_id,
server_code=AppMCPServer.generate_server_code(16),
)
db.session.add(server)
db.session.commit()
return server
@setup_required
@login_required
@account_initialization_required
@get_app_model
@marshal_with(app_server_fields)
def put(self, app_model):
if not current_user.is_editor:
raise NotFound()
parser = reqparse.RequestParser()
parser.add_argument("id", type=str, required=True, location="json")
parser.add_argument("description", type=str, required=False, location="json")
parser.add_argument("parameters", type=dict, required=True, location="json")
parser.add_argument("status", type=str, required=False, location="json")
args = parser.parse_args()
server = db.session.query(AppMCPServer).filter(AppMCPServer.id == args["id"]).first()
if not server:
raise NotFound()
description = args.get("description")
if description is None:
pass
elif not description:
server.description = app_model.description or ""
else:
server.description = description
server.parameters = json.dumps(args["parameters"], ensure_ascii=False)
if args["status"]:
if args["status"] not in [status.value for status in AppMCPServerStatus]:
raise ValueError("Invalid status")
server.status = args["status"]
db.session.commit()
return server
class AppMCPServerRefreshController(Resource):
@setup_required
@login_required
@account_initialization_required
@marshal_with(app_server_fields)
def get(self, server_id):
if not current_user.is_editor:
raise NotFound()
server = (
db.session.query(AppMCPServer)
.filter(AppMCPServer.id == server_id)
.filter(AppMCPServer.tenant_id == current_user.current_tenant_id)
.first()
)
if not server:
raise NotFound()
server.server_code = AppMCPServer.generate_server_code(16)
db.session.commit()
return server
api.add_resource(AppMCPServerController, "/apps/<uuid:app_id>/server")
api.add_resource(AppMCPServerRefreshController, "/apps/<uuid:server_id>/server/refresh")

View File

@ -2,6 +2,7 @@ from datetime import datetime
from decimal import Decimal
import pytz
import sqlalchemy as sa
from flask import jsonify
from flask_login import current_user
from flask_restful import Resource, reqparse
@ -9,10 +10,11 @@ from flask_restful import Resource, reqparse
from controllers.console import api
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from core.app.entities.app_invoke_entities import InvokeFrom
from extensions.ext_database import db
from libs.helper import DatetimeString
from libs.login import login_required
from models.model import AppMode
from models import AppMode, Message
class DailyMessageStatistic(Resource):
@ -85,46 +87,41 @@ class DailyConversationStatistic(Resource):
parser.add_argument("end", type=DatetimeString("%Y-%m-%d %H:%M"), location="args")
args = parser.parse_args()
sql_query = """SELECT
DATE(DATE_TRUNC('day', created_at AT TIME ZONE 'UTC' AT TIME ZONE :tz )) AS date,
COUNT(DISTINCT messages.conversation_id) AS conversation_count
FROM
messages
WHERE
app_id = :app_id"""
arg_dict = {"tz": account.timezone, "app_id": app_model.id}
timezone = pytz.timezone(account.timezone)
utc_timezone = pytz.utc
stmt = (
sa.select(
sa.func.date(
sa.func.date_trunc("day", sa.text("created_at AT TIME ZONE 'UTC' AT TIME ZONE :tz"))
).label("date"),
sa.func.count(sa.distinct(Message.conversation_id)).label("conversation_count"),
)
.select_from(Message)
.where(Message.app_id == app_model.id, Message.invoke_from != InvokeFrom.DEBUGGER.value)
)
if args["start"]:
start_datetime = datetime.strptime(args["start"], "%Y-%m-%d %H:%M")
start_datetime = start_datetime.replace(second=0)
start_datetime_timezone = timezone.localize(start_datetime)
start_datetime_utc = start_datetime_timezone.astimezone(utc_timezone)
sql_query += " AND created_at >= :start"
arg_dict["start"] = start_datetime_utc
stmt = stmt.where(Message.created_at >= start_datetime_utc)
if args["end"]:
end_datetime = datetime.strptime(args["end"], "%Y-%m-%d %H:%M")
end_datetime = end_datetime.replace(second=0)
end_datetime_timezone = timezone.localize(end_datetime)
end_datetime_utc = end_datetime_timezone.astimezone(utc_timezone)
stmt = stmt.where(Message.created_at < end_datetime_utc)
sql_query += " AND created_at < :end"
arg_dict["end"] = end_datetime_utc
sql_query += " GROUP BY date ORDER BY date"
stmt = stmt.group_by("date").order_by("date")
response_data = []
with db.engine.begin() as conn:
rs = conn.execute(db.text(sql_query), arg_dict)
for i in rs:
response_data.append({"date": str(i.date), "conversation_count": i.conversation_count})
rs = conn.execute(stmt, {"tz": account.timezone})
for row in rs:
response_data.append({"date": str(row.date), "conversation_count": row.conversation_count})
return jsonify({"data": response_data})

View File

@ -1,5 +1,6 @@
import json
import logging
from collections.abc import Sequence
from typing import cast
from flask import abort, request
@ -18,10 +19,12 @@ from controllers.console.app.error import (
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from controllers.web.error import InvokeRateLimitError as InvokeRateLimitHttpError
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
from core.app.apps.base_app_queue_manager import AppQueueManager
from core.app.entities.app_invoke_entities import InvokeFrom
from core.file.models import File
from extensions.ext_database import db
from factories import variable_factory
from factories import file_factory, variable_factory
from fields.workflow_fields import workflow_fields, workflow_pagination_fields
from fields.workflow_run_fields import workflow_run_node_execution_fields
from libs import helper
@ -30,6 +33,7 @@ from libs.login import current_user, login_required
from models import App
from models.account import Account
from models.model import AppMode
from models.workflow import Workflow
from services.app_generate_service import AppGenerateService
from services.errors.app import WorkflowHashNotEqualError
from services.errors.llm import InvokeRateLimitError
@ -38,6 +42,24 @@ from services.workflow_service import DraftWorkflowDeletionError, WorkflowInUseE
logger = logging.getLogger(__name__)
# TODO(QuantumGhost): Refactor existing node run API to handle file parameter parsing
# at the controller level rather than in the workflow logic. This would improve separation
# of concerns and make the code more maintainable.
def _parse_file(workflow: Workflow, files: list[dict] | None = None) -> Sequence[File]:
files = files or []
file_extra_config = FileUploadConfigManager.convert(workflow.features_dict, is_vision=False)
file_objs: Sequence[File] = []
if file_extra_config is None:
return file_objs
file_objs = file_factory.build_from_mappings(
mappings=files,
tenant_id=workflow.tenant_id,
config=file_extra_config,
)
return file_objs
class DraftWorkflowApi(Resource):
@setup_required
@login_required
@ -81,8 +103,7 @@ class DraftWorkflowApi(Resource):
parser.add_argument("graph", type=dict, required=True, nullable=False, location="json")
parser.add_argument("features", type=dict, required=True, nullable=False, location="json")
parser.add_argument("hash", type=str, required=False, location="json")
# TODO: set this to required=True after frontend is updated
parser.add_argument("environment_variables", type=list, required=False, location="json")
parser.add_argument("environment_variables", type=list, required=True, location="json")
parser.add_argument("conversation_variables", type=list, required=False, location="json")
args = parser.parse_args()
elif "text/plain" in content_type:
@ -403,15 +424,30 @@ class DraftWorkflowNodeRunApi(Resource):
parser = reqparse.RequestParser()
parser.add_argument("inputs", type=dict, required=True, nullable=False, location="json")
parser.add_argument("query", type=str, required=False, location="json", default="")
parser.add_argument("files", type=list, location="json", default=[])
args = parser.parse_args()
inputs = args.get("inputs")
if inputs == None:
user_inputs = args.get("inputs")
if user_inputs is None:
raise ValueError("missing inputs")
workflow_srv = WorkflowService()
# fetch draft workflow by app_model
draft_workflow = workflow_srv.get_draft_workflow(app_model=app_model)
if not draft_workflow:
raise ValueError("Workflow not initialized")
files = _parse_file(draft_workflow, args.get("files"))
workflow_service = WorkflowService()
workflow_node_execution = workflow_service.run_draft_workflow_node(
app_model=app_model, node_id=node_id, user_inputs=inputs, account=current_user
app_model=app_model,
draft_workflow=draft_workflow,
node_id=node_id,
user_inputs=user_inputs,
account=current_user,
query=args.get("query", ""),
files=files,
)
return workflow_node_execution
@ -732,6 +768,27 @@ class WorkflowByIdApi(Resource):
return None, 204
class DraftWorkflowNodeLastRunApi(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
@marshal_with(workflow_run_node_execution_fields)
def get(self, app_model: App, node_id: str):
srv = WorkflowService()
workflow = srv.get_draft_workflow(app_model)
if not workflow:
raise NotFound("Workflow not found")
node_exec = srv.get_node_last_run(
app_model=app_model,
workflow=workflow,
node_id=node_id,
)
if node_exec is None:
raise NotFound("last run not found")
return node_exec
api.add_resource(
DraftWorkflowApi,
"/apps/<uuid:app_id>/workflows/draft",
@ -796,3 +853,7 @@ api.add_resource(
WorkflowByIdApi,
"/apps/<uuid:app_id>/workflows/<string:workflow_id>",
)
api.add_resource(
DraftWorkflowNodeLastRunApi,
"/apps/<uuid:app_id>/workflows/draft/nodes/<string:node_id>/last-run",
)

View File

@ -6,12 +6,12 @@ from sqlalchemy.orm import Session
from controllers.console import api
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from core.workflow.entities.workflow_execution import WorkflowExecutionStatus
from extensions.ext_database import db
from fields.workflow_app_log_fields import workflow_app_log_pagination_fields
from libs.login import login_required
from models import App
from models.model import AppMode
from models.workflow import WorkflowRunStatus
from services.workflow_app_service import WorkflowAppService
@ -34,11 +34,25 @@ class WorkflowAppLogApi(Resource):
parser.add_argument(
"created_at__after", type=str, location="args", help="Filter logs created after this timestamp"
)
parser.add_argument(
"created_by_end_user_session_id",
type=str,
location="args",
required=False,
default=None,
)
parser.add_argument(
"created_by_account",
type=str,
location="args",
required=False,
default=None,
)
parser.add_argument("page", type=int_range(1, 99999), default=1, location="args")
parser.add_argument("limit", type=int_range(1, 100), default=20, location="args")
args = parser.parse_args()
args.status = WorkflowRunStatus(args.status) if args.status else None
args.status = WorkflowExecutionStatus(args.status) if args.status else None
if args.created_at__before:
args.created_at__before = isoparse(args.created_at__before)
@ -57,6 +71,8 @@ class WorkflowAppLogApi(Resource):
created_at_after=args.created_at__after,
page=args.page,
limit=args.limit,
created_by_end_user_session_id=args.created_by_end_user_session_id,
created_by_account=args.created_by_account,
)
return workflow_app_log_pagination

View File

@ -0,0 +1,426 @@
import logging
from typing import Any, NoReturn
from flask import Response
from flask_restful import Resource, fields, inputs, marshal, marshal_with, reqparse
from sqlalchemy.orm import Session
from werkzeug.exceptions import Forbidden
from controllers.console import api
from controllers.console.app.error import (
DraftWorkflowNotExist,
)
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from controllers.web.error import InvalidArgumentError, NotFoundError
from core.variables.segment_group import SegmentGroup
from core.variables.segments import ArrayFileSegment, FileSegment, Segment
from core.variables.types import SegmentType
from core.workflow.constants import CONVERSATION_VARIABLE_NODE_ID, SYSTEM_VARIABLE_NODE_ID
from factories.file_factory import build_from_mapping, build_from_mappings
from factories.variable_factory import build_segment_with_type
from libs.login import current_user, login_required
from models import App, AppMode, db
from models.workflow import WorkflowDraftVariable
from services.workflow_draft_variable_service import WorkflowDraftVariableList, WorkflowDraftVariableService
from services.workflow_service import WorkflowService
logger = logging.getLogger(__name__)
def _convert_values_to_json_serializable_object(value: Segment) -> Any:
if isinstance(value, FileSegment):
return value.value.model_dump()
elif isinstance(value, ArrayFileSegment):
return [i.model_dump() for i in value.value]
elif isinstance(value, SegmentGroup):
return [_convert_values_to_json_serializable_object(i) for i in value.value]
else:
return value.value
def _serialize_var_value(variable: WorkflowDraftVariable) -> Any:
value = variable.get_value()
# create a copy of the value to avoid affecting the model cache.
value = value.model_copy(deep=True)
# Refresh the url signature before returning it to client.
if isinstance(value, FileSegment):
file = value.value
file.remote_url = file.generate_url()
elif isinstance(value, ArrayFileSegment):
files = value.value
for file in files:
file.remote_url = file.generate_url()
return _convert_values_to_json_serializable_object(value)
def _create_pagination_parser():
parser = reqparse.RequestParser()
parser.add_argument(
"page",
type=inputs.int_range(1, 100_000),
required=False,
default=1,
location="args",
help="the page of data requested",
)
parser.add_argument("limit", type=inputs.int_range(1, 100), required=False, default=20, location="args")
return parser
def _serialize_variable_type(workflow_draft_var: WorkflowDraftVariable) -> str:
value_type = workflow_draft_var.value_type
return value_type.exposed_type().value
_WORKFLOW_DRAFT_VARIABLE_WITHOUT_VALUE_FIELDS = {
"id": fields.String,
"type": fields.String(attribute=lambda model: model.get_variable_type()),
"name": fields.String,
"description": fields.String,
"selector": fields.List(fields.String, attribute=lambda model: model.get_selector()),
"value_type": fields.String(attribute=_serialize_variable_type),
"edited": fields.Boolean(attribute=lambda model: model.edited),
"visible": fields.Boolean,
}
_WORKFLOW_DRAFT_VARIABLE_FIELDS = dict(
_WORKFLOW_DRAFT_VARIABLE_WITHOUT_VALUE_FIELDS,
value=fields.Raw(attribute=_serialize_var_value),
)
_WORKFLOW_DRAFT_ENV_VARIABLE_FIELDS = {
"id": fields.String,
"type": fields.String(attribute=lambda _: "env"),
"name": fields.String,
"description": fields.String,
"selector": fields.List(fields.String, attribute=lambda model: model.get_selector()),
"value_type": fields.String(attribute=_serialize_variable_type),
"edited": fields.Boolean(attribute=lambda model: model.edited),
"visible": fields.Boolean,
}
_WORKFLOW_DRAFT_ENV_VARIABLE_LIST_FIELDS = {
"items": fields.List(fields.Nested(_WORKFLOW_DRAFT_ENV_VARIABLE_FIELDS)),
}
def _get_items(var_list: WorkflowDraftVariableList) -> list[WorkflowDraftVariable]:
return var_list.variables
_WORKFLOW_DRAFT_VARIABLE_LIST_WITHOUT_VALUE_FIELDS = {
"items": fields.List(fields.Nested(_WORKFLOW_DRAFT_VARIABLE_WITHOUT_VALUE_FIELDS), attribute=_get_items),
"total": fields.Raw(),
}
_WORKFLOW_DRAFT_VARIABLE_LIST_FIELDS = {
"items": fields.List(fields.Nested(_WORKFLOW_DRAFT_VARIABLE_FIELDS), attribute=_get_items),
}
def _api_prerequisite(f):
"""Common prerequisites for all draft workflow variable APIs.
It ensures the following conditions are satisfied:
- Dify has been properly set up.
- The request user has logged in and initialized.
- The requested app is a workflow or a chat flow.
- The request user has the edit permission for the app.
"""
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def wrapper(*args, **kwargs):
if not current_user.is_editor:
raise Forbidden()
return f(*args, **kwargs)
return wrapper
class WorkflowVariableCollectionApi(Resource):
@_api_prerequisite
@marshal_with(_WORKFLOW_DRAFT_VARIABLE_LIST_WITHOUT_VALUE_FIELDS)
def get(self, app_model: App):
"""
Get draft workflow
"""
parser = _create_pagination_parser()
args = parser.parse_args()
# fetch draft workflow by app_model
workflow_service = WorkflowService()
workflow_exist = workflow_service.is_workflow_exist(app_model=app_model)
if not workflow_exist:
raise DraftWorkflowNotExist()
# fetch draft workflow by app_model
with Session(bind=db.engine, expire_on_commit=False) as session:
draft_var_srv = WorkflowDraftVariableService(
session=session,
)
workflow_vars = draft_var_srv.list_variables_without_values(
app_id=app_model.id,
page=args.page,
limit=args.limit,
)
return workflow_vars
@_api_prerequisite
def delete(self, app_model: App):
draft_var_srv = WorkflowDraftVariableService(
session=db.session(),
)
draft_var_srv.delete_workflow_variables(app_model.id)
db.session.commit()
return Response("", 204)
def validate_node_id(node_id: str) -> NoReturn | None:
if node_id in [
CONVERSATION_VARIABLE_NODE_ID,
SYSTEM_VARIABLE_NODE_ID,
]:
# NOTE(QuantumGhost): While we store the system and conversation variables as node variables
# with specific `node_id` in database, we still want to make the API separated. By disallowing
# accessing system and conversation variables in `WorkflowDraftNodeVariableListApi`,
# we mitigate the risk of API users coming to depend on implementation details of the API.
#
# ref: [Hyrum's Law](https://www.hyrumslaw.com/)
raise InvalidArgumentError(
f"invalid node_id, please use correspond api for conversation and system variables, node_id={node_id}",
)
return None
class NodeVariableCollectionApi(Resource):
@_api_prerequisite
@marshal_with(_WORKFLOW_DRAFT_VARIABLE_LIST_FIELDS)
def get(self, app_model: App, node_id: str):
validate_node_id(node_id)
with Session(bind=db.engine, expire_on_commit=False) as session:
draft_var_srv = WorkflowDraftVariableService(
session=session,
)
node_vars = draft_var_srv.list_node_variables(app_model.id, node_id)
return node_vars
@_api_prerequisite
def delete(self, app_model: App, node_id: str):
validate_node_id(node_id)
srv = WorkflowDraftVariableService(db.session())
srv.delete_node_variables(app_model.id, node_id)
db.session.commit()
return Response("", 204)
class VariableApi(Resource):
_PATCH_NAME_FIELD = "name"
_PATCH_VALUE_FIELD = "value"
@_api_prerequisite
@marshal_with(_WORKFLOW_DRAFT_VARIABLE_FIELDS)
def get(self, app_model: App, variable_id: str):
draft_var_srv = WorkflowDraftVariableService(
session=db.session(),
)
variable = draft_var_srv.get_variable(variable_id=variable_id)
if variable is None:
raise NotFoundError(description=f"variable not found, id={variable_id}")
if variable.app_id != app_model.id:
raise NotFoundError(description=f"variable not found, id={variable_id}")
return variable
@_api_prerequisite
@marshal_with(_WORKFLOW_DRAFT_VARIABLE_FIELDS)
def patch(self, app_model: App, variable_id: str):
# Request payload for file types:
#
# Local File:
#
# {
# "type": "image",
# "transfer_method": "local_file",
# "url": "",
# "upload_file_id": "daded54f-72c7-4f8e-9d18-9b0abdd9f190"
# }
#
# Remote File:
#
#
# {
# "type": "image",
# "transfer_method": "remote_url",
# "url": "http://127.0.0.1:5001/files/1602650a-4fe4-423c-85a2-af76c083e3c4/file-preview?timestamp=1750041099&nonce=...&sign=...=",
# "upload_file_id": "1602650a-4fe4-423c-85a2-af76c083e3c4"
# }
parser = reqparse.RequestParser()
parser.add_argument(self._PATCH_NAME_FIELD, type=str, required=False, nullable=True, location="json")
# Parse 'value' field as-is to maintain its original data structure
parser.add_argument(self._PATCH_VALUE_FIELD, type=lambda x: x, required=False, nullable=True, location="json")
draft_var_srv = WorkflowDraftVariableService(
session=db.session(),
)
args = parser.parse_args(strict=True)
variable = draft_var_srv.get_variable(variable_id=variable_id)
if variable is None:
raise NotFoundError(description=f"variable not found, id={variable_id}")
if variable.app_id != app_model.id:
raise NotFoundError(description=f"variable not found, id={variable_id}")
new_name = args.get(self._PATCH_NAME_FIELD, None)
raw_value = args.get(self._PATCH_VALUE_FIELD, None)
if new_name is None and raw_value is None:
return variable
new_value = None
if raw_value is not None:
if variable.value_type == SegmentType.FILE:
if not isinstance(raw_value, dict):
raise InvalidArgumentError(description=f"expected dict for file, got {type(raw_value)}")
raw_value = build_from_mapping(mapping=raw_value, tenant_id=app_model.tenant_id)
elif variable.value_type == SegmentType.ARRAY_FILE:
if not isinstance(raw_value, list):
raise InvalidArgumentError(description=f"expected list for files, got {type(raw_value)}")
if len(raw_value) > 0 and not isinstance(raw_value[0], dict):
raise InvalidArgumentError(description=f"expected dict for files[0], got {type(raw_value)}")
raw_value = build_from_mappings(mappings=raw_value, tenant_id=app_model.tenant_id)
new_value = build_segment_with_type(variable.value_type, raw_value)
draft_var_srv.update_variable(variable, name=new_name, value=new_value)
db.session.commit()
return variable
@_api_prerequisite
def delete(self, app_model: App, variable_id: str):
draft_var_srv = WorkflowDraftVariableService(
session=db.session(),
)
variable = draft_var_srv.get_variable(variable_id=variable_id)
if variable is None:
raise NotFoundError(description=f"variable not found, id={variable_id}")
if variable.app_id != app_model.id:
raise NotFoundError(description=f"variable not found, id={variable_id}")
draft_var_srv.delete_variable(variable)
db.session.commit()
return Response("", 204)
class VariableResetApi(Resource):
@_api_prerequisite
def put(self, app_model: App, variable_id: str):
draft_var_srv = WorkflowDraftVariableService(
session=db.session(),
)
workflow_srv = WorkflowService()
draft_workflow = workflow_srv.get_draft_workflow(app_model)
if draft_workflow is None:
raise NotFoundError(
f"Draft workflow not found, app_id={app_model.id}",
)
variable = draft_var_srv.get_variable(variable_id=variable_id)
if variable is None:
raise NotFoundError(description=f"variable not found, id={variable_id}")
if variable.app_id != app_model.id:
raise NotFoundError(description=f"variable not found, id={variable_id}")
resetted = draft_var_srv.reset_variable(draft_workflow, variable)
db.session.commit()
if resetted is None:
return Response("", 204)
else:
return marshal(resetted, _WORKFLOW_DRAFT_VARIABLE_FIELDS)
def _get_variable_list(app_model: App, node_id) -> WorkflowDraftVariableList:
with Session(bind=db.engine, expire_on_commit=False) as session:
draft_var_srv = WorkflowDraftVariableService(
session=session,
)
if node_id == CONVERSATION_VARIABLE_NODE_ID:
draft_vars = draft_var_srv.list_conversation_variables(app_model.id)
elif node_id == SYSTEM_VARIABLE_NODE_ID:
draft_vars = draft_var_srv.list_system_variables(app_model.id)
else:
draft_vars = draft_var_srv.list_node_variables(app_id=app_model.id, node_id=node_id)
return draft_vars
class ConversationVariableCollectionApi(Resource):
@_api_prerequisite
@marshal_with(_WORKFLOW_DRAFT_VARIABLE_LIST_FIELDS)
def get(self, app_model: App):
# NOTE(QuantumGhost): Prefill conversation variables into the draft variables table
# so their IDs can be returned to the caller.
workflow_srv = WorkflowService()
draft_workflow = workflow_srv.get_draft_workflow(app_model)
if draft_workflow is None:
raise NotFoundError(description=f"draft workflow not found, id={app_model.id}")
draft_var_srv = WorkflowDraftVariableService(db.session())
draft_var_srv.prefill_conversation_variable_default_values(draft_workflow)
db.session.commit()
return _get_variable_list(app_model, CONVERSATION_VARIABLE_NODE_ID)
class SystemVariableCollectionApi(Resource):
@_api_prerequisite
@marshal_with(_WORKFLOW_DRAFT_VARIABLE_LIST_FIELDS)
def get(self, app_model: App):
return _get_variable_list(app_model, SYSTEM_VARIABLE_NODE_ID)
class EnvironmentVariableCollectionApi(Resource):
@_api_prerequisite
def get(self, app_model: App):
"""
Get draft workflow
"""
# fetch draft workflow by app_model
workflow_service = WorkflowService()
workflow = workflow_service.get_draft_workflow(app_model=app_model)
if workflow is None:
raise DraftWorkflowNotExist()
env_vars = workflow.environment_variables
env_vars_list = []
for v in env_vars:
env_vars_list.append(
{
"id": v.id,
"type": "env",
"name": v.name,
"description": v.description,
"selector": v.selector,
"value_type": v.value_type.exposed_type().value,
"value": v.value,
# Do not track edited for env vars.
"edited": False,
"visible": True,
"editable": True,
}
)
return {"items": env_vars_list}
api.add_resource(
WorkflowVariableCollectionApi,
"/apps/<uuid:app_id>/workflows/draft/variables",
)
api.add_resource(NodeVariableCollectionApi, "/apps/<uuid:app_id>/workflows/draft/nodes/<string:node_id>/variables")
api.add_resource(VariableApi, "/apps/<uuid:app_id>/workflows/draft/variables/<uuid:variable_id>")
api.add_resource(VariableResetApi, "/apps/<uuid:app_id>/workflows/draft/variables/<uuid:variable_id>/reset")
api.add_resource(ConversationVariableCollectionApi, "/apps/<uuid:app_id>/workflows/draft/conversation-variables")
api.add_resource(SystemVariableCollectionApi, "/apps/<uuid:app_id>/workflows/draft/system-variables")
api.add_resource(EnvironmentVariableCollectionApi, "/apps/<uuid:app_id>/workflows/draft/environment-variables")
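Taken together, these routes give the console CRUD-plus-reset semantics over draft variables. A rough sketch of driving the reset route from a script (illustrative only: the host, IDs, and bearer token are placeholders, and the console API normally rides on session auth rather than a raw token):

import requests

BASE = "https://cloud.example.com/console/api"  # placeholder host
APP_ID = "<app-uuid>"       # placeholder
VARIABLE_ID = "<var-uuid>"  # placeholder

# PUT .../variables/<id>/reset returns 204 when the service has nothing
# to return after the reset, otherwise the refreshed variable as JSON.
resp = requests.put(
    f"{BASE}/apps/{APP_ID}/workflows/draft/variables/{VARIABLE_ID}/reset",
    headers={"Authorization": "Bearer <console-token>"},  # placeholder auth
)
if resp.status_code == 204:
    print("variable reset; nothing to return")
else:
    print(resp.json())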

View File

@ -1,3 +1,6 @@
from typing import cast
from flask_login import current_user
from flask_restful import Resource, marshal_with, reqparse
from flask_restful.inputs import int_range
@ -12,8 +15,7 @@ from fields.workflow_run_fields import (
)
from libs.helper import uuid_value
from libs.login import login_required
from models import App
from models.model import AppMode
from models import Account, App, AppMode, EndUser
from services.workflow_run_service import WorkflowRunService
@ -90,7 +92,12 @@ class WorkflowRunNodeExecutionListApi(Resource):
run_id = str(run_id)
workflow_run_service = WorkflowRunService()
node_executions = workflow_run_service.get_workflow_run_node_executions(app_model=app_model, run_id=run_id)
user = cast("Account | EndUser", current_user)
node_executions = workflow_run_service.get_workflow_run_node_executions(
app_model=app_model,
run_id=run_id,
user=user,
)
return {"data": node_executions}

View File

@ -8,6 +8,15 @@ from libs.login import current_user
from models import App, AppMode
def _load_app_model(app_id: str) -> Optional[App]:
app_model = (
db.session.query(App)
.filter(App.id == app_id, App.tenant_id == current_user.current_tenant_id, App.status == "normal")
.first()
)
return app_model
def get_app_model(view: Optional[Callable] = None, *, mode: Union[AppMode, list[AppMode], None] = None):
def decorator(view_func):
@wraps(view_func)
@ -20,18 +29,12 @@ def get_app_model(view: Optional[Callable] = None, *, mode: Union[AppMode, list[
del kwargs["app_id"]
app_model = (
db.session.query(App)
.filter(App.id == app_id, App.tenant_id == current_user.current_tenant_id, App.status == "normal")
.first()
)
app_model = _load_app_model(app_id)
if not app_model:
raise AppNotFoundError()
app_mode = AppMode.value_of(app_model.mode)
if app_mode == AppMode.CHANNEL:
raise AppNotFoundError()
if mode is not None:
if isinstance(mode, list):

View File

@ -41,7 +41,7 @@ class OAuthDataSource(Resource):
if not internal_secret:
return ({"error": "Internal secret is not set"},)
oauth_provider.save_internal_access_token(internal_secret)
return {"data": ""}
return {"data": "internal"}
else:
auth_url = oauth_provider.get_authorization_url()
return {"data": auth_url}, 200

View File

@ -24,7 +24,7 @@ from libs.password import hash_password, valid_password
from models.account import Account
from services.account_service import AccountService, TenantService
from services.errors.account import AccountRegisterError
from services.errors.workspace import WorkSpaceNotAllowedCreateError
from services.errors.workspace import WorkSpaceNotAllowedCreateError, WorkspacesLimitExceededError
from services.feature_service import FeatureService
@ -168,6 +168,8 @@ class ForgotPasswordResetApi(Resource):
)
except WorkSpaceNotAllowedCreateError:
pass
except WorkspacesLimitExceededError:
pass
except AccountRegisterError:
raise AccountInFreezeError()

View File

@ -21,6 +21,7 @@ from controllers.console.error import (
AccountNotFound,
EmailSendIpLimitError,
NotAllowedCreateWorkspace,
WorkspacesLimitExceeded,
)
from controllers.console.wraps import email_password_login_enabled, setup_required
from events.tenant_event import tenant_was_created
@ -30,7 +31,7 @@ from models.account import Account
from services.account_service import AccountService, RegisterService, TenantService
from services.billing_service import BillingService
from services.errors.account import AccountRegisterError
from services.errors.workspace import WorkSpaceNotAllowedCreateError
from services.errors.workspace import WorkSpaceNotAllowedCreateError, WorkspacesLimitExceededError
from services.feature_service import FeatureService
@ -88,10 +89,15 @@ class LoginApi(Resource):
# SELF_HOSTED only have one workspace
tenants = TenantService.get_join_tenants(account)
if len(tenants) == 0:
return {
"result": "fail",
"data": "workspace not found, please contact system admin to invite you to join in a workspace",
}
system_features = FeatureService.get_system_features()
if system_features.is_allow_create_workspace and not system_features.license.workspaces.is_available():
raise WorkspacesLimitExceeded()
else:
return {
"result": "fail",
"data": "workspace not found, please contact system admin to invite you to join in a workspace",
}
token_pair = AccountService.login(account=account, ip_address=extract_remote_ip(request))
AccountService.reset_login_error_rate_limit(args["email"])
@ -196,15 +202,18 @@ class EmailCodeLoginApi(Resource):
except AccountRegisterError as are:
raise AccountInFreezeError()
if account:
tenant = TenantService.get_join_tenants(account)
if not tenant:
tenants = TenantService.get_join_tenants(account)
if not tenants:
workspaces = FeatureService.get_system_features().license.workspaces
if not workspaces.is_available():
raise WorkspacesLimitExceeded()
if not FeatureService.get_system_features().is_allow_create_workspace:
raise NotAllowedCreateWorkspace()
else:
tenant = TenantService.create_tenant(f"{account.name}'s Workspace")
TenantService.create_tenant_member(tenant, account, role="owner")
account.current_tenant = tenant
tenant_was_created.send(tenant)
new_tenant = TenantService.create_tenant(f"{account.name}'s Workspace")
TenantService.create_tenant_member(new_tenant, account, role="owner")
account.current_tenant = new_tenant
tenant_was_created.send(new_tenant)
if account is None:
try:
@ -215,6 +224,8 @@ class EmailCodeLoginApi(Resource):
raise NotAllowedCreateWorkspace()
except AccountRegisterError as are:
raise AccountInFreezeError()
except WorkspacesLimitExceededError:
raise WorkspacesLimitExceeded()
token_pair = AccountService.login(account, ip_address=extract_remote_ip(request))
AccountService.reset_login_error_rate_limit(args["email"])
return {"result": "success", "data": token_pair.model_dump()}

View File

@ -148,15 +148,15 @@ def _generate_account(provider: str, user_info: OAuthUserInfo):
account = _get_account_by_openid_or_email(provider, user_info)
if account:
tenant = TenantService.get_join_tenants(account)
if not tenant:
tenants = TenantService.get_join_tenants(account)
if not tenants:
if not FeatureService.get_system_features().is_allow_create_workspace:
raise WorkSpaceNotAllowedCreateError()
else:
tenant = TenantService.create_tenant(f"{account.name}'s Workspace")
TenantService.create_tenant_member(tenant, account, role="owner")
account.current_tenant = tenant
tenant_was_created.send(tenant)
new_tenant = TenantService.create_tenant(f"{account.name}'s Workspace")
TenantService.create_tenant_member(new_tenant, account, role="owner")
account.current_tenant = new_tenant
tenant_was_created.send(new_tenant)
if not account:
if not FeatureService.get_system_features().is_allow_register:

View File

@ -540,9 +540,22 @@ class DatasetIndexingStatusApi(Resource):
.filter(DocumentSegment.document_id == str(document.id), DocumentSegment.status != "re_segment")
.count()
)
document.completed_segments = completed_segments
document.total_segments = total_segments
documents_status.append(marshal(document, document_status_fields))
# Create a dictionary with document attributes and additional fields
document_dict = {
"id": document.id,
"indexing_status": document.indexing_status,
"processing_started_at": document.processing_started_at,
"parsing_completed_at": document.parsing_completed_at,
"cleaning_completed_at": document.cleaning_completed_at,
"splitting_completed_at": document.splitting_completed_at,
"completed_at": document.completed_at,
"paused_at": document.paused_at,
"error": document.error,
"stopped_at": document.stopped_at,
"completed_segments": completed_segments,
"total_segments": total_segments,
}
documents_status.append(marshal(document_dict, document_status_fields))
data = {"data": documents_status}
return data
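The dictionary-based marshaling above (the same pattern recurs in the other indexing-status endpoints in this changeset) avoids pinning ad-hoc attributes such as completed_segments onto live ORM instances. A minimal illustration of the hazard being sidestepped, using a stand-in class rather than anything from this repository:

class DocumentRow:  # stand-in for a SQLAlchemy-mapped model
    def __init__(self, id: str, indexing_status: str):
        self.id = id
        self.indexing_status = indexing_status

row = DocumentRow("d1", "indexing")
# Smuggling response-only fields onto the mapped object works at runtime,
# but it confuses type checkers and mixes persistence state with payload:
row.completed_segments = 10
# Building a plain dict keeps the HTTP response shape independent:
payload = {"id": row.id, "indexing_status": row.indexing_status, "completed_segments": 10}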
@ -673,6 +686,7 @@ class DatasetRetrievalSettingApi(Resource):
| VectorType.TABLESTORE
| VectorType.HUAWEI_CLOUD
| VectorType.TENCENT
| VectorType.MATRIXONE
):
return {
"retrieval_method": [
@ -720,6 +734,7 @@ class DatasetRetrievalSettingMockApi(Resource):
| VectorType.TABLESTORE
| VectorType.TENCENT
| VectorType.HUAWEI_CLOUD
| VectorType.MATRIXONE
):
return {
"retrieval_method": [

View File

@ -5,7 +5,7 @@ from typing import cast
from flask import request
from flask_login import current_user
from flask_restful import Resource, fields, marshal, marshal_with, reqparse
from flask_restful import Resource, marshal, marshal_with, reqparse
from sqlalchemy import asc, desc, select
from werkzeug.exceptions import Forbidden, NotFound
@ -43,7 +43,6 @@ from core.model_runtime.errors.invoke import InvokeAuthorizationError
from core.plugin.impl.exc import PluginDaemonClientSideError
from core.rag.extractor.entity.extract_setting import ExtractSetting
from extensions.ext_database import db
from extensions.ext_redis import redis_client
from fields.document_fields import (
dataset_and_document_fields,
document_fields,
@ -54,8 +53,6 @@ from libs.login import login_required
from models import Dataset, DatasetProcessRule, Document, DocumentSegment, UploadFile
from services.dataset_service import DatasetService, DocumentService
from services.entities.knowledge_entities.knowledge_entities import KnowledgeConfig
from tasks.add_document_to_index_task import add_document_to_index_task
from tasks.remove_document_from_index_task import remove_document_from_index_task
class DocumentResource(Resource):
@ -242,12 +239,10 @@ class DatasetDocumentListApi(Resource):
return response
documents_and_batch_fields = {"documents": fields.List(fields.Nested(document_fields)), "batch": fields.String}
@setup_required
@login_required
@account_initialization_required
@marshal_with(documents_and_batch_fields)
@marshal_with(dataset_and_document_fields)
@cloud_edition_billing_resource_check("vector_space")
@cloud_edition_billing_rate_limit_check("knowledge")
def post(self, dataset_id):
@ -293,6 +288,8 @@ class DatasetDocumentListApi(Resource):
try:
documents, batch = DocumentService.save_document_with_dataset_id(dataset, knowledge_config, current_user)
dataset = DatasetService.get_dataset(dataset_id)
except ProviderTokenNotInitError as ex:
raise ProviderNotInitializeError(ex.description)
except QuotaExceededError:
@ -300,7 +297,7 @@ class DatasetDocumentListApi(Resource):
except ModelCurrentlyNotSupportError:
raise ProviderModelCurrentlyNotSupportError()
return {"documents": documents, "batch": batch}
return {"dataset": dataset, "documents": documents, "batch": batch}
@setup_required
@login_required
@ -583,11 +580,22 @@ class DocumentBatchIndexingStatusApi(DocumentResource):
.filter(DocumentSegment.document_id == str(document.id), DocumentSegment.status != "re_segment")
.count()
)
document.completed_segments = completed_segments
document.total_segments = total_segments
if document.is_paused:
document.indexing_status = "paused"
documents_status.append(marshal(document, document_status_fields))
# Create a dictionary with document attributes and additional fields
document_dict = {
"id": document.id,
"indexing_status": "paused" if document.is_paused else document.indexing_status,
"processing_started_at": document.processing_started_at,
"parsing_completed_at": document.parsing_completed_at,
"cleaning_completed_at": document.cleaning_completed_at,
"splitting_completed_at": document.splitting_completed_at,
"completed_at": document.completed_at,
"paused_at": document.paused_at,
"error": document.error,
"stopped_at": document.stopped_at,
"completed_segments": completed_segments,
"total_segments": total_segments,
}
documents_status.append(marshal(document_dict, document_status_fields))
data = {"data": documents_status}
return data
@ -616,11 +624,22 @@ class DocumentIndexingStatusApi(DocumentResource):
.count()
)
document.completed_segments = completed_segments
document.total_segments = total_segments
if document.is_paused:
document.indexing_status = "paused"
return marshal(document, document_status_fields)
# Create a dictionary with document attributes and additional fields
document_dict = {
"id": document.id,
"indexing_status": "paused" if document.is_paused else document.indexing_status,
"processing_started_at": document.processing_started_at,
"parsing_completed_at": document.parsing_completed_at,
"cleaning_completed_at": document.cleaning_completed_at,
"splitting_completed_at": document.splitting_completed_at,
"completed_at": document.completed_at,
"paused_at": document.paused_at,
"error": document.error,
"stopped_at": document.stopped_at,
"completed_segments": completed_segments,
"total_segments": total_segments,
}
return marshal(document_dict, document_status_fields)
class DocumentDetailApi(DocumentResource):
@ -840,77 +859,16 @@ class DocumentStatusApi(DocumentResource):
DatasetService.check_dataset_permission(dataset, current_user)
document_ids = request.args.getlist("document_id")
for document_id in document_ids:
document = self.get_document(dataset_id, document_id)
indexing_cache_key = "document_{}_indexing".format(document.id)
cache_result = redis_client.get(indexing_cache_key)
if cache_result is not None:
raise InvalidActionError(f"Document:{document.name} is being indexed, please try again later")
try:
DocumentService.batch_update_document_status(dataset, document_ids, action, current_user)
except services.errors.document.DocumentIndexingError as e:
raise InvalidActionError(str(e))
except ValueError as e:
raise InvalidActionError(str(e))
except NotFound as e:
raise NotFound(str(e))
if action == "enable":
if document.enabled:
continue
document.enabled = True
document.disabled_at = None
document.disabled_by = None
document.updated_at = datetime.now(UTC).replace(tzinfo=None)
db.session.commit()
# Set cache to prevent indexing the same document multiple times
redis_client.setex(indexing_cache_key, 600, 1)
add_document_to_index_task.delay(document_id)
elif action == "disable":
if not document.completed_at or document.indexing_status != "completed":
raise InvalidActionError(f"Document: {document.name} is not completed.")
if not document.enabled:
continue
document.enabled = False
document.disabled_at = datetime.now(UTC).replace(tzinfo=None)
document.disabled_by = current_user.id
document.updated_at = datetime.now(UTC).replace(tzinfo=None)
db.session.commit()
# Set cache to prevent indexing the same document multiple times
redis_client.setex(indexing_cache_key, 600, 1)
remove_document_from_index_task.delay(document_id)
elif action == "archive":
if document.archived:
continue
document.archived = True
document.archived_at = datetime.now(UTC).replace(tzinfo=None)
document.archived_by = current_user.id
document.updated_at = datetime.now(UTC).replace(tzinfo=None)
db.session.commit()
if document.enabled:
# Set cache to prevent indexing the same document multiple times
redis_client.setex(indexing_cache_key, 600, 1)
remove_document_from_index_task.delay(document_id)
elif action == "un_archive":
if not document.archived:
continue
document.archived = False
document.archived_at = None
document.archived_by = None
document.updated_at = datetime.now(UTC).replace(tzinfo=None)
db.session.commit()
# Set cache to prevent indexing the same document multiple times
redis_client.setex(indexing_cache_key, 600, 1)
add_document_to_index_task.delay(document_id)
else:
raise InvalidActionError()
return {"result": "success"}, 200

View File

@ -374,7 +374,7 @@ class DatasetDocumentSegmentBatchImportApi(Resource):
if len(request.files) > 1:
raise TooManyFilesError()
# check file type
if not file.filename.endswith(".csv"):
if not file.filename or not file.filename.lower().endswith(".csv"):
raise ValueError("Invalid file type. Only CSV files are allowed")
try:

View File

@ -46,6 +46,18 @@ class NotAllowedCreateWorkspace(BaseHTTPException):
code = 400
class WorkspaceMembersLimitExceeded(BaseHTTPException):
error_code = "limit_exceeded"
description = "Unable to add member because the maximum workspace's member limit was exceeded"
code = 400
class WorkspacesLimitExceeded(BaseHTTPException):
error_code = "limit_exceeded"
description = "Unable to create workspace because the maximum workspace limit was exceeded"
code = 400
class AccountBannedError(BaseHTTPException):
error_code = "account_banned"
description = "Account is banned."

View File

@ -18,7 +18,6 @@ from controllers.console.app.error import (
from controllers.console.explore.wraps import InstalledAppResource
from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError
from core.model_runtime.errors.invoke import InvokeError
from models.model import AppMode
from services.audio_service import AudioService
from services.errors.audio import (
AudioTooLargeServiceError,
@ -79,19 +78,9 @@ class ChatTextApi(InstalledAppResource):
message_id = args.get("message_id", None)
text = args.get("text", None)
if (
app_model.mode in {AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value}
and app_model.workflow
and app_model.workflow.features_dict
):
text_to_speech = app_model.workflow.features_dict.get("text_to_speech")
voice = args.get("voice") or text_to_speech.get("voice")
else:
try:
voice = args.get("voice") or app_model.app_model_config.text_to_speech_dict.get("voice")
except Exception:
voice = None
response = AudioService.transcript_tts(app_model=app_model, message_id=message_id, voice=voice, text=text)
voice = args.get("voice", None)
response = AudioService.transcript_tts(app_model=app_model, text=text, voice=voice, message_id=message_id)
return response
except services.errors.app_model_config.AppModelConfigBrokenError:
logging.exception("App model config broken.")

View File

@ -23,3 +23,9 @@ class AppSuggestedQuestionsAfterAnswerDisabledError(BaseHTTPException):
error_code = "app_suggested_questions_after_answer_disabled"
description = "Function Suggested questions after answer disabled."
code = 403
class AppAccessDeniedError(BaseHTTPException):
error_code = "access_denied"
description = "App access denied."
code = 403

View File

@ -1,3 +1,4 @@
import logging
from datetime import UTC, datetime
from typing import Any
@ -15,6 +16,11 @@ from fields.installed_app_fields import installed_app_list_fields
from libs.login import login_required
from models import App, InstalledApp, RecommendedApp
from services.account_service import TenantService
from services.app_service import AppService
from services.enterprise.enterprise_service import EnterpriseService
from services.feature_service import FeatureService
logger = logging.getLogger(__name__)
class InstalledAppsListApi(Resource):
@ -48,6 +54,28 @@ class InstalledAppsListApi(Resource):
for installed_app in installed_apps
if installed_app.app is not None
]
# filter out apps that user doesn't have access to
if FeatureService.get_system_features().webapp_auth.enabled:
user_id = current_user.id
res = []
app_ids = [installed_app["app"].id for installed_app in installed_app_list]
webapp_settings = EnterpriseService.WebAppAuth.batch_get_app_access_mode_by_id(app_ids)
for installed_app in installed_app_list:
webapp_setting = webapp_settings.get(installed_app["app"].id)
if not webapp_setting:
continue
if webapp_setting.access_mode == "sso_verified":
continue
app_code = AppService.get_app_code_by_id(str(installed_app["app"].id))
if EnterpriseService.WebAppAuth.is_user_allowed_to_access_webapp(
user_id=user_id,
app_code=app_code,
):
res.append(installed_app)
installed_app_list = res
logger.debug(f"installed_app_list: {installed_app_list}, user_id: {user_id}")
installed_app_list.sort(
key=lambda app: (
-app["is_pinned"],

View File

@ -4,10 +4,14 @@ from flask_login import current_user
from flask_restful import Resource
from werkzeug.exceptions import NotFound
from controllers.console.explore.error import AppAccessDeniedError
from controllers.console.wraps import account_initialization_required
from extensions.ext_database import db
from libs.login import login_required
from models import InstalledApp
from services.app_service import AppService
from services.enterprise.enterprise_service import EnterpriseService
from services.feature_service import FeatureService
def installed_app_required(view=None):
@ -48,6 +52,36 @@ def installed_app_required(view=None):
return decorator
def user_allowed_to_access_app(view=None):
def decorator(view):
@wraps(view)
def decorated(installed_app: InstalledApp, *args, **kwargs):
feature = FeatureService.get_system_features()
if feature.webapp_auth.enabled:
app_id = installed_app.app_id
app_code = AppService.get_app_code_by_id(app_id)
res = EnterpriseService.WebAppAuth.is_user_allowed_to_access_webapp(
user_id=str(current_user.id),
app_code=app_code,
)
if not res:
raise AppAccessDeniedError()
return view(installed_app, *args, **kwargs)
return decorated
if view:
return decorator(view)
return decorator
class InstalledAppResource(Resource):
# NOTE: flask_restful applies method_decorators in list order, so this list is the reverse of the runtime execution order (login_required runs first).
method_decorators = [installed_app_required, account_initialization_required, login_required]
method_decorators = [
user_allowed_to_access_app,
installed_app_required,
account_initialization_required,
login_required,
]
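Because flask_restful applies method_decorators by looping over the list and wrapping the handler one decorator at a time, the first entry ends up innermost and the last entry runs first on each request, hence the reversed listing above. A minimal reproduction of that ordering in plain Python (not code from this repository):

def tag(name):
    def deco(fn):
        def wrapper(*args, **kwargs):
            print(f"enter {name}")
            return fn(*args, **kwargs)
        return wrapper
    return deco

def handler():
    return "ok"

# Mirrors flask_restful's `for decorator in self.method_decorators: meth = decorator(meth)`
for decorator in [tag("user_allowed"), tag("installed_app"), tag("login")]:
    handler = decorator(handler)

handler()  # prints: enter login, enter installed_app, enter user_allowed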

View File

@ -18,7 +18,7 @@ class VersionApi(Resource):
check_update_url = dify_config.CHECK_UPDATE_URL
result = {
"version": dify_config.CURRENT_VERSION,
"version": dify_config.project.version,
"release_date": "",
"release_notes": "",
"can_auto_update": False,

View File

@ -15,7 +15,7 @@ class LoadBalancingCredentialsValidateApi(Resource):
@login_required
@account_initialization_required
def post(self, provider: str):
if not TenantAccountRole.is_privileged_role(current_user.current_tenant.current_role):
if not TenantAccountRole.is_privileged_role(current_user.current_role):
raise Forbidden()
tenant_id = current_user.current_tenant_id
@ -64,7 +64,7 @@ class LoadBalancingConfigCredentialsValidateApi(Resource):
@login_required
@account_initialization_required
def post(self, provider: str, config_id: str):
if not TenantAccountRole.is_privileged_role(current_user.current_tenant.current_role):
if not TenantAccountRole.is_privileged_role(current_user.current_role):
raise Forbidden()
tenant_id = current_user.current_tenant_id

View File

@ -6,6 +6,7 @@ from flask_restful import Resource, abort, marshal_with, reqparse
import services
from configs import dify_config
from controllers.console import api
from controllers.console.error import WorkspaceMembersLimitExceeded
from controllers.console.wraps import (
account_initialization_required,
cloud_edition_billing_resource_check,
@ -17,6 +18,7 @@ from libs.login import login_required
from models.account import Account, TenantAccountRole
from services.account_service import RegisterService, TenantService
from services.errors.account import AccountAlreadyInTenantError
from services.feature_service import FeatureService
class MemberListApi(Resource):
@ -54,6 +56,12 @@ class MemberInviteEmailApi(Resource):
inviter = current_user
invitation_results = []
console_web_url = dify_config.CONSOLE_WEB_URL
workspace_members = FeatureService.get_features(tenant_id=inviter.current_tenant.id).workspace_members
if not workspace_members.is_available(len(invitee_emails)):
raise WorkspaceMembersLimitExceeded()
for invitee_email in invitee_emails:
try:
token = RegisterService.invite_new_member(
@ -77,6 +85,7 @@ class MemberInviteEmailApi(Resource):
return {
"result": "success",
"invitation_results": invitation_results,
"tenant_id": str(current_user.current_tenant.id),
}, 201
@ -102,7 +111,7 @@ class MemberCancelInviteApi(Resource):
except Exception as e:
raise ValueError(str(e))
return {"result": "success"}, 204
return {"result": "success", "tenant_id": str(current_user.current_tenant.id)}, 200
class MemberUpdateRoleApi(Resource):

View File

@ -13,6 +13,7 @@ from core.model_runtime.utils.encoders import jsonable_encoder
from core.plugin.impl.exc import PluginDaemonClientSideError
from libs.login import login_required
from models.account import TenantPluginPermission
from services.plugin.plugin_parameter_service import PluginParameterService
from services.plugin.plugin_permission_service import PluginPermissionService
from services.plugin.plugin_service import PluginService
@ -41,12 +42,16 @@ class PluginListApi(Resource):
@account_initialization_required
def get(self):
tenant_id = current_user.current_tenant_id
parser = reqparse.RequestParser()
parser.add_argument("page", type=int, required=False, location="args", default=1)
parser.add_argument("page_size", type=int, required=False, location="args", default=256)
args = parser.parse_args()
try:
plugins = PluginService.list(tenant_id)
plugins_with_total = PluginService.list_with_total(tenant_id, args["page"], args["page_size"])
except PluginDaemonClientSideError as e:
raise ValueError(e)
return jsonable_encoder({"plugins": plugins})
return jsonable_encoder({"plugins": plugins_with_total.list, "total": plugins_with_total.total})
class PluginListLatestVersionsApi(Resource):
@ -493,6 +498,42 @@ class PluginFetchPermissionApi(Resource):
)
class PluginFetchDynamicSelectOptionsApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self):
# check if the user is admin or owner
if not current_user.is_admin_or_owner:
raise Forbidden()
tenant_id = current_user.current_tenant_id
user_id = current_user.id
parser = reqparse.RequestParser()
parser.add_argument("plugin_id", type=str, required=True, location="args")
parser.add_argument("provider", type=str, required=True, location="args")
parser.add_argument("action", type=str, required=True, location="args")
parser.add_argument("parameter", type=str, required=True, location="args")
parser.add_argument("provider_type", type=str, required=True, location="args")
args = parser.parse_args()
try:
options = PluginParameterService.get_dynamic_select_options(
tenant_id,
user_id,
args["plugin_id"],
args["provider"],
args["action"],
args["parameter"],
args["provider_type"],
)
except PluginDaemonClientSideError as e:
raise ValueError(e)
return jsonable_encoder({"options": options})
api.add_resource(PluginDebuggingKeyApi, "/workspaces/current/plugin/debugging-key")
api.add_resource(PluginListApi, "/workspaces/current/plugin/list")
api.add_resource(PluginListLatestVersionsApi, "/workspaces/current/plugin/list/latest-versions")
@ -517,3 +558,5 @@ api.add_resource(PluginFetchMarketplacePkgApi, "/workspaces/current/plugin/marke
api.add_resource(PluginChangePermissionApi, "/workspaces/current/plugin/permission/change")
api.add_resource(PluginFetchPermissionApi, "/workspaces/current/plugin/permission/fetch")
api.add_resource(PluginFetchDynamicSelectOptionsApi, "/workspaces/current/plugin/parameters/dynamic-options")

View File

@ -1,6 +1,7 @@
import io
from urllib.parse import urlparse
from flask import send_file
from flask import redirect, send_file
from flask_login import current_user
from flask_restful import Resource, reqparse
from sqlalchemy.orm import Session
@ -9,17 +10,34 @@ from werkzeug.exceptions import Forbidden
from configs import dify_config
from controllers.console import api
from controllers.console.wraps import account_initialization_required, enterprise_license_required, setup_required
from core.mcp.auth.auth_flow import auth, handle_callback
from core.mcp.auth.auth_provider import OAuthClientProvider
from core.mcp.error import MCPAuthError, MCPError
from core.mcp.mcp_client import MCPClient
from core.model_runtime.utils.encoders import jsonable_encoder
from extensions.ext_database import db
from libs.helper import alphanumeric, uuid_value
from libs.login import login_required
from services.tools.api_tools_manage_service import ApiToolManageService
from services.tools.builtin_tools_manage_service import BuiltinToolManageService
from services.tools.mcp_tools_mange_service import MCPToolManageService
from services.tools.tool_labels_service import ToolLabelsService
from services.tools.tools_manage_service import ToolCommonService
from services.tools.tools_transform_service import ToolTransformService
from services.tools.workflow_tools_manage_service import WorkflowToolManageService
def is_valid_url(url: str) -> bool:
if not url:
return False
try:
parsed = urlparse(url)
return all([parsed.scheme, parsed.netloc]) and parsed.scheme in ["http", "https"]
except Exception:
return False
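A few illustrative cases for the helper above (these mirror its logic; they are not tests shipped with this change):

assert is_valid_url("https://mcp.example.com/server")  # scheme and netloc present
assert not is_valid_url("ftp://mcp.example.com")       # scheme outside the http/https allowlist
assert not is_valid_url("not a url")                   # urlparse yields no scheme or netloc
assert not is_valid_url("")                            # guarded empty-string case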
class ToolProviderListApi(Resource):
@setup_required
@login_required
@ -34,7 +52,7 @@ class ToolProviderListApi(Resource):
req.add_argument(
"type",
type=str,
choices=["builtin", "model", "api", "workflow"],
choices=["builtin", "model", "api", "workflow", "mcp"],
required=False,
nullable=True,
location="args",
@ -613,6 +631,166 @@ class ToolLabelsApi(Resource):
return jsonable_encoder(ToolLabelsService.list_tool_labels())
class ToolProviderMCPApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("server_url", type=str, required=True, nullable=False, location="json")
parser.add_argument("name", type=str, required=True, nullable=False, location="json")
parser.add_argument("icon", type=str, required=True, nullable=False, location="json")
parser.add_argument("icon_type", type=str, required=True, nullable=False, location="json")
parser.add_argument("icon_background", type=str, required=False, nullable=True, location="json", default="")
parser.add_argument("server_identifier", type=str, required=True, nullable=False, location="json")
args = parser.parse_args()
user = current_user
if not is_valid_url(args["server_url"]):
raise ValueError("Server URL is not valid.")
return jsonable_encoder(
MCPToolManageService.create_mcp_provider(
tenant_id=user.current_tenant_id,
server_url=args["server_url"],
name=args["name"],
icon=args["icon"],
icon_type=args["icon_type"],
icon_background=args["icon_background"],
user_id=user.id,
server_identifier=args["server_identifier"],
)
)
@setup_required
@login_required
@account_initialization_required
def put(self):
parser = reqparse.RequestParser()
parser.add_argument("server_url", type=str, required=True, nullable=False, location="json")
parser.add_argument("name", type=str, required=True, nullable=False, location="json")
parser.add_argument("icon", type=str, required=True, nullable=False, location="json")
parser.add_argument("icon_type", type=str, required=True, nullable=False, location="json")
parser.add_argument("icon_background", type=str, required=False, nullable=True, location="json")
parser.add_argument("provider_id", type=str, required=True, nullable=False, location="json")
parser.add_argument("server_identifier", type=str, required=True, nullable=False, location="json")
args = parser.parse_args()
# Accept masked URLs: "[__HIDDEN__]" is the placeholder substituted for a redacted stored URL.
if not is_valid_url(args["server_url"]) and "[__HIDDEN__]" not in args["server_url"]:
raise ValueError("Server URL is not valid.")
MCPToolManageService.update_mcp_provider(
tenant_id=current_user.current_tenant_id,
provider_id=args["provider_id"],
server_url=args["server_url"],
name=args["name"],
icon=args["icon"],
icon_type=args["icon_type"],
icon_background=args["icon_background"],
server_identifier=args["server_identifier"],
)
return {"result": "success"}
@setup_required
@login_required
@account_initialization_required
def delete(self):
parser = reqparse.RequestParser()
parser.add_argument("provider_id", type=str, required=True, nullable=False, location="json")
args = parser.parse_args()
MCPToolManageService.delete_mcp_tool(tenant_id=current_user.current_tenant_id, provider_id=args["provider_id"])
return {"result": "success"}
class ToolMCPAuthApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("provider_id", type=str, required=True, nullable=False, location="json")
parser.add_argument("authorization_code", type=str, required=False, nullable=True, location="json")
args = parser.parse_args()
provider_id = args["provider_id"]
tenant_id = current_user.current_tenant_id
provider = MCPToolManageService.get_mcp_provider_by_provider_id(provider_id, tenant_id)
if not provider:
raise ValueError("provider not found")
try:
with MCPClient(
provider.decrypted_server_url,
provider_id,
tenant_id,
authed=False,
authorization_code=args["authorization_code"],
for_list=True,
):
MCPToolManageService.update_mcp_provider_credentials(
mcp_provider=provider,
credentials=provider.decrypted_credentials,
authed=True,
)
return {"result": "success"}
except MCPAuthError:
auth_provider = OAuthClientProvider(provider_id, tenant_id, for_list=True)
return auth(auth_provider, provider.decrypted_server_url, args["authorization_code"])
except MCPError as e:
MCPToolManageService.update_mcp_provider_credentials(
mcp_provider=provider,
credentials={},
authed=False,
)
raise ValueError(f"Failed to connect to MCP server: {e}") from e
class ToolMCPDetailApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self, provider_id):
user = current_user
provider = MCPToolManageService.get_mcp_provider_by_provider_id(provider_id, user.current_tenant_id)
return jsonable_encoder(ToolTransformService.mcp_provider_to_user_provider(provider, for_list=True))
class ToolMCPListAllApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self):
user = current_user
tenant_id = user.current_tenant_id
tools = MCPToolManageService.retrieve_mcp_tools(tenant_id=tenant_id)
return [tool.to_dict() for tool in tools]
class ToolMCPUpdateApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self, provider_id):
tenant_id = current_user.current_tenant_id
tools = MCPToolManageService.list_mcp_tool_from_remote_server(
tenant_id=tenant_id,
provider_id=provider_id,
)
return jsonable_encoder(tools)
class ToolMCPCallbackApi(Resource):
def get(self):
parser = reqparse.RequestParser()
parser.add_argument("code", type=str, required=True, nullable=False, location="args")
parser.add_argument("state", type=str, required=True, nullable=False, location="args")
args = parser.parse_args()
state_key = args["state"]
authorization_code = args["code"]
handle_callback(state_key, authorization_code)
return redirect(f"{dify_config.CONSOLE_WEB_URL}/oauth-callback")
# tool provider
api.add_resource(ToolProviderListApi, "/workspaces/current/tool-providers")
@ -647,8 +825,15 @@ api.add_resource(ToolWorkflowProviderDeleteApi, "/workspaces/current/tool-provid
api.add_resource(ToolWorkflowProviderGetApi, "/workspaces/current/tool-provider/workflow/get")
api.add_resource(ToolWorkflowProviderListToolApi, "/workspaces/current/tool-provider/workflow/tools")
# mcp tool provider
api.add_resource(ToolMCPDetailApi, "/workspaces/current/tool-provider/mcp/tools/<path:provider_id>")
api.add_resource(ToolProviderMCPApi, "/workspaces/current/tool-provider/mcp")
api.add_resource(ToolMCPUpdateApi, "/workspaces/current/tool-provider/mcp/update/<path:provider_id>")
api.add_resource(ToolMCPAuthApi, "/workspaces/current/tool-provider/mcp/auth")
api.add_resource(ToolMCPCallbackApi, "/mcp/oauth/callback")
api.add_resource(ToolBuiltinListApi, "/workspaces/current/tools/builtin")
api.add_resource(ToolApiListApi, "/workspaces/current/tools/api")
api.add_resource(ToolMCPListAllApi, "/workspaces/current/tools/mcp")
api.add_resource(ToolWorkflowListApi, "/workspaces/current/tools/workflow")
api.add_resource(ToolLabelsApi, "/workspaces/current/tool-labels")
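A sketch of registering an MCP provider through the new POST route (host, token, and all field values are placeholders, not values taken from this changeset):

import requests

resp = requests.post(
    "https://cloud.example.com/console/api/workspaces/current/tool-provider/mcp",  # placeholder host
    json={
        "server_url": "https://mcp.example.com/server",
        "name": "Example MCP",
        "icon": "wrench",
        "icon_type": "emoji",
        "icon_background": "#FFFFFF",
        "server_identifier": "example-mcp",
    },
    headers={"Authorization": "Bearer <console-token>"},  # placeholder auth
)
print(resp.json())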

View File

@ -68,16 +68,24 @@ class TenantListApi(Resource):
@account_initialization_required
def get(self):
tenants = TenantService.get_join_tenants(current_user)
tenant_dicts = []
for tenant in tenants:
features = FeatureService.get_features(tenant.id)
if features.billing.enabled:
tenant.plan = features.billing.subscription.plan
else:
tenant.plan = "sandbox"
if tenant.id == current_user.current_tenant_id:
tenant.current = True # Set current=True for current tenant
return {"workspaces": marshal(tenants, tenants_fields)}, 200
# Create a dictionary with tenant attributes
tenant_dict = {
"id": tenant.id,
"name": tenant.name,
"status": tenant.status,
"created_at": tenant.created_at,
"plan": features.billing.subscription.plan if features.billing.enabled else "sandbox",
"current": tenant.id == current_user.current_tenant_id,
}
tenant_dicts.append(tenant_dict)
return {"workspaces": marshal(tenant_dicts, tenants_fields)}, 200
class WorkspaceListApi(Resource):

View File

@ -44,6 +44,17 @@ def only_edition_cloud(view):
return decorated
def only_edition_enterprise(view):
@wraps(view)
def decorated(*args, **kwargs):
if not dify_config.ENTERPRISE_ENABLED:
abort(404)
return view(*args, **kwargs)
return decorated
def only_edition_self_hosted(view):
@wraps(view)
def decorated(*args, **kwargs):

View File

@ -64,15 +64,28 @@ class PluginUploadFileApi(Resource):
extension = guess_extension(tool_file.mimetype) or ".bin"
preview_url = ToolFileManager.sign_file(tool_file_id=tool_file.id, extension=extension)
tool_file.mime_type = mimetype
tool_file.extension = extension
tool_file.preview_url = preview_url
# Create a dictionary with all the necessary attributes
result = {
"id": tool_file.id,
"user_id": tool_file.user_id,
"tenant_id": tool_file.tenant_id,
"conversation_id": tool_file.conversation_id,
"file_key": tool_file.file_key,
"mimetype": tool_file.mimetype,
"original_url": tool_file.original_url,
"name": tool_file.name,
"size": tool_file.size,
"mime_type": mimetype,
"extension": extension,
"preview_url": preview_url,
}
return result, 201
except services.errors.file.FileTooLargeError as file_too_large_error:
raise FileTooLargeError(file_too_large_error.description)
except services.errors.file.UnsupportedFileTypeError:
raise UnsupportedFileTypeError()
return tool_file, 201
api.add_resource(PluginUploadFileApi, "/files/upload/for-plugin")

View File

@ -5,5 +5,6 @@ from libs.external_api import ExternalApi
bp = Blueprint("inner_api", __name__, url_prefix="/inner/api")
api = ExternalApi(bp)
from . import mail
from .plugin import plugin
from .workspace import workspace

View File

@ -0,0 +1,27 @@
from flask_restful import (
Resource, # type: ignore
reqparse,
)
from controllers.console.wraps import setup_required
from controllers.inner_api import api
from controllers.inner_api.wraps import enterprise_inner_api_only
from services.enterprise.mail_service import DifyMail, EnterpriseMailService
class EnterpriseMail(Resource):
@setup_required
@enterprise_inner_api_only
def post(self):
parser = reqparse.RequestParser()
parser.add_argument("to", type=str, action="append", required=True)
parser.add_argument("subject", type=str, required=True)
parser.add_argument("body", type=str, required=True)
parser.add_argument("substitutions", type=dict, required=False)
args = parser.parse_args()
EnterpriseMailService.send_mail(DifyMail(**args))
return {"message": "success"}, 200
api.add_resource(EnterpriseMail, "/enterprise/mail")
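An illustrative request body for the inner mail endpoint (the inner API is protected by enterprise_inner_api_only, whose auth header lies outside this excerpt; host and values below are placeholders):

import requests

resp = requests.post(
    "http://dify-api.internal/inner/api/enterprise/mail",  # placeholder internal host
    json={
        "to": ["ops@example.com"],
        "subject": "Quota warning",
        "body": "Workspace {{workspace}} is nearing its member limit.",
        "substitutions": {"workspace": "Acme"},
    },
    # plus whatever shared-secret header enterprise_inner_api_only expects
)
assert resp.json() == {"message": "success"}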

View File

@ -17,6 +17,7 @@ from core.plugin.entities.request import (
RequestInvokeApp,
RequestInvokeEncrypt,
RequestInvokeLLM,
RequestInvokeLLMWithStructuredOutput,
RequestInvokeModeration,
RequestInvokeParameterExtractorNode,
RequestInvokeQuestionClassifierNode,
@ -29,7 +30,7 @@ from core.plugin.entities.request import (
RequestRequestUploadFile,
)
from core.tools.entities.tool_entities import ToolProviderType
from libs.helper import compact_generate_response
from libs.helper import length_prefixed_response
from models.account import Account, Tenant
from models.model import EndUser
@ -44,7 +45,22 @@ class PluginInvokeLLMApi(Resource):
response = PluginModelBackwardsInvocation.invoke_llm(user_model.id, tenant_model, payload)
return PluginModelBackwardsInvocation.convert_to_event_stream(response)
return compact_generate_response(generator())
return length_prefixed_response(0xF, generator())
class PluginInvokeLLMWithStructuredOutputApi(Resource):
@setup_required
@plugin_inner_api_only
@get_user_tenant
@plugin_data(payload_type=RequestInvokeLLMWithStructuredOutput)
def post(self, user_model: Account | EndUser, tenant_model: Tenant, payload: RequestInvokeLLMWithStructuredOutput):
def generator():
response = PluginModelBackwardsInvocation.invoke_llm_with_structured_output(
user_model.id, tenant_model, payload
)
return PluginModelBackwardsInvocation.convert_to_event_stream(response)
return length_prefixed_response(0xF, generator())
class PluginInvokeTextEmbeddingApi(Resource):
@ -101,7 +117,7 @@ class PluginInvokeTTSApi(Resource):
)
return PluginModelBackwardsInvocation.convert_to_event_stream(response)
return compact_generate_response(generator())
return length_prefixed_response(0xF, generator())
class PluginInvokeSpeech2TextApi(Resource):
@ -162,7 +178,7 @@ class PluginInvokeToolApi(Resource):
),
)
return compact_generate_response(generator())
return length_prefixed_response(0xF, generator())
class PluginInvokeParameterExtractorNodeApi(Resource):
@ -228,7 +244,7 @@ class PluginInvokeAppApi(Resource):
files=payload.files,
)
return compact_generate_response(PluginAppBackwardsInvocation.convert_to_event_stream(response))
return length_prefixed_response(0xF, PluginAppBackwardsInvocation.convert_to_event_stream(response))
class PluginInvokeEncryptApi(Resource):
@ -291,6 +307,7 @@ class PluginFetchAppInfoApi(Resource):
api.add_resource(PluginInvokeLLMApi, "/invoke/llm")
api.add_resource(PluginInvokeLLMWithStructuredOutputApi, "/invoke/llm/structured-output")
api.add_resource(PluginInvokeTextEmbeddingApi, "/invoke/text-embedding")
api.add_resource(PluginInvokeRerankApi, "/invoke/rerank")
api.add_resource(PluginInvokeTTSApi, "/invoke/tts")
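The recurring change in this file swaps compact_generate_response for length_prefixed_response(0xF, ...), i.e. each streamed chunk is framed with an explicit length header rather than parsed by delimiter. The helper's actual wire format is defined elsewhere; the sketch below only illustrates the general technique under assumed framing (one magic byte, here 0xF, plus a 4-byte big-endian length, which is an assumption rather than the documented layout):

import struct
from collections.abc import Iterable, Iterator

def frame_chunks(magic: int, chunks: Iterable[str | bytes]) -> Iterator[bytes]:
    """Yield each chunk prefixed with a magic byte and its payload length."""
    for chunk in chunks:
        data = chunk if isinstance(chunk, bytes) else chunk.encode("utf-8")
        yield struct.pack(">BI", magic, len(data)) + data

for frame in frame_chunks(0xF, ["hello", "world"]):
    print(frame[:5].hex(), frame[5:])

Length-prefixed framing lets the consumer read exactly one event per frame without scanning payloads for delimiters.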

View File

@ -2,12 +2,14 @@ from collections.abc import Callable
from functools import wraps
from typing import Optional
from flask import request
from flask import current_app, request
from flask_login import user_logged_in
from flask_restful import reqparse
from pydantic import BaseModel
from sqlalchemy.orm import Session
from extensions.ext_database import db
from libs.login import _get_user
from models.account import Account, Tenant
from models.model import EndUser
from services.account_service import AccountService
@ -30,6 +32,7 @@ def get_user(tenant_id: str, user_id: str | None) -> Account | EndUser:
)
session.add(user_model)
session.commit()
session.refresh(user_model)
else:
user_model = AccountService.load_user(user_id)
if not user_model:
@ -80,7 +83,12 @@ def get_user_tenant(view: Optional[Callable] = None):
raise ValueError("tenant not found")
kwargs["tenant_model"] = tenant_model
kwargs["user_model"] = get_user(tenant_id, user_id)
user = get_user(tenant_id, user_id)
kwargs["user_model"] = user
current_app.login_manager._update_request_context_with_user(user) # type: ignore
user_logged_in.send(current_app._get_current_object(), user=_get_user()) # type: ignore
return view_func(*args, **kwargs)

View File

@ -29,7 +29,19 @@ class EnterpriseWorkspace(Resource):
tenant_was_created.send(tenant)
return {"message": "enterprise workspace created."}
resp = {
"id": tenant.id,
"name": tenant.name,
"plan": tenant.plan,
"status": tenant.status,
"created_at": tenant.created_at.isoformat() + "Z" if tenant.created_at else None,
"updated_at": tenant.updated_at.isoformat() + "Z" if tenant.updated_at else None,
}
return {
"message": "enterprise workspace created.",
"tenant": resp,
}
class EnterpriseWorkspaceNoOwnerEmail(Resource):

View File

@ -0,0 +1,8 @@
from flask import Blueprint
from libs.external_api import ExternalApi
bp = Blueprint("mcp", __name__, url_prefix="/mcp")
api = ExternalApi(bp)
from . import mcp

api/controllers/mcp/mcp.py (new file, 104 lines)
View File

@ -0,0 +1,104 @@
from flask_restful import Resource, reqparse
from pydantic import ValidationError
from controllers.console.app.mcp_server import AppMCPServerStatus
from controllers.mcp import api
from core.app.app_config.entities import VariableEntity
from core.mcp import types
from core.mcp.server.streamable_http import MCPServerStreamableHTTPRequestHandler
from core.mcp.types import ClientNotification, ClientRequest
from core.mcp.utils import create_mcp_error_response
from extensions.ext_database import db
from libs import helper
from models.model import App, AppMCPServer, AppMode
class MCPAppApi(Resource):
def post(self, server_code):
def int_or_str(value):
if isinstance(value, (int, str)):
return value
else:
return None
parser = reqparse.RequestParser()
parser.add_argument("jsonrpc", type=str, required=True, location="json")
parser.add_argument("method", type=str, required=True, location="json")
parser.add_argument("params", type=dict, required=False, location="json")
parser.add_argument("id", type=int_or_str, required=False, location="json")
args = parser.parse_args()
request_id = args.get("id")
server = db.session.query(AppMCPServer).filter(AppMCPServer.server_code == server_code).first()
if not server:
return helper.compact_generate_response(
create_mcp_error_response(request_id, types.INVALID_REQUEST, "Server Not Found")
)
if server.status != AppMCPServerStatus.ACTIVE:
return helper.compact_generate_response(
create_mcp_error_response(request_id, types.INVALID_REQUEST, "Server is not active")
)
app = db.session.query(App).filter(App.id == server.app_id).first()
if not app:
return helper.compact_generate_response(
create_mcp_error_response(request_id, types.INVALID_REQUEST, "App Not Found")
)
if app.mode in {AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value}:
workflow = app.workflow
if workflow is None:
return helper.compact_generate_response(
create_mcp_error_response(request_id, types.INVALID_REQUEST, "App is unavailable")
)
user_input_form = workflow.user_input_form(to_old_structure=True)
else:
app_model_config = app.app_model_config
if app_model_config is None:
return helper.compact_generate_response(
create_mcp_error_response(request_id, types.INVALID_REQUEST, "App is unavailable")
)
features_dict = app_model_config.to_dict()
user_input_form = features_dict.get("user_input_form", [])
converted_user_input_form: list[VariableEntity] = []
try:
for item in user_input_form:
variable_type = item.get("type", "") or list(item.keys())[0]
variable = item[variable_type]
converted_user_input_form.append(
VariableEntity(
type=variable_type,
variable=variable.get("variable"),
description=variable.get("description") or "",
label=variable.get("label"),
required=variable.get("required", False),
max_length=variable.get("max_length"),
options=variable.get("options") or [],
)
)
except ValidationError as e:
return helper.compact_generate_response(
create_mcp_error_response(request_id, types.INVALID_PARAMS, f"Invalid user_input_form: {str(e)}")
)
try:
request: ClientRequest | ClientNotification = ClientRequest.model_validate(args)
except ValidationError as e:
try:
notification = ClientNotification.model_validate(args)
request = notification
except ValidationError as e:
return helper.compact_generate_response(
create_mcp_error_response(request_id, types.INVALID_PARAMS, f"Invalid MCP request: {str(e)}")
)
mcp_server_handler = MCPServerStreamableHTTPRequestHandler(app, request, converted_user_input_form)
response = mcp_server_handler.handle()
return helper.compact_generate_response(response)
api.add_resource(MCPAppApi, "/server/<string:server_code>/mcp")
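For orientation, a minimal JSON-RPC body the endpoint above would accept; the request must validate as a core.mcp.types.ClientRequest, and the server code, host, and method shown are placeholders rather than values from this changeset:

import requests

resp = requests.post(
    "https://cloud.example.com/mcp/server/<server-code>/mcp",  # placeholder host/code
    json={
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/list",  # assumed to be among the handler's supported methods
    },
)
print(resp.json())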

View File

@ -3,19 +3,19 @@ from flask_restful import Resource, marshal, marshal_with, reqparse
from werkzeug.exceptions import Forbidden
from controllers.service_api import api
from controllers.service_api.wraps import FetchUserArg, WhereisUserArg, validate_app_token
from controllers.service_api.wraps import validate_app_token
from extensions.ext_redis import redis_client
from fields.annotation_fields import (
annotation_fields,
)
from libs.login import current_user
from models.model import App, EndUser
from models.model import App
from services.annotation_service import AppAnnotationService
class AnnotationReplyActionApi(Resource):
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.JSON))
def post(self, app_model: App, end_user: EndUser, action):
@validate_app_token
def post(self, app_model: App, action):
parser = reqparse.RequestParser()
parser.add_argument("score_threshold", required=True, type=float, location="json")
parser.add_argument("embedding_provider_name", required=True, type=str, location="json")
@ -31,8 +31,8 @@ class AnnotationReplyActionApi(Resource):
class AnnotationReplyActionStatusApi(Resource):
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.QUERY))
def get(self, app_model: App, end_user: EndUser, job_id, action):
@validate_app_token
def get(self, app_model: App, job_id, action):
job_id = str(job_id)
app_annotation_job_key = "{}_app_annotation_job_{}".format(action, str(job_id))
cache_result = redis_client.get(app_annotation_job_key)
@ -49,8 +49,8 @@ class AnnotationReplyActionStatusApi(Resource):
class AnnotationListApi(Resource):
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.QUERY))
def get(self, app_model: App, end_user: EndUser):
@validate_app_token
def get(self, app_model: App):
page = request.args.get("page", default=1, type=int)
limit = request.args.get("limit", default=20, type=int)
keyword = request.args.get("keyword", default="", type=str)
@ -65,9 +65,9 @@ class AnnotationListApi(Resource):
}
return response, 200
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.JSON))
@validate_app_token
@marshal_with(annotation_fields)
def post(self, app_model: App, end_user: EndUser):
def post(self, app_model: App):
parser = reqparse.RequestParser()
parser.add_argument("question", required=True, type=str, location="json")
parser.add_argument("answer", required=True, type=str, location="json")
@ -77,9 +77,9 @@ class AnnotationListApi(Resource):
class AnnotationUpdateDeleteApi(Resource):
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.JSON))
@validate_app_token
@marshal_with(annotation_fields)
def put(self, app_model: App, end_user: EndUser, annotation_id):
def put(self, app_model: App, annotation_id):
if not current_user.is_editor:
raise Forbidden()
@ -91,8 +91,8 @@ class AnnotationUpdateDeleteApi(Resource):
annotation = AppAnnotationService.update_app_annotation_directly(args, app_model.id, annotation_id)
return annotation
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.QUERY))
def delete(self, app_model: App, end_user: EndUser, annotation_id):
@validate_app_token
def delete(self, app_model: App, annotation_id):
if not current_user.is_editor:
raise Forbidden()

View File

@ -47,7 +47,13 @@ class AppInfoApi(Resource):
def get(self, app_model: App):
"""Get app information"""
tags = [tag.name for tag in app_model.tags]
return {"name": app_model.name, "description": app_model.description, "tags": tags, "mode": app_model.mode}
return {
"name": app_model.name,
"description": app_model.description,
"tags": tags,
"mode": app_model.mode,
"author_name": app_model.author_name,
}
api.add_resource(AppParameterApi, "/parameters")

View File

@ -20,7 +20,7 @@ from controllers.service_api.app.error import (
from controllers.service_api.wraps import FetchUserArg, WhereisUserArg, validate_app_token
from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError
from core.model_runtime.errors.invoke import InvokeError
from models.model import App, AppMode, EndUser
from models.model import App, EndUser
from services.audio_service import AudioService
from services.errors.audio import (
AudioTooLargeServiceError,
@ -78,20 +78,9 @@ class TextApi(Resource):
message_id = args.get("message_id", None)
text = args.get("text", None)
if (
app_model.mode in {AppMode.ADVANCED_CHAT.value, AppMode.WORKFLOW.value}
and app_model.workflow
and app_model.workflow.features_dict
):
text_to_speech = app_model.workflow.features_dict.get("text_to_speech", {})
voice = args.get("voice") or text_to_speech.get("voice")
else:
try:
voice = args.get("voice") or app_model.app_model_config.text_to_speech_dict.get("voice")
except Exception:
voice = None
voice = args.get("voice", None)
response = AudioService.transcript_tts(
app_model=app_model, message_id=message_id, end_user=end_user.external_user_id, voice=voice, text=text
app_model=app_model, text=text, voice=voice, end_user=end_user.external_user_id, message_id=message_id
)
return response

View File

@ -3,7 +3,7 @@ import logging
from dateutil.parser import isoparse
from flask_restful import Resource, fields, marshal_with, reqparse
from flask_restful.inputs import int_range
from sqlalchemy.orm import Session
from sqlalchemy.orm import Session, sessionmaker
from werkzeug.exceptions import InternalServerError
from controllers.service_api import api
@ -24,12 +24,13 @@ from core.errors.error import (
QuotaExceededError,
)
from core.model_runtime.errors.invoke import InvokeError
from core.workflow.entities.workflow_execution import WorkflowExecutionStatus
from extensions.ext_database import db
from fields.workflow_app_log_fields import workflow_app_log_pagination_fields
from libs import helper
from libs.helper import TimestampField
from models.model import App, AppMode, EndUser
from models.workflow import WorkflowRun, WorkflowRunStatus
from repositories.factory import DifyAPIRepositoryFactory
from services.app_generate_service import AppGenerateService
from services.errors.llm import InvokeRateLimitError
from services.workflow_app_service import WorkflowAppService
@ -62,7 +63,15 @@ class WorkflowRunDetailApi(Resource):
if app_mode not in [AppMode.WORKFLOW, AppMode.ADVANCED_CHAT]:
raise NotWorkflowAppError()
workflow_run = db.session.query(WorkflowRun).filter(WorkflowRun.id == workflow_run_id).first()
# Use repository to get workflow run
session_maker = sessionmaker(bind=db.engine, expire_on_commit=False)
workflow_run_repo = DifyAPIRepositoryFactory.create_api_workflow_run_repository(session_maker)
workflow_run = workflow_run_repo.get_workflow_run_by_id(
tenant_id=app_model.tenant_id,
app_id=app_model.id,
run_id=workflow_run_id,
)
return workflow_run
@ -134,11 +143,25 @@ class WorkflowAppLogApi(Resource):
parser.add_argument("status", type=str, choices=["succeeded", "failed", "stopped"], location="args")
parser.add_argument("created_at__before", type=str, location="args")
parser.add_argument("created_at__after", type=str, location="args")
parser.add_argument(
"created_by_end_user_session_id",
type=str,
location="args",
required=False,
default=None,
)
parser.add_argument(
"created_by_account",
type=str,
location="args",
required=False,
default=None,
)
parser.add_argument("page", type=int_range(1, 99999), default=1, location="args")
parser.add_argument("limit", type=int_range(1, 100), default=20, location="args")
args = parser.parse_args()
args.status = WorkflowRunStatus(args.status) if args.status else None
args.status = WorkflowExecutionStatus(args.status) if args.status else None
if args.created_at__before:
args.created_at__before = isoparse(args.created_at__before)
@ -157,6 +180,8 @@ class WorkflowAppLogApi(Resource):
created_at_after=args.created_at__after,
page=args.page,
limit=args.limit,
created_by_end_user_session_id=args.created_by_end_user_session_id,
created_by_account=args.created_by_account,
)
return workflow_app_log_pagination
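The two new arguments thread straight through to the service call, so workflow app logs can now be filtered to a single end-user session or to runs created by an account. An illustrative service-API query (route prefix, host, and token are placeholders):

import requests

resp = requests.get(
    "https://cloud.example.com/v1/workflows/logs",  # placeholder service-api route
    params={
        "status": "succeeded",
        "created_by_end_user_session_id": "session-123",
        "page": 1,
        "limit": 20,
    },
    headers={"Authorization": "Bearer <app-api-key>"},  # placeholder auth
)
print(resp.json())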

View File

@ -1,19 +1,25 @@
from flask import request
from flask_restful import marshal, reqparse
from flask_restful import marshal, marshal_with, reqparse
from werkzeug.exceptions import Forbidden, NotFound
import services.dataset_service
from controllers.service_api import api
from controllers.service_api.dataset.error import DatasetInUseError, DatasetNameDuplicateError
from controllers.service_api.wraps import DatasetApiResource
from controllers.service_api.dataset.error import DatasetInUseError, DatasetNameDuplicateError, InvalidActionError
from controllers.service_api.wraps import (
DatasetApiResource,
cloud_edition_billing_rate_limit_check,
validate_dataset_token,
)
from core.model_runtime.entities.model_entities import ModelType
from core.plugin.entities.plugin import ModelProviderID
from core.provider_manager import ProviderManager
from fields.dataset_fields import dataset_detail_fields
from fields.tag_fields import tag_fields
from libs.login import current_user
from models.dataset import Dataset, DatasetPermissionEnum
from services.dataset_service import DatasetPermissionService, DatasetService
from services.dataset_service import DatasetPermissionService, DatasetService, DocumentService
from services.entities.knowledge_entities.knowledge_entities import RetrievalModel
from services.tag_service import TagService
def _validate_name(name):
@@ -68,6 +74,7 @@ class DatasetListApi(DatasetApiResource):
response = {"data": data, "has_more": len(datasets) == limit, "limit": limit, "total": total, "page": page}
return response, 200
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id):
"""Resource for creating datasets."""
parser = reqparse.RequestParser()
@@ -126,6 +133,22 @@ class DatasetListApi(DatasetApiResource):
parser.add_argument("embedding_model_provider", type=str, required=False, nullable=True, location="json")
args = parser.parse_args()
if args.get("embedding_model_provider"):
DatasetService.check_embedding_model_setting(
tenant_id, args.get("embedding_model_provider"), args.get("embedding_model")
)
if (
args.get("retrieval_model")
and args.get("retrieval_model").get("reranking_model")
and args.get("retrieval_model").get("reranking_model").get("reranking_provider_name")
):
DatasetService.check_reranking_model_setting(
tenant_id,
args.get("retrieval_model").get("reranking_model").get("reranking_provider_name"),
args.get("retrieval_model").get("reranking_model").get("reranking_model_name"),
)
try:
dataset = DatasetService.create_empty_dataset(
tenant_id=tenant_id,
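
The chained .get() calls above only run the reranking validation when every key in the chain is present. A sketch of a create payload that would exercise both new checks (all values illustrative):

payload = {
    "name": "my-dataset",
    "indexing_technique": "high_quality",
    "embedding_model_provider": "openai",          # triggers check_embedding_model_setting
    "embedding_model": "text-embedding-3-small",
    "retrieval_model": {
        "reranking_model": {
            # reranking_provider_name must be non-empty for
            # check_reranking_model_setting to be called at all
            "reranking_provider_name": "cohere",
            "reranking_model_name": "rerank-english-v3.0",
        },
    },
}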
@@ -191,6 +214,7 @@ class DatasetApi(DatasetApiResource):
return data, 200
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def patch(self, _, dataset_id):
dataset_id_str = str(dataset_id)
dataset = DatasetService.get_dataset(dataset_id_str)
@@ -257,10 +281,20 @@
data = request.get_json()
# check embedding model setting
if data.get("indexing_technique") == "high_quality":
if data.get("indexing_technique") == "high_quality" or data.get("embedding_model_provider"):
DatasetService.check_embedding_model_setting(
dataset.tenant_id, data.get("embedding_model_provider"), data.get("embedding_model")
)
if (
data.get("retrieval_model")
and data.get("retrieval_model").get("reranking_model")
and data.get("retrieval_model").get("reranking_model").get("reranking_provider_name")
):
DatasetService.check_reranking_model_setting(
dataset.tenant_id,
data.get("retrieval_model").get("reranking_model").get("reranking_provider_name"),
data.get("retrieval_model").get("reranking_model").get("reranking_model_name"),
)
# The role of the current user in the ta table must be admin, owner, editor, or dataset_operator
DatasetPermissionService.check_permission(
@@ -291,6 +325,7 @@ class DatasetApi(DatasetApiResource):
return result_data, 200
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def delete(self, _, dataset_id):
"""
Deletes a dataset given its ID.
@@ -320,5 +355,186 @@ class DatasetApi(DatasetApiResource):
raise DatasetInUseError()
class DocumentStatusApi(DatasetApiResource):
"""Resource for batch document status operations."""
def patch(self, tenant_id, dataset_id, action):
"""
Batch update document status.
Args:
tenant_id: tenant id
dataset_id: dataset id
action: action to perform (enable, disable, archive, un_archive)
Returns:
dict: A dictionary with a key 'result' and a value 'success'
int: HTTP status code 200 indicating that the operation was successful.
Raises:
NotFound: If the dataset with the given ID does not exist.
Forbidden: If the user does not have permission.
InvalidActionError: If the action is invalid or cannot be performed.
"""
dataset_id_str = str(dataset_id)
dataset = DatasetService.get_dataset(dataset_id_str)
if dataset is None:
raise NotFound("Dataset not found.")
# Check user's permission
try:
DatasetService.check_dataset_permission(dataset, current_user)
except services.errors.account.NoPermissionError as e:
raise Forbidden(str(e))
# Check dataset model setting
DatasetService.check_dataset_model_setting(dataset)
# Get document IDs from request body
data = request.get_json()
document_ids = data.get("document_ids", [])
try:
DocumentService.batch_update_document_status(dataset, document_ids, action, current_user)
except services.errors.document.DocumentIndexingError as e:
raise InvalidActionError(str(e))
except ValueError as e:
raise InvalidActionError(str(e))
return {"result": "success"}, 200
class DatasetTagsApi(DatasetApiResource):
@validate_dataset_token
@marshal_with(tag_fields)
def get(self, _, dataset_id):
"""Get all knowledge type tags."""
tags = TagService.get_tags("knowledge", current_user.current_tenant_id)
return tags, 200
@validate_dataset_token
def post(self, _, dataset_id):
"""Add a knowledge type tag."""
if not (current_user.is_editor or current_user.is_dataset_editor):
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument(
"name",
nullable=False,
required=True,
help="Name must be between 1 to 50 characters.",
type=DatasetTagsApi._validate_tag_name,
)
args = parser.parse_args()
args["type"] = "knowledge"
tag = TagService.save_tags(args)
response = {"id": tag.id, "name": tag.name, "type": tag.type, "binding_count": 0}
return response, 200
@validate_dataset_token
def patch(self, _, dataset_id):
if not (current_user.is_editor or current_user.is_dataset_editor):
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument(
"name",
nullable=False,
required=True,
help="Name must be between 1 to 50 characters.",
type=DatasetTagsApi._validate_tag_name,
)
parser.add_argument("tag_id", nullable=False, required=True, help="Id of a tag.", type=str)
args = parser.parse_args()
args["type"] = "knowledge"
tag = TagService.update_tags(args, args.get("tag_id"))
binding_count = TagService.get_tag_binding_count(args.get("tag_id"))
response = {"id": tag.id, "name": tag.name, "type": tag.type, "binding_count": binding_count}
return response, 200
@validate_dataset_token
def delete(self, _, dataset_id):
"""Delete a knowledge type tag."""
if not current_user.is_editor:
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument("tag_id", nullable=False, required=True, help="Id of a tag.", type=str)
args = parser.parse_args()
TagService.delete_tag(args.get("tag_id"))
return 204
@staticmethod
def _validate_tag_name(name):
if not name or len(name) < 1 or len(name) > 50:
raise ValueError("Name must be between 1 to 50 characters.")
return name
class DatasetTagBindingApi(DatasetApiResource):
@validate_dataset_token
def post(self, _, dataset_id):
# The role of the current user in the ta table must be admin, owner, editor, or dataset_operator
if not (current_user.is_editor or current_user.is_dataset_editor):
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument(
"tag_ids", type=list, nullable=False, required=True, location="json", help="Tag IDs is required."
)
parser.add_argument(
"target_id", type=str, nullable=False, required=True, location="json", help="Target Dataset ID is required."
)
args = parser.parse_args()
args["type"] = "knowledge"
TagService.save_tag_binding(args)
return 204
class DatasetTagUnbindingApi(DatasetApiResource):
@validate_dataset_token
def post(self, _, dataset_id):
# The role of the current user in the ta table must be admin, owner, editor, or dataset_operator
if not (current_user.is_editor or current_user.is_dataset_editor):
raise Forbidden()
parser = reqparse.RequestParser()
parser.add_argument("tag_id", type=str, nullable=False, required=True, help="Tag ID is required.")
parser.add_argument("target_id", type=str, nullable=False, required=True, help="Target ID is required.")
args = parser.parse_args()
args["type"] = "knowledge"
TagService.delete_tag_binding(args)
return 204
class DatasetTagsBindingStatusApi(DatasetApiResource):
@validate_dataset_token
def get(self, _, *args, **kwargs):
"""Get all knowledge type tags."""
dataset_id = kwargs.get("dataset_id")
tags = TagService.get_tags_by_target_id("knowledge", current_user.current_tenant_id, str(dataset_id))
tags_list = [{"id": tag.id, "name": tag.name} for tag in tags]
response = {"data": tags_list, "total": len(tags)}
return response, 200
api.add_resource(DatasetListApi, "/datasets")
api.add_resource(DatasetApi, "/datasets/<uuid:dataset_id>")
api.add_resource(DocumentStatusApi, "/datasets/<uuid:dataset_id>/documents/status/<string:action>")
api.add_resource(DatasetTagsApi, "/datasets/tags")
api.add_resource(DatasetTagBindingApi, "/datasets/tags/binding")
api.add_resource(DatasetTagUnbindingApi, "/datasets/tags/unbinding")
api.add_resource(DatasetTagsBindingStatusApi, "/datasets/<uuid:dataset_id>/tags")
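
A sketch of the tag lifecycle through the routes registered above, creating a knowledge tag and binding it to a dataset (host, token, and the dataset ID are placeholders):

import requests

BASE = "https://api.example.com/v1"                    # placeholder host
HEADERS = {"Authorization": "Bearer dataset-api-key"}  # placeholder token

# Create a knowledge-type tag; binding_count starts at 0.
tag = requests.post(f"{BASE}/datasets/tags", headers=HEADERS,
                    json={"name": "contracts"}).json()

# Bind it to a dataset, then confirm via the binding-status route.
requests.post(f"{BASE}/datasets/tags/binding", headers=HEADERS,
              json={"tag_ids": [tag["id"]], "target_id": "dataset-uuid"})
bound = requests.get(f"{BASE}/datasets/dataset-uuid/tags", headers=HEADERS).json()
print(bound["total"], [t["name"] for t in bound["data"]])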

View File

@@ -3,7 +3,7 @@ import json
from flask import request
from flask_restful import marshal, reqparse
from sqlalchemy import desc, select
from werkzeug.exceptions import NotFound
from werkzeug.exceptions import Forbidden, NotFound
import services
from controllers.common.errors import FilenameNotExistsError
@@ -18,14 +18,19 @@ from controllers.service_api.app.error import (
from controllers.service_api.dataset.error import (
ArchivedDocumentImmutableError,
DocumentIndexingError,
InvalidMetadataError,
)
from controllers.service_api.wraps import (
DatasetApiResource,
cloud_edition_billing_rate_limit_check,
cloud_edition_billing_resource_check,
)
from controllers.service_api.wraps import DatasetApiResource, cloud_edition_billing_resource_check
from core.errors.error import ProviderTokenNotInitError
from extensions.ext_database import db
from fields.document_fields import document_fields, document_status_fields
from libs.login import current_user
from models.dataset import Dataset, Document, DocumentSegment
from services.dataset_service import DocumentService
from services.dataset_service import DatasetService, DocumentService
from services.entities.knowledge_entities.knowledge_entities import KnowledgeConfig
from services.file_service import FileService
@@ -35,6 +40,7 @@ class DocumentAddByTextApi(DatasetApiResource):
@cloud_edition_billing_resource_check("vector_space", "dataset")
@cloud_edition_billing_resource_check("documents", "dataset")
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id, dataset_id):
"""Create document by text."""
parser = reqparse.RequestParser()
@@ -54,6 +60,7 @@ class DocumentAddByTextApi(DatasetApiResource):
parser.add_argument("embedding_model_provider", type=str, required=False, nullable=True, location="json")
args = parser.parse_args()
dataset_id = str(dataset_id)
tenant_id = str(tenant_id)
dataset = db.session.query(Dataset).filter(Dataset.tenant_id == tenant_id, Dataset.id == dataset_id).first()
@@ -69,6 +76,21 @@
if text is None or name is None:
raise ValueError("Both 'text' and 'name' must be non-null values.")
if args.get("embedding_model_provider"):
DatasetService.check_embedding_model_setting(
tenant_id, args.get("embedding_model_provider"), args.get("embedding_model")
)
if (
args.get("retrieval_model")
and args.get("retrieval_model").get("reranking_model")
and args.get("retrieval_model").get("reranking_model").get("reranking_provider_name")
):
DatasetService.check_reranking_model_setting(
tenant_id,
args.get("retrieval_model").get("reranking_model").get("reranking_provider_name"),
args.get("retrieval_model").get("reranking_model").get("reranking_model_name"),
)
upload_file = FileService.upload_text(text=str(text), text_name=str(name))
data_source = {
"type": "upload_file",
@@ -99,6 +121,7 @@ class DocumentUpdateByTextApi(DatasetApiResource):
"""Resource for update documents."""
@cloud_edition_billing_resource_check("vector_space", "dataset")
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id, dataset_id, document_id):
"""Update document by text."""
parser = reqparse.RequestParser()
@@ -118,6 +141,17 @@ class DocumentUpdateByTextApi(DatasetApiResource):
if not dataset:
raise ValueError("Dataset does not exist.")
if (
args.get("retrieval_model")
and args.get("retrieval_model").get("reranking_model")
and args.get("retrieval_model").get("reranking_model").get("reranking_provider_name")
):
DatasetService.check_reranking_model_setting(
tenant_id,
args.get("retrieval_model").get("reranking_model").get("reranking_provider_name"),
args.get("retrieval_model").get("reranking_model").get("reranking_model_name"),
)
# indexing_technique is already set in dataset since this is an update
args["indexing_technique"] = dataset.indexing_technique
@@ -158,6 +192,7 @@ class DocumentAddByFileApi(DatasetApiResource):
@cloud_edition_billing_resource_check("vector_space", "dataset")
@cloud_edition_billing_resource_check("documents", "dataset")
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id, dataset_id):
"""Create document by upload file."""
args = {}
@@ -175,8 +210,29 @@ class DocumentAddByFileApi(DatasetApiResource):
if not dataset:
raise ValueError("Dataset does not exist.")
if not dataset.indexing_technique and not args.get("indexing_technique"):
if dataset.provider == "external":
raise ValueError("External datasets are not supported.")
indexing_technique = args.get("indexing_technique") or dataset.indexing_technique
if not indexing_technique:
raise ValueError("indexing_technique is required.")
args["indexing_technique"] = indexing_technique
if "embedding_model_provider" in args:
DatasetService.check_embedding_model_setting(
tenant_id, args["embedding_model_provider"], args["embedding_model"]
)
if (
"retrieval_model" in args
and args["retrieval_model"].get("reranking_model")
and args["retrieval_model"].get("reranking_model").get("reranking_provider_name")
):
DatasetService.check_reranking_model_setting(
tenant_id,
args["retrieval_model"].get("reranking_model").get("reranking_provider_name"),
args["retrieval_model"].get("reranking_model").get("reranking_model_name"),
)
# save file info
file = request.files["file"]
@@ -206,12 +262,16 @@
knowledge_config = KnowledgeConfig(**args)
DocumentService.document_create_args_validate(knowledge_config)
dataset_process_rule = dataset.latest_process_rule if "process_rule" not in args else None
if not knowledge_config.original_document_id and not dataset_process_rule and not knowledge_config.process_rule:
raise ValueError("process_rule is required.")
try:
documents, batch = DocumentService.save_document_with_dataset_id(
dataset=dataset,
knowledge_config=knowledge_config,
account=dataset.created_by_account,
dataset_process_rule=dataset.latest_process_rule if "process_rule" not in args else None,
dataset_process_rule=dataset_process_rule,
created_from="api",
)
except ProviderTokenNotInitError as ex:
@@ -225,6 +285,7 @@ class DocumentUpdateByFileApi(DatasetApiResource):
"""Resource for update documents."""
@cloud_edition_billing_resource_check("vector_space", "dataset")
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id, dataset_id, document_id):
"""Update document by upload file."""
args = {}
@@ -243,6 +304,9 @@ class DocumentUpdateByFileApi(DatasetApiResource):
if not dataset:
raise ValueError("Dataset does not exist.")
if dataset.provider == "external":
raise ValueError("External datasets are not supported.")
# indexing_technique is already set in dataset since this is an update
args["indexing_technique"] = dataset.indexing_technique
@@ -295,6 +359,7 @@ class DocumentUpdateByFileApi(DatasetApiResource):
class DocumentDeleteApi(DatasetApiResource):
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def delete(self, tenant_id, dataset_id, document_id):
"""Delete document."""
document_id = str(document_id)
@@ -388,15 +453,121 @@ class DocumentIndexingStatusApi(DatasetApiResource):
.filter(DocumentSegment.document_id == str(document.id), DocumentSegment.status != "re_segment")
.count()
)
document.completed_segments = completed_segments
document.total_segments = total_segments
if document.is_paused:
document.indexing_status = "paused"
documents_status.append(marshal(document, document_status_fields))
# Create a dictionary with document attributes and additional fields
document_dict = {
"id": document.id,
"indexing_status": "paused" if document.is_paused else document.indexing_status,
"processing_started_at": document.processing_started_at,
"parsing_completed_at": document.parsing_completed_at,
"cleaning_completed_at": document.cleaning_completed_at,
"splitting_completed_at": document.splitting_completed_at,
"completed_at": document.completed_at,
"paused_at": document.paused_at,
"error": document.error,
"stopped_at": document.stopped_at,
"completed_segments": completed_segments,
"total_segments": total_segments,
}
documents_status.append(marshal(document_dict, document_status_fields))
data = {"data": documents_status}
return data
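
Marshaling a plain dict instead of mutating ORM attributes leaves the response shape unchanged, so existing pollers keep working. A polling sketch against the indexing-status route registered below (host, token, and IDs are placeholders):

import time

import requests

url = ("https://api.example.com/v1/datasets/dataset-uuid"
       "/documents/batch-id/indexing-status")          # placeholder IDs
headers = {"Authorization": "Bearer dataset-api-key"}  # placeholder token

while True:
    docs = requests.get(url, headers=headers).json()["data"]
    # completed_segments/total_segments now come from the marshaled dict
    if all(d["indexing_status"] in ("completed", "error", "paused") for d in docs):
        break
    time.sleep(2)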
class DocumentDetailApi(DatasetApiResource):
METADATA_CHOICES = {"all", "only", "without"}
def get(self, tenant_id, dataset_id, document_id):
dataset_id = str(dataset_id)
document_id = str(document_id)
dataset = self.get_dataset(dataset_id, tenant_id)
document = DocumentService.get_document(dataset.id, document_id)
if not document:
raise NotFound("Document not found.")
if document.tenant_id != str(tenant_id):
raise Forbidden("No permission.")
metadata = request.args.get("metadata", "all")
if metadata not in self.METADATA_CHOICES:
raise InvalidMetadataError(f"Invalid metadata value: {metadata}")
if metadata == "only":
response = {"id": document.id, "doc_type": document.doc_type, "doc_metadata": document.doc_metadata_details}
elif metadata == "without":
dataset_process_rules = DatasetService.get_process_rules(dataset_id)
document_process_rules = document.dataset_process_rule.to_dict()
data_source_info = document.data_source_detail_dict
response = {
"id": document.id,
"position": document.position,
"data_source_type": document.data_source_type,
"data_source_info": data_source_info,
"dataset_process_rule_id": document.dataset_process_rule_id,
"dataset_process_rule": dataset_process_rules,
"document_process_rule": document_process_rules,
"name": document.name,
"created_from": document.created_from,
"created_by": document.created_by,
"created_at": document.created_at.timestamp(),
"tokens": document.tokens,
"indexing_status": document.indexing_status,
"completed_at": int(document.completed_at.timestamp()) if document.completed_at else None,
"updated_at": int(document.updated_at.timestamp()) if document.updated_at else None,
"indexing_latency": document.indexing_latency,
"error": document.error,
"enabled": document.enabled,
"disabled_at": int(document.disabled_at.timestamp()) if document.disabled_at else None,
"disabled_by": document.disabled_by,
"archived": document.archived,
"segment_count": document.segment_count,
"average_segment_length": document.average_segment_length,
"hit_count": document.hit_count,
"display_status": document.display_status,
"doc_form": document.doc_form,
"doc_language": document.doc_language,
}
else:
dataset_process_rules = DatasetService.get_process_rules(dataset_id)
document_process_rules = document.dataset_process_rule.to_dict()
data_source_info = document.data_source_detail_dict
response = {
"id": document.id,
"position": document.position,
"data_source_type": document.data_source_type,
"data_source_info": data_source_info,
"dataset_process_rule_id": document.dataset_process_rule_id,
"dataset_process_rule": dataset_process_rules,
"document_process_rule": document_process_rules,
"name": document.name,
"created_from": document.created_from,
"created_by": document.created_by,
"created_at": document.created_at.timestamp(),
"tokens": document.tokens,
"indexing_status": document.indexing_status,
"completed_at": int(document.completed_at.timestamp()) if document.completed_at else None,
"updated_at": int(document.updated_at.timestamp()) if document.updated_at else None,
"indexing_latency": document.indexing_latency,
"error": document.error,
"enabled": document.enabled,
"disabled_at": int(document.disabled_at.timestamp()) if document.disabled_at else None,
"disabled_by": document.disabled_by,
"archived": document.archived,
"doc_type": document.doc_type,
"doc_metadata": document.doc_metadata_details,
"segment_count": document.segment_count,
"average_segment_length": document.average_segment_length,
"hit_count": document.hit_count,
"display_status": document.display_status,
"doc_form": document.doc_form,
"doc_language": document.doc_language,
}
return response
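
A sketch of the three views the metadata query parameter selects (host, token, and IDs are placeholders):

import requests

base = "https://api.example.com/v1/datasets/dataset-uuid/documents/doc-uuid"
headers = {"Authorization": "Bearer dataset-api-key"}

full = requests.get(base, headers=headers).json()  # metadata=all (default)
meta = requests.get(base, headers=headers, params={"metadata": "only"}).json()
bare = requests.get(base, headers=headers, params={"metadata": "without"}).json()

assert set(meta) == {"id", "doc_type", "doc_metadata"}  # per the handler above
assert "doc_metadata" not in bare                       # stripped in the "without" view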
api.add_resource(
DocumentAddByTextApi,
"/datasets/<uuid:dataset_id>/document/create_by_text",
@@ -420,3 +591,4 @@ api.add_resource(
api.add_resource(DocumentDeleteApi, "/datasets/<uuid:dataset_id>/documents/<uuid:document_id>")
api.add_resource(DocumentListApi, "/datasets/<uuid:dataset_id>/documents")
api.add_resource(DocumentIndexingStatusApi, "/datasets/<uuid:dataset_id>/documents/<string:batch>/indexing-status")
api.add_resource(DocumentDetailApi, "/datasets/<uuid:dataset_id>/documents/<uuid:document_id>")

View File

@@ -1,9 +1,10 @@
from controllers.console.datasets.hit_testing_base import DatasetsHitTestingBase
from controllers.service_api import api
from controllers.service_api.wraps import DatasetApiResource
from controllers.service_api.wraps import DatasetApiResource, cloud_edition_billing_rate_limit_check
class HitTestingApi(DatasetApiResource, DatasetsHitTestingBase):
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id, dataset_id):
dataset_id_str = str(dataset_id)

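
The decorator's body is not part of this diff; a hypothetical sketch of the usual shape of such a check (every name below is an assumption, not Dify's implementation):

from functools import wraps

def _over_limit(resource: str) -> bool:
    # Stub: a real check would consult per-tenant counters (e.g. in Redis).
    return False

def billing_rate_limit_check(resource: str, api_token_type: str):
    """Hypothetical stand-in for cloud_edition_billing_rate_limit_check."""
    def decorator(view):
        @wraps(view)
        def wrapper(*args, **kwargs):
            if _over_limit(resource):
                raise RuntimeError(f"Rate limit exceeded for {resource}")
            return view(*args, **kwargs)
        return wrapper
    return decorator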
View File

@@ -3,7 +3,7 @@ from flask_restful import marshal, reqparse
from werkzeug.exceptions import NotFound
from controllers.service_api import api
from controllers.service_api.wraps import DatasetApiResource
from controllers.service_api.wraps import DatasetApiResource, cloud_edition_billing_rate_limit_check
from fields.dataset_fields import dataset_metadata_fields
from services.dataset_service import DatasetService
from services.entities.knowledge_entities.knowledge_entities import (
@@ -14,6 +14,7 @@ from services.metadata_service import MetadataService
class DatasetMetadataCreateServiceApi(DatasetApiResource):
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id, dataset_id):
parser = reqparse.RequestParser()
parser.add_argument("type", type=str, required=True, nullable=True, location="json")
@@ -39,6 +40,7 @@ class DatasetMetadataCreateServiceApi(DatasetApiResource):
class DatasetMetadataServiceApi(DatasetApiResource):
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def patch(self, tenant_id, dataset_id, metadata_id):
parser = reqparse.RequestParser()
parser.add_argument("name", type=str, required=True, nullable=True, location="json")
@@ -54,6 +56,7 @@ class DatasetMetadataServiceApi(DatasetApiResource):
metadata = MetadataService.update_metadata_name(dataset_id_str, metadata_id_str, args.get("name"))
return marshal(metadata, dataset_metadata_fields), 200
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def delete(self, tenant_id, dataset_id, metadata_id):
dataset_id_str = str(dataset_id)
metadata_id_str = str(metadata_id)
@@ -73,6 +76,7 @@ class DatasetMetadataBuiltInFieldServiceApi(DatasetApiResource):
class DatasetMetadataBuiltInFieldActionServiceApi(DatasetApiResource):
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id, dataset_id, action):
dataset_id_str = str(dataset_id)
dataset = DatasetService.get_dataset(dataset_id_str)
@ -88,6 +92,7 @@ class DatasetMetadataBuiltInFieldActionServiceApi(DatasetApiResource):
class DocumentMetadataEditServiceApi(DatasetApiResource):
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id, dataset_id):
dataset_id_str = str(dataset_id)
dataset = DatasetService.get_dataset(dataset_id_str)

View File

@@ -8,6 +8,7 @@ from controllers.service_api.app.error import ProviderNotInitializeError
from controllers.service_api.wraps import (
DatasetApiResource,
cloud_edition_billing_knowledge_limit_check,
cloud_edition_billing_rate_limit_check,
cloud_edition_billing_resource_check,
)
from core.errors.error import LLMBadRequestError, ProviderTokenNotInitError
@@ -35,6 +36,7 @@ class SegmentApi(DatasetApiResource):
@cloud_edition_billing_resource_check("vector_space", "dataset")
@cloud_edition_billing_knowledge_limit_check("add_segment", "dataset")
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id, dataset_id, document_id):
"""Create single segment."""
# check dataset
@@ -139,6 +141,7 @@ class SegmentApi(DatasetApiResource):
class DatasetSegmentApi(DatasetApiResource):
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def delete(self, tenant_id, dataset_id, document_id, segment_id):
# check dataset
dataset_id = str(dataset_id)
@@ -162,6 +165,7 @@ class DatasetSegmentApi(DatasetApiResource):
return 204
@cloud_edition_billing_resource_check("vector_space", "dataset")
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id, dataset_id, document_id, segment_id):
# check dataset
dataset_id = str(dataset_id)
@@ -208,12 +212,35 @@
)
return {"data": marshal(updated_segment, segment_fields), "doc_form": document.doc_form}, 200
def get(self, tenant_id, dataset_id, document_id, segment_id):
# check dataset
dataset_id = str(dataset_id)
tenant_id = str(tenant_id)
dataset = db.session.query(Dataset).filter(Dataset.tenant_id == tenant_id, Dataset.id == dataset_id).first()
if not dataset:
raise NotFound("Dataset not found.")
# check user's model setting
DatasetService.check_dataset_model_setting(dataset)
# check document
document_id = str(document_id)
document = DocumentService.get_document(dataset_id, document_id)
if not document:
raise NotFound("Document not found.")
# check segment
segment_id = str(segment_id)
segment = SegmentService.get_segment_by_id(segment_id=segment_id, tenant_id=current_user.current_tenant_id)
if not segment:
raise NotFound("Segment not found.")
return {"data": marshal(segment, segment_fields), "doc_form": document.doc_form}, 200
class ChildChunkApi(DatasetApiResource):
"""Resource for child chunks."""
@cloud_edition_billing_resource_check("vector_space", "dataset")
@cloud_edition_billing_knowledge_limit_check("add_segment", "dataset")
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def post(self, tenant_id, dataset_id, document_id, segment_id):
"""Create child chunk."""
# check dataset
@@ -310,6 +337,7 @@ class DatasetChildChunkApi(DatasetApiResource):
"""Resource for updating child chunks."""
@cloud_edition_billing_knowledge_limit_check("add_segment", "dataset")
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def delete(self, tenant_id, dataset_id, document_id, segment_id, child_chunk_id):
"""Delete child chunk."""
# check dataset
@@ -348,6 +376,7 @@ class DatasetChildChunkApi(DatasetApiResource):
@cloud_edition_billing_resource_check("vector_space", "dataset")
@cloud_edition_billing_knowledge_limit_check("add_segment", "dataset")
@cloud_edition_billing_rate_limit_check("knowledge", "dataset")
def patch(self, tenant_id, dataset_id, document_id, segment_id, child_chunk_id):
"""Update child chunk."""
# check dataset

View File

@@ -9,7 +9,7 @@ class IndexApi(Resource):
return {
"welcome": "Dify OpenAPI",
"api_version": "v1",
"server_version": dify_config.CURRENT_VERSION,
"server_version": dify_config.project.version,
}
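
For the version lookup above, a minimal sketch of how a nested settings object can expose project.version (illustrative only; Dify's actual config classes are not shown in this diff):

from pydantic import BaseModel

class ProjectInfo(BaseModel):
    name: str = "dify"      # illustrative defaults
    version: str = "1.0.0"

class AppConfig(BaseModel):
    project: ProjectInfo = ProjectInfo()

dify_config = AppConfig()
print(dify_config.project.version)  # -> "1.0.0"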

Some files were not shown because too many files have changed in this diff.