Compare commits

...

253 Commits

Author SHA1 Message Date
yyh
2f081fa6fa refactor(skill-editor): adopt 4-generic StateCreator pattern for type-safe cross-slice access
Use explicit StateCreator<FullStore, [], [], SliceType> pattern instead of
StateCreator<SliceType> for all skill-editor slices. This enables:
- Type-safe cross-slice state access via get()
- Explicit type contracts instead of relying on spread args behavior
- Better maintainability following Lobe-chat's proven pattern

Extract all type definitions to types.ts to avoid circular dependencies.
2026-01-18 13:24:34 +08:00
yyh
3b27d9e819 refactor(skill-editor): remove type assertions by using spread args pattern
Replace explicit parameter destructuring with spread args pattern to
eliminate `as unknown as` type assertions when composing sub-slices.
This aligns with the pattern used in the main workflow store.
2026-01-18 13:11:06 +08:00
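Taken together, these two slice-store commits land on Zustand's documented slices pattern: each sub-slice is typed as `StateCreator<FullStore, [], [], SliceType>` and composed with spread args. A minimal sketch with illustrative slice names (the real skill-editor slices live in `types.ts`):

```ts
import { create, type StateCreator } from 'zustand'

// Illustrative slice shapes, not the actual skill-editor slices.
type TabSlice = {
  activeTabId: string | null
  setActiveTab: (id: string) => void
}

type DirtySlice = {
  dirtyFileIds: Set<string>
  markDirty: (id: string) => void
}

type SkillEditorStore = TabSlice & DirtySlice

// The 4-generic form: StateCreator<FullStore, Mutators, Enhancers, Slice>.
// Typing the first parameter as the full store lets get() see sibling slices.
const createTabSlice: StateCreator<SkillEditorStore, [], [], TabSlice> = (set, get) => ({
  activeTabId: null,
  setActiveTab: (id) => {
    // Type-safe cross-slice read: dirtyFileIds belongs to DirtySlice.
    if (get().dirtyFileIds.has(id))
      console.warn('switching to a dirty tab')
    set({ activeTabId: id })
  },
})

const createDirtySlice: StateCreator<SkillEditorStore, [], [], DirtySlice> = (set) => ({
  dirtyFileIds: new Set(),
  markDirty: (id) => set(state => ({ dirtyFileIds: new Set(state.dirtyFileIds).add(id) })),
})

// Spread-args composition: forwarding (...args) keeps the creators' types
// intact, so no `as unknown as` casts are needed.
export const useSkillEditorStore = create<SkillEditorStore>()((...args) => ({
  ...createTabSlice(...args),
  ...createDirtySlice(...args),
}))
```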
yyh
c0a76220dd fix(skill-editor): resolve React Compiler memoization warnings
Consolidate file type derivations into a single useMemo with stable
dependencies (currentFileNode?.name and currentFileNode?.extension)
to help React Compiler track stability.

Extract originalContent as a separate variable to avoid property access
in useCallback dependencies, which caused Compiler to infer broader
dependencies than specified.
2026-01-17 22:01:33 +08:00
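A rough sketch of the two fixes described here; the hook and field names are hypothetical:

```ts
import { useCallback, useMemo } from 'react'

type FileNode = { name?: string; extension?: string; content?: string }

function useFileMeta(currentFileNode: FileNode | undefined, save: (text: string) => void) {
  // One consolidated useMemo keyed on primitive member expressions, which
  // the React Compiler can track as stable dependencies.
  const { displayName, isMarkdown } = useMemo(() => {
    const name = currentFileNode?.name ?? ''
    const extension = currentFileNode?.extension ?? ''
    return {
      displayName: extension ? `${name}.${extension}` : name,
      isMarkdown: extension === 'md',
    }
  }, [currentFileNode?.name, currentFileNode?.extension])

  // Extracting the property read gives the callback a local primitive
  // dependency; a property access in the deps array led the Compiler to
  // infer a broader dependency on the whole object.
  const originalContent = currentFileNode?.content
  const handleRevert = useCallback(() => {
    if (originalContent !== undefined)
      save(originalContent)
  }, [originalContent, save])

  return { displayName, isMarkdown, handleRevert }
}
```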
yyh
9d04fb4992 fix(skill-editor): resolve React Compiler memoization warnings
Wrap isEditable in useMemo to help React Compiler track its stability
and preserve memoization for callbacks that depend on it. Also replace
Record<string, any> with Record<string, unknown> to satisfy no-explicit-any.
2026-01-17 21:51:25 +08:00
yyh
02fcf33067 fix(skill-editor): remove unnecessary store subscriptions in tool-picker-block
Move activeTabId and fileMetadata reads from selector subscriptions to
getState() calls inside the callback. These values were only used in the
insertTools callback, not for rendering, causing unnecessary re-renders
when they changed.
2026-01-17 21:47:31 +08:00
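The underlying pattern, sketched with a hypothetical store: values needed only inside a callback are read with `getState()` at call time instead of being subscribed to, so changes to them never re-render the component.

```ts
import { useCallback } from 'react'
import { create } from 'zustand'

type EditorState = {
  activeTabId: string | null
  insertIntoTab: (tabId: string, toolName: string) => void
}

const useEditorStore = create<EditorState>()(() => ({
  activeTabId: null,
  insertIntoTab: () => {},
}))

function useInsertTool() {
  // No useEditorStore(s => s.activeTabId) subscription here: the value is
  // only needed when the callback fires, so read it via getState() and
  // skip the re-render when activeTabId changes.
  return useCallback((toolName: string) => {
    const { activeTabId, insertIntoTab } = useEditorStore.getState()
    if (activeTabId)
      insertIntoTab(activeTabId, toolName)
  }, [])
}
```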
yyh
bbf1247f80 fix(skill-editor): compare content with original to determine dirty state
Previously, any edit would mark the file as dirty even if the content
was restored to its original state. Now we compare against the original
content and clear the dirty flag when they match.
2026-01-17 17:52:00 +08:00
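A minimal model of the check, presumably tracked per file id in the real slice:

```ts
// A file is dirty only while its content differs from the original
// snapshot taken when the tab was opened, not whenever it was ever edited.
function computeDirty(content: string, originalContent: string): boolean {
  return content !== originalContent
}

// e.g. editing "a" -> "ab" marks dirty; deleting the "b" again clears it.
```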
yyh
b82b73ef94 refactor(skill-editor): split slice into separate files for better organization
Split the monolithic skill-editor-slice.ts into a dedicated directory with
individual slice files (tab, file-tree, dirty, metadata, file-operations-menu)
to improve maintainability and code organization.
2026-01-17 17:28:25 +08:00
yyh
15d6f60f25 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-17 17:03:32 +08:00
yyh
ad8c5f5452 perf: lazy load SkillMain component using next/dynamic
Reduce initial bundle size by dynamically importing SkillMain
component. This prevents loading the entire Skill module (including
Monaco and Lexical editors) when users only access the Graph view.
2026-01-16 21:31:56 +08:00
721d82b91a refactor(sandbox): modify sandbox provider configuration by adding 'configure_type' column and updating unique constraints 2026-01-16 19:02:16 +08:00
d542a74733 feat: panel ui 2026-01-16 18:39:13 +08:00
16078a9df6 refactor(sandbox): update DifyCliLocator path resolution and enhance sandbox provider configuration logic 2026-01-16 18:37:43 +08:00
0bd17c6d0f refactor(sandbox): sandbox provider system default configuration 2026-01-16 18:22:44 +08:00
8b42435f7a feat: support set default value when choose tool 2026-01-16 18:16:01 +08:00
fad6fa141d chore: improve accessibility for learn more link (#31120)
Co-authored-by: khmandarrin <jeong-ga-eun@jeong-ga-eun-ui-MacBookAir.local>
2026-01-16 18:12:07 +08:00
30821fd26c chore: Update outdated GitHub Actions versions (#31114) 2026-01-16 17:56:55 +08:00
3147e850be fix: clicked tool not shown as current 2026-01-16 17:52:40 +08:00
1a9fdd9a65 refactor: migrate tag list API query parameters to Pydantic (#31097)
Co-authored-by: fghpdf <fghpdf@users.noreply.github.com>
2026-01-16 17:49:52 +08:00
0b33381efb feat: support save settings 2026-01-16 17:44:40 +08:00
de610cbf39 fix: call get_text_content() instead of casting to str (#31121)
Signed-off-by: Stream <Stream_2@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-16 18:41:00 +09:00
yyh
ee7a9a34e0 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-16 17:25:19 +08:00
148f92f92d fix: allow all fields and disallow setting model to auto 2026-01-16 17:20:11 +08:00
f79df6982d feat: support setting show on click 2026-01-16 16:58:58 +08:00
yyh
6903c31b84 fix(search-input): retain focus after clearing input (#31107) 2026-01-16 16:22:14 +08:00
649283df09 fix: not popup and use new setting 2026-01-16 15:09:25 +08:00
yyh
06b6625c01 feat(skill): implement file tree search with debounced filtering
Add search functionality to skill sidebar using react-arborist's built-in
searchTerm and searchMatch props. Search input is debounced at 300ms and
filters tree nodes by name (case-insensitive). Also add success toast for
rename operations.
2026-01-16 14:44:44 +08:00
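Roughly what this wiring looks like with react-arborist's search props and ahooks' `useDebounce`; the component shape is illustrative:

```tsx
import { useState } from 'react'
import { useDebounce } from 'ahooks'
import { Tree } from 'react-arborist'

type FileNode = { id: string; name: string; children?: FileNode[] }

function FileTreeSearch({ data }: { data: FileNode[] }) {
  const [keyword, setKeyword] = useState('')
  // Debounce keystrokes so the tree filters at most every 300ms.
  const searchTerm = useDebounce(keyword, { wait: 300 })

  return (
    <>
      <input value={keyword} onChange={e => setKeyword(e.target.value)} placeholder="Search files…" />
      <Tree
        data={data}
        searchTerm={searchTerm}
        // Case-insensitive match on the node name; react-arborist keeps
        // ancestors of matching nodes visible automatically.
        searchMatch={(node, term) => node.data.name.toLowerCase().includes(term.toLowerCase())}
      />
    </>
  )
}
```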
eb4f57fb8b chore: split tool config 2026-01-16 14:39:33 +08:00
b2cc9b255d chore: Update coding agent workflow for backend (#31093) 2026-01-16 14:28:47 +08:00
yyh
0f5d3f38da refactor(skill): use node.parent chain for ancestor traversal
Replace getAncestorIds(treeData) with node.parent chain traversal
for more efficient ancestor lookup. This avoids re-traversing the
tree data structure and uses react-arborist's built-in parent refs.

Also rename hook to useSyncTreeWithActiveTab for clarity.
2026-01-16 14:27:21 +08:00
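A sketch of the parent-chain walk, assuming react-arborist's `NodeApi`:

```ts
import type { NodeApi } from 'react-arborist'

// Walk the built-in parent refs instead of re-traversing the tree data.
function getAncestorIds(node: NodeApi<{ id: string; name: string }>): string[] {
  const ids: string[] = []
  let current = node.parent
  while (current && !current.isRoot) {
    ids.push(current.id)
    current = current.parent
  }
  return ids
}
```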
e9f0e1e839 fix(web): replace Response.json with legacy Response constructor for pre-Chrome 105 compatibility(#31091) (#31095)
Co-authored-by: Xiaoba Yu <xb1823725853@gmail.com>
2026-01-16 14:26:23 +08:00
yyh
76da178cc1 refactor(skill): extract tree node handlers into reusable hooks
Extract complex event handling and side effects from file tree components
into dedicated hooks for better separation of concerns and reusability.
2026-01-16 14:15:21 +08:00
yyh
38a2d2fe68 fix(skill): isolate more button click from tree node click handling
Use split button pattern to separate main content area from more button.
This prevents click events on the more button from bubbling up to the
parent element's click/double-click handlers, which caused unintended
file opening when clicking the menu button multiple times.
2026-01-16 14:07:07 +08:00
yyh
9397ba5bd2 refactor: move skill store to workflow/store/ 2026-01-16 13:51:50 +08:00
yyh
7093962f30 refactor(skill): move skill editor slice to core workflow store
Move SkillEditorSlice from injection pattern to core workflow store,
making it available to all workflow contexts (workflow-app, chatflow,
and future rag-pipeline).

- Add createSkillEditorSlice to core createWorkflowStore
- Remove complex type conversion logic from workflow-app/index.tsx
- Remove optional chaining (?.) and non-null assertions (!) from components
- Simplify slice composition with type assertions via unknown
2026-01-16 13:51:50 +08:00
yyh
7022e4b9ca fix(skill): add key prop to editors to fix content sync on tab switch
Lexical editor only uses initialConfig.editorState on mount, ignoring
subsequent value prop changes when the component is reused by React.
Adding key={activeTabId} forces React to remount editors when switching
tabs, ensuring correct content is displayed.
2026-01-16 13:51:50 +08:00
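The shape of the fix, with `DocEditor` standing in for the Lexical-based editor component:

```tsx
import type { FC } from 'react'

// Hypothetical stand-in for the Lexical editor, which only reads
// initialConfig.editorState on mount.
declare const DocEditor: FC<{ value: string }>

function EditorHost({ activeTabId, content }: { activeTabId: string; content: string }) {
  // key forces a remount on tab switch, so the editor re-reads its initial
  // state instead of ignoring the changed value prop on a reused instance.
  return <DocEditor key={activeTabId} value={content} />
}
```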
yyh
b8d67a42bd refactor(skill): migrate skill editor store to workflow store slice injection
Refactor the skill editor state management from a standalone Zustand store
with Context provider pattern to a slice injection pattern that integrates
with the existing workflow store. This aligns with how rag-pipeline already
injects its slice.

- Remove SkillEditorProvider and SkillEditorContext
- Export createSkillEditorSlice for injection into workflow store
- Update all components to use useStore/useWorkflowStore from workflow store
- Add SkillEditorSliceShape to SliceFromInjection union type
- Use type-safe slice creator args without any types
2026-01-16 13:51:49 +08:00
yyh
106cb8e373 refactor(skill): unify node menu components with cva variants
Merge file-node-menu.tsx and folder-node-menu.tsx into a single
declarative NodeMenu component that uses type prop to determine
menu items. Add cva-based variant support to MenuItem for consistent
destructive styling.
2026-01-16 13:51:49 +08:00
9492eda5ef chore: tool format and render problem 2026-01-16 13:50:20 +08:00
cd497a8c52 fix(web): use portal for variable picker in code editor (Fixes #31063) (#31066) 2026-01-16 13:31:57 +08:00
7aab4529e6 chore: lint for state hooks (#31088) 2026-01-16 11:58:28 +08:00
E.G
4bff0cd0ab fix: resolve 'Expand all chunks' button not working (#31074)
Co-authored-by: GlobalStar117 <GlobalStar117@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: crazywoola <427733928@qq.com>
2026-01-16 11:34:42 +08:00
64ddcc8960 chore: fix chosen provider id 2026-01-16 11:31:03 +08:00
yyh
c7bca6a3fb fix(skill): restore auto-pin on edit behavior (VS Code style) 2026-01-16 11:26:13 +08:00
yyh
f1ce933b33 fix(skill): address code review issues for tab management
1. Add confirmation dialog when closing dirty tabs
2. Fix file double-click race condition with useDelayedClick hook
3. Fix previewTabId orphan state in closeTab
4. Remove auto-pin on every keystroke (VS Code behavior)
5. Extract shared MenuItem component to eliminate duplication
6. Make nodeId optional when node is provided (reduce props drilling)
2026-01-16 11:20:49 +08:00
yyh
17990512ce fix(skill): add throttle to folder toggle and validate pinTab
- Use es-toolkit throttle with leading edge to prevent folder toggle
  flickering on double-click (3 toggles reduced to 1)
- Add validation in pinTab to check if file exists in openTabIds
2026-01-16 11:20:49 +08:00
yyh
a30fb5909b feat(skill): implement VS Code-style preview/pinned tab management
- Single-click file in tree opens in preview mode (temporary, replaceable)
- Double-click file opens in pinned mode (permanent)
- Preview tabs display with italic filename
- Editing content auto-converts preview tab to pinned
- Double-clicking preview tab header converts to pinned
- Only one preview tab can exist at a time
2026-01-16 11:20:49 +08:00
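The tab-state transitions can be modeled as pure functions. This sketch covers the replace-on-preview and promote-to-pinned rules; the field names are illustrative:

```ts
type TabState = {
  openTabIds: string[]
  previewTabId: string | null // at most one preview tab at a time
}

// Single-click: open as preview, replacing any existing preview tab.
function openPreview(state: TabState, fileId: string): TabState {
  if (state.openTabIds.includes(fileId))
    return state
  const openTabIds = state.previewTabId
    ? state.openTabIds.map(id => (id === state.previewTabId ? fileId : id))
    : [...state.openTabIds, fileId]
  return { openTabIds, previewTabId: fileId }
}

// Double-click or edit: promote the preview tab to a pinned (permanent) tab.
function pinTab(state: TabState, fileId: string): TabState {
  if (state.previewTabId !== fileId)
    return state
  return { ...state, previewTabId: null }
}
```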
3dea5adf5c fix: change caused problem 2026-01-16 11:00:56 +08:00
yyh
5aca563a01 fix: migrations 2026-01-16 10:26:53 +08:00
yyh
bf1ebcdf8f Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-16 10:05:12 +08:00
yyh
3252748345 feat(skill): add oRPC contract and hook for file download URL
Add frontend oRPC integration for the existing backend download URL
endpoint to enable file downloads from the asset tree.
2026-01-16 09:55:17 +08:00
c98870c3f4 refactor: always preserve marketplace search state in URL (#31069)
Co-authored-by: Stephen Zhou <38493346+hyoban@users.noreply.github.com>
2026-01-16 08:52:53 +09:00
b06c7c8f33 ci: disable limit annotation (#31072) 2026-01-15 23:04:26 +08:00
1a2fce7055 ci: eslint annotation (#31056) 2026-01-15 21:49:46 +08:00
yyh
783cdb1357 feat(skill): add inline rename and guide lines to file tree
Add TreeEditInput component for inline file/folder renaming with keyboard
support (Enter to submit, Escape to cancel). Add TreeGuideLines component
to render vertical indent lines based on node depth for better visual
hierarchy in the tree view.

Reorganize file tree components into dedicated `file-tree` subdirectory
for better code organization.
2026-01-15 21:30:02 +08:00
yyh
2de17cb1a4 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-15 20:47:34 +08:00
yyh
3b6946d3da refactor(skill): centralize asset tree data fetching with custom hooks
Extract repeated appId retrieval and tree data fetching patterns into
dedicated hooks (useSkillAssetTreeData, useSkillAssetNodeMap) to reduce
code duplication across 6 components and leverage TanStack Query's
select option for efficient nodeMap computation.
2026-01-15 19:45:33 +08:00
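A sketch of the `select`-based hook described here; `fetchAssetTree` and the node shape are stand-ins:

```ts
import { useQuery } from '@tanstack/react-query'

type AssetNode = { id: string; name: string; children?: AssetNode[] }

// Stand-in for the real fetcher behind the asset tree endpoint.
declare function fetchAssetTree(appId: string): Promise<AssetNode[]>

export function useSkillAssetNodeMap(appId: string) {
  return useQuery({
    queryKey: ['skill-asset-tree', appId],
    queryFn: () => fetchAssetTree(appId),
    // select derives the node map from the cached tree: consumers share one
    // fetch, and the map is recomputed only when the tree data changes.
    select: (tree) => {
      const map = new Map<string, AssetNode>()
      const walk = (nodes: AssetNode[]) => {
        for (const node of nodes) {
          map.set(node.id, node)
          if (node.children)
            walk(node.children)
        }
      }
      walk(tree)
      return map
    },
  })
}
```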
yyh
b8adc8f498 fix(web): memoize skill sidebar menu offset 2026-01-15 19:45:32 +08:00
yyh
ca7c4d2c86 fix(skill): improve accessibility for file tree and tabs
- Convert div with onClick to proper button elements for keyboard access
- Add focus-visible ring styles to all interactive elements
- Add ARIA attributes (role, aria-selected, aria-expanded) to tree nodes
- Add keyboard navigation (Enter/Space) support to tree items
- Mark decorative icons with aria-hidden="true"
- Add missing i18n keys for accessibility labels
- Fix typography: use ellipsis character (…) instead of three dots
2026-01-15 19:45:32 +08:00
d8bafb0d1c refactor(app-asset): remove deprecated file download resource and streamline download URL handling with pre-signed storage 2026-01-15 19:28:15 +08:00
cd0724b827 refactor(app-asset-service): remove unused signed proxy URL generation and improve error handling for download URL 2026-01-15 19:28:15 +08:00
yyh
6e66e2591b feat(skill): disable file tree during mutations
- Add useIsMutating hook to track ongoing mutations
- Apply pointer-events-none and opacity-50 when mutating
- Prevents user interaction during file operations
2026-01-15 18:14:10 +08:00
yyh
fd0556909f fix(skill): default folders to collapsed state on load
- Add openByDefault={false} to Tree component
- react-arborist defaults openByDefault to true, causing all folders
  to be expanded on page refresh
2026-01-15 18:05:42 +08:00
yyh
ac2120da1e refactor(skill): separate DropTip from tree container
- Move DropTip component outside the tree flex container
- Use Fragment to group tree container, DropTip and context menu
- DropTip is now an independent fixed element at the bottom
2026-01-15 18:05:42 +08:00
yyh
f3904a7e39 fix(skill): use dynamic height for file tree to fix scroll issues
- Replace fixed height={1000} with dynamic containerSize.height
- Use useSize hook from ahooks to observe container dimensions
- Fallback to 400px default height for initial render
- Fixes scroll issues when collapsing folders
2026-01-15 18:05:42 +08:00
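Combined with the `openByDefault` fix two commits up, the tree container sketch might look like this:

```tsx
import { useRef } from 'react'
import { useSize } from 'ahooks'
import { Tree } from 'react-arborist'

function FileTreePane({ data }: { data: { id: string; name: string }[] }) {
  const containerRef = useRef<HTMLDivElement>(null)
  // Observe the container so the tree height tracks the layout.
  const containerSize = useSize(containerRef)

  return (
    <div ref={containerRef} className="h-full">
      <Tree
        data={data}
        openByDefault={false} // folders collapsed on first load
        height={containerSize?.height ?? 400} // fallback before first measure
      />
    </div>
  )
}
```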
yyh
b3923ec3ca fix: translations 2026-01-15 18:05:41 +08:00
9ffdad6465 fix: clicking inside tool caused blur 2026-01-15 17:58:38 +08:00
lif
2b021e8752 fix: remove hardcoded 48-character limit from text inputs (#30156)
Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2026-01-15 17:43:00 +08:00
yyh
713e040481 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-15 17:26:58 +08:00
yyh
f58f36fc8f feat(skill): add file right-click/more menu and refactor naming
- Add right-click context menu and '...' more button for files
  - Files now support Rename and Delete operations
  - Created file-node-menu.tsx for file-specific menu

- Refactor component naming for consistency
  - file-item-menu.tsx -> file-node-menu.tsx (unify 'node' terminology)
  - file-operations-menu.tsx -> folder-node-menu.tsx (clarify folder menu)
  - file-tree-context-menu.tsx -> tree-context-menu.tsx (simplify)
  - file-tree-node.tsx -> tree-node.tsx (simplify)
  - files.tsx -> file-tree.tsx (more descriptive)
  - Renamed internal components: FileTreeNode -> TreeNode, Files -> FileTree

- Add context menu node highlight
  - When right-clicking a node, it now shows hover highlight
  - Subscribed to contextMenu.nodeId in TreeNode component
2026-01-15 17:26:12 +08:00
195cd2c898 chore: show line numbers in skill editor 2026-01-15 17:21:12 +08:00
6bb09dc58c feat(app-assets): add file download functionality with pre-signed URLs and enhance asset management 2026-01-15 17:20:10 +08:00
33f3374ea6 refactor(sandbox): simplify sandbox_layer by removing ArchiveSandboxStorage and updating event handling 2026-01-15 17:20:10 +08:00
41baaca21d feat(sandbox): integrate ArchiveSandboxStorage into AdvancedChat and Workflow app generators 2026-01-15 17:20:10 +08:00
d650cde323 feat: skill editor choose tool 2026-01-15 17:16:01 +08:00
yyh
e651c6cacf fix: css 2026-01-15 16:45:40 +08:00
yyh
eab395f58a refactor: sync file tree open state 2026-01-15 16:39:22 +08:00
yyh
2f92957e15 fix: css 2026-01-15 16:14:51 +08:00
yyh
7bc1390366 feat(skill-editor): enhance + button with full operations and smart target folder
- Refactor sidebar-search-add to reuse useFileOperations hook
- Add getTargetFolderIdFromSelection utility for smart folder targeting
- Expand + button menu: New File, New Folder, Upload File, Upload Folder
- Target folder based on selection: file's parent, folder itself, or root
2026-01-15 16:10:01 +08:00
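The smart-target rule reduces to a small pure function; this sketch assumes a simplified selection model:

```ts
type TreeSelection =
  | { kind: 'file'; id: string; parentId: string | null }
  | { kind: 'folder'; id: string }
  | null

// Target folder for the + button: a file's parent, the folder itself,
// or the root (null) when nothing is selected.
function getTargetFolderIdFromSelection(selection: TreeSelection): string | null {
  if (!selection)
    return null
  if (selection.kind === 'folder')
    return selection.id
  return selection.parentId
}
```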
e91fb94d0e chore: placeholder 2026-01-15 16:08:26 +08:00
yyh
5c03a2e251 refactor(skill-editor): extract hooks and utils into separate directories
- Extract useFileOperations hook to hooks/use-file-operations.ts
- Move tree utilities to utils/tree-utils.ts
- Move file utilities to utils/file-utils.ts (renamed from utils.ts)
- Remove unnecessary JSDoc comments throughout components
- Simplify type.ts to only contain local type definitions
- Clean up store/index.ts by removing verbose comments
2026-01-15 16:00:42 +08:00
yyh
1741fcf84d feat(skill-editor): add rename and delete operations for folder context menu
- Add Rename using react-arborist native inline editing (node.edit())
- Add Delete with Confirm modal and automatic tab cleanup
- Add getAllDescendantFileIds utility for finding files to close on delete
- Add i18n strings for rename/delete operations (en-US, zh-Hans)
2026-01-15 16:00:41 +08:00
yyh
52215e9166 fix(prompt-editor): show border on hover for better scroll boundary visibility
Add hover state border to prompt editor so users can see the boundary
while scrolling even when the editor is not focused.
2026-01-15 16:00:41 +08:00
4cfc135652 feat: prompt editor support line num 2026-01-15 15:56:49 +08:00
yyh
ff632bf9b8 feat(workflow): persist view tab state to URL search params
Use nuqs to sync graph/skill view selection to URL, enabling
shareable links and browser history navigation. Hoists
SkillEditorProvider to maintain state across view switches.
2026-01-15 15:09:36 +08:00
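With nuqs, the view-tab sync is roughly this (the hook name and key are assumptions):

```ts
import { parseAsStringLiteral, useQueryState } from 'nuqs'

const VIEW_TYPES = ['graph', 'skill'] as const

// ?view=skill survives reloads, back/forward navigation, and copied links.
export function useViewType() {
  return useQueryState('view', parseAsStringLiteral(VIEW_TYPES).withDefault('graph'))
}
```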
yyh
ce9ed88b03 refactor(skill-editor): hoist SkillEditorProvider for state persistence
Move SkillEditorProvider from SkillMain to WorkflowAppWrapper so that
store state persists across view switches between Graph and Skill views.
Also add URL query state for view type using nuqs.
2026-01-15 15:09:12 +08:00
yyh
e6a4a08120 refactor(skill-editor): simplify code by extracting MenuItem component and removing dead code
- Extract reusable MenuItem component for menu buttons in FileOperationsMenu
- Remove unused handleUploadFileClick/handleUploadFolderClick callbacks
- Remove unused handleDropdownClose callback, inline directly
- Remove unused _fileId parameter from revealFile function
- Simplify toOpensObject using Object.fromEntries
2026-01-15 15:05:43 +08:00
yyh
388ee087c0 feat(skill-editor): add folder context menu with file operations
Add right-click context menu and "..." dropdown button for folders in
the file tree, enabling file operations within any folder:

- New File: Create empty file via Blob upload
- New Folder: Create subfolder
- Upload File: Upload multiple files to folder
- Upload Folder: Upload entire folder structure preserving hierarchy

Implementation includes:
- FileOperationsMenu: Shared menu component for both triggers
- FileTreeContextMenu: Right-click menu with absolute positioning
- FileTreeNode: Added context menu and dropdown button for folders
- Store slice for context menu state management
- i18n strings for en-US and zh-Hans
2026-01-15 14:56:31 +08:00
2fb8883918 feat: split different filetypes 2026-01-15 14:53:00 +08:00
yyh
28ccd42a1c refactor(skill-editor): simplify SkillEditorProvider
Remove verbose comments and appId reset logic since parent component
remounts on appId change. Consolidate imports and use function declaration.
2026-01-15 14:10:41 +08:00
yyh
fcd814a2c3 refactor(skill-editor): simplify state management and remove dead code
- Replace useRef pattern with useMemo for store creation in context.tsx
- Remove unused extension prop from EditorTabItem
- Fix useMemo dependency warnings in editor-tabs.tsx and skill-doc-editor.tsx
- Add proper OnMount type for Monaco editor instead of any
- Delete unused file-item.tsx and fold-item.tsx components
- Remove unused getExtension and fromOpensObject utilities from type.ts
- Refactor auto-reveal effect in files.tsx for better readability
2026-01-15 14:02:15 +08:00
yyh
fe17cbc1a8 feat(skill-editor): implement file tree, tab management, and dirty state tracking
Implement MVP features for skill editor based on design doc:
- Add Zustand store with Tab, FileTree, and Dirty slices
- Rewrite file tree using react-arborist for virtual scrolling
- Implement Tab↔FileTree sync with auto-reveal on tab activation
- Add upload functionality (new folder, upload file)
- Implement Monaco editor with dirty state tracking and Ctrl+S save
- Add i18n translations (en-US and zh-Hans)
2026-01-15 13:53:19 +08:00
63b3e71909 refactor(sandbox): redesign sandbox_layer & reorganize import paths 2026-01-15 13:22:49 +08:00
c1c8b6af44 chore: remove duplicate secret field in CliApiSession 2026-01-15 12:10:53 +08:00
3bd434ddf2 chore: ui enhancement 2026-01-15 11:35:48 +08:00
834a5df580 fix: switch zindex 2026-01-15 11:31:08 +08:00
e40c2354d5 chore: remove useless props 2026-01-15 11:24:59 +08:00
b0eca12d88 feat: tabs 2026-01-15 11:22:43 +08:00
yyh
3a86983207 refactor(web): nest sandbox provider contracts 2026-01-15 11:04:43 +08:00
f461ddeb7e missing files 2026-01-15 11:04:15 +08:00
7b534baf15 chore: file type utils 2026-01-15 11:02:07 +08:00
74d8bdd3a7 chore: search ui 2026-01-15 11:02:07 +08:00
yyh
657739d48b Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/models/model.py
#	web/contract/router.ts
2026-01-15 10:59:45 +08:00
yyh
f8b27dd662 fix(web): accept 2xx status codes in upload function for HTTP semantics
The upload helper was hardcoded to only accept HTTP 201, which broke
PUT requests that return 200. This aligns with standard HTTP semantics
where POST returns 201 Created and PUT returns 200 OK.
2026-01-15 10:54:42 +08:00
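A sketch of the corrected status check in an XHR-based upload helper; the real helper's signature may differ:

```ts
function upload(url: string, method: 'POST' | 'PUT', body: FormData): Promise<void> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest()
    xhr.open(method, url)
    xhr.onload = () => {
      // POST typically answers 201 Created while PUT answers 200 OK, so
      // accept the whole 2xx range instead of hardcoding 201.
      if (xhr.status >= 200 && xhr.status < 300)
        resolve()
      else
        reject(new Error(`upload failed: ${xhr.status}`))
    }
    xhr.onerror = () => reject(new Error('network error'))
    xhr.send(body)
  })
}
```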
yyh
18c7f4698a feat(web): add oRPC contracts and service hooks for app asset API
- Add TypeScript types for app asset management (types/app-asset.ts)
- Add oRPC contract definitions with nested router pattern (contract/console/app-asset.ts)
- Add React Query hooks for all asset operations (service/use-app-asset.ts)
- Integrate app asset contracts into console router

Endpoints covered: tree, createFolder, createFile, getFileContent,
updateFileContent, deleteNode, renameNode, moveNode, reorderNode, publish
2026-01-15 09:50:05 +08:00
6cb8d03bf6 feat(sandbox): enhance SandboxLayer with app_id handling and storage integration
- Introduce _app_id attribute to store application ID from system variables
- Add _get_app_id method to retrieve and validate app_id
- Update on_graph_start to log app_id during sandbox initialization
- Integrate ArchiveSandboxStorage for persisting and restoring sandbox files
- Ensure proper error handling for sandbox file operations
2026-01-15 00:28:41 +08:00
94ff904a04 feat(sandbox): add AppAssetsInitializer and refactor VMFactory to VMBuilder
- Add AppAssetsInitializer to load published app assets into sandbox
- Refactor VMFactory.create() to VMBuilder with builder pattern
- Extract SandboxInitializer base class and DifyCliInitializer
- Simplify SandboxLayer constructor (remove options/environments params)
- Fix circular import in sandbox module by removing eager SandboxBashTool export
- Update SandboxProviderService to return VMBuilder instead of VirtualEnvironment
2026-01-15 00:13:52 +08:00
a0c388f283 refactor(sandbox): extract connection helpers and move run_command to helper module
- Add helpers.py with connection management utilities:
    - with_connection: context manager for connection lifecycle
    - submit_command: execute command and return CommandFuture
    - execute: run command with auto connection, raise on failure
    - try_execute: run command with auto connection, return result

  - Add CommandExecutionError to exec.py for typed error handling
    with access to exit_code, stderr, and full result

  - Remove run_command method from VirtualEnvironment base class
    (now available as submit_command helper)

  - Update all call sites to use new helper functions:
    - sandbox/session.py
    - sandbox/storage/archive_storage.py
    - sandbox/bash/bash_tool.py
    - workflow/nodes/command/node.py

  - Add comprehensive unit tests for helpers with connection reuse
2026-01-15 00:13:52 +08:00
yyh
31427e9c42 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 21:15:23 +08:00
yyh
384b99435b Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/.env.example
#	api/uv.lock
2026-01-14 21:14:36 +08:00
425d182f21 refactor: move app_asset_tree module and update imports in app_asset and app_asset_service 2026-01-14 20:31:40 +08:00
4394ba1fe1 feat(skill): implement app asset management features including folder and file operations, error handling, and database migration for app asset drafts 2026-01-14 20:25:17 +08:00
be5a4cf5e3 temp fix: tab change caused the nodes to be emptied 2026-01-14 17:20:40 +08:00
yyh
d17a92f713 refactor(web): split sandbox provider contracts into separate file
Move sandbox provider related contracts from contract/console.ts
to contract/console/sandbox-provider.ts for better organization
2026-01-14 16:46:04 +08:00
5ac2230c5d feat: sandbox storage 2026-01-14 16:31:24 +08:00
ab531d946e feat: add main skill struct 2026-01-14 16:28:14 +08:00
1a8fd08563 chore: add list define and mock data 2026-01-14 16:28:14 +08:00
yyh
c6ddf89980 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 16:24:47 +08:00
yyh
71c39ae583 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 16:23:57 +08:00
yyh
7209ef4aa7 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 16:16:28 +08:00
6b55e6781f feat: graph skill main struct 2026-01-14 15:41:02 +08:00
yyh
4887c9ea6f refactor(web): simplify MCP tool availability context and hook
- Add useMemo to prevent unnecessary re-renders of context value
- Extract ProviderProps type for better readability
- Convert arrow functions to standard function declarations
- Remove unused versionSupported/sandboxEnabled from hook return type
2026-01-14 14:15:07 +08:00
yyh
18170a1de5 feat(web): add sandbox mode check for MCP tool availability
Extend MCP tool availability context to include sandbox mode check
alongside version support. MCP tools are now blocked when sandbox
is disabled, with appropriate tooltip messages for each blocking
condition.
2026-01-14 14:01:56 +08:00
yyh
7ce144f493 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 13:40:39 +08:00
yyh
2279b605c6 refactor: import SandboxProvider type from @/types and remove retry:0
Move type imports to @/types/sandbox-provider instead of re-exporting
from service file. Remove unnecessary retry:0 options to use React
Query's default retry behavior.
2026-01-14 10:10:04 +08:00
yyh
3b78f9c2a5 refactor: migrate sandbox-provider API to ORPC
Replace manual fetch calls in use-sandbox-provider.ts with typed ORPC
contracts and client. Adds type definitions to types/sandbox-provider.ts
and registers contracts in the console router for consistent API handling.
2026-01-14 10:07:27 +08:00
yyh
7c029ce808 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/services/workflow_service.py
2026-01-14 09:54:07 +08:00
f28ded8455 feat(agent-sandbox): new tool resolver and bash execution implementation 2026-01-13 18:16:48 +08:00
yyh
c6ba51127f fix(sandbox-provider): allow admin role to manage sandbox providers
Change permission check from isCurrentWorkspaceOwner to
isCurrentWorkspaceManager so both owner and admin roles can
configure sandbox providers.
2026-01-13 17:17:36 +08:00
yyh
5675a44ffd fix(sandbox-provider): use Loading component and add daytona doc link
- Replace hardcoded "Loading..." text with Loading component
- Add daytona documentation link to PROVIDER_DOC_LINKS
2026-01-13 16:37:58 +08:00
yyh
48295e5161 refactor(sandbox-provider): extract shared constants and remove redundant cache invalidation
- Extract PROVIDER_ICONS and PROVIDER_DESCRIPTION_KEYS to constants.ts
- Create shared ProviderIcon component with size and withBorder props
- Remove manual invalidateList() calls from config-modal and switch-modal
  (mutations already invalidate cache in onSuccess)
- Remove unused useInvalidSandboxProviderList hook
2026-01-13 16:18:08 +08:00
yyh
ffc39b0235 refactor: rename ACCOUNT_SETTING_TAB.PROVIDER to MODEL_PROVIDER
Rename the constant for clarity and consistency with the new
sandbox-provider tab naming convention. Update all references
across the codebase to use the new constant name.
2026-01-13 15:07:04 +08:00
yyh
f72f58dbc4 fix: loading state 2026-01-13 14:38:19 +08:00
yyh
9d0f4a2152 fix(sandbox-provider): prevent permission hint flash on page load
Use strict equality check to only show no-permission message when
isCurrentWorkspaceOwner is explicitly false, not undefined.
2026-01-13 14:23:52 +08:00
yyh
1ed4ab4299 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-13 14:19:04 +08:00
yyh
3f69d348a1 chore: add translations 2026-01-13 14:05:41 +08:00
yyh
63fff151c7 fix: provider card style 2026-01-13 13:50:28 +08:00
yyh
9920e0b89a fix(sandbox-provider): hide config controls in read-only mode
Hide config button, divider, and enable button for non-owner users.
Adjust right padding to 24px in read-only mode for proper alignment.
2026-01-13 13:32:18 +08:00
yyh
3042f29c15 fix(sandbox-provider): update switch modal warning style to match design
Replace yellow warning box with red text for destructive emphasis.
Bold the provider name in confirmation text using Trans component.
2026-01-13 13:23:03 +08:00
yyh
99273e1118 style: provider card 2026-01-13 13:18:09 +08:00
yyh
041dbd482d fix(sandbox-provider): use i18n for provider card descriptions
Use PROVIDER_DESCRIPTION_KEYS mapping to display localized descriptions
instead of raw backend data, ensuring descriptions match Figma design.
2026-01-13 11:43:49 +08:00
yyh
b4aa1de10a fix(sandbox-provider): update provider descriptions to match Figma design
Update E2B, Daytona, and Docker descriptions with unique copy from design:
- E2B: "E2B Gives AI Agents Secure Computers with Real-World Tools."
- Daytona: "Deploy AI code with confidence using Daytona's lightning-fast infrastructure."
- Docker: "The Easiest Way to Build, Run, and Secure Agents."
2026-01-13 11:41:20 +08:00
yyh
c5a9b98cbe refactor(sandbox-provider): add centralized query keys management
Add sandboxProviderQueryKeys object for type-safe and maintainable
query key management, following the pattern used in use-common.ts.
2026-01-13 11:39:01 +08:00
yyh
21f47fbe58 fix(sandbox-provider): fix config modal header spacing and icon style
- Use custom header with 8px gap between title and subtitle
- Fix icon overflow-clip for proper border-radius
2026-01-13 11:12:51 +08:00
yyh
49f115dce3 fix(sandbox-provider): fix config modal subtitle icon to fill container 2026-01-13 11:11:03 +08:00
yyh
a81d0327d2 feat(sandbox-provider): update UI to match Figma design
- Update settings icon to RiEqualizer2Line
- Add 4px rounded container for provider icons in config modal
- Update section titles to uppercase style
- Change switch modal confirm button to warning variant
- Add i18n keys for setAsActive, readDocLink, securityTip
2026-01-13 11:04:11 +08:00
yyh
9eafe982ee fix: migration 2026-01-13 10:21:38 +08:00
yyh
a46bfdd0fc Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-13 10:15:59 +08:00
16f26c4f99 feat(cli_api): implement CLI API for external sandbox interactions, including session management and request handling 2026-01-12 20:57:07 +08:00
42fd0a0a62 refactor(sandbox): simplify command execution by using shlex for command parsing and improve output formatting 2026-01-12 16:35:09 +08:00
b78439b334 refactor(llm): update model features handling and change agent strategy to FUNCTION_CALLING 2026-01-12 15:52:26 +08:00
1082d73355 refactor(sandbox): remove unused SANDBOX_WORK_DIR constant and update bash command descriptions for clarity 2026-01-12 15:02:30 +08:00
201a18d6ba refactor(virtual_environment): add cwd parameter to execute_command method across all providers for improved command execution context 2026-01-12 14:20:03 +08:00
f990f4a8d4 refactor(sandbox): update DIFY_CLI_PATH and DIFY_CLI_CONFIG_PATH to use SANDBOX_WORK_DIR and enhance error handling in SandboxSession 2026-01-12 14:07:54 +08:00
e7c89b6153 refactor(sandbox): update imports and remove unused bash tool files, adjust DIFY_CLI_CONFIG_PATH 2026-01-12 13:36:19 +08:00
3e49d6b900 refactor: using initializer to replace hardcoded dify cli initialization 2026-01-12 12:13:56 +08:00
8aaff7fec1 refactor(sandbox): move VMFactory and related classes, update imports to reflect new structure 2026-01-12 12:01:21 +08:00
51ac23c9f1 refactor(sandbox): reorganize sandbox-related imports and rename SandboxFactory to VMFactory for clarity 2026-01-12 02:07:31 +08:00
9dd0361d0e refactor: rename new runtime as sandbox feature 2026-01-12 01:53:39 +08:00
3d2840edb6 feat: sandbox session and dify cli 2026-01-12 01:49:08 +08:00
ce0a59b60d feat: add os field to virtual environment 2026-01-12 01:26:55 +08:00
2d8acf92f0 refactor(sandbox): remove Chinese translation for bash command execution description in SandboxBashTool 2026-01-12 01:16:53 +08:00
bc2ffa39fc refactor(sandbox): remove unused bash tool methods and streamline sandbox session handling in LLMNode 2026-01-12 00:09:40 +08:00
390c805ef4 feat(sandbox): implement sandbox runtime checks and integrate bash tool invocation in LLMNode 2026-01-11 22:56:05 +08:00
5b753dfd6e fix(sandbox): update FIXME comments to specify sandbox context for runtime config checks 2026-01-09 18:12:36 +08:00
5c8b80b01a feat(app): update default runtime mode and adjust runtime selection component styling 2026-01-09 18:12:36 +08:00
95d62039b1 feat(ui): change runtime selection component 2026-01-09 18:12:36 +08:00
78acfb0040 feat(sandbox): add command to setup system-level sandbox provider configuration 2026-01-09 18:12:35 +08:00
eb821efda7 refactor(encryption): update encryption utility references and clean up sandbox provider service logic 2026-01-09 18:12:35 +08:00
925825a41b refactor(encryption): using oauth encryption as a general encryption util. 2026-01-09 18:12:34 +08:00
07ff8df58d Merge branch 'main' into feat/support-agent-sandbox 2026-01-09 16:20:33 +08:00
0a0f02c0c6 chore(migrations): re-arrange migration of "add llm generation details table" 2026-01-09 15:55:25 +08:00
d2f41ae9ef Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-09 15:37:29 +08:00
5a4f5f54a7 chore: apply ruff 2026-01-09 14:47:21 +08:00
eabfa8f3af fix(migrations): update down_revision for sandbox_providers migration 2026-01-09 14:45:56 +08:00
1557f48740 Merge branch 'feat/agent-node-v2' into feat/support-agent-sandbox 2026-01-09 14:19:27 +08:00
00d787a75b feat(workflows): add deployment workflow for agent development
- Created a new GitHub Actions workflow to automate deployment for the agent development branch.
- Configured the workflow to trigger upon successful completion of the "Build and Push API & Web" workflow.
- Implemented SSH deployment steps using appleboy/ssh-action for secure server updates.
2026-01-09 13:11:37 +08:00
3b454fa95a refactor(sandbox-manager): implement sharded locking for sandbox management
- Enhanced the SandboxManager to use a sharded locking mechanism for improved concurrency and performance.
- Replaced the global lock with shard-specific locks, allowing for lock-free reads and reducing contention.
- Updated methods for registering, retrieving, unregistering, and counting sandboxes to work with the new sharded structure.
- Improved documentation within the class to clarify the purpose and functionality of the sharding approach.
2026-01-09 12:13:41 +08:00
0da4d64d38 feat(sandbox-layer): refactor sandbox management and integrate with SandboxManager
- Simplified the SandboxLayer initialization by removing unused parameters and consolidating sandbox creation logic.
- Integrated SandboxManager for better lifecycle management of sandboxes during workflow execution.
- Updated error handling to ensure proper initialization and cleanup of sandboxes.
- Enhanced CommandNode to retrieve sandboxes from SandboxManager, improving sandbox availability checks.
- Added unit tests to validate the new sandbox management approach and ensure robust error handling.
2026-01-09 11:23:03 +08:00
b09a831d15 feat: add tenant_id support to Sandbox and VirtualEnvironment initialization 2026-01-08 16:19:29 +08:00
94dbda503f refactor(llm-panel): update layout and enhance Max Iterations component
- Adjusted padding in the LLM panel for better visual alignment.
- Refactored the Max Iterations component to accept a className prop for flexible styling.
- Maintained the structure of advanced settings while ensuring consistent rendering of fields.
2026-01-08 14:15:58 +08:00
beefff3d48 feat(docker-demuxer): implement producer-consumer pattern for stream demultiplexing
- Introduced threading to handle Docker's stdout/stderr streams, improving thread safety and preventing race conditions.
- Replaced buffer-based reading with queue-based reading for stdout and stderr.
- Updated read methods to handle errors and end-of-stream conditions more gracefully.
- Enhanced documentation to reflect changes in the demuxing process.
2026-01-08 14:15:41 +08:00
c2e5081437 feat(llm-panel): collapse panel with advanced settings and max iterations
- Introduced a collapsible section for advanced settings in the LLM panel.
- Added Max Iterations component with conditional rendering based on the new hideMaxIterations prop.
- Updated context field and vision configuration to be part of the advanced settings.
- Added new translation key for advanced settings in the workflow localization file.
2026-01-08 12:16:18 +08:00
786c3e4137 chore: apply ruff 2026-01-08 11:14:44 +08:00
0d33714f28 fix(command-node): enhance error message formatting in command execution
- Improved error message handling by assigning the stderr output to a variable for better readability.
- Ensured consistent error reporting when a command fails, maintaining clarity in the output.
2026-01-08 11:14:37 +08:00
1fbba38436 fix(command-node): improve error reporting in command execution
- Updated error handling to provide detailed stderr output when a command fails.
- Streamlined working directory and command rendering by combining operations into single lines.
2026-01-08 11:14:23 +08:00
15c3d712d3 feat: sandbox provider configuration 2026-01-08 11:04:12 +08:00
5b01f544d1 refactor(command-node): streamline command execution and directory checks
- Simplified the command execution logic by removing unnecessary shell invocations.
- Enhanced working directory validation by directly using the `test` command.
- Improved command parsing with `shlex.split` for better handling of raw commands.
2026-01-08 11:04:11 +08:00
fe4c591cfd feat(daytona-environment): enhance command management with threading support and default API URL 2026-01-07 18:47:22 +08:00
0cd613ae52 fix(docker-daemon): update default Docker socket to use Unix socket 2026-01-07 18:35:49 +08:00
0082f468b4 Refactor code structure for improved readability and maintainability 2026-01-07 18:33:13 +08:00
eec57e84e4 Merge branch 'main' into feat/agent-node-v2 2026-01-07 17:34:23 +08:00
cd0f41a3e0 fix(command-node): improve working directory handling in CommandNode
- Added checks to verify the existence of the specified working directory before executing commands.
- Updated command execution logic to conditionally change the working directory if provided.
- Included FIXME comments to address future enhancements for native cwd support in VirtualEnvironment.run_command.
2026-01-07 15:30:59 +08:00
094c9fd802 fix: command node single debug run
- Added FIXME comments to indicate the need for unifying runtime config checking in AdvancedChatAppGenerator and WorkflowAppGenerator.
- Introduced sandbox management in WorkflowService with proper error handling for sandbox release.
- Enhanced runtime feature handling in the workflow execution process.
2026-01-07 15:22:12 +08:00
1584a78fc9 chore: add model name in detail 2026-01-07 15:05:18 +08:00
1a203031e0 fix(virtual-env): fix Docker stdout/stderr demuxing and exit code parsing
- Add _DockerDemuxer to properly separate stdout/stderr from multiplexed stream
- Fix binary header garbage in Docker exec output (tty=False 8-byte header)
- Fix LocalVirtualEnvironment.get_command_status() to use os.WEXITSTATUS()
- Update tests to use Transport API instead of raw file descriptors
2026-01-07 12:20:07 +08:00
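The demuxing fix targets Docker's multiplexed attach stream (tty=false), whose frames carry an 8-byte header: a stream-type byte, three padding bytes, then a big-endian uint32 payload length. The backend fix is Python (`_DockerDemuxer`); this TypeScript sketch of the same wire format is only illustrative:

```ts
// Split a multiplexed Docker exec buffer into stdout and stderr chunks.
// Frame layout: byte 0 = stream type (0 stdin, 1 stdout, 2 stderr),
// bytes 1-3 = padding, bytes 4-7 = payload size (big-endian uint32).
function demux(buffer: Uint8Array): { stdout: Uint8Array[]; stderr: Uint8Array[] } {
  const stdout: Uint8Array[] = []
  const stderr: Uint8Array[] = []
  const view = new DataView(buffer.buffer, buffer.byteOffset, buffer.byteLength)
  let offset = 0
  while (offset + 8 <= buffer.length) {
    const streamType = buffer[offset]
    const size = view.getUint32(offset + 4, false) // big-endian
    const payload = buffer.subarray(offset + 8, offset + 8 + size)
    if (streamType === 2)
      stderr.push(payload)
    else
      stdout.push(payload)
    offset += 8 + size
  }
  return { stdout, stderr }
}
```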
05c3344554 feat: Future interface for an easy way to use VM.execute_command 2026-01-07 11:57:00 +08:00
888be71639 feat: command node output variables 2026-01-07 11:15:52 +08:00
3902929d9f feat: new runtime options 2026-01-07 00:01:55 +08:00
1c7c475c43 feat: add Command node support
- Introduced Command node type in workflow with associated UI components and translations.
- Enhanced SandboxLayer to manage sandbox attachment for Command nodes during execution.
- Updated various components and constants to integrate Command node functionality across the workflow.
2026-01-06 19:30:38 +08:00
cef7fd484b chore: add trace metadata and streaming icon 2026-01-06 16:30:33 +08:00
caabca3f02 feat: sandbox layer for workflow execution 2026-01-06 15:47:20 +08:00
36b7075cf4 Merge feat/llm-node-support-tools and fix type errors
- Merge origin/feat/llm-node-support-tools branch
- Fix unused variable tenant_id in dsl.py
- Add None checks for app and workflow in dsl.py
- Add type ignore for e2b_code_interpreter import

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 18:32:15 +08:00
f3761c26e9 Merge remote-tracking branch 'origin/main' into feat/llm-node-support-tools 2026-01-05 18:17:05 +08:00
43daf4f82c refactor: rename construct_environment method to _construct_environment for consistency across virtual environment providers 2026-01-05 18:13:13 +08:00
932be0ad64 feat: session management for InnerAPI&VM 2026-01-05 18:13:13 +08:00
81547c5981 feat: add tests for QueueTransportReadCloser to handle blocking reads and first chunk returns 2026-01-04 17:58:04 +08:00
a911b268aa feat: improve read behavior in QueueTransportReadCloser to handle initial data wait and subsequent immediate returns 2026-01-04 17:58:04 +08:00
dc8a618b6a feat: add think start/end tags 2026-01-04 11:09:43 +08:00
f3e7fea628 feat: add tool call time 2026-01-04 10:29:02 +08:00
926349b1f8 feat: transform tool file message for external access 2026-01-02 15:23:16 +08:00
ec29c24916 feat: enhance QueueTransportReadCloser to handle reading with available data and improve EOF handling 2026-01-02 15:03:17 +08:00
3842eade67 feat: add API endpoint to fetch list of available tools and corresponding request model 2026-01-02 15:00:42 +08:00
cf7e2d5d75 feat: add unit tests for transport classes including queue, pipe, and socket transports 2026-01-01 18:57:03 +08:00
2673fe05a5 feat: introduce TransportEOFError for handling closed transport scenarios and update transport classes to raise it 2026-01-01 18:46:08 +08:00
180fdffab1 feat: update E2BEnvironment options to include default template, list file depth, and API URL 2025-12-31 18:29:22 +08:00
62e422f75a feat: add NotSupportedOperationError and update E2BEnvironment to raise it for unsupported command status retrieval 2025-12-31 18:09:14 +08:00
41565e91ed feat: add support for passing environment variables to E2B sandbox 2025-12-31 18:07:43 +08:00
c9610e9949 feat: implement transport abstractions for virtual environments and add E2B environment provider 2025-12-31 17:51:38 +08:00
29dc083d8d feat: enhance DockerDaemonEnvironment with options handling and default values 2025-12-31 16:19:47 +08:00
f679065d2c feat: extend construct_environment method to accept environments parameter in virtual environment classes 2025-12-30 21:07:16 +08:00
0a97e87a8e docs: clarify usage of close() method in PipeTransport docstring 2025-12-30 20:58:51 +08:00
4d81455a83 fix: correct PipeTransport file descriptor assignments and architecture matching case sensitivity 2025-12-30 20:54:39 +08:00
39091fe4df feat: enhance command execution and status retrieval in virtual environments with transport abstractions 2025-12-30 19:37:30 +08:00
bac5245cd0 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2025-12-30 19:11:29 +08:00
274f9a3f32 Refactor code structure for improved readability and maintainability 2025-12-30 16:31:34 +08:00
a513ab9a59 feat: implement DSL prediction API and virtual environment base classes 2025-12-30 15:24:54 +08:00
152fd52cd7 [autofix.ci] apply automated fixes 2025-12-30 02:23:25 +00:00
ccabdbc83b Merge branch 'main' into feat/agent-node-v2 2025-12-30 10:20:42 +08:00
56c8221b3f chore: remove frontend changes 2025-12-30 10:19:40 +08:00
d132abcdb4 merge main 2025-12-29 15:55:45 +08:00
d60348572e feat: llm node support tools 2025-12-29 14:55:26 +08:00
f55faae31b chore: strip reasoning from chatflow answers and persist generation details 2025-12-25 13:59:38 +08:00
0cff94d90e Merge branch 'main' into feat/llm-node-support-tools 2025-12-25 13:45:49 +08:00
7fc25cafb2 feat: basic app add thought field 2025-12-25 10:28:21 +08:00
a7859de625 feat: llm node support tools 2025-12-24 14:15:55 +08:00
047ea8c143 chore: improve type checking 2025-12-18 10:09:31 +08:00
f54b9b12b0 feat: add process data 2025-12-17 17:34:02 +08:00
cb99b8f04d chore: handle migrations 2025-12-17 15:59:09 +08:00
7c03bcba2b Merge branch 'main' into feat/agent-node-v2 2025-12-17 15:55:27 +08:00
92fa7271ed refactor(llm node): remove unused args 2025-12-17 15:42:23 +08:00
d3486cab31 refactor(llm node): tool call tool result entity 2025-12-17 10:30:21 +08:00
dd0a870969 Merge branch 'main' into feat/agent-node-v2 2025-12-16 15:17:29 +08:00
0c4c268003 chore: fix ci issues 2025-12-16 15:14:42 +08:00
ff57848268 [autofix.ci] apply automated fixes 2025-12-15 07:29:20 +00:00
d223fee9b9 Merge branch 'main' into feat/agent-node-v2 2025-12-15 15:26:48 +08:00
ad18d084f3 feat: add sequence output variable. 2025-12-15 14:59:06 +08:00
9941d1f160 feat: add llm log metadata 2025-12-15 14:18:53 +08:00
13fa56b5b1 feat: add tracing metadata 2025-12-12 16:24:49 +08:00
9ce48b4dc4 fix: llm generation variable 2025-12-12 11:08:49 +08:00
abb2b860f2 chore: remove unused changes 2025-12-10 15:04:19 +08:00
930c36e757 fix: llm detail store 2025-12-09 20:56:54 +08:00
2d2ce5df85 feat: generation stream output. 2025-12-09 16:22:17 +08:00
2b23c43434 feat: add agent package 2025-12-09 11:36:47 +08:00
338 changed files with 25021 additions and 4955 deletions

View File

@@ -16,14 +16,14 @@ jobs:
       - name: Check Docker Compose inputs
         id: docker-compose-changes
-        uses: tj-actions/changed-files@v46
+        uses: tj-actions/changed-files@v47
         with:
           files: |
             docker/generate_docker_compose
             docker/.env.example
             docker/docker-compose-template.yaml
             docker/docker-compose.yaml
-      - uses: actions/setup-python@v5
+      - uses: actions/setup-python@v6
         with:
           python-version: "3.11"

View File

@@ -112,7 +112,7 @@ jobs:
             context: "web"
     steps:
       - name: Download digests
-        uses: actions/download-artifact@v4
+        uses: actions/download-artifact@v7
         with:
           path: /tmp/digests
           pattern: digests-${{ matrix.context }}-*

View File

@@ -19,7 +19,7 @@ jobs:
       github.event.workflow_run.head_branch == 'deploy/agent-dev'
     steps:
       - name: Deploy to server
-        uses: appleboy/ssh-action@v0.1.8
+        uses: appleboy/ssh-action@v1
         with:
           host: ${{ secrets.AGENT_DEV_SSH_HOST }}
           username: ${{ secrets.SSH_USER }}

View File

@@ -16,7 +16,7 @@ jobs:
       github.event.workflow_run.head_branch == 'deploy/dev'
     steps:
       - name: Deploy to server
-        uses: appleboy/ssh-action@v0.1.8
+        uses: appleboy/ssh-action@v1
         with:
           host: ${{ secrets.SSH_HOST }}
           username: ${{ secrets.SSH_USER }}

View File

@@ -20,7 +20,7 @@ jobs:
       )
     steps:
       - name: Deploy to server
-        uses: appleboy/ssh-action@v0.1.8
+        uses: appleboy/ssh-action@v1
         with:
           host: ${{ secrets.HITL_SSH_HOST }}
           username: ${{ secrets.SSH_USER }}

View File

@@ -18,7 +18,7 @@ jobs:
       pull-requests: write
     steps:
-      - uses: actions/stale@v5
+      - uses: actions/stale@v10
         with:
           days-before-issue-stale: 15
           days-before-issue-close: 3

View File

@@ -65,6 +65,9 @@ jobs:
     defaults:
       run:
         working-directory: ./web
+    permissions:
+      checks: write
+      pull-requests: read
     steps:
       - name: Checkout code
@@ -103,7 +106,15 @@ jobs:
         if: steps.changed-files.outputs.any_changed == 'true'
         working-directory: ./web
         run: |
-          pnpm run lint
+          pnpm run lint:report
+        continue-on-error: true
+      # - name: Annotate Code
+      #   if: steps.changed-files.outputs.any_changed == 'true' && github.event_name == 'pull_request'
+      #   uses: DerLev/eslint-annotations@51347b3a0abfb503fc8734d5ae31c4b151297fae
+      #   with:
+      #     eslint-report: web/eslint_report.json
+      #     github-token: ${{ secrets.GITHUB_TOKEN }}
       - name: Web type check
         if: steps.changed-files.outputs.any_changed == 'true'

View File

@@ -21,7 +21,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@v6
        with:
          fetch-depth: 0

View File

@@ -12,12 +12,8 @@ The codebase is split into:
 ## Backend Workflow
 - Read `api/AGENTS.md` for details
 - Run backend CLI commands through `uv run --project api <command>`.
-- Before submission, all backend modifications must pass local checks: `make lint`, `make type-check`, and `uv run --project api --dev dev/pytest/pytest_unit_tests.sh`.
+- Use Makefile targets for linting and formatting; `make lint` and `make type-check` cover the required checks.
 - Integration tests are CI-only and are not expected to run in the local environment.
 ## Frontend Workflow

View File

@@ -61,7 +61,8 @@ check:
 lint:
     @echo "🔧 Running ruff format, check with fixes, import linter, and dotenv-linter..."
-    @uv run --project api --dev sh -c 'ruff format ./api && ruff check --fix ./api'
+    @uv run --project api --dev ruff format ./api
+    @uv run --project api --dev ruff check --fix ./api
     @uv run --directory api --dev lint-imports
     @uv run --project api --dev dotenv-linter ./api/.env.example ./web/.env.example
     @echo "✅ Linting complete"
@@ -73,7 +74,12 @@ type-check:
 test:
     @echo "🧪 Running backend unit tests..."
-    @uv run --project api --dev dev/pytest/pytest_unit_tests.sh
+    @if [ -n "$(TARGET_TESTS)" ]; then \
+        echo "Target: $(TARGET_TESTS)"; \
+        uv run --project api --dev pytest $(TARGET_TESTS); \
+    else \
+        uv run --project api --dev dev/pytest/pytest_unit_tests.sh; \
+    fi
     @echo "✅ Tests complete"
 # Build Docker images
@@ -125,7 +131,7 @@ help:
     @echo " make check - Check code with ruff"
     @echo " make lint - Format, fix, and lint code (ruff, imports, dotenv)"
     @echo " make type-check - Run type checking with basedpyright"
-    @echo " make test - Run backend unit tests"
+    @echo " make test - Run backend unit tests (or TARGET_TESTS=./api/tests/<target_tests>)"
     @echo ""
     @echo "Docker Build Targets:"
     @echo " make build-web - Build web Docker image"

agent-notes/.gitkeep (new, empty file)
View File

View File

@@ -716,3 +716,13 @@ SANDBOX_EXPIRED_RECORDS_CLEAN_GRACEFUL_PERIOD=21
 SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_SIZE=1000
 SANDBOX_EXPIRED_RECORDS_RETENTION_DAYS=30
+# Sandbox Dify CLI configuration
+# Directory containing dify CLI binaries (dify-cli-<os>-<arch>). Defaults to api/bin when unset.
+SANDBOX_DIFY_CLI_ROOT=
+# CLI API URL for sandbox (dify-sandbox or e2b) to call back to Dify API.
+# This URL must be accessible from the sandbox environment.
+# For local development: use http://localhost:5001 or http://127.0.0.1:5001
+# For Docker deployment: use http://api:5001 (internal Docker network)
+# For external sandbox (e.g., e2b): use a publicly accessible URL
+CLI_API_URL=http://localhost:5001

View File

@ -1,62 +1,236 @@
# Agent Skill Index
# API Agent Guide
## Agent Notes (must-check)
Before you start work on any backend file under `api/`, you MUST check whether a related note exists under:
- `agent-notes/<same-relative-path-as-target-file>.md`
Rules:
- **Path mapping**: for a target file `<path>/<name>.py`, the note must be `agent-notes/<path>/<name>.py.md` (same folder structure, same filename, plus `.md`).
- **Before working**:
- If the note exists, read it first and follow any constraints/decisions recorded there.
- If the note conflicts with the current code, or references an "origin" file/path that has been deleted, renamed, or migrated, treat the **code as the single source of truth** and update the note to match reality.
- If the note does not exist, create it with a short architecture/intent summary and any relevant invariants/edge cases.
- **During working**:
- Keep the note in sync as you discover constraints, make decisions, or change approach.
- If you move/rename a file, migrate its note to the new mapped path (and fix any outdated references inside the note).
- Record non-obvious edge cases, trade-offs, and the test/verification plan as you go (not just at the end).
- Keep notes **coherent**: integrate new findings into the relevant sections and rewrite for clarity; avoid append-only “recent fix” / changelog-style additions unless the note is explicitly intended to be a changelog.
- **When finishing work**:
- Update the related note(s) to reflect what changed, why, and any new edge cases/tests.
- If a file is deleted, remove or clearly deprecate the corresponding note so it cannot be mistaken as current guidance.
- Keep notes concise and accurate; they are meant to prevent repeated rediscovery.
## Skill Index
Start with the section that best matches your need. Each entry lists the problems it solves plus key files/concepts so you know what to expect before opening it.
______________________________________________________________________
### Platform Foundations
## Platform Foundations
- **[Infrastructure Overview](agent_skills/infra.md)**\
When to read this:
#### [Infrastructure Overview](agent_skills/infra.md)
- **When to read this**
- You need to understand where a feature belongs in the architecture.
- Youre wiring storage, Redis, vector stores, or OTEL.
- Youre about to add CLI commands or async jobs.\
What it covers: configuration stack (`configs/app_config.py`, remote settings), storage entry points (`extensions/ext_storage.py`, `core/file/file_manager.py`), Redis conventions (`extensions/ext_redis.py`), plugin runtime topology, vector-store factory (`core/rag/datasource/vdb/*`), observability hooks, SSRF proxy usage, and core CLI commands.
- Youre about to add CLI commands or async jobs.
- **What it covers**
- Configuration stack (`configs/app_config.py`, remote settings)
- Storage entry points (`extensions/ext_storage.py`, `core/file/file_manager.py`)
- Redis conventions (`extensions/ext_redis.py`)
- Plugin runtime topology
- Vector-store factory (`core/rag/datasource/vdb/*`)
- Observability hooks
- SSRF proxy usage
- Core CLI commands
- **[Coding Style](agent_skills/coding_style.md)**\
When to read this:
### Plugin & Extension Development
- Youre writing or reviewing backend code and need the authoritative checklist.
- Youre unsure about Pydantic validators, SQLAlchemy session usage, or logging patterns.
- You want the exact lint/type/test commands used in PRs.\
Includes: Ruff & BasedPyright commands, no-annotation policy, session examples (`with Session(db.engine, ...)`), `@field_validator` usage, logging expectations, and the rule set for file size, helpers, and package management.
______________________________________________________________________
## Plugin & Extension Development
#### [Plugin Systems](agent_skills/plugin.md)
- **When to read this**
  - You're building or debugging a marketplace plugin.
  - You need to know how manifests, providers, daemons, and migrations fit together.
- **What it covers**
  - Plugin manifests (`core/plugin/entities/plugin.py`)
  - Installation/upgrade flows (`services/plugin/plugin_service.py`, CLI commands)
  - Runtime adapters (`core/plugin/impl/*` for tool/model/datasource/trigger/endpoint/agent)
  - Daemon coordination (`core/plugin/entities/plugin_daemon.py`)
  - How provider registries surface capabilities to the rest of the platform
#### [Plugin OAuth](agent_skills/plugin_oauth.md)
- **When to read this**
  - You must integrate OAuth for a plugin or datasource.
  - You're handling credential encryption or refresh flows.
- **Topics**
  - Credential storage
  - Encryption helpers (`core/helper/provider_encryption.py`)
  - OAuth client bootstrap (`services/plugin/oauth_service.py`, `services/plugin/plugin_parameter_service.py`)
  - How console/API layers expose the flows
______________________________________________________________________
### Workflow Entry & Execution
#### [Trigger Concepts](agent_skills/trigger.md)
- **When to read this**
  - You're debugging why a workflow didn't start.
  - You're adding a new trigger type or hook.
  - You need to trace async execution, draft debugging, or webhook/schedule pipelines.
- **Details**
  - Start-node taxonomy
  - Webhook & schedule internals (`core/workflow/nodes/trigger_*`, `services/trigger/*`)
  - Async orchestration (`services/async_workflow_service.py`, Celery queues)
  - Debug event bus
  - Storage/logging interactions
______________________________________________________________________
## Additional Notes for Agents
- All skill docs assume you follow the coding style rules below—run the lint/type/test commands before submitting changes.
- When you cannot find an answer in these briefs, search the codebase using the paths referenced (e.g., `core/plugin/impl/tool.py`, `services/dataset_service.py`).
- If you run into cross-cutting concerns (tenancy, configuration, storage), check the infrastructure guide first; it links to most supporting modules.
- Keep multi-tenancy and configuration central: everything flows through `configs.dify_config` and `tenant_id`.
- When touching plugins or triggers, consult both the system overview and the specialised doc to ensure you adjust lifecycle, storage, and observability consistently.
## Coding Style
This is the default standard for backend code in this repo. Follow it for new code and use it as the checklist when reviewing changes.
### Linting & Formatting
- Use Ruff for formatting and linting (follow `.ruff.toml`).
- Keep each line under 120 characters (including spaces).
### Naming Conventions
- Use `snake_case` for variables and functions.
- Use `PascalCase` for classes.
- Use `UPPER_CASE` for constants.
### Typing & Class Layout
- Code should usually include type annotations that match the repo's current Python version (avoid untyped public APIs and “mystery” values).
- Prefer modern typing forms (e.g. `list[str]`, `dict[str, int]`) and avoid `Any` unless there's a strong reason.
- For classes, declare member variables at the top of the class body (before `__init__`) so the class shape is obvious at a glance:
```python
from datetime import datetime


class Example:
    user_id: str
    created_at: datetime

    def __init__(self, user_id: str, created_at: datetime) -> None:
        self.user_id = user_id
        self.created_at = created_at
```
### General Rules
- Use Pydantic v2 conventions.
- Use `uv` for Python package management in this repo (usually with `--project api`).
- Prefer simple functions over small “utility classes” for lightweight helpers (see the sketch after this list).
- Avoid implementing dunder methods unless it's clearly needed and matches existing patterns.
- Never start long-running services as part of agent work (`uv run app.py`, `flask run`, etc.); running tests is allowed.
- Keep files below ~800 lines; split when necessary.
- Keep code readable and explicit—avoid clever hacks.
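For instance, preferring a simple function over a small utility class (illustrative names):

```python
# Prefer a plain function for a lightweight helper:
def normalize_email(email: str) -> str:
    return email.strip().lower()


# Avoid a stateless wrapper class for the same job:
# class EmailUtils:
#     @staticmethod
#     def normalize(email: str) -> str: ...
```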
### Architecture & Boundaries
- Mirror the layered architecture: controller → service → core/domain.
- Reuse existing helpers in `core/`, `services/`, and `libs/` before creating new abstractions.
- Optimise for observability: deterministic control flow, clear logging, actionable errors.
### Logging & Errors
- Never use `print`; use a module-level logger:
- `logger = logging.getLogger(__name__)`
- Include tenant/app/workflow identifiers in log context when relevant.
- Raise domain-specific exceptions (`services/errors`, `core/errors`) and translate them into HTTP responses in controllers.
- Log retryable events at `warning`, terminal failures at `error`.
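A minimal sketch of these conventions (the function and identifiers are illustrative):

```python
import logging

logger = logging.getLogger(__name__)


def sync_workflow(tenant_id: str, app_id: str, workflow_id: str) -> None:
    try:
        ...  # delegate to the service layer
    except TimeoutError:
        # Retryable event: warn with full identifiers, then let the caller retry.
        logger.warning(
            "workflow sync timed out: tenant_id=%s app_id=%s workflow_id=%s",
            tenant_id,
            app_id,
            workflow_id,
        )
        raise
    except Exception:
        # Terminal failure: logger.exception logs at error with the traceback.
        logger.exception(
            "workflow sync failed: tenant_id=%s app_id=%s workflow_id=%s",
            tenant_id,
            app_id,
            workflow_id,
        )
        raise
```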
### SQLAlchemy Patterns
- Models inherit from `models.base.TypeBase`; do not create ad-hoc metadata or engines.
- Open sessions with context managers:
```python
from sqlalchemy import select
from sqlalchemy.orm import Session

with Session(db.engine, expire_on_commit=False) as session:
    stmt = select(Workflow).where(
        Workflow.id == workflow_id,
        Workflow.tenant_id == tenant_id,
    )
    workflow = session.execute(stmt).scalar_one_or_none()
```
- Prefer SQLAlchemy expressions; avoid raw SQL unless necessary.
- Always scope queries by `tenant_id` and protect write paths with safeguards (`FOR UPDATE`, row counts, etc.); see the sketch below.
- Introduce repository abstractions only for very large tables (e.g., workflow executions) or when alternative storage strategies are required.
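A minimal sketch of such a guarded write path, reusing the session pattern above (`new_version` is a placeholder):

```python
from sqlalchemy import select
from sqlalchemy.orm import Session

with Session(db.engine, expire_on_commit=False) as session:
    stmt = (
        select(Workflow)
        .where(
            Workflow.id == workflow_id,
            Workflow.tenant_id == tenant_id,  # always scope by tenant
        )
        .with_for_update()  # SELECT ... FOR UPDATE: lock the row until commit
    )
    workflow = session.execute(stmt).scalar_one_or_none()
    if workflow is None:
        raise ValueError("workflow not found")
    workflow.version = new_version
    session.commit()
```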
### Storage & External I/O
- Access storage via `extensions.ext_storage.storage`.
- Use `core.helper.ssrf_proxy` for outbound HTTP fetches.
- Background tasks that touch storage must be idempotent, and should log relevant object identifiers.
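A hedged sketch of both entry points together; the import paths are the ones listed above, but treat the exact keyword arguments as assumptions:

```python
import logging

from core.helper import ssrf_proxy
from extensions.ext_storage import storage

logger = logging.getLogger(__name__)


def import_document(tenant_id: str, document_id: str, source_url: str) -> None:
    # Outbound fetch goes through the SSRF proxy, never plain httpx/requests.
    response = ssrf_proxy.get(source_url, timeout=10)
    response.raise_for_status()

    # A deterministic key keeps the task idempotent when it is retried.
    object_key = f"{tenant_id}/imports/{document_id}.bin"
    storage.save(object_key, response.content)
    logger.info("stored imported document: tenant_id=%s key=%s", tenant_id, object_key)
```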
### Pydantic Usage
- Define DTOs with Pydantic v2 models and forbid extras by default.
- Use `@field_validator` / `@model_validator` for domain rules.
Example:
```python
from pydantic import BaseModel, ConfigDict, HttpUrl, field_validator


class TriggerConfig(BaseModel):
    endpoint: HttpUrl
    secret: str

    model_config = ConfigDict(extra="forbid")

    @field_validator("secret")
    @classmethod
    def ensure_secret_prefix(cls, value: str) -> str:
        if not value.startswith("dify_"):
            raise ValueError("secret must start with dify_")
        return value
```
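For instance, validating a payload against the model above:

```python
config = TriggerConfig.model_validate(
    {"endpoint": "https://example.com/hook", "secret": "dify_abc123"}
)

# A bad secret (or any extra field) raises pydantic.ValidationError:
# TriggerConfig.model_validate({"endpoint": "https://example.com", "secret": "nope"})
```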
### Generics & Protocols
- Use `typing.Protocol` to define behavioural contracts (e.g., cache interfaces).
- Apply generics (`TypeVar`, `Generic`) for reusable utilities like caches or providers.
- Validate dynamic inputs at runtime when generics cannot enforce safety alone.
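A minimal sketch combining the three points (all names are illustrative):

```python
from typing import Generic, Protocol, TypeVar

K = TypeVar("K")
V = TypeVar("V")


class Cache(Protocol[K, V]):
    """Behavioural contract: anything with get/set of the right shape conforms."""

    def get(self, key: K) -> V | None: ...

    def set(self, key: K, value: V) -> None: ...


class InMemoryCache(Generic[K, V]):
    def __init__(self) -> None:
        self._data: dict[K, V] = {}

    def get(self, key: K) -> V | None:
        return self._data.get(key)

    def set(self, key: K, value: V) -> None:
        self._data[key] = value


def fetch_cached(cache: Cache[str, str], key: str) -> str:
    # Validate the dynamic input at runtime; generics alone cannot enforce this.
    if not key:
        raise ValueError("key must be non-empty")
    return cache.get(key) or ""
```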
### Tooling & Checks
Quick checks while iterating:
- Format: `make format`
- Lint (includes auto-fix): `make lint`
- Type check: `make type-check`
- Targeted tests: `make test TARGET_TESTS=./api/tests/<target_tests>`
Before opening a PR / submitting:
- `make lint`
- `make type-check`
- `make test`
### Controllers & Services
- Controllers: parse input via Pydantic, invoke services, return serialised responses; no business logic.
- Services: coordinate repositories, providers, background tasks; keep side effects explicit.
- Document non-obvious behaviour with concise comments.
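A minimal sketch of the split; `RenameAppPayload`, `AppService`, and `AppRenameResource` are hypothetical names for illustration:

```python
from flask import request
from flask_restx import Resource
from pydantic import BaseModel


class RenameAppPayload(BaseModel):
    name: str


class AppService:  # hypothetical service facade, shown here as a stub
    @staticmethod
    def rename(app_id: str, name: str) -> dict:
        # Real services coordinate repositories/providers; side effects stay explicit.
        return {"id": app_id, "name": name}


class AppRenameResource(Resource):
    def post(self, app_id: str):
        # Controller: parse input via Pydantic, invoke the service, serialise.
        payload = RenameAppPayload.model_validate(request.get_json(silent=True) or {})
        result = AppService.rename(app_id=app_id, name=payload.name)
        return result, 200
```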
### Miscellaneous
- Use `configs.dify_config` for configuration—never read environment variables directly (see the snippet after this list).
- Maintain tenant awareness end-to-end; `tenant_id` must flow through every layer touching shared resources.
- Queue async work through `services/async_workflow_service`; implement tasks under `tasks/` with explicit queue selection.
- Keep experimental scripts under `dev/`; do not ship them in production builds.
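For instance (`SECRET_KEY` is a real setting, also used by the CLI commands in this changeset):

```python
from configs import dify_config

# Good: read through the central, validated config object.
secret = dify_config.SECRET_KEY

# Bad: bypasses validation and remote settings.
# import os
# secret = os.environ["SECRET_KEY"]
```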

View File

@ -1,115 +0,0 @@
## Linter
- Always follow `.ruff.toml`.
- Run `uv run ruff check --fix --unsafe-fixes`.
- Keep each line under 100 characters (including spaces).
## Code Style
- `snake_case` for variables and functions.
- `PascalCase` for classes.
- `UPPER_CASE` for constants.
## Rules
- Use Pydantic v2 standard.
- Use `uv` for package management.
- Do not override dunder methods like `__init__`, `__iadd__`, etc.
- Never launch services (`uv run app.py`, `flask run`, etc.); running tests under `tests/` is allowed.
- Prefer simple functions over classes for lightweight helpers.
- Keep files below 800 lines; split when necessary.
- Keep code readable—no clever hacks.
- Never use `print`; log with `logger = logging.getLogger(__name__)`.
## Guiding Principles
- Mirror the project's layered architecture: controller → service → core/domain.
- Reuse existing helpers in `core/`, `services/`, and `libs/` before creating new abstractions.
- Optimise for observability: deterministic control flow, clear logging, actionable errors.
## SQLAlchemy Patterns
- Models inherit from `models.base.Base`; never create ad-hoc metadata or engines.
- Open sessions with context managers:
```python
from sqlalchemy.orm import Session

with Session(db.engine, expire_on_commit=False) as session:
    stmt = select(Workflow).where(
        Workflow.id == workflow_id,
        Workflow.tenant_id == tenant_id,
    )
    workflow = session.execute(stmt).scalar_one_or_none()
```
- Use SQLAlchemy expressions; avoid raw SQL unless necessary.
- Introduce repository abstractions only for very large tables (e.g., workflow executions) to support alternative storage strategies.
- Always scope queries by `tenant_id` and protect write paths with safeguards (`FOR UPDATE`, row counts, etc.).
## Storage & External IO
- Access storage via `extensions.ext_storage.storage`.
- Use `core.helper.ssrf_proxy` for outbound HTTP fetches.
- Background tasks that touch storage must be idempotent and log the relevant object identifiers.
## Pydantic Usage
- Define DTOs with Pydantic v2 models and forbid extras by default.
- Use `@field_validator` / `@model_validator` for domain rules.
- Example:
```python
from pydantic import BaseModel, ConfigDict, HttpUrl, field_validator


class TriggerConfig(BaseModel):
    endpoint: HttpUrl
    secret: str

    model_config = ConfigDict(extra="forbid")

    @field_validator("secret")
    def ensure_secret_prefix(cls, value: str) -> str:
        if not value.startswith("dify_"):
            raise ValueError("secret must start with dify_")
        return value
```
## Generics & Protocols
- Use `typing.Protocol` to define behavioural contracts (e.g., cache interfaces).
- Apply generics (`TypeVar`, `Generic`) for reusable utilities like caches or providers.
- Validate dynamic inputs at runtime when generics cannot enforce safety alone.
## Error Handling & Logging
- Raise domain-specific exceptions (`services/errors`, `core/errors`) and translate to HTTP responses in controllers.
- Declare `logger = logging.getLogger(__name__)` at module top.
- Include tenant/app/workflow identifiers in log context.
- Log retryable events at `warning`, terminal failures at `error`.
## Tooling & Checks
- Format/lint: `uv run --project api --dev ruff format ./api` and `uv run --project api --dev ruff check --fix --unsafe-fixes ./api`.
- Type checks: `uv run --directory api --dev basedpyright`.
- Tests: `uv run --project api --dev dev/pytest/pytest_unit_tests.sh`.
- Run all of the above before submitting your work.
## Controllers & Services
- Controllers: parse input via Pydantic, invoke services, return serialised responses; no business logic.
- Services: coordinate repositories, providers, background tasks; keep side effects explicit.
- Avoid repositories unless necessary; direct SQLAlchemy usage is preferred for typical tables.
- Document non-obvious behaviour with concise comments.
## Miscellaneous
- Use `configs.dify_config` for configuration—never read environment variables directly.
- Maintain tenant awareness end-to-end; `tenant_id` must flow through every layer touching shared resources.
- Queue async work through `services/async_workflow_service`; implement tasks under `tasks/` with explicit queue selection.
- Keep experimental scripts under `dev/`; do not ship them in production builds.

BIN
api/bin/dify-cli-darwin-amd64 Executable file

Binary file not shown.

BIN
api/bin/dify-cli-darwin-arm64 Executable file

Binary file not shown.

BIN
api/bin/dify-cli-linux-amd64 Executable file

Binary file not shown.

BIN
api/bin/dify-cli-linux-arm64 Executable file

Binary file not shown.

View File

@ -23,7 +23,7 @@ from core.rag.datasource.vdb.vector_factory import Vector
from core.rag.datasource.vdb.vector_type import VectorType
from core.rag.index_processor.constant.built_in_field import BuiltInField
from core.rag.models.document import Document
from core.tools.utils.system_oauth_encryption import encrypt_system_oauth_params
from core.tools.utils.system_encryption import encrypt_system_params
from events.app_event import app_was_created
from extensions.ext_database import db
from extensions.ext_redis import redis_client
@ -1211,7 +1211,7 @@ def remove_orphaned_files_on_storage(force: bool):
click.echo(click.style(f"- Scanning files on storage path {storage_path}", fg="white"))
files = storage.scan(path=storage_path, files=True, directories=False)
all_files_on_storage.extend(files)
except FileNotFoundError as e:
except FileNotFoundError:
click.echo(click.style(f" -> Skipping path {storage_path} as it does not exist.", fg="yellow"))
continue
except Exception as e:
@ -1459,6 +1459,60 @@ def file_usage(
click.echo(click.style(f"Use --offset {offset + limit} to see next page", fg="white"))
@click.command("setup-sandbox-system-config", help="Setup system-level sandbox provider configuration.")
@click.option(
"--provider-type", prompt=True, type=click.Choice(["e2b", "docker", "local"]), help="Sandbox provider type"
)
@click.option("--config", prompt=True, help='Configuration JSON (e.g., {"api_key": "xxx"} for e2b)')
def setup_sandbox_system_config(provider_type: str, config: str):
"""
Setup system-level sandbox provider configuration.
Examples:
flask setup-sandbox-system-config --provider-type e2b --config '{"api_key": "e2b_xxx"}'
flask setup-sandbox-system-config --provider-type docker --config '{"docker_sock": "unix:///var/run/docker.sock"}'
flask setup-sandbox-system-config --provider-type local --config '{}'
"""
from models.sandbox import SandboxProviderSystemConfig
from services.sandbox.sandbox_provider_service import PROVIDER_CONFIG_MODELS
try:
click.echo(click.style(f"Validating config: {config}", fg="yellow"))
config_dict = TypeAdapter(dict[str, Any]).validate_json(config)
click.echo(click.style("Config validated successfully.", fg="green"))
click.echo(click.style(f"Validating config schema for provider type: {provider_type}", fg="yellow"))
model_class = PROVIDER_CONFIG_MODELS.get(provider_type)
if model_class:
model_class.model_validate(config_dict)
click.echo(click.style("Config schema validated successfully.", fg="green"))
click.echo(click.style("Encrypting config...", fg="yellow"))
click.echo(click.style(f"Using SECRET_KEY: `{dify_config.SECRET_KEY}`", fg="yellow"))
encrypted_config = encrypt_system_params(config_dict)
click.echo(click.style("Config encrypted successfully.", fg="green"))
except Exception as e:
click.echo(click.style(f"Error validating/encrypting config: {str(e)}", fg="red"))
return
deleted_count = db.session.query(SandboxProviderSystemConfig).filter_by(provider_type=provider_type).delete()
if deleted_count > 0:
click.echo(
click.style(
f"Deleted {deleted_count} existing system config for provider type: {provider_type}", fg="yellow"
)
)
system_config = SandboxProviderSystemConfig(
provider_type=provider_type,
encrypted_config=encrypted_config,
)
db.session.add(system_config)
db.session.commit()
click.echo(click.style(f"Sandbox system config setup successfully. id: {system_config.id}", fg="green"))
click.echo(click.style(f"Provider type: {provider_type}", fg="green"))
@click.command("setup-system-tool-oauth-client", help="Setup system tool oauth client.")
@click.option("--provider", prompt=True, help="Provider name")
@click.option("--client-params", prompt=True, help="Client Params")
@ -1478,7 +1532,7 @@ def setup_system_tool_oauth_client(provider, client_params):
click.echo(click.style(f"Encrypting client params: {client_params}", fg="yellow"))
click.echo(click.style(f"Using SECRET_KEY: `{dify_config.SECRET_KEY}`", fg="yellow"))
oauth_client_params = encrypt_system_oauth_params(client_params_dict)
oauth_client_params = encrypt_system_params(client_params_dict)
click.echo(click.style("Client params encrypted successfully.", fg="green"))
except Exception as e:
click.echo(click.style(f"Error parsing client params: {str(e)}", fg="red"))
@ -1527,7 +1581,7 @@ def setup_system_trigger_oauth_client(provider, client_params):
click.echo(click.style(f"Encrypting client params: {client_params}", fg="yellow"))
click.echo(click.style(f"Using SECRET_KEY: `{dify_config.SECRET_KEY}`", fg="yellow"))
oauth_client_params = encrypt_system_oauth_params(client_params_dict)
oauth_client_params = encrypt_system_params(client_params_dict)
click.echo(click.style("Client params encrypted successfully.", fg="green"))
except Exception as e:
click.echo(click.style(f"Error parsing client params: {str(e)}", fg="red"))

View File

@ -2,6 +2,7 @@ import logging
from pathlib import Path
from typing import Any
from pydantic import Field
from pydantic.fields import FieldInfo
from pydantic_settings import BaseSettings, PydanticBaseSettingsSource, SettingsConfigDict, TomlConfigSettingsSource
@ -82,6 +83,14 @@ class DifyConfig(
extra="ignore",
)
SANDBOX_DIFY_CLI_ROOT: str | None = Field(
default=None,
description=(
"Filesystem directory containing dify CLI binaries named dify-cli-<os>-<arch>. "
"Defaults to api/bin when unset."
),
)
# Before adding any config,
# please consider to arrange it in the proper config group of existed or added
# for better readability and maintainability.

View File

@ -244,6 +244,17 @@ class PluginConfig(BaseSettings):
)
class CliApiConfig(BaseSettings):
"""
Configuration for CLI API (for dify-cli to call back from external sandbox environments)
"""
CLI_API_URL: str = Field(
description="CLI API URL for external sandbox (e.g., e2b) to call back.",
default="http://localhost:5001",
)
class MarketplaceConfig(BaseSettings):
"""
Configuration for marketplace
@ -1309,6 +1320,7 @@ class FeatureConfig(
TriggerConfig,
AsyncWorkflowConfig,
PluginConfig,
CliApiConfig,
MarketplaceConfig,
DataSetConfig,
EndpointConfig,

View File

@ -0,0 +1,27 @@
from flask import Blueprint
from flask_restx import Namespace
from libs.external_api import ExternalApi
bp = Blueprint("cli_api", __name__, url_prefix="/cli/api")
api = ExternalApi(
bp,
version="1.0",
title="CLI API",
description="APIs for Dify CLI to call back from external sandbox environments (e.g., e2b)",
)
# Create namespace
cli_api_ns = Namespace("cli_api", description="CLI API operations", path="/")
from .plugin import plugin as _plugin
api.add_namespace(cli_api_ns)
__all__ = [
"_plugin",
"api",
"bp",
"cli_api_ns",
]

View File

@ -0,0 +1,137 @@
from flask_restx import Resource
from controllers.cli_api import cli_api_ns
from controllers.cli_api.plugin.wraps import get_user_tenant, plugin_data
from controllers.cli_api.wraps import cli_api_only
from controllers.console.wraps import setup_required
from core.file.helpers import get_signed_file_url_for_plugin
from core.plugin.backwards_invocation.app import PluginAppBackwardsInvocation
from core.plugin.backwards_invocation.base import BaseBackwardsInvocationResponse
from core.plugin.backwards_invocation.model import PluginModelBackwardsInvocation
from core.plugin.backwards_invocation.tool import PluginToolBackwardsInvocation
from core.plugin.entities.request import (
RequestInvokeApp,
RequestInvokeLLM,
RequestInvokeTool,
RequestRequestUploadFile,
)
from core.tools.entities.tool_entities import ToolProviderType
from libs.helper import length_prefixed_response
from models import Account, Tenant
from models.model import EndUser
@cli_api_ns.route("/invoke/llm")
class CliInvokeLLMApi(Resource):
@get_user_tenant
@setup_required
@cli_api_only
@plugin_data(payload_type=RequestInvokeLLM)
def post(self, user_model: Account | EndUser, tenant_model: Tenant, payload: RequestInvokeLLM):
def generator():
response = PluginModelBackwardsInvocation.invoke_llm(user_model.id, tenant_model, payload)
return PluginModelBackwardsInvocation.convert_to_event_stream(response)
return length_prefixed_response(0xF, generator())
@cli_api_ns.route("/invoke/tool")
class CliInvokeToolApi(Resource):
@get_user_tenant
@setup_required
@cli_api_only
@plugin_data(payload_type=RequestInvokeTool)
def post(self, user_model: Account | EndUser, tenant_model: Tenant, payload: RequestInvokeTool):
def generator():
return PluginToolBackwardsInvocation.convert_to_event_stream(
PluginToolBackwardsInvocation.invoke_tool(
tenant_id=tenant_model.id,
user_id=user_model.id,
tool_type=ToolProviderType.value_of(payload.tool_type),
provider=payload.provider,
tool_name=payload.tool,
tool_parameters=payload.tool_parameters,
credential_id=payload.credential_id,
),
)
return length_prefixed_response(0xF, generator())
@cli_api_ns.route("/invoke/app")
class CliInvokeAppApi(Resource):
@get_user_tenant
@setup_required
@cli_api_only
@plugin_data(payload_type=RequestInvokeApp)
def post(self, user_model: Account | EndUser, tenant_model: Tenant, payload: RequestInvokeApp):
response = PluginAppBackwardsInvocation.invoke_app(
app_id=payload.app_id,
user_id=user_model.id,
tenant_id=tenant_model.id,
conversation_id=payload.conversation_id,
query=payload.query,
stream=payload.response_mode == "streaming",
inputs=payload.inputs,
files=payload.files,
)
return length_prefixed_response(0xF, PluginAppBackwardsInvocation.convert_to_event_stream(response))
@cli_api_ns.route("/upload/file/request")
class CliUploadFileRequestApi(Resource):
@get_user_tenant
@setup_required
@cli_api_only
@plugin_data(payload_type=RequestRequestUploadFile)
def post(self, user_model: Account | EndUser, tenant_model: Tenant, payload: RequestRequestUploadFile):
# generate signed url
url = get_signed_file_url_for_plugin(
filename=payload.filename,
mimetype=payload.mimetype,
tenant_id=tenant_model.id,
user_id=user_model.id,
)
return BaseBackwardsInvocationResponse(data={"url": url}).model_dump()
@cli_api_ns.route("/fetch/tools/list")
class CliFetchToolsListApi(Resource):
@get_user_tenant
@setup_required
@cli_api_only
def post(self, user_model: Account | EndUser, tenant_model: Tenant):
from sqlalchemy.orm import Session
from extensions.ext_database import db
from services.tools.api_tools_manage_service import ApiToolManageService
from services.tools.builtin_tools_manage_service import BuiltinToolManageService
from services.tools.mcp_tools_manage_service import MCPToolManageService
from services.tools.workflow_tools_manage_service import WorkflowToolManageService
providers = []
# Get builtin tools
builtin_providers = BuiltinToolManageService.list_builtin_tools(user_model.id, tenant_model.id)
for provider in builtin_providers:
providers.append(provider.to_dict())
# Get API tools
api_providers = ApiToolManageService.list_api_tools(tenant_model.id)
for provider in api_providers:
providers.append(provider.to_dict())
# Get workflow tools
workflow_providers = WorkflowToolManageService.list_tenant_workflow_tools(user_model.id, tenant_model.id)
for provider in workflow_providers:
providers.append(provider.to_dict())
# Get MCP tools
with Session(db.engine) as session:
mcp_service = MCPToolManageService(session)
mcp_providers = mcp_service.list_providers(tenant_id=tenant_model.id, for_list=True)
for provider in mcp_providers:
providers.append(provider.to_dict())
return BaseBackwardsInvocationResponse(data={"providers": providers}).model_dump()

View File

@ -0,0 +1,145 @@
from collections.abc import Callable
from functools import wraps
from typing import ParamSpec, TypeVar
from flask import current_app, request
from flask_login import user_logged_in
from pydantic import BaseModel
from sqlalchemy.orm import Session
from core.session.cli_api import CliApiSession, CliApiSessionManager
from extensions.ext_database import db
from libs.login import current_user
from models.account import Tenant
from models.model import DefaultEndUserSessionID, EndUser
P = ParamSpec("P")
R = TypeVar("R")
class TenantUserPayload(BaseModel):
tenant_id: str
user_id: str
def get_user(tenant_id: str, user_id: str | None) -> EndUser:
"""
Get current user
NOTE: user_id is not trusted, it could be maliciously set to any value.
As a result, it could only be considered as an end user id.
"""
if not user_id:
user_id = DefaultEndUserSessionID.DEFAULT_SESSION_ID
is_anonymous = user_id == DefaultEndUserSessionID.DEFAULT_SESSION_ID
try:
with Session(db.engine) as session:
user_model = None
if is_anonymous:
user_model = (
session.query(EndUser)
.where(
EndUser.session_id == user_id,
EndUser.tenant_id == tenant_id,
)
.first()
)
else:
user_model = (
session.query(EndUser)
.where(
EndUser.id == user_id,
EndUser.tenant_id == tenant_id,
)
.first()
)
if not user_model:
user_model = EndUser(
tenant_id=tenant_id,
type="service_api",
is_anonymous=is_anonymous,
session_id=user_id,
)
session.add(user_model)
session.commit()
session.refresh(user_model)
except Exception:
raise ValueError("user not found")
return user_model
def get_user_tenant(view_func: Callable[P, R]):
@wraps(view_func)
def decorated_view(*args: P.args, **kwargs: P.kwargs):
session_id = request.headers.get("X-Cli-Api-Session-Id")
if session_id:
session: CliApiSession | None = CliApiSessionManager().get(session_id)
if not session:
raise ValueError("session not found")
user_id = session.user_id
tenant_id = session.tenant_id
else:
payload = TenantUserPayload.model_validate(request.get_json(silent=True) or {})
user_id = payload.user_id
tenant_id = payload.tenant_id
if not tenant_id:
raise ValueError("tenant_id is required")
if not user_id:
user_id = DefaultEndUserSessionID.DEFAULT_SESSION_ID
try:
tenant_model = (
db.session.query(Tenant)
.where(
Tenant.id == tenant_id,
)
.first()
)
except Exception:
raise ValueError("tenant not found")
if not tenant_model:
raise ValueError("tenant not found")
kwargs["tenant_model"] = tenant_model
user = get_user(tenant_id, user_id)
kwargs["user_model"] = user
current_app.login_manager._update_request_context_with_user(user) # type: ignore
user_logged_in.send(current_app._get_current_object(), user=current_user) # type: ignore
return view_func(*args, **kwargs)
return decorated_view
def plugin_data(view: Callable[P, R] | None = None, *, payload_type: type[BaseModel]):
def decorator(view_func: Callable[P, R]):
def decorated_view(*args: P.args, **kwargs: P.kwargs):
try:
data = request.get_json()
except Exception:
raise ValueError("invalid json")
try:
payload = payload_type.model_validate(data)
except Exception as e:
raise ValueError(f"invalid payload: {str(e)}")
kwargs["payload"] = payload
return view_func(*args, **kwargs)
return decorated_view
if view is None:
return decorator
else:
return decorator(view)

View File

@ -0,0 +1,54 @@
import hashlib
import hmac
import time
from collections.abc import Callable
from functools import wraps
from typing import ParamSpec, TypeVar
from flask import abort, request
from core.session.cli_api import CliApiSessionManager
P = ParamSpec("P")
R = TypeVar("R")
SIGNATURE_TTL_SECONDS = 300
def _verify_signature(session_secret: str, timestamp: str, body: bytes, signature: str) -> bool:
    expected = hmac.new(
        session_secret.encode(),
        f"{timestamp}.".encode() + body,
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)
def cli_api_only(view: Callable[P, R]):
    @wraps(view)
    def decorated(*args: P.args, **kwargs: P.kwargs):
        session_id = request.headers.get("X-Cli-Api-Session-Id")
        timestamp = request.headers.get("X-Cli-Api-Timestamp")
        signature = request.headers.get("X-Cli-Api-Signature")
        if not session_id or not timestamp or not signature:
            abort(401)
        try:
            ts = int(timestamp)
            if abs(time.time() - ts) > SIGNATURE_TTL_SECONDS:
                abort(401)
        except ValueError:
            abort(401)
        session = CliApiSessionManager().get(session_id)
        if not session:
            abort(401)
        body = request.get_data()
        if not _verify_signature(session.secret, timestamp, body, signature):
            abort(401)
        return view(*args, **kwargs)
    return decorated
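A hedged sketch of the client side implied by this verifier; the header names and the `sha256=` prefix mirror the code above, while the function name is illustrative:

```python
import hashlib
import hmac
import time


def sign_cli_request(session_id: str, session_secret: str, body: bytes) -> dict[str, str]:
    # HMAC-SHA256 over f"{timestamp}." + body, matching _verify_signature above.
    timestamp = str(int(time.time()))
    digest = hmac.new(
        session_secret.encode(),
        f"{timestamp}.".encode() + body,
        hashlib.sha256,
    ).hexdigest()
    return {
        "X-Cli-Api-Session-Id": session_id,
        "X-Cli-Api-Timestamp": timestamp,
        "X-Cli-Api-Signature": f"sha256={digest}",
    }
```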

View File

@ -50,6 +50,7 @@ from .app import (
agent,
annotation,
app,
app_asset,
audio,
completion,
conversation,
@ -126,6 +127,7 @@ from .workspace import (
model_providers,
models,
plugin,
sandbox_providers,
tool_providers,
trigger_providers,
workspace,
@ -144,6 +146,7 @@ __all__ = [
"api",
"apikey",
"app",
"app_asset",
"audio",
"billing",
"bp",
@ -191,6 +194,7 @@ __all__ = [
"rag_pipeline_import",
"rag_pipeline_workflow",
"recommended_app",
"sandbox_providers",
"saved_message",
"setup",
"site",

View File

@ -0,0 +1,274 @@
from flask import request
from flask_restx import Resource
from pydantic import BaseModel, Field, field_validator
from controllers.console import console_ns
from controllers.console.app.error import (
AppAssetFileRequiredError,
AppAssetNodeNotFoundError,
AppAssetPathConflictError,
)
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from libs.login import current_account_with_tenant, login_required
from models import App
from models.model import AppMode
from services.app_asset_service import AppAssetService
from services.errors.app_asset import (
AppAssetNodeNotFoundError as ServiceNodeNotFoundError,
)
from services.errors.app_asset import (
AppAssetParentNotFoundError,
)
from services.errors.app_asset import (
AppAssetPathConflictError as ServicePathConflictError,
)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class CreateFolderPayload(BaseModel):
name: str = Field(..., min_length=1, max_length=255)
parent_id: str | None = None
class CreateFilePayload(BaseModel):
name: str = Field(..., min_length=1, max_length=255)
parent_id: str | None = None
@field_validator("name", mode="before")
@classmethod
def strip_name(cls, v: str) -> str:
return v.strip() if isinstance(v, str) else v
@field_validator("parent_id", mode="before")
@classmethod
def empty_to_none(cls, v: str | None) -> str | None:
return v or None
class UpdateFileContentPayload(BaseModel):
content: str
class RenameNodePayload(BaseModel):
name: str = Field(..., min_length=1, max_length=255)
class MoveNodePayload(BaseModel):
parent_id: str | None = None
class ReorderNodePayload(BaseModel):
after_node_id: str | None = Field(default=None, description="Place after this node, None for first position")
def reg(cls: type[BaseModel]) -> None:
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(CreateFolderPayload)
reg(CreateFilePayload)
reg(UpdateFileContentPayload)
reg(RenameNodePayload)
reg(MoveNodePayload)
reg(ReorderNodePayload)
@console_ns.route("/apps/<string:app_id>/assets/tree")
class AppAssetTreeResource(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def get(self, app_model: App):
current_user, _ = current_account_with_tenant()
tree = AppAssetService.get_asset_tree(app_model, current_user.id)
return {"children": [view.model_dump() for view in tree.transform()]}
@console_ns.route("/apps/<string:app_id>/assets/folders")
class AppAssetFolderResource(Resource):
@console_ns.expect(console_ns.models[CreateFolderPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App):
current_user, _ = current_account_with_tenant()
payload = CreateFolderPayload.model_validate(console_ns.payload or {})
try:
node = AppAssetService.create_folder(app_model, current_user.id, payload.name, payload.parent_id)
return node.model_dump(), 201
except AppAssetParentNotFoundError:
raise AppAssetNodeNotFoundError()
except ServicePathConflictError:
raise AppAssetPathConflictError()
@console_ns.route("/apps/<string:app_id>/assets/files")
class AppAssetFileResource(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App):
current_user, _ = current_account_with_tenant()
file = request.files.get("file")
if not file:
raise AppAssetFileRequiredError()
payload = CreateFilePayload.model_validate(request.form.to_dict())
content = file.read()
try:
node = AppAssetService.create_file(app_model, current_user.id, payload.name, content, payload.parent_id)
return node.model_dump(), 201
except AppAssetParentNotFoundError:
raise AppAssetNodeNotFoundError()
except ServicePathConflictError:
raise AppAssetPathConflictError()
@console_ns.route("/apps/<string:app_id>/assets/files/<string:node_id>")
class AppAssetFileDetailResource(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def get(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
try:
content = AppAssetService.get_file_content(app_model, current_user.id, node_id)
return {"content": content.decode("utf-8", errors="replace")}
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
@console_ns.expect(console_ns.models[UpdateFileContentPayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def put(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
file = request.files.get("file")
if file:
content = file.read()
else:
payload = UpdateFileContentPayload.model_validate(console_ns.payload or {})
content = payload.content.encode("utf-8")
try:
node = AppAssetService.update_file_content(app_model, current_user.id, node_id, content)
return node.model_dump()
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
@console_ns.route("/apps/<string:app_id>/assets/nodes/<string:node_id>")
class AppAssetNodeResource(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def delete(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
try:
AppAssetService.delete_node(app_model, current_user.id, node_id)
return {"result": "success"}, 200
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
@console_ns.route("/apps/<string:app_id>/assets/nodes/<string:node_id>/rename")
class AppAssetNodeRenameResource(Resource):
@console_ns.expect(console_ns.models[RenameNodePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
payload = RenameNodePayload.model_validate(console_ns.payload or {})
try:
node = AppAssetService.rename_node(app_model, current_user.id, node_id, payload.name)
return node.model_dump()
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
except ServicePathConflictError:
raise AppAssetPathConflictError()
@console_ns.route("/apps/<string:app_id>/assets/nodes/<string:node_id>/move")
class AppAssetNodeMoveResource(Resource):
@console_ns.expect(console_ns.models[MoveNodePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
payload = MoveNodePayload.model_validate(console_ns.payload or {})
try:
node = AppAssetService.move_node(app_model, current_user.id, node_id, payload.parent_id)
return node.model_dump()
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
except AppAssetParentNotFoundError:
raise AppAssetNodeNotFoundError()
except ServicePathConflictError:
raise AppAssetPathConflictError()
@console_ns.route("/apps/<string:app_id>/assets/nodes/<string:node_id>/reorder")
class AppAssetNodeReorderResource(Resource):
@console_ns.expect(console_ns.models[ReorderNodePayload.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
payload = ReorderNodePayload.model_validate(console_ns.payload or {})
try:
node = AppAssetService.reorder_node(app_model, current_user.id, node_id, payload.after_node_id)
return node.model_dump()
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()
@console_ns.route("/apps/<string:app_id>/assets/publish")
class AppAssetPublishResource(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def post(self, app_model: App):
current_user, _ = current_account_with_tenant()
published = AppAssetService.publish(app_model, current_user.id)
return {
"id": published.id,
"version": published.version,
"asset_tree": published.asset_tree.model_dump(),
}, 201
@console_ns.route("/apps/<string:app_id>/assets/files/<string:node_id>/download-url")
class AppAssetFileDownloadUrlResource(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
def get(self, app_model: App, node_id: str):
current_user, _ = current_account_with_tenant()
try:
download_url = AppAssetService.get_file_download_url(app_model, current_user.id, node_id)
return {"download_url": download_url}
except ServiceNodeNotFoundError:
raise AppAssetNodeNotFoundError()

View File

@ -110,8 +110,24 @@ class TracingConfigCheckError(BaseHTTPException):
class InvokeRateLimitError(BaseHTTPException):
"""Raised when the Invoke returns rate limit error."""
error_code = "rate_limit_error"
description = "Rate Limit Error"
code = 429
class AppAssetNodeNotFoundError(BaseHTTPException):
error_code = "app_asset_node_not_found"
description = "App asset node not found."
code = 404
class AppAssetFileRequiredError(BaseHTTPException):
error_code = "app_asset_file_required"
description = "File is required."
code = 400
class AppAssetPathConflictError(BaseHTTPException):
error_code = "app_asset_path_conflict"
description = "Path already exists."
code = 409

View File

@ -202,6 +202,7 @@ message_detail_model = console_ns.model(
"status": fields.String,
"error": fields.String,
"parent_message_id": fields.String,
"generation_detail": fields.Raw,
},
)

View File

@ -30,6 +30,11 @@ class TagBindingRemovePayload(BaseModel):
type: Literal["knowledge", "app"] | None = Field(default=None, description="Tag type")
class TagListQueryParam(BaseModel):
type: Literal["knowledge", "app", ""] = Field("", description="Tag type filter")
keyword: str | None = Field(None, description="Search keyword")
register_schema_models(
console_ns,
TagBasePayload,
@ -43,12 +48,15 @@ class TagListApi(Resource):
@setup_required
@login_required
@account_initialization_required
@console_ns.doc(
params={"type": 'Tag type filter. Can be "knowledge" or "app".', "keyword": "Search keyword for tag name."}
)
@marshal_with(dataset_tag_fields)
def get(self):
_, current_tenant_id = current_account_with_tenant()
tag_type = request.args.get("type", type=str, default="")
keyword = request.args.get("keyword", default=None, type=str)
tags = TagService.get_tags(tag_type, current_tenant_id, keyword)
raw_args = request.args.to_dict()
param = TagListQueryParam.model_validate(raw_args)
tags = TagService.get_tags(param.type, current_tenant_id, param.keyword)
return tags, 200

View File

@ -0,0 +1,65 @@
import json
import httpx
import yaml
from flask_restx import Resource, reqparse
from sqlalchemy.orm import Session
from werkzeug.exceptions import Forbidden
from controllers.console import console_ns
from controllers.console.wraps import account_initialization_required, setup_required
from core.plugin.impl.exc import PluginPermissionDeniedError
from extensions.ext_database import db
from libs.login import current_account_with_tenant, login_required
from models.model import App
from models.workflow import Workflow
from services.app_dsl_service import AppDslService
@console_ns.route("/workspaces/current/dsl/predict")
class DSLPredictApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self):
user, _ = current_account_with_tenant()
if not user.is_admin_or_owner:
raise Forbidden()
parser = (
reqparse.RequestParser()
.add_argument("app_id", type=str, required=True, location="json")
.add_argument("current_node_id", type=str, required=True, location="json")
)
args = parser.parse_args()
app_id: str = args["app_id"]
current_node_id: str = args["current_node_id"]
with Session(db.engine) as session:
app = session.query(App).filter_by(id=app_id).first()
workflow = session.query(Workflow).filter_by(app_id=app_id, version=Workflow.VERSION_DRAFT).first()
if not app:
raise ValueError("App not found")
if not workflow:
raise ValueError("Workflow not found")
try:
i = 0
for node_id, _ in workflow.walk_nodes():
if node_id == current_node_id:
break
i += 1
dsl = yaml.safe_load(AppDslService.export_dsl(app_model=app))
response = httpx.post(
"http://spark-832c:8000/predict",
json={"graph_data": dsl, "source_node_index": i},
)
return {
"nodes": json.loads(response.json()),
}
except PluginPermissionDeniedError as e:
raise ValueError(e.description) from e

View File

@ -0,0 +1,95 @@
import logging
from flask_restx import Resource, fields, reqparse
from controllers.console import console_ns
from controllers.console.wraps import account_initialization_required, setup_required
from core.model_runtime.utils.encoders import jsonable_encoder
from libs.login import current_account_with_tenant, login_required
from services.sandbox.sandbox_provider_service import SandboxProviderService
logger = logging.getLogger(__name__)
@console_ns.route("/workspaces/current/sandbox-providers")
class SandboxProviderListApi(Resource):
@console_ns.doc("list_sandbox_providers")
@console_ns.doc(description="Get list of available sandbox providers with configuration status")
@console_ns.response(200, "Success", fields.List(fields.Raw(description="Sandbox provider information")))
@setup_required
@login_required
@account_initialization_required
def get(self):
_, current_tenant_id = current_account_with_tenant()
providers = SandboxProviderService.list_providers(current_tenant_id)
return jsonable_encoder([p.model_dump() for p in providers])
config_parser = reqparse.RequestParser()
config_parser.add_argument("config", type=dict, required=True, location="json")
@console_ns.route("/workspaces/current/sandbox-provider/<string:provider_type>/config")
class SandboxProviderConfigApi(Resource):
@console_ns.doc("save_sandbox_provider_config")
@console_ns.doc(description="Save or update configuration for a sandbox provider")
@console_ns.expect(config_parser)
@console_ns.response(200, "Success")
@setup_required
@login_required
@account_initialization_required
def post(self, provider_type: str):
_, current_tenant_id = current_account_with_tenant()
args = config_parser.parse_args()
try:
result = SandboxProviderService.save_config(
tenant_id=current_tenant_id,
provider_type=provider_type,
config=args["config"],
)
return result
except ValueError as e:
return {"message": str(e)}, 400
@console_ns.doc("delete_sandbox_provider_config")
@console_ns.doc(description="Delete configuration for a sandbox provider")
@console_ns.response(200, "Success")
@setup_required
@login_required
@account_initialization_required
def delete(self, provider_type: str):
_, current_tenant_id = current_account_with_tenant()
try:
result = SandboxProviderService.delete_config(
tenant_id=current_tenant_id,
provider_type=provider_type,
)
return result
except ValueError as e:
return {"message": str(e)}, 400
@console_ns.route("/workspaces/current/sandbox-provider/<string:provider_type>/activate")
class SandboxProviderActivateApi(Resource):
"""Activate a sandbox provider."""
@console_ns.doc("activate_sandbox_provider")
@console_ns.doc(description="Activate a sandbox provider for the current workspace")
@console_ns.response(200, "Success")
@setup_required
@login_required
@account_initialization_required
def post(self, provider_type: str):
"""Activate a sandbox provider."""
_, current_tenant_id = current_account_with_tenant()
try:
result = SandboxProviderService.activate_provider(
tenant_id=current_tenant_id,
provider_type=provider_type,
)
return result
except ValueError as e:
return {"message": str(e)}, 400

View File

@ -14,7 +14,7 @@ api = ExternalApi(
files_ns = Namespace("files", description="File operations", path="/")
from . import image_preview, tool_files, upload
from . import image_preview, storage_download, tool_files, upload
api.add_namespace(files_ns)
@ -23,6 +23,7 @@ __all__ = [
"bp",
"files_ns",
"image_preview",
"storage_download",
"tool_files",
"upload",
]

View File

@ -0,0 +1,56 @@
from urllib.parse import quote, unquote
from flask import Response, request
from flask_restx import Resource
from pydantic import BaseModel, Field
from werkzeug.exceptions import Forbidden, NotFound
from controllers.files import files_ns
from extensions.ext_storage import storage
from extensions.storage.file_presign_storage import FilePresignStorage
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class StorageDownloadQuery(BaseModel):
timestamp: str = Field(..., description="Unix timestamp used in the signature")
nonce: str = Field(..., description="Random string for signature")
sign: str = Field(..., description="HMAC signature")
files_ns.schema_model(
StorageDownloadQuery.__name__,
StorageDownloadQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
@files_ns.route("/storage/<path:filename>/download")
class StorageFileDownloadApi(Resource):
def get(self, filename: str):
filename = unquote(filename)
args = StorageDownloadQuery.model_validate(request.args.to_dict(flat=True))
if not FilePresignStorage.verify_signature(
filename=filename,
timestamp=args.timestamp,
nonce=args.nonce,
sign=args.sign,
):
raise Forbidden("Invalid or expired download link")
try:
generator = storage.load_stream(filename)
except FileNotFoundError:
raise NotFound("File not found")
encoded_filename = quote(filename.split("/")[-1])
return Response(
generator,
mimetype="application/octet-stream",
direct_passthrough=True,
headers={
"Content-Disposition": f"attachment; filename*=UTF-8''{encoded_filename}",
},
)

View File

@ -448,3 +448,53 @@ class PluginFetchAppInfoApi(Resource):
return BaseBackwardsInvocationResponse(
data=PluginAppBackwardsInvocation.fetch_app_info(payload.app_id, tenant_model.id)
).model_dump()
@inner_api_ns.route("/fetch/tools/list")
class PluginFetchToolsListApi(Resource):
@get_user_tenant
@setup_required
@plugin_inner_api_only
@inner_api_ns.doc("plugin_fetch_tools_list")
@inner_api_ns.doc(description="Fetch all available tools through plugin interface")
@inner_api_ns.doc(
responses={
200: "Tools list retrieved successfully",
401: "Unauthorized - invalid API key",
404: "Service not available",
}
)
def post(self, user_model: Account | EndUser, tenant_model: Tenant):
from sqlalchemy.orm import Session
from extensions.ext_database import db
from services.tools.api_tools_manage_service import ApiToolManageService
from services.tools.builtin_tools_manage_service import BuiltinToolManageService
from services.tools.mcp_tools_manage_service import MCPToolManageService
from services.tools.workflow_tools_manage_service import WorkflowToolManageService
providers = []
# Get builtin tools
builtin_providers = BuiltinToolManageService.list_builtin_tools(user_model.id, tenant_model.id)
for provider in builtin_providers:
providers.append(provider.to_dict())
# Get API tools
api_providers = ApiToolManageService.list_api_tools(tenant_model.id)
for provider in api_providers:
providers.append(provider.to_dict())
# Get workflow tools
workflow_providers = WorkflowToolManageService.list_tenant_workflow_tools(user_model.id, tenant_model.id)
for provider in workflow_providers:
providers.append(provider.to_dict())
# Get MCP tools
with Session(db.engine) as session:
mcp_service = MCPToolManageService(session)
mcp_providers = mcp_service.list_providers(tenant_id=tenant_model.id, for_list=True)
for provider in mcp_providers:
providers.append(provider.to_dict())
return BaseBackwardsInvocationResponse(data={"providers": providers}).model_dump()

View File

@ -75,7 +75,6 @@ def get_user_tenant(view_func: Callable[P, R]):
@wraps(view_func)
def decorated_view(*args: P.args, **kwargs: P.kwargs):
payload = TenantUserPayload.model_validate(request.get_json(silent=True) or {})
user_id = payload.user_id
tenant_id = payload.tenant_id

View File

@ -5,14 +5,15 @@ from hashlib import sha1
from hmac import new as hmac_new
from typing import ParamSpec, TypeVar
P = ParamSpec("P")
R = TypeVar("R")
from flask import abort, request
from configs import dify_config
from extensions.ext_database import db
from models.model import EndUser
P = ParamSpec("P")
R = TypeVar("R")
def billing_inner_api_only(view: Callable[P, R]):
@wraps(view)
@ -88,11 +89,11 @@ def plugin_inner_api_only(view: Callable[P, R]):
if not dify_config.PLUGIN_DAEMON_KEY:
abort(404)
# get header 'X-Inner-Api-Key'
# validate using inner api key
inner_api_key = request.headers.get("X-Inner-Api-Key")
if not inner_api_key or inner_api_key != dify_config.INNER_API_KEY_FOR_PLUGIN:
abort(404)
if inner_api_key and inner_api_key == dify_config.INNER_API_KEY_FOR_PLUGIN:
return view(*args, **kwargs)
return view(*args, **kwargs)
abort(401)
return decorated

View File

@ -0,0 +1,380 @@
import logging
from collections.abc import Generator
from copy import deepcopy
from typing import Any
from core.agent.base_agent_runner import BaseAgentRunner
from core.agent.entities import AgentEntity, AgentLog, AgentResult
from core.agent.patterns.strategy_factory import StrategyFactory
from core.app.apps.base_app_queue_manager import PublishFrom
from core.app.entities.queue_entities import QueueAgentThoughtEvent, QueueMessageEndEvent, QueueMessageFileEvent
from core.file import file_manager
from core.model_runtime.entities import (
AssistantPromptMessage,
LLMResult,
LLMResultChunk,
LLMUsage,
PromptMessage,
PromptMessageContentType,
SystemPromptMessage,
TextPromptMessageContent,
UserPromptMessage,
)
from core.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes
from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform
from core.tools.__base.tool import Tool
from core.tools.entities.tool_entities import ToolInvokeMeta
from core.tools.tool_engine import ToolEngine
from models.model import Message
logger = logging.getLogger(__name__)
class AgentAppRunner(BaseAgentRunner):
def _create_tool_invoke_hook(self, message: Message):
"""
Create a tool invoke hook that uses ToolEngine.agent_invoke.
This hook handles file creation and returns proper meta information.
"""
# Get trace manager from app generate entity
trace_manager = self.application_generate_entity.trace_manager
def tool_invoke_hook(
tool: Tool, tool_args: dict[str, Any], tool_name: str
) -> tuple[str, list[str], ToolInvokeMeta]:
"""Hook that uses agent_invoke for proper file and meta handling."""
tool_invoke_response, message_files, tool_invoke_meta = ToolEngine.agent_invoke(
tool=tool,
tool_parameters=tool_args,
user_id=self.user_id,
tenant_id=self.tenant_id,
message=message,
invoke_from=self.application_generate_entity.invoke_from,
agent_tool_callback=self.agent_callback,
trace_manager=trace_manager,
app_id=self.application_generate_entity.app_config.app_id,
message_id=message.id,
conversation_id=self.conversation.id,
)
# Publish files and track IDs
for message_file_id in message_files:
self.queue_manager.publish(
QueueMessageFileEvent(message_file_id=message_file_id),
PublishFrom.APPLICATION_MANAGER,
)
self._current_message_file_ids.append(message_file_id)
return tool_invoke_response, message_files, tool_invoke_meta
return tool_invoke_hook
def run(self, message: Message, query: str, **kwargs: Any) -> Generator[LLMResultChunk, None, None]:
"""
Run Agent application
"""
self.query = query
app_generate_entity = self.application_generate_entity
app_config = self.app_config
assert app_config is not None, "app_config is required"
assert app_config.agent is not None, "app_config.agent is required"
# convert tools into ModelRuntime Tool format
tool_instances, _ = self._init_prompt_tools()
assert app_config.agent
# Create tool invoke hook for agent_invoke
tool_invoke_hook = self._create_tool_invoke_hook(message)
# Get instruction for ReAct strategy
instruction = self.app_config.prompt_template.simple_prompt_template or ""
# Use factory to create appropriate strategy
strategy = StrategyFactory.create_strategy(
model_features=self.model_features,
model_instance=self.model_instance,
tools=list(tool_instances.values()),
files=list(self.files),
max_iterations=app_config.agent.max_iteration,
context=self.build_execution_context(),
agent_strategy=self.config.strategy,
tool_invoke_hook=tool_invoke_hook,
instruction=instruction,
)
# Initialize state variables
current_agent_thought_id = None
has_published_thought = False
current_tool_name: str | None = None
self._current_message_file_ids: list[str] = []
# organize prompt messages
prompt_messages = self._organize_prompt_messages()
# Run strategy
generator = strategy.run(
prompt_messages=prompt_messages,
model_parameters=app_generate_entity.model_conf.parameters,
stop=app_generate_entity.model_conf.stop,
stream=True,
)
# Consume generator and collect result
result: AgentResult | None = None
try:
while True:
try:
output = next(generator)
except StopIteration as e:
# Generator finished, get the return value
result = e.value
break
if isinstance(output, LLMResultChunk):
# Handle LLM chunk
if current_agent_thought_id and not has_published_thought:
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=current_agent_thought_id),
PublishFrom.APPLICATION_MANAGER,
)
has_published_thought = True
yield output
elif isinstance(output, AgentLog):
# Handle Agent Log using log_type for type-safe dispatch
if output.status == AgentLog.LogStatus.START:
if output.log_type == AgentLog.LogType.ROUND:
# Start of a new round
message_file_ids: list[str] = []
current_agent_thought_id = self.create_agent_thought(
message_id=message.id,
message="",
tool_name="",
tool_input="",
messages_ids=message_file_ids,
)
has_published_thought = False
elif output.log_type == AgentLog.LogType.TOOL_CALL:
if current_agent_thought_id is None:
continue
# Tool call start - extract data from structured fields
current_tool_name = output.data.get("tool_name", "")
tool_input = output.data.get("tool_args", {})
self.save_agent_thought(
agent_thought_id=current_agent_thought_id,
tool_name=current_tool_name,
tool_input=tool_input,
thought=None,
observation=None,
tool_invoke_meta=None,
answer=None,
messages_ids=[],
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=current_agent_thought_id),
PublishFrom.APPLICATION_MANAGER,
)
elif output.status == AgentLog.LogStatus.SUCCESS:
if output.log_type == AgentLog.LogType.THOUGHT:
if current_agent_thought_id is None:
continue
thought_text = output.data.get("thought")
self.save_agent_thought(
agent_thought_id=current_agent_thought_id,
tool_name=None,
tool_input=None,
thought=thought_text,
observation=None,
tool_invoke_meta=None,
answer=None,
messages_ids=[],
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=current_agent_thought_id),
PublishFrom.APPLICATION_MANAGER,
)
elif output.log_type == AgentLog.LogType.TOOL_CALL:
if current_agent_thought_id is None:
continue
# Tool call finished
tool_output = output.data.get("output")
# Get meta from strategy output (now properly populated)
tool_meta = output.data.get("meta")
# Wrap tool_meta with tool_name as key (required by agent_service)
if tool_meta and current_tool_name:
tool_meta = {current_tool_name: tool_meta}
self.save_agent_thought(
agent_thought_id=current_agent_thought_id,
tool_name=None,
tool_input=None,
thought=None,
observation=tool_output,
tool_invoke_meta=tool_meta,
answer=None,
messages_ids=self._current_message_file_ids,
)
# Clear message file ids after saving
self._current_message_file_ids = []
current_tool_name = None
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=current_agent_thought_id),
PublishFrom.APPLICATION_MANAGER,
)
elif output.log_type == AgentLog.LogType.ROUND:
if current_agent_thought_id is None:
continue
# Round finished - save LLM usage and answer
llm_usage = output.metadata.get(AgentLog.LogMetadata.LLM_USAGE)
llm_result = output.data.get("llm_result")
final_answer = output.data.get("final_answer")
self.save_agent_thought(
agent_thought_id=current_agent_thought_id,
tool_name=None,
tool_input=None,
thought=llm_result,
observation=None,
tool_invoke_meta=None,
answer=final_answer,
messages_ids=[],
llm_usage=llm_usage,
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=current_agent_thought_id),
PublishFrom.APPLICATION_MANAGER,
)
except Exception:
# Re-raise any other exceptions
raise
# Process final result
if isinstance(result, AgentResult):
final_answer = result.text
usage = result.usage or LLMUsage.empty_usage()
# Publish end event
self.queue_manager.publish(
QueueMessageEndEvent(
llm_result=LLMResult(
model=self.model_instance.model,
prompt_messages=prompt_messages,
message=AssistantPromptMessage(content=final_answer),
usage=usage,
system_fingerprint="",
)
),
PublishFrom.APPLICATION_MANAGER,
)
def _init_system_message(self, prompt_template: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Initialize system message
"""
if not prompt_template:
return prompt_messages or []
prompt_messages = prompt_messages or []
if prompt_messages and isinstance(prompt_messages[0], SystemPromptMessage):
prompt_messages[0] = SystemPromptMessage(content=prompt_template)
return prompt_messages
if not prompt_messages:
return [SystemPromptMessage(content=prompt_template)]
prompt_messages.insert(0, SystemPromptMessage(content=prompt_template))
return prompt_messages
def _organize_user_query(self, query: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Organize user query
"""
if self.files:
# get image detail config
image_detail_config = (
self.application_generate_entity.file_upload_config.image_config.detail
if (
self.application_generate_entity.file_upload_config
and self.application_generate_entity.file_upload_config.image_config
)
else None
)
image_detail_config = image_detail_config or ImagePromptMessageContent.DETAIL.LOW
prompt_message_contents: list[PromptMessageContentUnionTypes] = []
for file in self.files:
prompt_message_contents.append(
file_manager.to_prompt_message_content(
file,
image_detail_config=image_detail_config,
)
)
prompt_message_contents.append(TextPromptMessageContent(data=query))
prompt_messages.append(UserPromptMessage(content=prompt_message_contents))
else:
prompt_messages.append(UserPromptMessage(content=query))
return prompt_messages
def _clear_user_prompt_image_messages(self, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
For now, GPT models support both function calling and vision only on the first iteration,
so we replace image content in the prompt messages with text placeholders after that first iteration.
"""
prompt_messages = deepcopy(prompt_messages)
for prompt_message in prompt_messages:
if isinstance(prompt_message, UserPromptMessage):
if isinstance(prompt_message.content, list):
prompt_message.content = "\n".join(
[
content.data
if content.type == PromptMessageContentType.TEXT
else "[image]"
if content.type == PromptMessageContentType.IMAGE
else "[file]"
for content in prompt_message.content
]
)
return prompt_messages
def _organize_prompt_messages(self):
# For ReAct strategy, use the agent prompt template
if self.config.strategy == AgentEntity.Strategy.CHAIN_OF_THOUGHT and self.config.prompt:
prompt_template = self.config.prompt.first_prompt
else:
prompt_template = self.app_config.prompt_template.simple_prompt_template or ""
self.history_prompt_messages = self._init_system_message(prompt_template, self.history_prompt_messages)
query_prompt_messages = self._organize_user_query(self.query or "", [])
self.history_prompt_messages = AgentHistoryPromptTransform(
model_config=self.model_config,
prompt_messages=[*query_prompt_messages, *self._current_thoughts],
history_messages=self.history_prompt_messages,
memory=self.memory,
).get_prompt()
prompt_messages = [*self.history_prompt_messages, *query_prompt_messages, *self._current_thoughts]
if len(self._current_thoughts) != 0:
# clear messages after the first iteration
prompt_messages = self._clear_user_prompt_image_messages(prompt_messages)
return prompt_messages
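
The consumption loop above relies on a Python detail that is easy to miss: a generator's `return` value travels on the `StopIteration` exception it raises, so the runner calls `next()` manually inside a `try` block instead of using a `for` loop, which would silently discard that value. A minimal, self-contained sketch of the idiom (the names here are illustrative, not from this codebase):

```python
from collections.abc import Generator


def stream_then_return() -> Generator[str, None, dict]:
    yield "chunk-1"
    yield "chunk-2"
    return {"text": "final answer"}  # travels on StopIteration


gen = stream_then_return()
result = None
while True:
    try:
        chunk = next(gen)
    except StopIteration as e:
        result = e.value  # capture the generator's return value
        break
    print(chunk)

assert result == {"text": "final answer"}
```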

View File

@@ -6,7 +6,7 @@ from typing import Union, cast
from sqlalchemy import select
-from core.agent.entities import AgentEntity, AgentToolEntity
+from core.agent.entities import AgentEntity, AgentToolEntity, ExecutionContext
from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
from core.app.apps.agent_chat.app_config_manager import AgentChatAppConfig
from core.app.apps.base_app_queue_manager import AppQueueManager
@@ -116,9 +116,20 @@ class BaseAgentRunner(AppRunner):
features = model_schema.features if model_schema and model_schema.features else []
self.stream_tool_call = ModelFeature.STREAM_TOOL_CALL in features
self.files = application_generate_entity.files if ModelFeature.VISION in features else []
self.model_features = features
self.query: str | None = ""
self._current_thoughts: list[PromptMessage] = []
def build_execution_context(self) -> ExecutionContext:
"""Build execution context."""
return ExecutionContext(
user_id=self.user_id,
app_id=self.app_config.app_id,
conversation_id=self.conversation.id,
message_id=self.message.id,
tenant_id=self.tenant_id,
)
def _repack_app_generate_entity(
self, app_generate_entity: AgentChatAppGenerateEntity
) -> AgentChatAppGenerateEntity:

View File

@@ -1,437 +0,0 @@
import json
import logging
from abc import ABC, abstractmethod
from collections.abc import Generator, Mapping, Sequence
from typing import Any
from core.agent.base_agent_runner import BaseAgentRunner
from core.agent.entities import AgentScratchpadUnit
from core.agent.output_parser.cot_output_parser import CotAgentOutputParser
from core.app.apps.base_app_queue_manager import PublishFrom
from core.app.entities.queue_entities import QueueAgentThoughtEvent, QueueMessageEndEvent, QueueMessageFileEvent
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage
from core.model_runtime.entities.message_entities import (
AssistantPromptMessage,
PromptMessage,
PromptMessageTool,
ToolPromptMessage,
UserPromptMessage,
)
from core.ops.ops_trace_manager import TraceQueueManager
from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform
from core.tools.__base.tool import Tool
from core.tools.entities.tool_entities import ToolInvokeMeta
from core.tools.tool_engine import ToolEngine
from core.workflow.nodes.agent.exc import AgentMaxIterationError
from models.model import Message
logger = logging.getLogger(__name__)
class CotAgentRunner(BaseAgentRunner, ABC):
_is_first_iteration = True
_ignore_observation_providers = ["wenxin"]
_historic_prompt_messages: list[PromptMessage]
_agent_scratchpad: list[AgentScratchpadUnit]
_instruction: str
_query: str
_prompt_messages_tools: Sequence[PromptMessageTool]
def run(
self,
message: Message,
query: str,
inputs: Mapping[str, str],
) -> Generator:
"""
Run Cot agent application
"""
app_generate_entity = self.application_generate_entity
self._repack_app_generate_entity(app_generate_entity)
self._init_react_state(query)
trace_manager = app_generate_entity.trace_manager
# ensure "Observation" is in the stop sequences for ReAct parsing
if "Observation" not in app_generate_entity.model_conf.stop:
if app_generate_entity.model_conf.provider not in self._ignore_observation_providers:
app_generate_entity.model_conf.stop.append("Observation")
app_config = self.app_config
assert app_config.agent
# init instruction
inputs = inputs or {}
instruction = app_config.prompt_template.simple_prompt_template or ""
self._instruction = self._fill_in_inputs_from_external_data_tools(instruction, inputs)
iteration_step = 1
max_iteration_steps = min(app_config.agent.max_iteration, 99) + 1
# convert tools into ModelRuntime Tool format
tool_instances, prompt_messages_tools = self._init_prompt_tools()
self._prompt_messages_tools = prompt_messages_tools
function_call_state = True
llm_usage: dict[str, LLMUsage | None] = {"usage": None}
final_answer = ""
prompt_messages: list = [] # Initialize prompt_messages
agent_thought_id = "" # Initialize agent_thought_id
def increase_usage(final_llm_usage_dict: dict[str, LLMUsage | None], usage: LLMUsage):
if not final_llm_usage_dict["usage"]:
final_llm_usage_dict["usage"] = usage
else:
llm_usage = final_llm_usage_dict["usage"]
llm_usage.prompt_tokens += usage.prompt_tokens
llm_usage.completion_tokens += usage.completion_tokens
llm_usage.total_tokens += usage.total_tokens
llm_usage.prompt_price += usage.prompt_price
llm_usage.completion_price += usage.completion_price
llm_usage.total_price += usage.total_price
model_instance = self.model_instance
while function_call_state and iteration_step <= max_iteration_steps:
# continue to run until there is not any tool call
function_call_state = False
if iteration_step == max_iteration_steps:
# the last iteration, remove all tools
self._prompt_messages_tools = []
message_file_ids: list[str] = []
agent_thought_id = self.create_agent_thought(
message_id=message.id, message="", tool_name="", tool_input="", messages_ids=message_file_ids
)
if iteration_step > 1:
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
# recalc llm max tokens
prompt_messages = self._organize_prompt_messages()
self.recalc_llm_max_tokens(self.model_config, prompt_messages)
# invoke model
chunks = model_instance.invoke_llm(
prompt_messages=prompt_messages,
model_parameters=app_generate_entity.model_conf.parameters,
tools=[],
stop=app_generate_entity.model_conf.stop,
stream=True,
user=self.user_id,
callbacks=[],
)
usage_dict: dict[str, LLMUsage | None] = {}
react_chunks = CotAgentOutputParser.handle_react_stream_output(chunks, usage_dict)
scratchpad = AgentScratchpadUnit(
agent_response="",
thought="",
action_str="",
observation="",
action=None,
)
# publish agent thought if it's first iteration
if iteration_step == 1:
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
for chunk in react_chunks:
if isinstance(chunk, AgentScratchpadUnit.Action):
action = chunk
# detect action
assert scratchpad.agent_response is not None
scratchpad.agent_response += json.dumps(chunk.model_dump())
scratchpad.action_str = json.dumps(chunk.model_dump())
scratchpad.action = action
else:
assert scratchpad.agent_response is not None
scratchpad.agent_response += chunk
assert scratchpad.thought is not None
scratchpad.thought += chunk
yield LLMResultChunk(
model=self.model_config.model,
prompt_messages=prompt_messages,
system_fingerprint="",
delta=LLMResultChunkDelta(index=0, message=AssistantPromptMessage(content=chunk), usage=None),
)
assert scratchpad.thought is not None
scratchpad.thought = scratchpad.thought.strip() or "I am thinking about how to help you"
self._agent_scratchpad.append(scratchpad)
# Check if max iteration is reached and model still wants to call tools
if iteration_step == max_iteration_steps and scratchpad.action:
if scratchpad.action.action_name.lower() != "final answer":
raise AgentMaxIterationError(app_config.agent.max_iteration)
# get llm usage
if "usage" in usage_dict:
if usage_dict["usage"] is not None:
increase_usage(llm_usage, usage_dict["usage"])
else:
usage_dict["usage"] = LLMUsage.empty_usage()
self.save_agent_thought(
agent_thought_id=agent_thought_id,
tool_name=(scratchpad.action.action_name if scratchpad.action and not scratchpad.is_final() else ""),
tool_input={scratchpad.action.action_name: scratchpad.action.action_input} if scratchpad.action else {},
tool_invoke_meta={},
thought=scratchpad.thought or "",
observation="",
answer=scratchpad.agent_response or "",
messages_ids=[],
llm_usage=usage_dict["usage"],
)
if not scratchpad.is_final():
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
if not scratchpad.action:
# failed to extract action, return final answer directly
final_answer = ""
else:
if scratchpad.action.action_name.lower() == "final answer":
# action is final answer, return final answer directly
try:
if isinstance(scratchpad.action.action_input, dict):
final_answer = json.dumps(scratchpad.action.action_input, ensure_ascii=False)
elif isinstance(scratchpad.action.action_input, str):
final_answer = scratchpad.action.action_input
else:
final_answer = f"{scratchpad.action.action_input}"
except TypeError:
final_answer = f"{scratchpad.action.action_input}"
else:
function_call_state = True
# action is tool call, invoke tool
tool_invoke_response, tool_invoke_meta = self._handle_invoke_action(
action=scratchpad.action,
tool_instances=tool_instances,
message_file_ids=message_file_ids,
trace_manager=trace_manager,
)
scratchpad.observation = tool_invoke_response
scratchpad.agent_response = tool_invoke_response
self.save_agent_thought(
agent_thought_id=agent_thought_id,
tool_name=scratchpad.action.action_name,
tool_input={scratchpad.action.action_name: scratchpad.action.action_input},
thought=scratchpad.thought or "",
observation={scratchpad.action.action_name: tool_invoke_response},
tool_invoke_meta={scratchpad.action.action_name: tool_invoke_meta.to_dict()},
answer=scratchpad.agent_response,
messages_ids=message_file_ids,
llm_usage=usage_dict["usage"],
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
# update prompt tool message
for prompt_tool in self._prompt_messages_tools:
self.update_prompt_message_tool(tool_instances[prompt_tool.name], prompt_tool)
iteration_step += 1
yield LLMResultChunk(
model=model_instance.model,
prompt_messages=prompt_messages,
delta=LLMResultChunkDelta(
index=0, message=AssistantPromptMessage(content=final_answer), usage=llm_usage["usage"]
),
system_fingerprint="",
)
# save agent thought
self.save_agent_thought(
agent_thought_id=agent_thought_id,
tool_name="",
tool_input={},
tool_invoke_meta={},
thought=final_answer,
observation={},
answer=final_answer,
messages_ids=[],
)
# publish end event
self.queue_manager.publish(
QueueMessageEndEvent(
llm_result=LLMResult(
model=model_instance.model,
prompt_messages=prompt_messages,
message=AssistantPromptMessage(content=final_answer),
usage=llm_usage["usage"] or LLMUsage.empty_usage(),
system_fingerprint="",
)
),
PublishFrom.APPLICATION_MANAGER,
)
def _handle_invoke_action(
self,
action: AgentScratchpadUnit.Action,
tool_instances: Mapping[str, Tool],
message_file_ids: list[str],
trace_manager: TraceQueueManager | None = None,
) -> tuple[str, ToolInvokeMeta]:
"""
handle invoke action
:param action: action
:param tool_instances: tool instances
:param message_file_ids: message file ids
:param trace_manager: trace manager
:return: observation, meta
"""
# action is tool call, invoke tool
tool_call_name = action.action_name
tool_call_args = action.action_input
tool_instance = tool_instances.get(tool_call_name)
if not tool_instance:
answer = f"there is not a tool named {tool_call_name}"
return answer, ToolInvokeMeta.error_instance(answer)
if isinstance(tool_call_args, str):
try:
tool_call_args = json.loads(tool_call_args)
except json.JSONDecodeError:
pass
# invoke tool
tool_invoke_response, message_files, tool_invoke_meta = ToolEngine.agent_invoke(
tool=tool_instance,
tool_parameters=tool_call_args,
user_id=self.user_id,
tenant_id=self.tenant_id,
message=self.message,
invoke_from=self.application_generate_entity.invoke_from,
agent_tool_callback=self.agent_callback,
trace_manager=trace_manager,
)
# publish files
for message_file_id in message_files:
# publish message file
self.queue_manager.publish(
QueueMessageFileEvent(message_file_id=message_file_id), PublishFrom.APPLICATION_MANAGER
)
# add message file ids
message_file_ids.append(message_file_id)
return tool_invoke_response, tool_invoke_meta
def _convert_dict_to_action(self, action: dict) -> AgentScratchpadUnit.Action:
"""
convert dict to action
"""
return AgentScratchpadUnit.Action(action_name=action["action"], action_input=action["action_input"])
def _fill_in_inputs_from_external_data_tools(self, instruction: str, inputs: Mapping[str, Any]) -> str:
"""
fill in inputs from external data tools
"""
for key, value in inputs.items():
try:
instruction = instruction.replace(f"{{{{{key}}}}}", str(value))
except Exception:
continue
return instruction
def _init_react_state(self, query):
"""
init agent scratchpad
"""
self._query = query
self._agent_scratchpad = []
self._historic_prompt_messages = self._organize_historic_prompt_messages()
@abstractmethod
def _organize_prompt_messages(self) -> list[PromptMessage]:
"""
organize prompt messages
"""
def _format_assistant_message(self, agent_scratchpad: list[AgentScratchpadUnit]) -> str:
"""
format assistant message
"""
message = ""
for scratchpad in agent_scratchpad:
if scratchpad.is_final():
message += f"Final Answer: {scratchpad.agent_response}"
else:
message += f"Thought: {scratchpad.thought}\n\n"
if scratchpad.action_str:
message += f"Action: {scratchpad.action_str}\n\n"
if scratchpad.observation:
message += f"Observation: {scratchpad.observation}\n\n"
return message
def _organize_historic_prompt_messages(
self, current_session_messages: list[PromptMessage] | None = None
) -> list[PromptMessage]:
"""
organize historic prompt messages
"""
result: list[PromptMessage] = []
scratchpads: list[AgentScratchpadUnit] = []
current_scratchpad: AgentScratchpadUnit | None = None
for message in self.history_prompt_messages:
if isinstance(message, AssistantPromptMessage):
if not current_scratchpad:
assert isinstance(message.content, str)
current_scratchpad = AgentScratchpadUnit(
agent_response=message.content,
thought=message.content or "I am thinking about how to help you",
action_str="",
action=None,
observation=None,
)
scratchpads.append(current_scratchpad)
if message.tool_calls:
try:
current_scratchpad.action = AgentScratchpadUnit.Action(
action_name=message.tool_calls[0].function.name,
action_input=json.loads(message.tool_calls[0].function.arguments),
)
current_scratchpad.action_str = json.dumps(current_scratchpad.action.to_dict())
except Exception:
logger.exception("Failed to parse tool call from assistant message")
elif isinstance(message, ToolPromptMessage):
if current_scratchpad:
assert isinstance(message.content, str)
current_scratchpad.observation = message.content
else:
raise NotImplementedError("expected str type")
elif isinstance(message, UserPromptMessage):
if scratchpads:
result.append(AssistantPromptMessage(content=self._format_assistant_message(scratchpads)))
scratchpads = []
current_scratchpad = None
result.append(message)
if scratchpads:
result.append(AssistantPromptMessage(content=self._format_assistant_message(scratchpads)))
historic_prompts = AgentHistoryPromptTransform(
model_config=self.model_config,
prompt_messages=current_session_messages or [],
history_messages=result,
memory=self.memory,
).get_prompt()
return historic_prompts

View File

@@ -1,118 +0,0 @@
import json
from core.agent.cot_agent_runner import CotAgentRunner
from core.file import file_manager
from core.model_runtime.entities import (
AssistantPromptMessage,
PromptMessage,
SystemPromptMessage,
TextPromptMessageContent,
UserPromptMessage,
)
from core.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes
from core.model_runtime.utils.encoders import jsonable_encoder
class CotChatAgentRunner(CotAgentRunner):
def _organize_system_prompt(self) -> SystemPromptMessage:
"""
Organize system prompt
"""
assert self.app_config.agent
assert self.app_config.agent.prompt
prompt_entity = self.app_config.agent.prompt
if not prompt_entity:
raise ValueError("Agent prompt configuration is not set")
first_prompt = prompt_entity.first_prompt
system_prompt = (
first_prompt.replace("{{instruction}}", self._instruction)
.replace("{{tools}}", json.dumps(jsonable_encoder(self._prompt_messages_tools)))
.replace("{{tool_names}}", ", ".join([tool.name for tool in self._prompt_messages_tools]))
)
return SystemPromptMessage(content=system_prompt)
def _organize_user_query(self, query, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Organize user query
"""
if self.files:
# get image detail config
image_detail_config = (
self.application_generate_entity.file_upload_config.image_config.detail
if (
self.application_generate_entity.file_upload_config
and self.application_generate_entity.file_upload_config.image_config
)
else None
)
image_detail_config = image_detail_config or ImagePromptMessageContent.DETAIL.LOW
prompt_message_contents: list[PromptMessageContentUnionTypes] = []
for file in self.files:
prompt_message_contents.append(
file_manager.to_prompt_message_content(
file,
image_detail_config=image_detail_config,
)
)
prompt_message_contents.append(TextPromptMessageContent(data=query))
prompt_messages.append(UserPromptMessage(content=prompt_message_contents))
else:
prompt_messages.append(UserPromptMessage(content=query))
return prompt_messages
def _organize_prompt_messages(self) -> list[PromptMessage]:
"""
Organize prompt messages
"""
# organize system prompt
system_message = self._organize_system_prompt()
# organize current assistant messages
agent_scratchpad = self._agent_scratchpad
if not agent_scratchpad:
assistant_messages = []
else:
assistant_message = AssistantPromptMessage(content="")
assistant_message.content = "" # FIXME: type check tell mypy that assistant_message.content is str
for unit in agent_scratchpad:
if unit.is_final():
assert isinstance(assistant_message.content, str)
assistant_message.content += f"Final Answer: {unit.agent_response}"
else:
assert isinstance(assistant_message.content, str)
assistant_message.content += f"Thought: {unit.thought}\n\n"
if unit.action_str:
assistant_message.content += f"Action: {unit.action_str}\n\n"
if unit.observation:
assistant_message.content += f"Observation: {unit.observation}\n\n"
assistant_messages = [assistant_message]
# query messages
query_messages = self._organize_user_query(self._query, [])
if assistant_messages:
# organize historic prompt messages
historic_messages = self._organize_historic_prompt_messages(
[system_message, *query_messages, *assistant_messages, UserPromptMessage(content="continue")]
)
messages = [
system_message,
*historic_messages,
*query_messages,
*assistant_messages,
UserPromptMessage(content="continue"),
]
else:
# organize historic prompt messages
historic_messages = self._organize_historic_prompt_messages([system_message, *query_messages])
messages = [system_message, *historic_messages, *query_messages]
# join all messages
return messages

View File

@@ -1,87 +0,0 @@
import json
from core.agent.cot_agent_runner import CotAgentRunner
from core.model_runtime.entities.message_entities import (
AssistantPromptMessage,
PromptMessage,
TextPromptMessageContent,
UserPromptMessage,
)
from core.model_runtime.utils.encoders import jsonable_encoder
class CotCompletionAgentRunner(CotAgentRunner):
def _organize_instruction_prompt(self) -> str:
"""
Organize instruction prompt
"""
if self.app_config.agent is None:
raise ValueError("Agent configuration is not set")
prompt_entity = self.app_config.agent.prompt
if prompt_entity is None:
raise ValueError("prompt entity is not set")
first_prompt = prompt_entity.first_prompt
system_prompt = (
first_prompt.replace("{{instruction}}", self._instruction)
.replace("{{tools}}", json.dumps(jsonable_encoder(self._prompt_messages_tools)))
.replace("{{tool_names}}", ", ".join([tool.name for tool in self._prompt_messages_tools]))
)
return system_prompt
def _organize_historic_prompt(self, current_session_messages: list[PromptMessage] | None = None) -> str:
"""
Organize historic prompt
"""
historic_prompt_messages = self._organize_historic_prompt_messages(current_session_messages)
historic_prompt = ""
for message in historic_prompt_messages:
if isinstance(message, UserPromptMessage):
historic_prompt += f"Question: {message.content}\n\n"
elif isinstance(message, AssistantPromptMessage):
if isinstance(message.content, str):
historic_prompt += message.content + "\n\n"
elif isinstance(message.content, list):
for content in message.content:
if not isinstance(content, TextPromptMessageContent):
continue
historic_prompt += content.data
return historic_prompt
def _organize_prompt_messages(self) -> list[PromptMessage]:
"""
Organize prompt messages
"""
# organize system prompt
system_prompt = self._organize_instruction_prompt()
# organize historic prompt messages
historic_prompt = self._organize_historic_prompt()
# organize current assistant messages
agent_scratchpad = self._agent_scratchpad
assistant_prompt = ""
for unit in agent_scratchpad or []:
if unit.is_final():
assistant_prompt += f"Final Answer: {unit.agent_response}"
else:
assistant_prompt += f"Thought: {unit.thought}\n\n"
if unit.action_str:
assistant_prompt += f"Action: {unit.action_str}\n\n"
if unit.observation:
assistant_prompt += f"Observation: {unit.observation}\n\n"
# query messages
query_prompt = f"Question: {self._query}"
# join all messages
prompt = (
system_prompt.replace("{{historic_messages}}", historic_prompt)
.replace("{{agent_scratchpad}}", assistant_prompt)
.replace("{{query}}", query_prompt)
)
return [UserPromptMessage(content=prompt)]

View File

@@ -1,3 +1,5 @@
+import uuid
+from collections.abc import Mapping
from enum import StrEnum
from typing import Any, Union
@@ -92,3 +94,96 @@ class AgentInvokeMessage(ToolInvokeMessage):
"""
pass
class ExecutionContext(BaseModel):
"""Execution context containing trace and audit information.
This context carries all the IDs and metadata that are not part of
the core business logic but needed for tracing, auditing, and
correlation purposes.
"""
user_id: str | None = None
app_id: str | None = None
conversation_id: str | None = None
message_id: str | None = None
tenant_id: str | None = None
@classmethod
def create_minimal(cls, user_id: str | None = None) -> "ExecutionContext":
"""Create a minimal context with only essential fields."""
return cls(user_id=user_id)
def to_dict(self) -> dict[str, Any]:
"""Convert to dictionary for passing to legacy code."""
return {
"user_id": self.user_id,
"app_id": self.app_id,
"conversation_id": self.conversation_id,
"message_id": self.message_id,
"tenant_id": self.tenant_id,
}
def with_updates(self, **kwargs) -> "ExecutionContext":
"""Create a new context with updated fields."""
data = self.to_dict()
data.update(kwargs)
return ExecutionContext(
user_id=data.get("user_id"),
app_id=data.get("app_id"),
conversation_id=data.get("conversation_id"),
message_id=data.get("message_id"),
tenant_id=data.get("tenant_id"),
)
class AgentLog(BaseModel):
"""
Agent Log.
"""
class LogType(StrEnum):
"""Type of agent log entry."""
ROUND = "round" # A complete iteration round
THOUGHT = "thought" # LLM thinking/reasoning
TOOL_CALL = "tool_call" # Tool invocation
class LogMetadata(StrEnum):
STARTED_AT = "started_at"
FINISHED_AT = "finished_at"
ELAPSED_TIME = "elapsed_time"
TOTAL_PRICE = "total_price"
TOTAL_TOKENS = "total_tokens"
PROVIDER = "provider"
CURRENCY = "currency"
LLM_USAGE = "llm_usage"
ICON = "icon"
ICON_DARK = "icon_dark"
class LogStatus(StrEnum):
START = "start"
ERROR = "error"
SUCCESS = "success"
id: str = Field(default_factory=lambda: str(uuid.uuid4()), description="The id of the log")
label: str = Field(..., description="The label of the log")
log_type: LogType = Field(..., description="The type of the log")
parent_id: str | None = Field(default=None, description="Leave empty for root log")
error: str | None = Field(default=None, description="The error message")
status: LogStatus = Field(..., description="The status of the log")
data: Mapping[str, Any] = Field(..., description="Detailed log data")
metadata: Mapping[LogMetadata, Any] = Field(default={}, description="The metadata of the log")
class AgentResult(BaseModel):
"""
Agent execution result.
"""
text: str = Field(default="", description="The generated text")
files: list[Any] = Field(default_factory=list, description="Files produced during execution")
usage: Any | None = Field(default=None, description="LLM usage statistics")
finish_reason: str | None = Field(default=None, description="Reason for completion")
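
Given the definitions above, a typical flow builds a minimal context first and then derives per-message variants without mutating the original; a small usage sketch (the IDs are placeholders):

```python
from core.agent.entities import ExecutionContext

ctx = ExecutionContext.create_minimal(user_id="user-123")
msg_ctx = ctx.with_updates(conversation_id="conv-1", message_id="msg-1")

assert ctx.conversation_id is None      # the original context is unchanged
assert msg_ctx.user_id == "user-123"    # existing fields carry over
print(msg_ctx.to_dict())
```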

View File

@@ -1,468 +0,0 @@
import json
import logging
from collections.abc import Generator
from copy import deepcopy
from typing import Any, Union
from core.agent.base_agent_runner import BaseAgentRunner
from core.app.apps.base_app_queue_manager import PublishFrom
from core.app.entities.queue_entities import QueueAgentThoughtEvent, QueueMessageEndEvent, QueueMessageFileEvent
from core.file import file_manager
from core.model_runtime.entities import (
AssistantPromptMessage,
LLMResult,
LLMResultChunk,
LLMResultChunkDelta,
LLMUsage,
PromptMessage,
PromptMessageContentType,
SystemPromptMessage,
TextPromptMessageContent,
ToolPromptMessage,
UserPromptMessage,
)
from core.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes
from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform
from core.tools.entities.tool_entities import ToolInvokeMeta
from core.tools.tool_engine import ToolEngine
from core.workflow.nodes.agent.exc import AgentMaxIterationError
from models.model import Message
logger = logging.getLogger(__name__)
class FunctionCallAgentRunner(BaseAgentRunner):
def run(self, message: Message, query: str, **kwargs: Any) -> Generator[LLMResultChunk, None, None]:
"""
Run FunctionCall agent application
"""
self.query = query
app_generate_entity = self.application_generate_entity
app_config = self.app_config
assert app_config is not None, "app_config is required"
assert app_config.agent is not None, "app_config.agent is required"
# convert tools into ModelRuntime Tool format
tool_instances, prompt_messages_tools = self._init_prompt_tools()
assert app_config.agent
iteration_step = 1
max_iteration_steps = min(app_config.agent.max_iteration, 99) + 1
# continue to run until there is not any tool call
function_call_state = True
llm_usage: dict[str, LLMUsage | None] = {"usage": None}
final_answer = ""
prompt_messages: list = [] # Initialize prompt_messages
# get tracing instance
trace_manager = app_generate_entity.trace_manager
def increase_usage(final_llm_usage_dict: dict[str, LLMUsage | None], usage: LLMUsage):
if not final_llm_usage_dict["usage"]:
final_llm_usage_dict["usage"] = usage
else:
llm_usage = final_llm_usage_dict["usage"]
llm_usage.prompt_tokens += usage.prompt_tokens
llm_usage.completion_tokens += usage.completion_tokens
llm_usage.total_tokens += usage.total_tokens
llm_usage.prompt_price += usage.prompt_price
llm_usage.completion_price += usage.completion_price
llm_usage.total_price += usage.total_price
model_instance = self.model_instance
while function_call_state and iteration_step <= max_iteration_steps:
function_call_state = False
if iteration_step == max_iteration_steps:
# the last iteration, remove all tools
prompt_messages_tools = []
message_file_ids: list[str] = []
agent_thought_id = self.create_agent_thought(
message_id=message.id, message="", tool_name="", tool_input="", messages_ids=message_file_ids
)
# recalc llm max tokens
prompt_messages = self._organize_prompt_messages()
self.recalc_llm_max_tokens(self.model_config, prompt_messages)
# invoke model
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult] = model_instance.invoke_llm(
prompt_messages=prompt_messages,
model_parameters=app_generate_entity.model_conf.parameters,
tools=prompt_messages_tools,
stop=app_generate_entity.model_conf.stop,
stream=self.stream_tool_call,
user=self.user_id,
callbacks=[],
)
tool_calls: list[tuple[str, str, dict[str, Any]]] = []
# save full response
response = ""
# save tool call names and inputs
tool_call_names = ""
tool_call_inputs = ""
current_llm_usage = None
if isinstance(chunks, Generator):
is_first_chunk = True
for chunk in chunks:
if is_first_chunk:
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
is_first_chunk = False
# check if there is any tool call
if self.check_tool_calls(chunk):
function_call_state = True
tool_calls.extend(self.extract_tool_calls(chunk) or [])
tool_call_names = ";".join([tool_call[1] for tool_call in tool_calls])
try:
tool_call_inputs = json.dumps(
{tool_call[1]: tool_call[2] for tool_call in tool_calls}, ensure_ascii=False
)
except TypeError:
# fallback: force ASCII to handle non-serializable objects
tool_call_inputs = json.dumps({tool_call[1]: tool_call[2] for tool_call in tool_calls})
if chunk.delta.message and chunk.delta.message.content:
if isinstance(chunk.delta.message.content, list):
for content in chunk.delta.message.content:
response += content.data
else:
response += str(chunk.delta.message.content)
if chunk.delta.usage:
increase_usage(llm_usage, chunk.delta.usage)
current_llm_usage = chunk.delta.usage
yield chunk
else:
result = chunks
# check if there is any tool call
if self.check_blocking_tool_calls(result):
function_call_state = True
tool_calls.extend(self.extract_blocking_tool_calls(result) or [])
tool_call_names = ";".join([tool_call[1] for tool_call in tool_calls])
try:
tool_call_inputs = json.dumps(
{tool_call[1]: tool_call[2] for tool_call in tool_calls}, ensure_ascii=False
)
except TypeError:
# fallback: force ASCII to handle non-serializable objects
tool_call_inputs = json.dumps({tool_call[1]: tool_call[2] for tool_call in tool_calls})
if result.usage:
increase_usage(llm_usage, result.usage)
current_llm_usage = result.usage
if result.message and result.message.content:
if isinstance(result.message.content, list):
for content in result.message.content:
response += content.data
else:
response += str(result.message.content)
if not result.message.content:
result.message.content = ""
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
yield LLMResultChunk(
model=model_instance.model,
prompt_messages=result.prompt_messages,
system_fingerprint=result.system_fingerprint,
delta=LLMResultChunkDelta(
index=0,
message=result.message,
usage=result.usage,
),
)
assistant_message = AssistantPromptMessage(content=response, tool_calls=[])
if tool_calls:
assistant_message.tool_calls = [
AssistantPromptMessage.ToolCall(
id=tool_call[0],
type="function",
function=AssistantPromptMessage.ToolCall.ToolCallFunction(
name=tool_call[1], arguments=json.dumps(tool_call[2], ensure_ascii=False)
),
)
for tool_call in tool_calls
]
self._current_thoughts.append(assistant_message)
# save thought
self.save_agent_thought(
agent_thought_id=agent_thought_id,
tool_name=tool_call_names,
tool_input=tool_call_inputs,
thought=response,
tool_invoke_meta=None,
observation=None,
answer=response,
messages_ids=[],
llm_usage=current_llm_usage,
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
final_answer += response + "\n"
# Check if max iteration is reached and model still wants to call tools
if iteration_step == max_iteration_steps and tool_calls:
raise AgentMaxIterationError(app_config.agent.max_iteration)
# call tools
tool_responses = []
for tool_call_id, tool_call_name, tool_call_args in tool_calls:
tool_instance = tool_instances.get(tool_call_name)
if not tool_instance:
tool_response = {
"tool_call_id": tool_call_id,
"tool_call_name": tool_call_name,
"tool_response": f"there is not a tool named {tool_call_name}",
"meta": ToolInvokeMeta.error_instance(f"there is not a tool named {tool_call_name}").to_dict(),
}
else:
# invoke tool
tool_invoke_response, message_files, tool_invoke_meta = ToolEngine.agent_invoke(
tool=tool_instance,
tool_parameters=tool_call_args,
user_id=self.user_id,
tenant_id=self.tenant_id,
message=self.message,
invoke_from=self.application_generate_entity.invoke_from,
agent_tool_callback=self.agent_callback,
trace_manager=trace_manager,
app_id=self.application_generate_entity.app_config.app_id,
message_id=self.message.id,
conversation_id=self.conversation.id,
)
# publish files
for message_file_id in message_files:
# publish message file
self.queue_manager.publish(
QueueMessageFileEvent(message_file_id=message_file_id), PublishFrom.APPLICATION_MANAGER
)
# add message file ids
message_file_ids.append(message_file_id)
tool_response = {
"tool_call_id": tool_call_id,
"tool_call_name": tool_call_name,
"tool_response": tool_invoke_response,
"meta": tool_invoke_meta.to_dict(),
}
tool_responses.append(tool_response)
if tool_response["tool_response"] is not None:
self._current_thoughts.append(
ToolPromptMessage(
content=str(tool_response["tool_response"]),
tool_call_id=tool_call_id,
name=tool_call_name,
)
)
if len(tool_responses) > 0:
# save agent thought
self.save_agent_thought(
agent_thought_id=agent_thought_id,
tool_name="",
tool_input="",
thought="",
tool_invoke_meta={
tool_response["tool_call_name"]: tool_response["meta"] for tool_response in tool_responses
},
observation={
tool_response["tool_call_name"]: tool_response["tool_response"]
for tool_response in tool_responses
},
answer="",
messages_ids=message_file_ids,
)
self.queue_manager.publish(
QueueAgentThoughtEvent(agent_thought_id=agent_thought_id), PublishFrom.APPLICATION_MANAGER
)
# update prompt tool
for prompt_tool in prompt_messages_tools:
self.update_prompt_message_tool(tool_instances[prompt_tool.name], prompt_tool)
iteration_step += 1
# publish end event
self.queue_manager.publish(
QueueMessageEndEvent(
llm_result=LLMResult(
model=model_instance.model,
prompt_messages=prompt_messages,
message=AssistantPromptMessage(content=final_answer),
usage=llm_usage["usage"] or LLMUsage.empty_usage(),
system_fingerprint="",
)
),
PublishFrom.APPLICATION_MANAGER,
)
def check_tool_calls(self, llm_result_chunk: LLMResultChunk) -> bool:
"""
Check if there is any tool call in llm result chunk
"""
if llm_result_chunk.delta.message.tool_calls:
return True
return False
def check_blocking_tool_calls(self, llm_result: LLMResult) -> bool:
"""
Check if there is any blocking tool call in llm result
"""
if llm_result.message.tool_calls:
return True
return False
def extract_tool_calls(self, llm_result_chunk: LLMResultChunk) -> list[tuple[str, str, dict[str, Any]]]:
"""
Extract tool calls from llm result chunk
Returns:
List[Tuple[str, str, Dict[str, Any]]]: [(tool_call_id, tool_call_name, tool_call_args)]
"""
tool_calls = []
for prompt_message in llm_result_chunk.delta.message.tool_calls:
args = {}
if prompt_message.function.arguments != "":
args = json.loads(prompt_message.function.arguments)
tool_calls.append(
(
prompt_message.id,
prompt_message.function.name,
args,
)
)
return tool_calls
def extract_blocking_tool_calls(self, llm_result: LLMResult) -> list[tuple[str, str, dict[str, Any]]]:
"""
Extract blocking tool calls from llm result
Returns:
List[Tuple[str, str, Dict[str, Any]]]: [(tool_call_id, tool_call_name, tool_call_args)]
"""
tool_calls = []
for prompt_message in llm_result.message.tool_calls:
args = {}
if prompt_message.function.arguments != "":
args = json.loads(prompt_message.function.arguments)
tool_calls.append(
(
prompt_message.id,
prompt_message.function.name,
args,
)
)
return tool_calls
def _init_system_message(self, prompt_template: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Initialize system message
"""
if not prompt_messages and prompt_template:
return [
SystemPromptMessage(content=prompt_template),
]
if prompt_messages and not isinstance(prompt_messages[0], SystemPromptMessage) and prompt_template:
prompt_messages.insert(0, SystemPromptMessage(content=prompt_template))
return prompt_messages or []
def _organize_user_query(self, query: str, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
Organize user query
"""
if self.files:
# get image detail config
image_detail_config = (
self.application_generate_entity.file_upload_config.image_config.detail
if (
self.application_generate_entity.file_upload_config
and self.application_generate_entity.file_upload_config.image_config
)
else None
)
image_detail_config = image_detail_config or ImagePromptMessageContent.DETAIL.LOW
prompt_message_contents: list[PromptMessageContentUnionTypes] = []
for file in self.files:
prompt_message_contents.append(
file_manager.to_prompt_message_content(
file,
image_detail_config=image_detail_config,
)
)
prompt_message_contents.append(TextPromptMessageContent(data=query))
prompt_messages.append(UserPromptMessage(content=prompt_message_contents))
else:
prompt_messages.append(UserPromptMessage(content=query))
return prompt_messages
def _clear_user_prompt_image_messages(self, prompt_messages: list[PromptMessage]) -> list[PromptMessage]:
"""
For now, GPT models support both function calling and vision only on the first iteration,
so we replace image content in the prompt messages with text placeholders after that first iteration.
"""
prompt_messages = deepcopy(prompt_messages)
for prompt_message in prompt_messages:
if isinstance(prompt_message, UserPromptMessage):
if isinstance(prompt_message.content, list):
prompt_message.content = "\n".join(
[
content.data
if content.type == PromptMessageContentType.TEXT
else "[image]"
if content.type == PromptMessageContentType.IMAGE
else "[file]"
for content in prompt_message.content
]
)
return prompt_messages
def _organize_prompt_messages(self):
prompt_template = self.app_config.prompt_template.simple_prompt_template or ""
self.history_prompt_messages = self._init_system_message(prompt_template, self.history_prompt_messages)
query_prompt_messages = self._organize_user_query(self.query or "", [])
self.history_prompt_messages = AgentHistoryPromptTransform(
model_config=self.model_config,
prompt_messages=[*query_prompt_messages, *self._current_thoughts],
history_messages=self.history_prompt_messages,
memory=self.memory,
).get_prompt()
prompt_messages = [*self.history_prompt_messages, *query_prompt_messages, *self._current_thoughts]
if len(self._current_thoughts) != 0:
# clear messages after the first iteration
prompt_messages = self._clear_user_prompt_image_messages(prompt_messages)
return prompt_messages

View File

@@ -0,0 +1,55 @@
# Agent Patterns
A unified agent pattern module that powers both Agent V2 workflow nodes and agent applications. Strategies share a common execution contract while adapting to model capabilities and tool availability.
## Overview
The module applies a strategy pattern around LLM/tool orchestration. `StrategyFactory` auto-selects the best implementation based on model features or an explicit agent strategy, and each strategy streams logs and usage consistently.
## Key Features
- **Dual strategies**
- `FunctionCallStrategy`: uses native LLM function/tool calling when the model exposes `TOOL_CALL`, `MULTI_TOOL_CALL`, or `STREAM_TOOL_CALL`.
- `ReActStrategy`: ReAct (reasoning + acting) flow driven by `CotAgentOutputParser`, used when function calling is unavailable or explicitly requested.
- **Explicit or auto selection**
- `StrategyFactory.create_strategy` prefers an explicit `AgentEntity.Strategy` (FUNCTION_CALLING or CHAIN_OF_THOUGHT).
- Otherwise it falls back to function calling when tool-call features exist, or ReAct when they do not.
- **Unified execution contract**
- `AgentPattern.run` yields streaming `AgentLog` entries and `LLMResultChunk` data, returning an `AgentResult` with text, files, usage, and `finish_reason`.
- Iterations are configurable and hard-capped at 99 rounds; the last round forces a final answer by withholding tools.
- **Tool handling and hooks**
- Tools convert to `PromptMessageTool` objects before invocation.
- Optional `tool_invoke_hook` lets callers override tool execution (e.g., agent apps) while workflow runs use `ToolEngine.generic_invoke`.
- Tool outputs support text, links, JSON, variables, blobs, retriever resources, and file attachments; `target=="self"` files are reloaded into model context, others are returned as outputs.
- **File-aware arguments**
- Tool args accept `[File: <id>]` or `[Files: <id1, id2>]` placeholders that resolve to `File` objects before invocation, enabling models to reference uploaded files safely (see the sketch after this list).
- **ReAct prompt shaping**
- System prompts replace `{{instruction}}`, `{{tools}}`, and `{{tool_names}}` placeholders.
- Adds `Observation` to stop sequences and appends scratchpad text so the model sees prior Thought/Action/Observation history.
- **Observability and accounting**
- Standardized `AgentLog` entries for rounds, model thoughts, and tool calls, including usage aggregation (`LLMUsage`) across streaming and non-streaming paths.
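
The file placeholder matching is a whole-string match: `[File: <id>]` resolves only when it is the entire argument value, and unknown ids fall back to the original string. A simplified sketch of the rule from `_process_file_reference`, with plain dicts standing in for `File` objects:

```python
import re

SINGLE = re.compile(r"^\[File:\s*([^\]]+)\]$")
MULTI = re.compile(r"^\[Files:\s*([^\]]+)\]$")

files = {"f1": {"id": "f1", "name": "report.pdf"}}  # stand-in for File objects


def resolve(value: str):
    if m := SINGLE.match(value.strip()):
        return files.get(m.group(1).strip(), value)
    if m := MULTI.match(value.strip()):
        ids = [fid.strip() for fid in m.group(1).split(",")]
        matched = [files[fid] for fid in ids if fid in files]
        return matched or value  # keep the original string when nothing matches
    return value


assert resolve("[File: f1]")["name"] == "report.pdf"
assert resolve("[Files: f1, f2]") == [files["f1"]]   # unknown ids are skipped
assert resolve("plain text") == "plain text"
```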
## Architecture
```
agent/patterns/
├── base.py # Shared utilities: logging, usage, tool invocation, file handling
├── function_call.py # Native function-calling loop with tool execution
├── react.py # ReAct loop with CoT parsing and scratchpad wiring
└── strategy_factory.py # Strategy selection by model features or explicit override
```
## Usage
- For auto-selection:
- Call `StrategyFactory.create_strategy(model_features, model_instance, context, tools, files, ...)` and run the returned strategy with prompt messages and model params.
- For explicit behavior:
- Pass `agent_strategy=AgentEntity.Strategy.FUNCTION_CALLING` to force native calls (falls back to ReAct if unsupported), or `CHAIN_OF_THOUGHT` to force ReAct.
- Both strategies stream chunks and logs; collect the generator output until it returns an `AgentResult`.
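
A condensed sketch of that flow, assuming the caller already holds a `ModelInstance`, tools, files, and prompt messages (keyword names follow the signatures shown in this PR; treat them as illustrative):

```python
from core.agent.entities import AgentResult, ExecutionContext
from core.agent.patterns import StrategyFactory

# model_features, model_instance, tools, files, and prompt_messages are
# assumed to come from the caller's existing setup.
strategy = StrategyFactory.create_strategy(
    model_features=model_features,
    model_instance=model_instance,
    context=ExecutionContext.create_minimal(user_id="user-123"),
    tools=tools,
    files=files,
)

generator = strategy.run(
    prompt_messages=prompt_messages,
    model_parameters={"temperature": 0.7},
    stop=[],
    stream=True,
)

result: AgentResult | None = None
while True:
    try:
        output = next(generator)  # LLMResultChunk or AgentLog
    except StopIteration as e:
        result = e.value  # AgentResult: text, files, usage, finish_reason
        break
```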
## Integration Points
- **Model runtime**: delegates to `ModelInstance.invoke_llm` for both streaming and non-streaming calls.
- **Tool system**: defaults to `ToolEngine.generic_invoke`, with `tool_invoke_hook` for custom callers (a hook sketch follows this list).
- **Files**: flows through `File` objects for tool inputs/outputs and model-context attachments.
- **Execution context**: `ExecutionContext` fields (user/app/conversation/message) propagate to tool invocations and logging.
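
For custom callers, the hook contract is the `ToolInvokeHook` alias defined in `base.py`: the hook receives the tool instance, the parsed arguments, and the tool name, and returns the response text, message file ids, and a `ToolInvokeMeta`. A hedged sketch of a minimal hook (the body is illustrative; a real hook would run the tool through the caller's own engine):

```python
from typing import Any

from core.tools.__base.tool import Tool
from core.tools.entities.tool_entities import ToolInvokeMeta


def audit_hook(tool: Tool, tool_args: dict[str, Any], tool_name: str) -> tuple[str, list[str], ToolInvokeMeta]:
    # Illustrative stub: log the call and report the tool as unavailable.
    print(f"invoking {tool_name} with {tool_args}")
    return f"{tool_name} is unavailable here", [], ToolInvokeMeta.error_instance("sandboxed")


# Wired in at construction time, per AgentPattern.__init__:
# strategy = FunctionCallStrategy(..., tool_invoke_hook=audit_hook)
```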

View File

@@ -0,0 +1,19 @@
"""Agent patterns module.
This module provides different strategies for agent execution:
- FunctionCallStrategy: Uses native function/tool calling
- ReActStrategy: Uses ReAct (Reasoning + Acting) approach
- StrategyFactory: Factory for creating strategies based on model features
"""
from .base import AgentPattern
from .function_call import FunctionCallStrategy
from .react import ReActStrategy
from .strategy_factory import StrategyFactory
__all__ = [
"AgentPattern",
"FunctionCallStrategy",
"ReActStrategy",
"StrategyFactory",
]

View File

@@ -0,0 +1,474 @@
"""Base class for agent strategies."""
from __future__ import annotations
import json
import re
import time
from abc import ABC, abstractmethod
from collections.abc import Callable, Generator
from typing import TYPE_CHECKING, Any
from core.agent.entities import AgentLog, AgentResult, ExecutionContext
from core.file import File
from core.model_manager import ModelInstance
from core.model_runtime.entities import (
AssistantPromptMessage,
LLMResult,
LLMResultChunk,
LLMResultChunkDelta,
PromptMessage,
PromptMessageTool,
)
from core.model_runtime.entities.llm_entities import LLMUsage
from core.model_runtime.entities.message_entities import TextPromptMessageContent
from core.tools.entities.tool_entities import ToolInvokeMessage, ToolInvokeMeta
if TYPE_CHECKING:
from core.tools.__base.tool import Tool
# Type alias for tool invoke hook
# Returns: (response_content, message_file_ids, tool_invoke_meta)
ToolInvokeHook = Callable[["Tool", dict[str, Any], str], tuple[str, list[str], ToolInvokeMeta]]
class AgentPattern(ABC):
"""Base class for agent execution strategies."""
def __init__(
self,
model_instance: ModelInstance,
tools: list[Tool],
context: ExecutionContext,
max_iterations: int = 10,
workflow_call_depth: int = 0,
files: list[File] | None = None,
tool_invoke_hook: ToolInvokeHook | None = None,
):
"""Initialize the agent strategy."""
self.model_instance = model_instance
self.tools = tools
self.context = context
self.max_iterations = min(max_iterations, 99) # Cap at 99 iterations
self.workflow_call_depth = workflow_call_depth
self.files: list[File] = files or []  # avoid sharing a mutable default list
self.tool_invoke_hook = tool_invoke_hook
@abstractmethod
def run(
self,
prompt_messages: list[PromptMessage],
model_parameters: dict[str, Any],
stop: list[str] | None = None,
stream: bool = True,
) -> Generator[LLMResultChunk | AgentLog, None, AgentResult]:
"""Execute the agent strategy."""
pass
def _accumulate_usage(self, total_usage: dict[str, Any], delta_usage: LLMUsage) -> None:
"""Accumulate LLM usage statistics."""
if not total_usage.get("usage"):
# Create a copy to avoid modifying the original
total_usage["usage"] = LLMUsage(
prompt_tokens=delta_usage.prompt_tokens,
prompt_unit_price=delta_usage.prompt_unit_price,
prompt_price_unit=delta_usage.prompt_price_unit,
prompt_price=delta_usage.prompt_price,
completion_tokens=delta_usage.completion_tokens,
completion_unit_price=delta_usage.completion_unit_price,
completion_price_unit=delta_usage.completion_price_unit,
completion_price=delta_usage.completion_price,
total_tokens=delta_usage.total_tokens,
total_price=delta_usage.total_price,
currency=delta_usage.currency,
latency=delta_usage.latency,
)
else:
current: LLMUsage = total_usage["usage"]
current.prompt_tokens += delta_usage.prompt_tokens
current.completion_tokens += delta_usage.completion_tokens
current.total_tokens += delta_usage.total_tokens
current.prompt_price += delta_usage.prompt_price
current.completion_price += delta_usage.completion_price
current.total_price += delta_usage.total_price
def _extract_content(self, content: Any) -> str:
"""Extract text content from message content."""
if isinstance(content, list):
# Content items are PromptMessageContentUnionTypes
text_parts = []
for c in content:
# Check if it's a TextPromptMessageContent (which has data attribute)
if isinstance(c, TextPromptMessageContent):
text_parts.append(c.data)
return "".join(text_parts)
return str(content)
def _has_tool_calls(self, chunk: LLMResultChunk) -> bool:
"""Check if chunk contains tool calls."""
# LLMResultChunk always has delta attribute
return bool(chunk.delta.message and chunk.delta.message.tool_calls)
def _has_tool_calls_result(self, result: LLMResult) -> bool:
"""Check if result contains tool calls (non-streaming)."""
# LLMResult always has message attribute
return bool(result.message and result.message.tool_calls)
def _extract_tool_calls(self, chunk: LLMResultChunk) -> list[tuple[str, str, dict[str, Any]]]:
"""Extract tool calls from streaming chunk."""
tool_calls: list[tuple[str, str, dict[str, Any]]] = []
if chunk.delta.message and chunk.delta.message.tool_calls:
for tool_call in chunk.delta.message.tool_calls:
if tool_call.function:
try:
args = json.loads(tool_call.function.arguments) if tool_call.function.arguments else {}
except json.JSONDecodeError:
args = {}
tool_calls.append((tool_call.id or "", tool_call.function.name, args))
return tool_calls
def _extract_tool_calls_result(self, result: LLMResult) -> list[tuple[str, str, dict[str, Any]]]:
"""Extract tool calls from non-streaming result."""
tool_calls = []
if result.message and result.message.tool_calls:
for tool_call in result.message.tool_calls:
if tool_call.function:
try:
args = json.loads(tool_call.function.arguments) if tool_call.function.arguments else {}
except json.JSONDecodeError:
args = {}
tool_calls.append((tool_call.id or "", tool_call.function.name, args))
return tool_calls
def _extract_text_from_message(self, message: PromptMessage) -> str:
"""Extract text content from a prompt message."""
# PromptMessage always has content attribute
content = message.content
if isinstance(content, str):
return content
elif isinstance(content, list):
# Extract text from content list
text_parts = []
for item in content:
if isinstance(item, TextPromptMessageContent):
text_parts.append(item.data)
return " ".join(text_parts)
return ""
def _get_tool_metadata(self, tool_instance: Tool) -> dict[AgentLog.LogMetadata, Any]:
"""Get metadata for a tool including provider and icon info."""
from core.tools.tool_manager import ToolManager
metadata: dict[AgentLog.LogMetadata, Any] = {}
if tool_instance.entity and tool_instance.entity.identity:
identity = tool_instance.entity.identity
if identity.provider:
metadata[AgentLog.LogMetadata.PROVIDER] = identity.provider
# Get icon using ToolManager for proper URL generation
tenant_id = self.context.tenant_id
if tenant_id and identity.provider:
try:
provider_type = tool_instance.tool_provider_type()
icon = ToolManager.get_tool_icon(tenant_id, provider_type, identity.provider)
if isinstance(icon, str):
metadata[AgentLog.LogMetadata.ICON] = icon
elif isinstance(icon, dict):
# Handle icon dict with background/content or light/dark variants
metadata[AgentLog.LogMetadata.ICON] = icon
except Exception:
# Fallback to identity.icon if ToolManager fails
if identity.icon:
metadata[AgentLog.LogMetadata.ICON] = identity.icon
elif identity.icon:
metadata[AgentLog.LogMetadata.ICON] = identity.icon
return metadata
def _create_log(
self,
label: str,
log_type: AgentLog.LogType,
status: AgentLog.LogStatus,
data: dict[str, Any] | None = None,
parent_id: str | None = None,
extra_metadata: dict[AgentLog.LogMetadata, Any] | None = None,
) -> AgentLog:
"""Create a new AgentLog with standard metadata."""
metadata: dict[AgentLog.LogMetadata, Any] = {
AgentLog.LogMetadata.STARTED_AT: time.perf_counter(),
}
if extra_metadata:
metadata.update(extra_metadata)
return AgentLog(
label=label,
log_type=log_type,
status=status,
data=data or {},
parent_id=parent_id,
metadata=metadata,
)
def _finish_log(
self,
log: AgentLog,
data: dict[str, Any] | None = None,
usage: LLMUsage | None = None,
) -> AgentLog:
"""Finish an AgentLog by updating its status and metadata."""
log.status = AgentLog.LogStatus.SUCCESS
if data is not None:
log.data = data
# Calculate elapsed time
started_at = log.metadata.get(AgentLog.LogMetadata.STARTED_AT, time.perf_counter())
finished_at = time.perf_counter()
# Update metadata
log.metadata = {
**log.metadata,
AgentLog.LogMetadata.FINISHED_AT: finished_at,
# Calculate elapsed time in seconds
AgentLog.LogMetadata.ELAPSED_TIME: round(finished_at - started_at, 4),
}
# Add usage information if provided
if usage:
log.metadata.update(
{
AgentLog.LogMetadata.TOTAL_PRICE: usage.total_price,
AgentLog.LogMetadata.CURRENCY: usage.currency,
AgentLog.LogMetadata.TOTAL_TOKENS: usage.total_tokens,
AgentLog.LogMetadata.LLM_USAGE: usage,
}
)
return log
def _replace_file_references(self, tool_args: dict[str, Any]) -> dict[str, Any]:
"""
Replace file references in tool arguments with actual File objects.
Args:
tool_args: Dictionary of tool arguments
Returns:
Updated tool arguments with file references replaced
"""
# Process each argument in the dictionary
processed_args: dict[str, Any] = {}
for key, value in tool_args.items():
processed_args[key] = self._process_file_reference(value)
return processed_args
def _process_file_reference(self, data: Any) -> Any:
"""
Recursively process data to replace file references.
Supports both single file [File: file_id] and multiple files [Files: file_id1, file_id2, ...].
Args:
data: The data to process (can be dict, list, str, or other types)
Returns:
Processed data with file references replaced
"""
single_file_pattern = re.compile(r"^\[File:\s*([^\]]+)\]$")
multiple_files_pattern = re.compile(r"^\[Files:\s*([^\]]+)\]$")
if isinstance(data, dict):
# Process dictionary recursively
return {key: self._process_file_reference(value) for key, value in data.items()}
elif isinstance(data, list):
# Process list recursively
return [self._process_file_reference(item) for item in data]
elif isinstance(data, str):
# Check for single file pattern [File: file_id]
single_match = single_file_pattern.match(data.strip())
if single_match:
file_id = single_match.group(1).strip()
# Find the file in self.files
for file in self.files:
if file.id and str(file.id) == file_id:
return file
# If file not found, return original value
return data
# Check for multiple files pattern [Files: file_id1, file_id2, ...]
multiple_match = multiple_files_pattern.match(data.strip())
if multiple_match:
file_ids_str = multiple_match.group(1).strip()
# Split by comma and strip whitespace
file_ids = [fid.strip() for fid in file_ids_str.split(",")]
# Find all matching files
matched_files: list[File] = []
for file_id in file_ids:
for file in self.files:
if file.id and str(file.id) == file_id:
matched_files.append(file)
break
# Return list of files if any were found, otherwise return original
return matched_files or data
return data
else:
# Return other types as-is
return data
def _create_text_chunk(self, text: str, prompt_messages: list[PromptMessage]) -> LLMResultChunk:
"""Create a text chunk for streaming."""
return LLMResultChunk(
model=self.model_instance.model,
prompt_messages=prompt_messages,
delta=LLMResultChunkDelta(
index=0,
message=AssistantPromptMessage(content=text),
usage=None,
),
system_fingerprint="",
)
def _invoke_tool(
self,
tool_instance: Tool,
tool_args: dict[str, Any],
tool_name: str,
) -> tuple[str, list[File], ToolInvokeMeta | None]:
"""
Invoke a tool and collect its response.
Args:
tool_instance: The tool instance to invoke
tool_args: Tool arguments
tool_name: Name of the tool
Returns:
Tuple of (response_content, tool_files, tool_invoke_meta)
"""
# Process tool_args to replace file references with actual File objects
tool_args = self._replace_file_references(tool_args)
# If a tool invoke hook is set, use it instead of generic_invoke
if self.tool_invoke_hook:
response_content, _, tool_invoke_meta = self.tool_invoke_hook(tool_instance, tool_args, tool_name)
# Note: message_file_ids are stored in DB, we don't convert them to File objects here
# The caller (AgentAppRunner) handles file publishing
return response_content, [], tool_invoke_meta
# Default: use generic_invoke for workflow scenarios
# Import here to avoid circular import
from core.tools.tool_engine import DifyWorkflowCallbackHandler, ToolEngine
tool_response = ToolEngine().generic_invoke(
tool=tool_instance,
tool_parameters=tool_args,
user_id=self.context.user_id or "",
workflow_tool_callback=DifyWorkflowCallbackHandler(),
workflow_call_depth=self.workflow_call_depth,
app_id=self.context.app_id,
conversation_id=self.context.conversation_id,
message_id=self.context.message_id,
)
# Collect response and files
response_content = ""
tool_files: list[File] = []
for response in tool_response:
if response.type == ToolInvokeMessage.MessageType.TEXT:
assert isinstance(response.message, ToolInvokeMessage.TextMessage)
response_content += response.message.text
elif response.type == ToolInvokeMessage.MessageType.LINK:
# Handle link messages
if isinstance(response.message, ToolInvokeMessage.TextMessage):
response_content += f"[Link: {response.message.text}]"
elif response.type == ToolInvokeMessage.MessageType.IMAGE:
# Handle image URL messages
if isinstance(response.message, ToolInvokeMessage.TextMessage):
response_content += f"[Image: {response.message.text}]"
elif response.type == ToolInvokeMessage.MessageType.IMAGE_LINK:
# Handle image link messages
if isinstance(response.message, ToolInvokeMessage.TextMessage):
response_content += f"[Image: {response.message.text}]"
elif response.type == ToolInvokeMessage.MessageType.BINARY_LINK:
# Handle binary file link messages
if isinstance(response.message, ToolInvokeMessage.TextMessage):
filename = response.meta.get("filename", "file") if response.meta else "file"
response_content += f"[File: {filename} - {response.message.text}]"
elif response.type == ToolInvokeMessage.MessageType.JSON:
# Handle JSON messages
if isinstance(response.message, ToolInvokeMessage.JsonMessage):
response_content += json.dumps(response.message.json_object, ensure_ascii=False, indent=2)
elif response.type == ToolInvokeMessage.MessageType.BLOB:
# Handle blob messages - convert to text representation
if isinstance(response.message, ToolInvokeMessage.BlobMessage):
mime_type = (
response.meta.get("mime_type", "application/octet-stream")
if response.meta
else "application/octet-stream"
)
size = len(response.message.blob)
response_content += f"[Binary data: {mime_type}, size: {size} bytes]"
elif response.type == ToolInvokeMessage.MessageType.VARIABLE:
# Handle variable messages
if isinstance(response.message, ToolInvokeMessage.VariableMessage):
var_name = response.message.variable_name
var_value = response.message.variable_value
if isinstance(var_value, str):
response_content += var_value
else:
response_content += f"[Variable {var_name}: {json.dumps(var_value, ensure_ascii=False)}]"
elif response.type == ToolInvokeMessage.MessageType.BLOB_CHUNK:
# Handle blob chunk messages - these are parts of a larger blob
if isinstance(response.message, ToolInvokeMessage.BlobChunkMessage):
response_content += f"[Blob chunk {response.message.sequence}: {len(response.message.blob)} bytes]"
elif response.type == ToolInvokeMessage.MessageType.RETRIEVER_RESOURCES:
# Handle retriever resources messages
if isinstance(response.message, ToolInvokeMessage.RetrieverResourceMessage):
response_content += response.message.context
elif response.type == ToolInvokeMessage.MessageType.FILE:
# Extract file from meta
if response.meta and "file" in response.meta:
file = response.meta["file"]
if isinstance(file, File):
# Check if file is for model or tool output
if response.meta.get("target") == "self":
# File is for model - add to files for next prompt
self.files.append(file)
response_content += f"File '{file.filename}' has been loaded into your context."
else:
# File is tool output
tool_files.append(file)
return response_content, tool_files, None
def _find_tool_by_name(self, tool_name: str) -> Tool | None:
"""Find a tool instance by its name."""
for tool in self.tools:
if tool.entity.identity.name == tool_name:
return tool
return None
def _convert_tools_to_prompt_format(self) -> list[PromptMessageTool]:
"""Convert tools to prompt message format."""
prompt_tools: list[PromptMessageTool] = []
for tool in self.tools:
prompt_tools.append(tool.to_prompt_message_tool())
return prompt_tools
def _update_usage_with_empty(self, llm_usage: dict[str, Any]) -> None:
"""Initialize usage tracking with empty usage if not set."""
if "usage" not in llm_usage or llm_usage["usage"] is None:
llm_usage["usage"] = LLMUsage.empty_usage()

View File

@ -0,0 +1,299 @@
"""Function Call strategy implementation."""
import json
from collections.abc import Generator
from typing import Any, Union
from core.agent.entities import AgentLog, AgentResult
from core.file import File
from core.model_runtime.entities import (
AssistantPromptMessage,
LLMResult,
LLMResultChunk,
LLMResultChunkDelta,
LLMUsage,
PromptMessage,
PromptMessageTool,
ToolPromptMessage,
)
from core.tools.entities.tool_entities import ToolInvokeMeta
from .base import AgentPattern
class FunctionCallStrategy(AgentPattern):
"""Function Call strategy using model's native tool calling capability."""
def run(
self,
prompt_messages: list[PromptMessage],
model_parameters: dict[str, Any],
stop: list[str] = [],
stream: bool = True,
) -> Generator[LLMResultChunk | AgentLog, None, AgentResult]:
"""Execute the function call agent strategy."""
# Convert tools to prompt format
prompt_tools: list[PromptMessageTool] = self._convert_tools_to_prompt_format()
# Initialize tracking
iteration_step: int = 1
max_iterations: int = self.max_iterations + 1
function_call_state: bool = True
total_usage: dict[str, LLMUsage | None] = {"usage": None}
messages: list[PromptMessage] = list(prompt_messages) # Create mutable copy
final_text: str = ""
finish_reason: str | None = None
output_files: list[File] = [] # Track files produced by tools
while function_call_state and iteration_step <= max_iterations:
function_call_state = False
round_log = self._create_log(
label=f"ROUND {iteration_step}",
log_type=AgentLog.LogType.ROUND,
status=AgentLog.LogStatus.START,
data={},
)
yield round_log
# On last iteration, remove tools to force final answer
current_tools: list[PromptMessageTool] = [] if iteration_step == max_iterations else prompt_tools
model_log = self._create_log(
label=f"{self.model_instance.model} Thought",
log_type=AgentLog.LogType.THOUGHT,
status=AgentLog.LogStatus.START,
data={},
parent_id=round_log.id,
extra_metadata={
AgentLog.LogMetadata.PROVIDER: self.model_instance.provider,
},
)
yield model_log
# Track usage for this round only
round_usage: dict[str, LLMUsage | None] = {"usage": None}
# Invoke model
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult] = self.model_instance.invoke_llm(
prompt_messages=messages,
model_parameters=model_parameters,
tools=current_tools,
stop=stop,
stream=stream,
user=self.context.user_id,
callbacks=[],
)
# Process response
tool_calls, response_content, chunk_finish_reason = yield from self._handle_chunks(
chunks, round_usage, model_log
)
messages.append(self._create_assistant_message(response_content, tool_calls))
# Accumulate to total usage
round_usage_value = round_usage.get("usage")
if round_usage_value:
self._accumulate_usage(total_usage, round_usage_value)
# Update final text if no tool calls (this is likely the final answer)
if not tool_calls:
final_text = response_content
# Update finish reason
if chunk_finish_reason:
finish_reason = chunk_finish_reason
# Process tool calls
tool_outputs: dict[str, str] = {}
if tool_calls:
function_call_state = True
# Execute tools
for tool_call_id, tool_name, tool_args in tool_calls:
tool_response, tool_files, _ = yield from self._handle_tool_call(
tool_name, tool_args, tool_call_id, messages, round_log
)
tool_outputs[tool_name] = tool_response
# Track files produced by tools
output_files.extend(tool_files)
yield self._finish_log(
round_log,
data={
"llm_result": response_content,
"tool_calls": [
{"name": tc[1], "args": tc[2], "output": tool_outputs.get(tc[1], "")} for tc in tool_calls
]
if tool_calls
else [],
"final_answer": final_text if not function_call_state else None,
},
usage=round_usage.get("usage"),
)
iteration_step += 1
# Return final result
from core.agent.entities import AgentResult
return AgentResult(
text=final_text,
files=output_files,
usage=total_usage.get("usage") or LLMUsage.empty_usage(),
finish_reason=finish_reason,
)
def _handle_chunks(
self,
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult],
llm_usage: dict[str, LLMUsage | None],
start_log: AgentLog,
) -> Generator[
LLMResultChunk | AgentLog,
None,
tuple[list[tuple[str, str, dict[str, Any]]], str, str | None],
]:
"""Handle LLM response chunks and extract tool calls and content.
Returns a tuple of (tool_calls, response_content, finish_reason).
"""
tool_calls: list[tuple[str, str, dict[str, Any]]] = []
response_content: str = ""
finish_reason: str | None = None
if isinstance(chunks, Generator):
# Streaming response
for chunk in chunks:
# Extract tool calls
if self._has_tool_calls(chunk):
tool_calls.extend(self._extract_tool_calls(chunk))
# Extract content
if chunk.delta.message and chunk.delta.message.content:
response_content += self._extract_content(chunk.delta.message.content)
# Track usage
if chunk.delta.usage:
self._accumulate_usage(llm_usage, chunk.delta.usage)
# Capture finish reason
if chunk.delta.finish_reason:
finish_reason = chunk.delta.finish_reason
yield chunk
else:
# Non-streaming response
result: LLMResult = chunks
if self._has_tool_calls_result(result):
tool_calls.extend(self._extract_tool_calls_result(result))
if result.message and result.message.content:
response_content += self._extract_content(result.message.content)
if result.usage:
self._accumulate_usage(llm_usage, result.usage)
# Convert to streaming format
yield LLMResultChunk(
model=result.model,
prompt_messages=result.prompt_messages,
delta=LLMResultChunkDelta(index=0, message=result.message, usage=result.usage),
)
yield self._finish_log(
start_log,
data={
"result": response_content,
},
usage=llm_usage.get("usage"),
)
return tool_calls, response_content, finish_reason
def _create_assistant_message(
self, content: str, tool_calls: list[tuple[str, str, dict[str, Any]]] | None = None
) -> AssistantPromptMessage:
"""Create assistant message with tool calls."""
if tool_calls is None:
return AssistantPromptMessage(content=content)
return AssistantPromptMessage(
content=content or "",
tool_calls=[
AssistantPromptMessage.ToolCall(
id=tc[0],
type="function",
function=AssistantPromptMessage.ToolCall.ToolCallFunction(name=tc[1], arguments=json.dumps(tc[2])),
)
for tc in tool_calls
],
)
def _handle_tool_call(
self,
tool_name: str,
tool_args: dict[str, Any],
tool_call_id: str,
messages: list[PromptMessage],
round_log: AgentLog,
) -> Generator[AgentLog, None, tuple[str, list[File], ToolInvokeMeta | None]]:
"""Handle a single tool call and return response with files and meta."""
# Find tool
tool_instance = self._find_tool_by_name(tool_name)
if not tool_instance:
raise ValueError(f"Tool {tool_name} not found")
# Get tool metadata (provider, icon, etc.)
tool_metadata = self._get_tool_metadata(tool_instance)
# Create tool call log
tool_call_log = self._create_log(
label=f"CALL {tool_name}",
log_type=AgentLog.LogType.TOOL_CALL,
status=AgentLog.LogStatus.START,
data={
"tool_call_id": tool_call_id,
"tool_name": tool_name,
"tool_args": tool_args,
},
parent_id=round_log.id,
extra_metadata=tool_metadata,
)
yield tool_call_log
# Invoke tool using base class method with error handling
try:
response_content, tool_files, tool_invoke_meta = self._invoke_tool(tool_instance, tool_args, tool_name)
yield self._finish_log(
tool_call_log,
data={
**tool_call_log.data,
"output": response_content,
"files": len(tool_files),
"meta": tool_invoke_meta.to_dict() if tool_invoke_meta else None,
},
)
final_content = response_content or "Tool executed successfully"
# Add tool response to messages
messages.append(
ToolPromptMessage(
content=final_content,
tool_call_id=tool_call_id,
name=tool_name,
)
)
return response_content, tool_files, tool_invoke_meta
except Exception as e:
# Tool invocation failed, yield error log
error_message = str(e)
tool_call_log.status = AgentLog.LogStatus.ERROR
tool_call_log.error = error_message
tool_call_log.data = {
**tool_call_log.data,
"error": error_message,
}
yield tool_call_log
# Add error message to conversation
error_content = f"Tool execution failed: {error_message}"
messages.append(
ToolPromptMessage(
content=error_content,
tool_call_id=tool_call_id,
name=tool_name,
)
)
return error_content, [], None
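Note that `run()` above is a generator that yields `AgentLog`/`LLMResultChunk` events and *returns* the `AgentResult`, so a caller must capture the generator's return value. A minimal driver sketch, with illustrative names that are not the actual caller in this PR:
# Hedged sketch: Python stores a generator's `return` value in
# StopIteration.value; a delegating generator receives it via `yield from`.
from collections.abc import Generator

def fake_run() -> Generator[str, None, dict]:
    yield "ROUND 1 log"
    yield "text chunk"
    return {"text": "final answer", "files": []}

def drive() -> dict:
    gen = fake_run()
    try:
        while True:
            event = next(gen)
            print("event:", event)  # forward chunks/logs downstream
    except StopIteration as stop:
        return stop.value  # the AgentResult-equivalent

assert drive()["text"] == "final answer"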

View File

@ -0,0 +1,418 @@
"""ReAct strategy implementation."""
from __future__ import annotations
import json
from collections.abc import Generator
from typing import TYPE_CHECKING, Any, Union
from core.agent.entities import AgentLog, AgentResult, AgentScratchpadUnit, ExecutionContext
from core.agent.output_parser.cot_output_parser import CotAgentOutputParser
from core.file import File
from core.model_manager import ModelInstance
from core.model_runtime.entities import (
AssistantPromptMessage,
LLMResult,
LLMResultChunk,
LLMResultChunkDelta,
PromptMessage,
SystemPromptMessage,
)
from .base import AgentPattern, ToolInvokeHook
if TYPE_CHECKING:
from core.tools.__base.tool import Tool
class ReActStrategy(AgentPattern):
"""ReAct strategy using reasoning and acting approach."""
def __init__(
self,
model_instance: ModelInstance,
tools: list[Tool],
context: ExecutionContext,
max_iterations: int = 10,
workflow_call_depth: int = 0,
files: list[File] = [],
tool_invoke_hook: ToolInvokeHook | None = None,
instruction: str = "",
):
"""Initialize the ReAct strategy with instruction support."""
super().__init__(
model_instance=model_instance,
tools=tools,
context=context,
max_iterations=max_iterations,
workflow_call_depth=workflow_call_depth,
files=files,
tool_invoke_hook=tool_invoke_hook,
)
self.instruction = instruction
def run(
self,
prompt_messages: list[PromptMessage],
model_parameters: dict[str, Any],
stop: list[str] = [],
stream: bool = True,
) -> Generator[LLMResultChunk | AgentLog, None, AgentResult]:
"""Execute the ReAct agent strategy."""
# Initialize tracking
agent_scratchpad: list[AgentScratchpadUnit] = []
iteration_step: int = 1
max_iterations: int = self.max_iterations + 1
react_state: bool = True
total_usage: dict[str, Any] = {"usage": None}
output_files: list[File] = [] # Track files produced by tools
final_text: str = ""
finish_reason: str | None = None
# Add "Observation" to stop sequences
if "Observation" not in stop:
stop = stop.copy()
stop.append("Observation")
while react_state and iteration_step <= max_iterations:
react_state = False
round_log = self._create_log(
label=f"ROUND {iteration_step}",
log_type=AgentLog.LogType.ROUND,
status=AgentLog.LogStatus.START,
data={},
)
yield round_log
# Build prompt with/without tools based on iteration
include_tools = iteration_step < max_iterations
current_messages = self._build_prompt_with_react_format(
prompt_messages, agent_scratchpad, include_tools, self.instruction
)
model_log = self._create_log(
label=f"{self.model_instance.model} Thought",
log_type=AgentLog.LogType.THOUGHT,
status=AgentLog.LogStatus.START,
data={},
parent_id=round_log.id,
extra_metadata={
AgentLog.LogMetadata.PROVIDER: self.model_instance.provider,
},
)
yield model_log
# Track usage for this round only
round_usage: dict[str, Any] = {"usage": None}
# Use current messages directly (files are handled by base class if needed)
messages_to_use = current_messages
# Invoke model
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult] = self.model_instance.invoke_llm(
prompt_messages=messages_to_use,
model_parameters=model_parameters,
stop=stop,
stream=stream,
user=self.context.user_id or "",
callbacks=[],
)
# Process response
scratchpad, chunk_finish_reason = yield from self._handle_chunks(
chunks, round_usage, model_log, current_messages
)
agent_scratchpad.append(scratchpad)
# Accumulate to total usage
round_usage_value = round_usage.get("usage")
if round_usage_value:
self._accumulate_usage(total_usage, round_usage_value)
# Update finish reason
if chunk_finish_reason:
finish_reason = chunk_finish_reason
# Check if we have an action to execute
if scratchpad.action and scratchpad.action.action_name.lower() != "final answer":
react_state = True
# Execute tool
observation, tool_files = yield from self._handle_tool_call(
scratchpad.action, current_messages, round_log
)
scratchpad.observation = observation
# Track files produced by tools
output_files.extend(tool_files)
# Add observation to scratchpad for display
yield self._create_text_chunk(f"\nObservation: {observation}\n", current_messages)
else:
# Extract final answer
if scratchpad.action and scratchpad.action.action_input:
final_answer = scratchpad.action.action_input
if isinstance(final_answer, dict):
final_answer = json.dumps(final_answer, ensure_ascii=False)
final_text = str(final_answer)
elif scratchpad.thought:
# If no action but we have thought, use thought as final answer
final_text = scratchpad.thought
yield self._finish_log(
round_log,
data={
"thought": scratchpad.thought,
"action": scratchpad.action_str if scratchpad.action else None,
"observation": scratchpad.observation or None,
"final_answer": final_text if not react_state else None,
},
usage=round_usage.get("usage"),
)
iteration_step += 1
# Return final result
from core.agent.entities import AgentResult
return AgentResult(
text=final_text, files=output_files, usage=total_usage.get("usage"), finish_reason=finish_reason
)
def _build_prompt_with_react_format(
self,
original_messages: list[PromptMessage],
agent_scratchpad: list[AgentScratchpadUnit],
include_tools: bool = True,
instruction: str = "",
) -> list[PromptMessage]:
"""Build prompt messages with ReAct format."""
# Copy messages to avoid modifying original
messages = list(original_messages)
# Find and update the system prompt that should already exist
system_prompt_found = False
for i, msg in enumerate(messages):
if isinstance(msg, SystemPromptMessage):
system_prompt_found = True
# The system prompt from frontend already has the template, just replace placeholders
# Format tools
tools_str = ""
tool_names = []
if include_tools and self.tools:
# Convert tools to prompt message tools format
prompt_tools = [tool.to_prompt_message_tool() for tool in self.tools]
tool_names = [tool.name for tool in prompt_tools]
# Format tools as JSON for comprehensive information
from core.model_runtime.utils.encoders import jsonable_encoder
tools_str = json.dumps(jsonable_encoder(prompt_tools), indent=2)
tool_names_str = ", ".join(f'"{name}"' for name in tool_names)
else:
tools_str = "No tools available"
tool_names_str = ""
# Replace placeholders in the existing system prompt
updated_content = msg.content
assert isinstance(updated_content, str)
updated_content = updated_content.replace("{{instruction}}", instruction)
updated_content = updated_content.replace("{{tools}}", tools_str)
updated_content = updated_content.replace("{{tool_names}}", tool_names_str)
# Create new SystemPromptMessage with updated content
messages[i] = SystemPromptMessage(content=updated_content)
break
# If no system prompt was found (unexpected when the frontend supplies
# the template), continue; the scratchpad is still appended below
if not system_prompt_found:
pass
# Format agent scratchpad
scratchpad_str = ""
if agent_scratchpad:
scratchpad_parts: list[str] = []
for unit in agent_scratchpad:
if unit.thought:
scratchpad_parts.append(f"Thought: {unit.thought}")
if unit.action_str:
scratchpad_parts.append(f"Action:\n```\n{unit.action_str}\n```")
if unit.observation:
scratchpad_parts.append(f"Observation: {unit.observation}")
scratchpad_str = "\n".join(scratchpad_parts)
# If there's a scratchpad, append it to the last message
if scratchpad_str:
messages.append(AssistantPromptMessage(content=scratchpad_str))
return messages
def _handle_chunks(
self,
chunks: Union[Generator[LLMResultChunk, None, None], LLMResult],
llm_usage: dict[str, Any],
model_log: AgentLog,
current_messages: list[PromptMessage],
) -> Generator[
LLMResultChunk | AgentLog,
None,
tuple[AgentScratchpadUnit, str | None],
]:
"""Handle LLM response chunks and extract action/thought.
Returns a tuple of (scratchpad_unit, finish_reason).
"""
usage_dict: dict[str, Any] = {}
# Convert non-streaming to streaming format if needed
if isinstance(chunks, LLMResult):
# Create a generator from the LLMResult
def result_to_chunks() -> Generator[LLMResultChunk, None, None]:
yield LLMResultChunk(
model=chunks.model,
prompt_messages=chunks.prompt_messages,
delta=LLMResultChunkDelta(
index=0,
message=chunks.message,
usage=chunks.usage,
finish_reason=None, # LLMResult doesn't have finish_reason, only streaming chunks do
),
system_fingerprint=chunks.system_fingerprint or "",
)
streaming_chunks = result_to_chunks()
else:
streaming_chunks = chunks
react_chunks = CotAgentOutputParser.handle_react_stream_output(streaming_chunks, usage_dict)
# Initialize scratchpad unit
scratchpad = AgentScratchpadUnit(
agent_response="",
thought="",
action_str="",
observation="",
action=None,
)
finish_reason: str | None = None
# Process chunks
for chunk in react_chunks:
if isinstance(chunk, AgentScratchpadUnit.Action):
# Action detected
action_str = json.dumps(chunk.model_dump())
scratchpad.agent_response = (scratchpad.agent_response or "") + action_str
scratchpad.action_str = action_str
scratchpad.action = chunk
yield self._create_text_chunk(json.dumps(chunk.model_dump()), current_messages)
else:
# Text chunk
chunk_text = str(chunk)
scratchpad.agent_response = (scratchpad.agent_response or "") + chunk_text
scratchpad.thought = (scratchpad.thought or "") + chunk_text
yield self._create_text_chunk(chunk_text, current_messages)
# Update usage
if usage_dict.get("usage"):
if llm_usage.get("usage"):
self._accumulate_usage(llm_usage, usage_dict["usage"])
else:
llm_usage["usage"] = usage_dict["usage"]
# Clean up thought
scratchpad.thought = (scratchpad.thought or "").strip() or "I am thinking about how to help you"
# Finish model log
yield self._finish_log(
model_log,
data={
"thought": scratchpad.thought,
"action": scratchpad.action_str if scratchpad.action else None,
},
usage=llm_usage.get("usage"),
)
return scratchpad, finish_reason
def _handle_tool_call(
self,
action: AgentScratchpadUnit.Action,
prompt_messages: list[PromptMessage],
round_log: AgentLog,
) -> Generator[AgentLog, None, tuple[str, list[File]]]:
"""Handle tool call and return observation with files."""
tool_name = action.action_name
tool_args: dict[str, Any] | str = action.action_input
# Find tool instance first to get metadata
tool_instance = self._find_tool_by_name(tool_name)
tool_metadata = self._get_tool_metadata(tool_instance) if tool_instance else {}
# Start tool log with tool metadata
tool_log = self._create_log(
label=f"CALL {tool_name}",
log_type=AgentLog.LogType.TOOL_CALL,
status=AgentLog.LogStatus.START,
data={
"tool_name": tool_name,
"tool_args": tool_args,
},
parent_id=round_log.id,
extra_metadata=tool_metadata,
)
yield tool_log
if not tool_instance:
# Finish tool log with error
yield self._finish_log(
tool_log,
data={
**tool_log.data,
"error": f"Tool {tool_name} not found",
},
)
return f"Tool {tool_name} not found", []
# Ensure tool_args is a dict
tool_args_dict: dict[str, Any]
if isinstance(tool_args, str):
try:
tool_args_dict = json.loads(tool_args)
except json.JSONDecodeError:
tool_args_dict = {"input": tool_args}
elif not isinstance(tool_args, dict):
tool_args_dict = {"input": str(tool_args)}
else:
tool_args_dict = tool_args
# Invoke tool using base class method with error handling
try:
response_content, tool_files, tool_invoke_meta = self._invoke_tool(tool_instance, tool_args_dict, tool_name)
# Finish tool log
yield self._finish_log(
tool_log,
data={
**tool_log.data,
"output": response_content,
"files": len(tool_files),
"meta": tool_invoke_meta.to_dict() if tool_invoke_meta else None,
},
)
return response_content or "Tool executed successfully", tool_files
except Exception as e:
# Tool invocation failed, yield error log
error_message = str(e)
tool_log.status = AgentLog.LogStatus.ERROR
tool_log.error = error_message
tool_log.data = {
**tool_log.data,
"error": error_message,
}
yield tool_log
return f"Tool execution failed: {error_message}", []

View File

@ -0,0 +1,107 @@
"""Strategy factory for creating agent strategies."""
from __future__ import annotations
from typing import TYPE_CHECKING
from core.agent.entities import AgentEntity, ExecutionContext
from core.file.models import File
from core.model_manager import ModelInstance
from core.model_runtime.entities.model_entities import ModelFeature
from .base import AgentPattern, ToolInvokeHook
from .function_call import FunctionCallStrategy
from .react import ReActStrategy
if TYPE_CHECKING:
from core.tools.__base.tool import Tool
class StrategyFactory:
"""Factory for creating agent strategies based on model features."""
# Tool calling related features
TOOL_CALL_FEATURES = {ModelFeature.TOOL_CALL, ModelFeature.MULTI_TOOL_CALL, ModelFeature.STREAM_TOOL_CALL}
@staticmethod
def create_strategy(
model_features: list[ModelFeature],
model_instance: ModelInstance,
context: ExecutionContext,
tools: list[Tool],
files: list[File],
max_iterations: int = 10,
workflow_call_depth: int = 0,
agent_strategy: AgentEntity.Strategy | None = None,
tool_invoke_hook: ToolInvokeHook | None = None,
instruction: str = "",
) -> AgentPattern:
"""
Create an appropriate strategy based on model features.
Args:
model_features: List of model features/capabilities
model_instance: Model instance to use
context: Execution context containing trace/audit information
tools: Available tools
files: Available files
max_iterations: Maximum iterations for the strategy
workflow_call_depth: Depth of workflow calls
agent_strategy: Optional explicit strategy override
tool_invoke_hook: Optional hook for custom tool invocation (e.g., agent_invoke)
instruction: Optional instruction for ReAct strategy
Returns:
AgentPattern instance
"""
# If explicit strategy is provided and it's Function Calling, try to use it if supported
if agent_strategy == AgentEntity.Strategy.FUNCTION_CALLING:
if set(model_features) & StrategyFactory.TOOL_CALL_FEATURES:
return FunctionCallStrategy(
model_instance=model_instance,
context=context,
tools=tools,
files=files,
max_iterations=max_iterations,
workflow_call_depth=workflow_call_depth,
tool_invoke_hook=tool_invoke_hook,
)
# Fallback to ReAct if FC is requested but not supported
# If explicit strategy is Chain of Thought (ReAct)
if agent_strategy == AgentEntity.Strategy.CHAIN_OF_THOUGHT:
return ReActStrategy(
model_instance=model_instance,
context=context,
tools=tools,
files=files,
max_iterations=max_iterations,
workflow_call_depth=workflow_call_depth,
tool_invoke_hook=tool_invoke_hook,
instruction=instruction,
)
# Default auto-selection logic
if set(model_features) & StrategyFactory.TOOL_CALL_FEATURES:
# Model supports native function calling
return FunctionCallStrategy(
model_instance=model_instance,
context=context,
tools=tools,
files=files,
max_iterations=max_iterations,
workflow_call_depth=workflow_call_depth,
tool_invoke_hook=tool_invoke_hook,
)
else:
# Use ReAct strategy for models without function calling
return ReActStrategy(
model_instance=model_instance,
context=context,
tools=tools,
files=files,
max_iterations=max_iterations,
workflow_call_depth=workflow_call_depth,
tool_invoke_hook=tool_invoke_hook,
instruction=instruction,
)
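The selection rule above in miniature: native tool-call features win, otherwise fall back to ReAct, and an explicit Function Calling request silently degrades when unsupported. A hedged sketch with a hypothetical `Feature` enum standing in for `ModelFeature`:
# Hedged sketch of the strategy auto-selection logic.
from enum import StrEnum

class Feature(StrEnum):
    TOOL_CALL = "tool-call"
    MULTI_TOOL_CALL = "multi-tool-call"
    STREAM_TOOL_CALL = "stream-tool-call"
    VISION = "vision"

TOOL_CALL_FEATURES = {Feature.TOOL_CALL, Feature.MULTI_TOOL_CALL, Feature.STREAM_TOOL_CALL}

def pick_strategy(model_features: list[Feature], explicit: str | None = None) -> str:
    if explicit == "function_calling" and set(model_features) & TOOL_CALL_FEATURES:
        return "FunctionCallStrategy"
    if explicit == "chain_of_thought":
        return "ReActStrategy"
    # Auto-selection: prefer native function calling when available
    return "FunctionCallStrategy" if set(model_features) & TOOL_CALL_FEATURES else "ReActStrategy"

assert pick_strategy([Feature.TOOL_CALL]) == "FunctionCallStrategy"
assert pick_strategy([Feature.VISION]) == "ReActStrategy"
# FC requested but unsupported: falls through to ReAct
assert pick_strategy([Feature.VISION], explicit="function_calling") == "ReActStrategy"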

View File

@ -24,11 +24,13 @@ from core.app.apps.message_based_app_generator import MessageBasedAppGenerator
from core.app.apps.message_based_app_queue_manager import MessageBasedAppQueueManager
from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, InvokeFrom
from core.app.entities.task_entities import ChatbotAppBlockingResponse, ChatbotAppStreamResponse
from core.app.layers.sandbox_layer import SandboxLayer
from core.helper.trace_id_helper import extract_external_trace_id_from_args
from core.model_runtime.errors.invoke import InvokeAuthorizationError
from core.ops.ops_trace_manager import TraceQueueManager
from core.prompt.utils.get_thread_messages_length import get_thread_messages_length
from core.repositories import DifyCoreRepositoryFactory
from core.sandbox.storage.archive_storage import ArchiveSandboxStorage
from core.workflow.repositories.draft_variable_repository import (
DraftVariableSaverFactory,
)
@ -40,6 +42,7 @@ from factories import file_factory
from libs.flask_utils import preserve_flask_contexts
from models import Account, App, Conversation, EndUser, Message, Workflow, WorkflowNodeExecutionTriggeredFrom
from models.enums import WorkflowRunTriggeredFrom
from models.workflow_features import WorkflowFeatures
from services.conversation_service import ConversationService
from services.workflow_draft_variable_service import (
DraftVarLoader,
@ -512,6 +515,22 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
if workflow is None:
raise ValueError("Workflow not found")
graph_engine_layers: tuple = ()
if workflow.get_feature(WorkflowFeatures.SANDBOX).enabled:
if application_generate_entity.workflow_run_id is None:
raise ValueError("workflow_run_id is required when sandbox is enabled")
graph_engine_layers = (
SandboxLayer(
tenant_id=application_generate_entity.app_config.tenant_id,
app_id=application_generate_entity.app_config.app_id,
sandbox_id=application_generate_entity.workflow_run_id,
sandbox_storage=ArchiveSandboxStorage(
tenant_id=application_generate_entity.app_config.tenant_id,
sandbox_id=application_generate_entity.workflow_run_id,
),
),
)
# Determine system_user_id based on invocation source
is_external_api_call = application_generate_entity.invoke_from in {
InvokeFrom.WEB_APP,
@ -542,6 +561,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
app=app,
workflow_execution_repository=workflow_execution_repository,
workflow_node_execution_repository=workflow_node_execution_repository,
graph_engine_layers=graph_engine_layers,
)
try:

View File

@ -4,6 +4,7 @@ import re
import time
from collections.abc import Callable, Generator, Mapping
from contextlib import contextmanager
from dataclasses import dataclass, field
from threading import Thread
from typing import Any, Union
@ -19,6 +20,7 @@ from core.app.entities.app_invoke_entities import (
InvokeFrom,
)
from core.app.entities.queue_entities import (
ChunkType,
MessageQueueMessage,
QueueAdvancedChatMessageEndEvent,
QueueAgentLogEvent,
@ -70,13 +72,122 @@ from core.workflow.runtime import GraphRuntimeState
from core.workflow.system_variable import SystemVariable
from extensions.ext_database import db
from libs.datetime_utils import naive_utc_now
from models import Account, Conversation, EndUser, Message, MessageFile
from models import Account, Conversation, EndUser, LLMGenerationDetail, Message, MessageFile
from models.enums import CreatorUserRole
from models.workflow import Workflow
logger = logging.getLogger(__name__)
@dataclass
class StreamEventBuffer:
"""
Buffer for recording stream events in order to reconstruct the generation sequence.
Records the exact order of text chunks, thoughts, and tool calls as they stream.
"""
# Accumulated reasoning content (each thought block is a separate element)
reasoning_content: list[str] = field(default_factory=list)
# Current reasoning buffer (accumulates until we see a different event type)
_current_reasoning: str = ""
# Tool calls with their details
tool_calls: list[dict] = field(default_factory=list)
# Tool call ID to index mapping for updating results
_tool_call_id_map: dict[str, int] = field(default_factory=dict)
# Sequence of events in stream order
sequence: list[dict] = field(default_factory=list)
# Current position in answer text
_content_position: int = 0
# Track last event type to detect transitions
_last_event_type: str | None = None
def _flush_current_reasoning(self) -> None:
"""Flush accumulated reasoning to the list and add to sequence."""
if self._current_reasoning.strip():
self.reasoning_content.append(self._current_reasoning.strip())
self.sequence.append({"type": "reasoning", "index": len(self.reasoning_content) - 1})
self._current_reasoning = ""
def record_text_chunk(self, text: str) -> None:
"""Record a text chunk event."""
if not text:
return
# Flush any pending reasoning first
if self._last_event_type == "thought":
self._flush_current_reasoning()
text_len = len(text)
start_pos = self._content_position
# If last event was also content, extend it; otherwise create new
if self.sequence and self.sequence[-1].get("type") == "content":
self.sequence[-1]["end"] = start_pos + text_len
else:
self.sequence.append({"type": "content", "start": start_pos, "end": start_pos + text_len})
self._content_position += text_len
self._last_event_type = "content"
def record_thought_chunk(self, text: str) -> None:
"""Record a thought/reasoning chunk event."""
if not text:
return
# Accumulate thought content
self._current_reasoning += text
self._last_event_type = "thought"
def record_tool_call(self, tool_call_id: str, tool_name: str, tool_arguments: str) -> None:
"""Record a tool call event."""
if not tool_call_id:
return
# Flush any pending reasoning first
if self._last_event_type == "thought":
self._flush_current_reasoning()
# Check if this tool call already exists (we might get multiple chunks)
if tool_call_id in self._tool_call_id_map:
idx = self._tool_call_id_map[tool_call_id]
# Update arguments if provided
if tool_arguments:
self.tool_calls[idx]["arguments"] = tool_arguments
else:
# New tool call
tool_call = {
"id": tool_call_id or "",
"name": tool_name or "",
"arguments": tool_arguments or "",
"result": "",
"elapsed_time": None,
}
self.tool_calls.append(tool_call)
idx = len(self.tool_calls) - 1
self._tool_call_id_map[tool_call_id] = idx
self.sequence.append({"type": "tool_call", "index": idx})
self._last_event_type = "tool_call"
def record_tool_result(self, tool_call_id: str, result: str, tool_elapsed_time: float | None = None) -> None:
"""Record a tool result event (update existing tool call)."""
if not tool_call_id:
return
if tool_call_id in self._tool_call_id_map:
idx = self._tool_call_id_map[tool_call_id]
self.tool_calls[idx]["result"] = result
self.tool_calls[idx]["elapsed_time"] = tool_elapsed_time
def finalize(self) -> None:
"""Finalize the buffer, flushing any pending data."""
if self._last_event_type == "thought":
self._flush_current_reasoning()
def has_data(self) -> bool:
"""Check if there's any meaningful data recorded."""
return bool(self.reasoning_content or self.tool_calls or self.sequence)
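A usage sketch of the buffer above, assuming `StreamEventBuffer` is in scope; the event contents are illustrative:
# Hedged sketch: interleave thought, tool, and text events, then inspect
# the reconstructed sequence.
buffer = StreamEventBuffer()
buffer.record_thought_chunk("Let me check the weather. ")
buffer.record_tool_call("call-1", "get_weather", '{"city": "Paris"}')
buffer.record_tool_result("call-1", "18°C, clear", tool_elapsed_time=0.42)
buffer.record_text_chunk("It is 18°C in Paris.")
buffer.finalize()

# The pending thought is flushed before the tool call, so the order is:
# [{"type": "reasoning", "index": 0},
#  {"type": "tool_call", "index": 0},
#  {"type": "content", "start": 0, "end": 20}]
print(buffer.sequence)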
class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
"""
AdvancedChatAppGenerateTaskPipeline generates stream output and manages state for the application.
@ -144,6 +255,8 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
self._workflow_run_id: str = ""
self._draft_var_saver_factory = draft_var_saver_factory
self._graph_runtime_state: GraphRuntimeState | None = None
# Stream event buffer for recording generation sequence
self._stream_buffer = StreamEventBuffer()
self._seed_graph_runtime_state_from_queue_manager()
def process(self) -> Union[ChatbotAppBlockingResponse, Generator[ChatbotAppStreamResponse, None, None]]:
@ -383,7 +496,7 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
queue_message: Union[WorkflowQueueMessage, MessageQueueMessage] | None = None,
**kwargs,
) -> Generator[StreamResponse, None, None]:
"""Handle text chunk events."""
"""Handle text chunk events and record to stream buffer for sequence reconstruction."""
delta_text = event.text
if delta_text is None:
return
@ -405,9 +518,52 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
if tts_publisher and queue_message:
tts_publisher.publish(queue_message)
self._task_state.answer += delta_text
tool_call = event.tool_call
tool_result = event.tool_result
tool_payload = tool_call or tool_result
tool_call_id = tool_payload.id if tool_payload and tool_payload.id else ""
tool_name = tool_payload.name if tool_payload and tool_payload.name else ""
tool_arguments = tool_call.arguments if tool_call and tool_call.arguments else ""
tool_files = tool_result.files if tool_result else []
tool_elapsed_time = tool_result.elapsed_time if tool_result else None
tool_icon = tool_payload.icon if tool_payload else None
tool_icon_dark = tool_payload.icon_dark if tool_payload else None
# Record stream event based on chunk type
chunk_type = event.chunk_type or ChunkType.TEXT
match chunk_type:
case ChunkType.TEXT:
self._stream_buffer.record_text_chunk(delta_text)
self._task_state.answer += delta_text
case ChunkType.THOUGHT:
# Reasoning should not be part of final answer text
self._stream_buffer.record_thought_chunk(delta_text)
case ChunkType.TOOL_CALL:
self._stream_buffer.record_tool_call(
tool_call_id=tool_call_id,
tool_name=tool_name,
tool_arguments=tool_arguments,
)
case ChunkType.TOOL_RESULT:
self._stream_buffer.record_tool_result(
tool_call_id=tool_call_id,
result=delta_text,
tool_elapsed_time=tool_elapsed_time,
)
self._task_state.answer += delta_text
case _:
pass
yield self._message_cycle_manager.message_to_stream_response(
answer=delta_text, message_id=self._message_id, from_variable_selector=event.from_variable_selector
answer=delta_text,
message_id=self._message_id,
from_variable_selector=event.from_variable_selector,
chunk_type=event.chunk_type.value if event.chunk_type else None,
tool_call_id=tool_call_id or None,
tool_name=tool_name or None,
tool_arguments=tool_arguments or None,
tool_files=tool_files,
tool_elapsed_time=tool_elapsed_time,
tool_icon=tool_icon,
tool_icon_dark=tool_icon_dark,
)
def _handle_iteration_start_event(
@ -775,6 +931,7 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
# If there are assistant files, remove markdown image links from answer
answer_text = self._task_state.answer
answer_text = self._strip_think_blocks(answer_text)
if self._recorded_files:
# Remove markdown image links since we're storing files separately
answer_text = re.sub(r"!\[.*?\]\(.*?\)", "", answer_text).strip()
@ -826,6 +983,54 @@ class AdvancedChatAppGenerateTaskPipeline(GraphRuntimeStateSupport):
]
session.add_all(message_files)
# Save generation detail (reasoning/tool calls/sequence) from stream buffer
self._save_generation_detail(session=session, message=message)
@staticmethod
def _strip_think_blocks(text: str) -> str:
"""Remove <think>...</think> blocks (including their content) from text."""
if not text or "<think" not in text.lower():
return text
clean_text = re.sub(r"<think[^>]*>.*?</think>", "", text, flags=re.IGNORECASE | re.DOTALL)
clean_text = re.sub(r"\n\s*\n", "\n\n", clean_text).strip()
return clean_text
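For reference, a standalone sketch of the same regex behavior, with a hypothetical `strip_think` helper name:
# Hedged sketch: think blocks and their content are removed, then
# runs of blank lines are collapsed.
import re

def strip_think(text: str) -> str:
    clean = re.sub(r"<think[^>]*>.*?</think>", "", text, flags=re.IGNORECASE | re.DOTALL)
    return re.sub(r"\n\s*\n", "\n\n", clean).strip()

assert strip_think("<think>internal plan</think>The answer is 42.") == "The answer is 42."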
def _save_generation_detail(self, *, session: Session, message: Message) -> None:
"""
Save LLM generation detail for Chatflow using stream event buffer.
The buffer records the exact order of events as they streamed,
allowing accurate reconstruction of the generation sequence.
"""
# Finalize the stream buffer to flush any pending data
self._stream_buffer.finalize()
# Only save if there's meaningful data
if not self._stream_buffer.has_data():
return
reasoning_content = self._stream_buffer.reasoning_content
tool_calls = self._stream_buffer.tool_calls
sequence = self._stream_buffer.sequence
# Check if generation detail already exists for this message
existing = session.query(LLMGenerationDetail).filter_by(message_id=message.id).first()
if existing:
existing.reasoning_content = json.dumps(reasoning_content) if reasoning_content else None
existing.tool_calls = json.dumps(tool_calls) if tool_calls else None
existing.sequence = json.dumps(sequence) if sequence else None
else:
generation_detail = LLMGenerationDetail(
tenant_id=self._application_generate_entity.app_config.tenant_id,
app_id=self._application_generate_entity.app_config.app_id,
message_id=message.id,
reasoning_content=json.dumps(reasoning_content) if reasoning_content else None,
tool_calls=json.dumps(tool_calls) if tool_calls else None,
sequence=json.dumps(sequence) if sequence else None,
)
session.add(generation_detail)
def _seed_graph_runtime_state_from_queue_manager(self) -> None:
"""Bootstrap the cached runtime state from the queue manager when present."""
candidate = self._base_task_pipeline.queue_manager.graph_runtime_state

View File

@ -3,10 +3,8 @@ from typing import cast
from sqlalchemy import select
from core.agent.cot_chat_agent_runner import CotChatAgentRunner
from core.agent.cot_completion_agent_runner import CotCompletionAgentRunner
from core.agent.agent_app_runner import AgentAppRunner
from core.agent.entities import AgentEntity
from core.agent.fc_agent_runner import FunctionCallAgentRunner
from core.app.apps.agent_chat.app_config_manager import AgentChatAppConfig
from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom
from core.app.apps.base_app_runner import AppRunner
@ -14,8 +12,7 @@ from core.app.entities.app_invoke_entities import AgentChatAppGenerateEntity
from core.app.entities.queue_entities import QueueAnnotationReplyEvent
from core.memory.token_buffer_memory import TokenBufferMemory
from core.model_manager import ModelInstance
from core.model_runtime.entities.llm_entities import LLMMode
from core.model_runtime.entities.model_entities import ModelFeature, ModelPropertyKey
from core.model_runtime.entities.model_entities import ModelFeature
from core.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel
from core.moderation.base import ModerationError
from extensions.ext_database import db
@ -194,22 +191,7 @@ class AgentChatAppRunner(AppRunner):
raise ValueError("Message not found")
db.session.close()
runner_cls: type[FunctionCallAgentRunner] | type[CotChatAgentRunner] | type[CotCompletionAgentRunner]
# start agent runner
if agent_entity.strategy == AgentEntity.Strategy.CHAIN_OF_THOUGHT:
# check LLM mode
if model_schema.model_properties.get(ModelPropertyKey.MODE) == LLMMode.CHAT:
runner_cls = CotChatAgentRunner
elif model_schema.model_properties.get(ModelPropertyKey.MODE) == LLMMode.COMPLETION:
runner_cls = CotCompletionAgentRunner
else:
raise ValueError(f"Invalid LLM mode: {model_schema.model_properties.get(ModelPropertyKey.MODE)}")
elif agent_entity.strategy == AgentEntity.Strategy.FUNCTION_CALLING:
runner_cls = FunctionCallAgentRunner
else:
raise ValueError(f"Invalid agent strategy: {agent_entity.strategy}")
runner = runner_cls(
runner = AgentAppRunner(
tenant_id=app_config.tenant_id,
application_generate_entity=application_generate_entity,
conversation=conversation_result,

View File

@ -671,7 +671,7 @@ class WorkflowResponseConverter:
task_id=task_id,
data=AgentLogStreamResponse.Data(
node_execution_id=event.node_execution_id,
id=event.id,
message_id=event.id,
parent_id=event.parent_id,
label=event.label,
error=event.error,

View File

@ -23,10 +23,12 @@ from core.app.apps.workflow.generate_response_converter import WorkflowAppGenera
from core.app.apps.workflow.generate_task_pipeline import WorkflowAppGenerateTaskPipeline
from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity
from core.app.entities.task_entities import WorkflowAppBlockingResponse, WorkflowAppStreamResponse
from core.app.layers.sandbox_layer import SandboxLayer
from core.helper.trace_id_helper import extract_external_trace_id_from_args
from core.model_runtime.errors.invoke import InvokeAuthorizationError
from core.ops.ops_trace_manager import TraceQueueManager
from core.repositories import DifyCoreRepositoryFactory
from core.sandbox.storage.archive_storage import ArchiveSandboxStorage
from core.workflow.graph_engine.layers.base import GraphEngineLayer
from core.workflow.repositories.draft_variable_repository import DraftVariableSaverFactory
from core.workflow.repositories.workflow_execution_repository import WorkflowExecutionRepository
@ -37,6 +39,7 @@ from factories import file_factory
from libs.flask_utils import preserve_flask_contexts
from models import Account, App, EndUser, Workflow, WorkflowNodeExecutionTriggeredFrom
from models.enums import WorkflowRunTriggeredFrom
from models.workflow_features import WorkflowFeatures
from services.workflow_draft_variable_service import DraftVarLoader, WorkflowDraftVariableService
SKIP_PREPARE_USER_INPUTS_KEY = "_skip_prepare_user_inputs"
@ -487,6 +490,20 @@ class WorkflowAppGenerator(BaseAppGenerator):
if workflow is None:
raise ValueError("Workflow not found")
if workflow.get_feature(WorkflowFeatures.SANDBOX).enabled:
graph_engine_layers = (
*graph_engine_layers,
SandboxLayer(
tenant_id=application_generate_entity.app_config.tenant_id,
app_id=application_generate_entity.app_config.app_id,
sandbox_id=application_generate_entity.workflow_execution_id,
sandbox_storage=ArchiveSandboxStorage(
tenant_id=application_generate_entity.app_config.tenant_id,
sandbox_id=application_generate_entity.workflow_execution_id,
),
),
)
# Determine system_user_id based on invocation source
is_external_api_call = application_generate_entity.invoke_from in {
InvokeFrom.WEB_APP,

View File

@ -13,6 +13,7 @@ from core.app.apps.common.workflow_response_converter import WorkflowResponseCon
from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity
from core.app.entities.queue_entities import (
AppQueueEvent,
ChunkType,
MessageQueueMessage,
QueueAgentLogEvent,
QueueErrorEvent,
@ -483,11 +484,33 @@ class WorkflowAppGenerateTaskPipeline(GraphRuntimeStateSupport):
if delta_text is None:
return
tool_call = event.tool_call
tool_result = event.tool_result
tool_payload = tool_call or tool_result
tool_call_id = tool_payload.id if tool_payload and tool_payload.id else None
tool_name = tool_payload.name if tool_payload and tool_payload.name else None
tool_arguments = tool_call.arguments if tool_call else None
tool_elapsed_time = tool_result.elapsed_time if tool_result else None
tool_files = tool_result.files if tool_result else []
tool_icon = tool_payload.icon if tool_payload else None
tool_icon_dark = tool_payload.icon_dark if tool_payload else None
# only publish tts message at text chunk streaming
if tts_publisher and queue_message:
tts_publisher.publish(queue_message)
yield self._text_chunk_to_stream_response(delta_text, from_variable_selector=event.from_variable_selector)
yield self._text_chunk_to_stream_response(
text=delta_text,
from_variable_selector=event.from_variable_selector,
chunk_type=event.chunk_type,
tool_call_id=tool_call_id,
tool_name=tool_name,
tool_arguments=tool_arguments,
tool_files=tool_files,
tool_elapsed_time=tool_elapsed_time,
tool_icon=tool_icon,
tool_icon_dark=tool_icon_dark,
)
def _handle_agent_log_event(self, event: QueueAgentLogEvent, **kwargs) -> Generator[StreamResponse, None, None]:
"""Handle agent log events."""
@ -650,16 +673,61 @@ class WorkflowAppGenerateTaskPipeline(GraphRuntimeStateSupport):
session.add(workflow_app_log)
def _text_chunk_to_stream_response(
self, text: str, from_variable_selector: list[str] | None = None
self,
text: str,
from_variable_selector: list[str] | None = None,
chunk_type: ChunkType | None = None,
tool_call_id: str | None = None,
tool_name: str | None = None,
tool_arguments: str | None = None,
tool_files: list[str] | None = None,
tool_error: str | None = None,
tool_elapsed_time: float | None = None,
tool_icon: str | dict | None = None,
tool_icon_dark: str | dict | None = None,
) -> TextChunkStreamResponse:
"""
Build a stream response for a text chunk, attaching tool-call or
tool-result metadata when the chunk type carries it.
:param text: chunk text
:return: TextChunkStreamResponse
"""
from core.app.entities.task_entities import ChunkType as ResponseChunkType
response_chunk_type = ResponseChunkType(chunk_type.value) if chunk_type else ResponseChunkType.TEXT
data = TextChunkStreamResponse.Data(
text=text,
from_variable_selector=from_variable_selector,
chunk_type=response_chunk_type,
)
if response_chunk_type == ResponseChunkType.TOOL_CALL:
data = data.model_copy(
update={
"tool_call_id": tool_call_id,
"tool_name": tool_name,
"tool_arguments": tool_arguments,
"tool_icon": tool_icon,
"tool_icon_dark": tool_icon_dark,
}
)
elif response_chunk_type == ResponseChunkType.TOOL_RESULT:
data = data.model_copy(
update={
"tool_call_id": tool_call_id,
"tool_name": tool_name,
"tool_arguments": tool_arguments,
"tool_files": tool_files,
"tool_error": tool_error,
"tool_elapsed_time": tool_elapsed_time,
"tool_icon": tool_icon,
"tool_icon_dark": tool_icon_dark,
}
)
response = TextChunkStreamResponse(
task_id=self._application_generate_entity.task_id,
data=TextChunkStreamResponse.Data(text=text, from_variable_selector=from_variable_selector),
data=data,
)
return response

View File

@ -463,12 +463,20 @@ class WorkflowBasedAppRunner:
)
)
elif isinstance(event, NodeRunStreamChunkEvent):
from core.app.entities.queue_entities import ChunkType as QueueChunkType
if event.is_final and not event.chunk:
return
self._publish_event(
QueueTextChunkEvent(
text=event.chunk,
from_variable_selector=list(event.selector),
in_iteration_id=event.in_iteration_id,
in_loop_id=event.in_loop_id,
chunk_type=QueueChunkType(event.chunk_type.value),
tool_call=event.tool_call,
tool_result=event.tool_result,
)
)
elif isinstance(event, NodeRunRetrieverResourceEvent):

View File

@ -0,0 +1,236 @@
from __future__ import annotations
from collections import defaultdict
from collections.abc import Generator
from enum import StrEnum
from pydantic import BaseModel, Field
class AssetNodeType(StrEnum):
FILE = "file"
FOLDER = "folder"
class AppAssetNode(BaseModel):
id: str = Field(description="Unique identifier for the node")
node_type: AssetNodeType = Field(description="Type of node: file or folder")
name: str = Field(description="Name of the file or folder")
parent_id: str | None = Field(default=None, description="Parent folder ID, None for root level")
order: int = Field(default=0, description="Sort order within parent folder, lower values first")
extension: str = Field(default="", description="File extension without dot, empty for folders")
size: int = Field(default=0, description="File size in bytes, 0 for folders")
checksum: str = Field(default="", description="SHA-256 checksum of file content, empty for folders")
@classmethod
def create_folder(cls, node_id: str, name: str, parent_id: str | None = None) -> AppAssetNode:
return cls(id=node_id, node_type=AssetNodeType.FOLDER, name=name, parent_id=parent_id)
@classmethod
def create_file(
cls, node_id: str, name: str, parent_id: str | None = None, size: int = 0, checksum: str = ""
) -> AppAssetNode:
return cls(
id=node_id,
node_type=AssetNodeType.FILE,
name=name,
parent_id=parent_id,
extension=name.rsplit(".", 1)[-1] if "." in name else "",
size=size,
checksum=checksum,
)
class AppAssetNodeView(BaseModel):
id: str = Field(description="Unique identifier for the node")
node_type: str = Field(description="Type of node: 'file' or 'folder'")
name: str = Field(description="Name of the file or folder")
path: str = Field(description="Full path from root, e.g. '/folder/file.txt'")
extension: str = Field(default="", description="File extension without dot")
size: int = Field(default=0, description="File size in bytes")
checksum: str = Field(default="", description="SHA-256 checksum of file content")
children: list[AppAssetNodeView] = Field(default_factory=list, description="Child nodes for folders")
class TreeNodeNotFoundError(Exception):
"""Tree internal: node not found"""
pass
class TreeParentNotFoundError(Exception):
"""Tree internal: parent folder not found"""
pass
class TreePathConflictError(Exception):
"""Tree internal: path already exists"""
pass
class AppAssetFileTree(BaseModel):
"""
File tree structure for app assets using adjacency list pattern.
Design:
- Storage: Flat list with parent_id references (adjacency list)
- Path: Computed dynamically via get_path(), not stored
- Order: Integer field for user-defined sorting within each folder
- API response: transform() builds nested tree with computed paths
Why adjacency list over nested tree or materialized path:
- Simpler CRUD: move/rename only updates one node's parent_id
- No path cascade: renaming parent doesn't require updating all descendants
- JSON-friendly: flat list serializes cleanly to database JSON column
- Trade-off: path lookup is O(depth), acceptable for typical file trees
"""
nodes: list[AppAssetNode] = Field(default_factory=list, description="Flat list of all nodes in the tree")
def get(self, node_id: str) -> AppAssetNode | None:
return next((n for n in self.nodes if n.id == node_id), None)
def get_children(self, parent_id: str | None) -> list[AppAssetNode]:
return [n for n in self.nodes if n.parent_id == parent_id]
def has_child_named(self, parent_id: str | None, name: str) -> bool:
return any(n.name == name and n.parent_id == parent_id for n in self.nodes)
def get_path(self, node_id: str) -> str:
node = self.get(node_id)
if not node:
raise TreeNodeNotFoundError(node_id)
parts: list[str] = []
current: AppAssetNode | None = node
while current:
parts.append(current.name)
current = self.get(current.parent_id) if current.parent_id else None
return "/" + "/".join(reversed(parts))
def get_descendant_ids(self, node_id: str) -> list[str]:
result: list[str] = []
stack = [node_id]
while stack:
current_id = stack.pop()
for child in self.nodes:
if child.parent_id == current_id:
result.append(child.id)
stack.append(child.id)
return result
def add(self, node: AppAssetNode) -> AppAssetNode:
if self.get(node.id):
raise TreePathConflictError(node.id)
if self.has_child_named(node.parent_id, node.name):
raise TreePathConflictError(node.name)
if node.parent_id:
parent = self.get(node.parent_id)
if not parent or parent.node_type != AssetNodeType.FOLDER:
raise TreeParentNotFoundError(node.parent_id)
siblings = self.get_children(node.parent_id)
node.order = max((s.order for s in siblings), default=-1) + 1
self.nodes.append(node)
return node
def update(self, node_id: str, size: int, checksum: str) -> AppAssetNode:
node = self.get(node_id)
if not node or node.node_type != AssetNodeType.FILE:
raise TreeNodeNotFoundError(node_id)
node.size = size
node.checksum = checksum
return node
def rename(self, node_id: str, new_name: str) -> AppAssetNode:
node = self.get(node_id)
if not node:
raise TreeNodeNotFoundError(node_id)
if node.name != new_name and self.has_child_named(node.parent_id, new_name):
raise TreePathConflictError(new_name)
node.name = new_name
if node.node_type == AssetNodeType.FILE:
node.extension = new_name.rsplit(".", 1)[-1] if "." in new_name else ""
return node
def move(self, node_id: str, new_parent_id: str | None) -> AppAssetNode:
node = self.get(node_id)
if not node:
raise TreeNodeNotFoundError(node_id)
if new_parent_id:
parent = self.get(new_parent_id)
if not parent or parent.node_type != AssetNodeType.FOLDER:
raise TreeParentNotFoundError(new_parent_id)
if self.has_child_named(new_parent_id, node.name):
raise TreePathConflictError(node.name)
node.parent_id = new_parent_id
siblings = self.get_children(new_parent_id)
node.order = max((s.order for s in siblings if s.id != node_id), default=-1) + 1
return node
def reorder(self, node_id: str, after_node_id: str | None) -> AppAssetNode:
node = self.get(node_id)
if not node:
raise TreeNodeNotFoundError(node_id)
siblings = sorted(self.get_children(node.parent_id), key=lambda x: x.order)
siblings = [s for s in siblings if s.id != node_id]
if after_node_id is None:
insert_idx = 0
else:
after_node = self.get(after_node_id)
if not after_node or after_node.parent_id != node.parent_id:
raise TreeNodeNotFoundError(after_node_id)
insert_idx = next((i for i, s in enumerate(siblings) if s.id == after_node_id), -1) + 1
siblings.insert(insert_idx, node)
for idx, sibling in enumerate(siblings):
sibling.order = idx
return node
def remove(self, node_id: str) -> list[str]:
node = self.get(node_id)
if not node:
raise TreeNodeNotFoundError(node_id)
ids_to_remove = [node_id] + self.get_descendant_ids(node_id)
self.nodes = [n for n in self.nodes if n.id not in ids_to_remove]
return ids_to_remove
def walk_files(self) -> Generator[AppAssetNode, None, None]:
return (n for n in self.nodes if n.node_type == AssetNodeType.FILE)
def transform(self) -> list[AppAssetNodeView]:
by_parent: dict[str | None, list[AppAssetNode]] = defaultdict(list)
for n in self.nodes:
by_parent[n.parent_id].append(n)
for children in by_parent.values():
children.sort(key=lambda x: x.order)
paths: dict[str, str] = {}
tree_views: dict[str, AppAssetNodeView] = {}
def build_view(node: AppAssetNode, parent_path: str) -> None:
path = f"{parent_path}/{node.name}"
paths[node.id] = path
child_views: list[AppAssetNodeView] = []
for child in by_parent.get(node.id, []):
build_view(child, path)
child_views.append(tree_views[child.id])
tree_views[node.id] = AppAssetNodeView(
id=node.id,
node_type=node.node_type.value,
name=node.name,
path=path,
extension=node.extension,
size=node.size,
checksum=node.checksum,
children=child_views,
)
for root_node in by_parent.get(None, []):
build_view(root_node, "")
return [tree_views[n.id] for n in by_parent.get(None, [])]
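A minimal usage sketch of the tree API above. The enclosing model's name and the exact AppAssetNode constructor are not shown in this diff, so both are assumptions; the sketch only exercises the methods defined above.
# Hypothetical: assumes the tree model is named AppAssetTree and that
# AppAssetNode accepts these fields directly, with order/size/checksum defaulted.
tree = AppAssetTree(nodes=[])
tree.add(AppAssetNode(id="f1", name="docs", parent_id=None, node_type=AssetNodeType.FOLDER))
tree.add(AppAssetNode(id="n1", name="README.md", parent_id="f1", node_type=AssetNodeType.FILE))
assert tree.get_path("n1") == "/docs/README.md"
tree.rename("n1", "GUIDE.md")      # FILE nodes re-derive extension -> "md"
views = tree.transform()           # nested views, roots only
assert views[0].children[0].path == "/docs/GUIDE.md"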

View File

@ -36,6 +36,9 @@ class InvokeFrom(StrEnum):
# this is used for plugin trigger and webhook trigger.
TRIGGER = "trigger"
# AGENT indicates that this invocation is from an agent.
AGENT = "agent"
# EXPLORE indicates that this invocation is from
# the workflow (or chatflow) explore page.
EXPLORE = "explore"

View File

@ -0,0 +1,70 @@
"""
LLM Generation Detail entities.
Defines the structure for storing and transmitting LLM generation details
including reasoning content, tool calls, and their sequence.
"""
from typing import Literal
from pydantic import BaseModel, Field
class ContentSegment(BaseModel):
"""Represents a content segment in the generation sequence."""
type: Literal["content"] = "content"
start: int = Field(..., description="Start position in the text")
end: int = Field(..., description="End position in the text")
class ReasoningSegment(BaseModel):
"""Represents a reasoning segment in the generation sequence."""
type: Literal["reasoning"] = "reasoning"
index: int = Field(..., description="Index into reasoning_content array")
class ToolCallSegment(BaseModel):
"""Represents a tool call segment in the generation sequence."""
type: Literal["tool_call"] = "tool_call"
index: int = Field(..., description="Index into tool_calls array")
SequenceSegment = ContentSegment | ReasoningSegment | ToolCallSegment
class ToolCallDetail(BaseModel):
"""Represents a tool call with its arguments and result."""
id: str = Field(default="", description="Unique identifier for the tool call")
name: str = Field(..., description="Name of the tool")
arguments: str = Field(default="", description="JSON string of tool arguments")
result: str = Field(default="", description="Result from the tool execution")
elapsed_time: float | None = Field(default=None, description="Elapsed time in seconds")
class LLMGenerationDetailData(BaseModel):
"""
Domain model for LLM generation detail.
Contains the structured data for reasoning content, tool calls,
and their display sequence.
"""
reasoning_content: list[str] = Field(default_factory=list, description="List of reasoning segments")
tool_calls: list[ToolCallDetail] = Field(default_factory=list, description="List of tool call details")
sequence: list[SequenceSegment] = Field(default_factory=list, description="Display order of segments")
def is_empty(self) -> bool:
"""Check if there's any meaningful generation detail."""
return not self.reasoning_content and not self.tool_calls
def to_response_dict(self) -> dict:
"""Convert to dictionary for API response."""
return {
"reasoning_content": self.reasoning_content,
"tool_calls": [tc.model_dump() for tc in self.tool_calls],
"sequence": [seg.model_dump() for seg in self.sequence],
}
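A short, hedged example of how these models compose: reasoning and a tool call are each recorded once, and the sequence interleaves references to them with a span of the final answer text.
# Illustrative only; ContentSegment positions index into the answer string.
detail = LLMGenerationDetailData(
    reasoning_content=["Need the weather tool first."],
    tool_calls=[ToolCallDetail(name="get_weather", arguments='{"city": "Paris"}', result='{"temp": 21}')],
    sequence=[ReasoningSegment(index=0), ToolCallSegment(index=0), ContentSegment(start=0, end=25)],
)
assert not detail.is_empty()
payload = detail.to_response_dict()  # plain dict, ready for JSON serialization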

View File

@ -7,7 +7,7 @@ from pydantic import BaseModel, ConfigDict, Field
from core.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk
from core.rag.entities.citation_metadata import RetrievalSourceMetadata
from core.workflow.entities import AgentNodeStrategyInit
from core.workflow.entities import AgentNodeStrategyInit, ToolCall, ToolResult
from core.workflow.enums import WorkflowNodeExecutionMetadataKey
from core.workflow.nodes import NodeType
@ -177,6 +177,17 @@ class QueueLoopCompletedEvent(AppQueueEvent):
error: str | None = None
class ChunkType(StrEnum):
"""Stream chunk type for LLM-related events."""
TEXT = "text" # Normal text streaming
TOOL_CALL = "tool_call" # Tool call arguments streaming
TOOL_RESULT = "tool_result" # Tool execution result
THOUGHT = "thought" # Agent thinking process (ReAct)
THOUGHT_START = "thought_start" # Agent thought start
THOUGHT_END = "thought_end" # Agent thought end
class QueueTextChunkEvent(AppQueueEvent):
"""
QueueTextChunkEvent entity
@ -191,6 +202,16 @@ class QueueTextChunkEvent(AppQueueEvent):
in_loop_id: str | None = None
"""loop id if node is in loop"""
# Extended fields for Agent/Tool streaming
chunk_type: ChunkType = ChunkType.TEXT
"""type of the chunk"""
# Tool streaming payloads
tool_call: ToolCall | None = None
"""structured tool call info"""
tool_result: ToolResult | None = None
"""structured tool result info"""
class QueueAgentMessageEvent(AppQueueEvent):
"""

View File

@ -113,6 +113,38 @@ class MessageStreamResponse(StreamResponse):
answer: str
from_variable_selector: list[str] | None = None
# Extended fields for Agent/Tool streaming
chunk_type: str | None = None
"""type of the chunk: text, tool_call, tool_result, thought"""
# Tool call fields (when chunk_type == "tool_call")
tool_call_id: str | None = None
"""unique identifier for this tool call"""
tool_name: str | None = None
"""name of the tool being called"""
tool_arguments: str | None = None
"""accumulated tool arguments JSON"""
# Tool result fields (when chunk_type == "tool_result")
tool_files: list[str] | None = None
"""file IDs produced by tool"""
tool_error: str | None = None
"""error message if tool failed"""
tool_elapsed_time: float | None = None
"""elapsed time spent executing the tool"""
tool_icon: str | dict | None = None
"""icon of the tool"""
tool_icon_dark: str | dict | None = None
"""dark theme icon of the tool"""
def model_dump(self, *args, **kwargs) -> dict[str, object]:
kwargs.setdefault("exclude_none", True)
return super().model_dump(*args, **kwargs)
def model_dump_json(self, *args, **kwargs) -> str:
kwargs.setdefault("exclude_none", True)
return super().model_dump_json(*args, **kwargs)
class MessageAudioStreamResponse(StreamResponse):
"""
@ -582,6 +614,17 @@ class LoopNodeCompletedStreamResponse(StreamResponse):
data: Data
class ChunkType(StrEnum):
"""Stream chunk type for LLM-related events."""
TEXT = "text" # Normal text streaming
TOOL_CALL = "tool_call" # Tool call arguments streaming
TOOL_RESULT = "tool_result" # Tool execution result
THOUGHT = "thought" # Agent thinking process (ReAct)
THOUGHT_START = "thought_start" # Agent thought start
THOUGHT_END = "thought_end" # Agent thought end
class TextChunkStreamResponse(StreamResponse):
"""
TextChunkStreamResponse entity
@ -595,6 +638,36 @@ class TextChunkStreamResponse(StreamResponse):
text: str
from_variable_selector: list[str] | None = None
# Extended fields for Agent/Tool streaming
chunk_type: ChunkType = ChunkType.TEXT
"""type of the chunk"""
# Tool call fields (when chunk_type == TOOL_CALL)
tool_call_id: str | None = None
"""unique identifier for this tool call"""
tool_name: str | None = None
"""name of the tool being called"""
tool_arguments: str | None = None
"""accumulated tool arguments JSON"""
# Tool result fields (when chunk_type == TOOL_RESULT)
tool_files: list[str] | None = None
"""file IDs produced by tool"""
tool_error: str | None = None
"""error message if tool failed"""
# Tool elapsed time fields (when chunk_type == TOOL_RESULT)
tool_elapsed_time: float | None = None
"""elapsed time spent executing the tool"""
def model_dump(self, *args, **kwargs) -> dict[str, object]:
kwargs.setdefault("exclude_none", True)
return super().model_dump(*args, **kwargs)
def model_dump_json(self, *args, **kwargs) -> str:
kwargs.setdefault("exclude_none", True)
return super().model_dump_json(*args, **kwargs)
event: StreamEvent = StreamEvent.TEXT_CHUNK
data: Data
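The two model_dump overrides above make exclude_none the default, so tool fields only appear on the wire for tool chunks. A self-contained sketch of the same pattern with plain pydantic (not the real class):
from pydantic import BaseModel

class Chunk(BaseModel):
    text: str
    tool_name: str | None = None

    def model_dump(self, *args, **kwargs) -> dict[str, object]:
        kwargs.setdefault("exclude_none", True)  # drop unset optionals by default
        return super().model_dump(*args, **kwargs)

print(Chunk(text="hi").model_dump())                    # {'text': 'hi'}
print(Chunk(text="hi").model_dump(exclude_none=False))  # callers can still opt out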
@ -743,7 +816,7 @@ class AgentLogStreamResponse(StreamResponse):
"""
node_execution_id: str
id: str
message_id: str
label: str
parent_id: str | None = None
error: str | None = None

View File

@ -0,0 +1,94 @@
import logging
from core.sandbox import SandboxManager
from core.sandbox.storage.sandbox_storage import SandboxStorage
from core.virtual_environment.__base.virtual_environment import VirtualEnvironment
from core.workflow.graph_engine.layers.base import GraphEngineLayer
from core.workflow.graph_events.base import GraphEngineEvent
from core.workflow.graph_events.graph import GraphRunPausedEvent
logger = logging.getLogger(__name__)
class SandboxInitializationError(Exception):
pass
class SandboxLayer(GraphEngineLayer):
def __init__(self, tenant_id: str, app_id: str, sandbox_id: str, sandbox_storage: SandboxStorage) -> None:
super().__init__()
self._tenant_id = tenant_id
self._app_id = app_id
self._sandbox_id = sandbox_id
self._sandbox_storage = sandbox_storage
@property
def sandbox(self) -> VirtualEnvironment:
sandbox = SandboxManager.get(self._sandbox_id)
if sandbox is None:
raise RuntimeError(f"Sandbox not found or not initialized for sandbox_id={self._sandbox_id}")
return sandbox
def on_graph_start(self) -> None:
try:
# Initialize sandbox
from core.sandbox import AppAssetsInitializer, DifyCliInitializer
from services.sandbox.sandbox_provider_service import SandboxProviderService
logger.info("Initializing sandbox for tenant_id=%s, app_id=%s", self._tenant_id, self._app_id)
builder = (
SandboxProviderService.create_sandbox_builder(self._tenant_id)
.initializer(DifyCliInitializer())
.initializer(AppAssetsInitializer(self._tenant_id, self._app_id))
)
sandbox = builder.build()
SandboxManager.register(self._sandbox_id, sandbox)
logger.info(
"Sandbox initialized, workflow_execution_id=%s, sandbox_id=%s, sandbox_arch=%s",
self._sandbox_id,
sandbox.metadata.id,
sandbox.metadata.arch,
)
# Restore previously persisted sandbox files, if any
if self._sandbox_storage.mount(sandbox):
logger.info("Sandbox files restored, sandbox_id=%s", self._sandbox_id)
except Exception as e:
logger.exception("Failed to initialize sandbox")
raise SandboxInitializationError(f"Failed to initialize sandbox: {e}") from e
def on_event(self, event: GraphEngineEvent) -> None:
# TODO: handle graph run paused event
if not isinstance(event, GraphRunPausedEvent):
return
def on_graph_end(self, error: Exception | None) -> None:
if self._sandbox_id is None:
logger.debug("No workflow_execution_id set, nothing to release")
return
sandbox = SandboxManager.unregister(self._sandbox_id)
if sandbox is None:
logger.debug("No sandbox to release for sandbox_id=%s", self._sandbox_id)
return
sandbox_id = sandbox.metadata.id
logger.info(
"Releasing sandbox, workflow_execution_id=%s, sandbox_id=%s",
self._sandbox_id,
sandbox_id,
)
try:
self._sandbox_storage.unmount(sandbox)
logger.info("Sandbox files persisted, sandbox_id=%s", self._sandbox_id)
except Exception:
logger.exception("Failed to persist sandbox files, sandbox_id=%s", self._sandbox_id)
try:
sandbox.release_environment()
logger.info("Sandbox released, sandbox_id=%s", sandbox_id)
except Exception:
logger.exception("Failed to release sandbox, sandbox_id=%s", sandbox_id)

View File

@ -1,4 +1,5 @@
import logging
import re
import time
from collections.abc import Generator
from threading import Thread
@ -58,7 +59,7 @@ from core.prompt.utils.prompt_template_parser import PromptTemplateParser
from events.message_event import message_was_created
from extensions.ext_database import db
from libs.datetime_utils import naive_utc_now
from models.model import AppMode, Conversation, Message, MessageAgentThought
from models.model import AppMode, Conversation, LLMGenerationDetail, Message, MessageAgentThought
logger = logging.getLogger(__name__)
@ -68,6 +69,8 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline):
EasyUIBasedGenerateTaskPipeline is a class that generate stream output and state management for Application.
"""
_THINK_PATTERN = re.compile(r"<think[^>]*>(.*?)</think>", re.IGNORECASE | re.DOTALL)
_task_state: EasyUITaskState
_application_generate_entity: Union[ChatAppGenerateEntity, CompletionAppGenerateEntity, AgentChatAppGenerateEntity]
@ -409,11 +412,136 @@ class EasyUIBasedGenerateTaskPipeline(BasedGenerateTaskPipeline):
)
)
# Save LLM generation detail if there's reasoning_content
self._save_generation_detail(session=session, message=message, llm_result=llm_result)
message_was_created.send(
message,
application_generate_entity=self._application_generate_entity,
)
def _save_generation_detail(self, *, session: Session, message: Message, llm_result: LLMResult) -> None:
"""
Save LLM generation detail for Completion/Chat/Agent-Chat applications.
For Agent-Chat, also merges MessageAgentThought records.
"""
import json
reasoning_list: list[str] = []
tool_calls_list: list[dict] = []
sequence: list[dict] = []
answer = message.answer or ""
# Check if this is Agent-Chat mode by looking for agent thoughts
agent_thoughts = (
session.query(MessageAgentThought)
.filter_by(message_id=message.id)
.order_by(MessageAgentThought.position.asc())
.all()
)
if agent_thoughts:
# Agent-Chat mode: merge MessageAgentThought records
content_pos = 0
cleaned_answer_parts: list[str] = []
for thought in agent_thoughts:
# Add thought/reasoning
if thought.thought:
reasoning_text = thought.thought
if "<think" in reasoning_text.lower():
clean_text, extracted_reasoning = self._split_reasoning_from_answer(reasoning_text)
if extracted_reasoning:
reasoning_text = extracted_reasoning
thought.thought = clean_text or extracted_reasoning
reasoning_list.append(reasoning_text)
sequence.append({"type": "reasoning", "index": len(reasoning_list) - 1})
# Add tool calls
if thought.tool:
tool_calls_list.append(
{
"name": thought.tool,
"arguments": thought.tool_input or "",
"result": thought.observation or "",
}
)
sequence.append({"type": "tool_call", "index": len(tool_calls_list) - 1})
# Add answer content if present
if thought.answer:
content_text = thought.answer
if "<think" in content_text.lower():
clean_answer, extracted_reasoning = self._split_reasoning_from_answer(content_text)
if extracted_reasoning:
reasoning_list.append(extracted_reasoning)
sequence.append({"type": "reasoning", "index": len(reasoning_list) - 1})
content_text = clean_answer
thought.answer = clean_answer or content_text
if content_text:
start = content_pos
end = content_pos + len(content_text)
sequence.append({"type": "content", "start": start, "end": end})
content_pos = end
cleaned_answer_parts.append(content_text)
if cleaned_answer_parts:
merged_answer = "".join(cleaned_answer_parts)
message.answer = merged_answer
llm_result.message.content = merged_answer
else:
# Completion/Chat mode: use reasoning_content from llm_result
reasoning_content = llm_result.reasoning_content
if not reasoning_content and answer:
# Extract reasoning from <think> blocks and clean the final answer
clean_answer, reasoning_content = self._split_reasoning_from_answer(answer)
if reasoning_content:
answer = clean_answer
llm_result.message.content = clean_answer
llm_result.reasoning_content = reasoning_content
message.answer = clean_answer
if reasoning_content:
reasoning_list = [reasoning_content]
# Content comes first, then reasoning
if answer:
sequence.append({"type": "content", "start": 0, "end": len(answer)})
sequence.append({"type": "reasoning", "index": 0})
# Only save if there's meaningful generation detail
if not reasoning_list and not tool_calls_list:
return
# Check if generation detail already exists
existing = session.query(LLMGenerationDetail).filter_by(message_id=message.id).first()
if existing:
existing.reasoning_content = json.dumps(reasoning_list) if reasoning_list else None
existing.tool_calls = json.dumps(tool_calls_list) if tool_calls_list else None
existing.sequence = json.dumps(sequence) if sequence else None
else:
generation_detail = LLMGenerationDetail(
tenant_id=self._application_generate_entity.app_config.tenant_id,
app_id=self._application_generate_entity.app_config.app_id,
message_id=message.id,
reasoning_content=json.dumps(reasoning_list) if reasoning_list else None,
tool_calls=json.dumps(tool_calls_list) if tool_calls_list else None,
sequence=json.dumps(sequence) if sequence else None,
)
session.add(generation_detail)
@classmethod
def _split_reasoning_from_answer(cls, text: str) -> tuple[str, str]:
"""
Extract reasoning segments from <think> blocks and return (clean_text, reasoning).
"""
matches = cls._THINK_PATTERN.findall(text)
reasoning_content = "\n".join(match.strip() for match in matches) if matches else ""
clean_text = cls._THINK_PATTERN.sub("", text)
clean_text = re.sub(r"\n\s*\n", "\n\n", clean_text).strip()
return clean_text, reasoning_content or ""
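# Worked example (editorial, not part of the diff):
#   _split_reasoning_from_answer("<think>check docs</think>Here you go.<think>done</think>")
#   -> ("Here you go.", "check docs\ndone")
# Every <think>...</think> block is stripped from the visible text, and the
# inner fragments, joined with newlines, become the reasoning string.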
def _handle_stop(self, event: QueueStopEvent):
"""
Handle stop.

View File

@ -232,15 +232,31 @@ class MessageCycleManager:
answer: str,
message_id: str,
from_variable_selector: list[str] | None = None,
chunk_type: str | None = None,
tool_call_id: str | None = None,
tool_name: str | None = None,
tool_arguments: str | None = None,
tool_files: list[str] | None = None,
tool_error: str | None = None,
tool_elapsed_time: float | None = None,
tool_icon: str | dict | None = None,
tool_icon_dark: str | dict | None = None,
event_type: StreamEvent | None = None,
) -> MessageStreamResponse:
"""
Message to stream response.
:param answer: answer
:param message_id: message id
:param from_variable_selector: from variable selector
:param chunk_type: type of the chunk (text, tool_call, tool_result, thought)
:param tool_call_id: unique identifier for this tool call
:param tool_name: name of the tool being called
:param tool_arguments: accumulated tool arguments JSON
:param tool_files: file IDs produced by tool
:param tool_error: error message if tool failed
:param tool_elapsed_time: elapsed time spent executing the tool
:param tool_icon: icon of the tool
:param tool_icon_dark: dark theme icon of the tool
:param event_type: stream event type override
:return:
"""
return MessageStreamResponse(
response = MessageStreamResponse(
task_id=self._application_generate_entity.task_id,
id=message_id,
answer=answer,
@ -248,6 +264,35 @@ class MessageCycleManager:
event=event_type or StreamEvent.MESSAGE,
)
if chunk_type:
response = response.model_copy(update={"chunk_type": chunk_type})
if chunk_type == "tool_call":
response = response.model_copy(
update={
"tool_call_id": tool_call_id,
"tool_name": tool_name,
"tool_arguments": tool_arguments,
"tool_icon": tool_icon,
"tool_icon_dark": tool_icon_dark,
}
)
elif chunk_type == "tool_result":
response = response.model_copy(
update={
"tool_call_id": tool_call_id,
"tool_name": tool_name,
"tool_arguments": tool_arguments,
"tool_files": tool_files,
"tool_error": tool_error,
"tool_elapsed_time": tool_elapsed_time,
"tool_icon": tool_icon,
"tool_icon_dark": tool_icon_dark,
}
)
return response
def message_replace_to_stream_response(self, answer: str, reason: str = "") -> MessageReplaceStreamResponse:
"""
Message replace to stream response.

View File

@ -5,7 +5,6 @@ from sqlalchemy import select
from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom
from core.app.entities.app_invoke_entities import InvokeFrom
from core.app.entities.queue_entities import QueueRetrieverResourcesEvent
from core.rag.entities.citation_metadata import RetrievalSourceMetadata
from core.rag.index_processor.constant.index_type import IndexStructureType
from core.rag.models.document import Document
@ -90,6 +89,8 @@ class DatasetIndexToolCallbackHandler:
# TODO(-LAN-): Improve type check
def return_retriever_resource_info(self, resource: Sequence[RetrievalSourceMetadata]):
"""Handle return_retriever_resource_info."""
from core.app.entities.queue_entities import QueueRetrieverResourcesEvent
self._queue_manager.publish(
QueueRetrieverResourcesEvent(retriever_resources=resource), PublishFrom.APPLICATION_MANAGER
)

View File

@ -71,8 +71,8 @@ class LLMGenerator:
response: LLMResult = model_instance.invoke_llm(
prompt_messages=list(prompts), model_parameters={"max_tokens": 500, "temperature": 1}, stream=False
)
answer = cast(str, response.message.content)
if answer is None:
answer = response.message.get_text_content()
if answer == "":
return ""
try:
result_dict = json.loads(answer)
@ -184,7 +184,7 @@ class LLMGenerator:
prompt_messages=list(prompt_messages), model_parameters=model_parameters, stream=False
)
rule_config["prompt"] = cast(str, response.message.content)
rule_config["prompt"] = response.message.get_text_content()
except InvokeError as e:
error = str(e)
@ -237,13 +237,11 @@ class LLMGenerator:
return rule_config
rule_config["prompt"] = cast(str, prompt_content.message.content)
rule_config["prompt"] = prompt_content.message.get_text_content()
if not isinstance(prompt_content.message.content, str):
raise NotImplementedError("prompt content is not a string")
parameter_generate_prompt = parameter_template.format(
inputs={
"INPUT_TEXT": prompt_content.message.content,
"INPUT_TEXT": prompt_content.message.get_text_content(),
},
remove_template_variables=False,
)
@ -253,7 +251,7 @@ class LLMGenerator:
statement_generate_prompt = statement_template.format(
inputs={
"TASK_DESCRIPTION": instruction,
"INPUT_TEXT": prompt_content.message.content,
"INPUT_TEXT": prompt_content.message.get_text_content(),
},
remove_template_variables=False,
)
@ -263,7 +261,7 @@ class LLMGenerator:
parameter_content: LLMResult = model_instance.invoke_llm(
prompt_messages=list(parameter_messages), model_parameters=model_parameters, stream=False
)
rule_config["variables"] = re.findall(r'"\s*([^"]+)\s*"', cast(str, parameter_content.message.content))
rule_config["variables"] = re.findall(r'"\s*([^"]+)\s*"', parameter_content.message.get_text_content())
except InvokeError as e:
error = str(e)
error_step = "generate variables"
@ -272,7 +270,7 @@ class LLMGenerator:
statement_content: LLMResult = model_instance.invoke_llm(
prompt_messages=list(statement_messages), model_parameters=model_parameters, stream=False
)
rule_config["opening_statement"] = cast(str, statement_content.message.content)
rule_config["opening_statement"] = statement_content.message.get_text_content()
except InvokeError as e:
error = str(e)
error_step = "generate conversation opener"
@ -315,7 +313,7 @@ class LLMGenerator:
prompt_messages=list(prompt_messages), model_parameters=model_parameters, stream=False
)
generated_code = cast(str, response.message.content)
generated_code = response.message.get_text_content()
return {"code": generated_code, "language": code_language, "error": ""}
except InvokeError as e:
@ -351,7 +349,7 @@ class LLMGenerator:
raise TypeError("Expected LLMResult when stream=False")
response = result
answer = cast(str, response.message.content)
answer = response.message.get_text_content()
return answer.strip()
@classmethod
@ -375,10 +373,7 @@ class LLMGenerator:
prompt_messages=list(prompt_messages), model_parameters=model_parameters, stream=False
)
raw_content = response.message.content
if not isinstance(raw_content, str):
raise ValueError(f"LLM response content must be a string, got: {type(raw_content)}")
raw_content = response.message.get_text_content()
try:
parsed_content = json.loads(raw_content)

View File

@ -4,6 +4,7 @@ from typing import Any
from core.callback_handler.workflow_tool_callback_handler import DifyWorkflowCallbackHandler
from core.plugin.backwards_invocation.base import BaseBackwardsInvocation
from core.tools.entities.tool_entities import ToolInvokeMessage, ToolProviderType
from core.tools.signature import sign_tool_file
from core.tools.tool_engine import ToolEngine
from core.tools.tool_manager import ToolManager
from core.tools.utils.message_transformer import ToolFileMessageTransformer
@ -41,6 +42,43 @@ class PluginToolBackwardsInvocation(BaseBackwardsInvocation):
response, user_id=user_id, tenant_id=tenant_id
)
return response
return cls._sign_tool_file_urls(response)
except Exception as e:
raise e
# FIXME: this method should be gracefully deprecated
@classmethod
def _sign_tool_file_urls(
cls, messages: Generator[ToolInvokeMessage, None, None]
) -> Generator[ToolInvokeMessage, None, None]:
"""
Sign file URLs in tool invoke messages for external access.
"""
for message in messages:
if message.type in {
ToolInvokeMessage.MessageType.IMAGE_LINK,
ToolInvokeMessage.MessageType.BINARY_LINK,
ToolInvokeMessage.MessageType.FILE,
}:
if isinstance(message.message, ToolInvokeMessage.TextMessage):
url = message.message.text
# Check if it's an unsigned internal path
if url.startswith("/files/tools/"):
parts = url.split("/")[-1]
if "." in parts:
file_id, ext = parts.rsplit(".", 1)
extension = f".{ext}"
else:
file_id = parts
extension = ".bin"
signed_url = sign_tool_file(tool_file_id=file_id, extension=extension)
yield ToolInvokeMessage(
type=message.type,
message=ToolInvokeMessage.TextMessage(text=signed_url),
meta=message.meta,
)
continue
yield message
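The path parsing above is small enough to trace by hand; a standalone sketch of the same steps:
# Editorial sketch of the file-id extraction used by _sign_tool_file_urls.
url = "/files/tools/3f2a.png"
last = url.split("/")[-1]            # "3f2a.png"
file_id, ext = last.rsplit(".", 1)   # ("3f2a", "png")
extension = f".{ext}"                # ".png"; extensionless paths fall back to ".bin"
# sign_tool_file(tool_file_id=file_id, extension=extension) then yields a
# signed URL that external callers can actually fetch.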

View File

@ -282,3 +282,11 @@ class TriggerDispatchResponse(BaseModel):
return deserialize_response(binascii.unhexlify(v.encode()))
except Exception as e:
raise ValueError("Failed to deserialize response from hex string") from e
class RequestListTools(BaseModel):
"""
Request to list all available tools
"""
pass

View File

@ -29,6 +29,7 @@ from models import (
Account,
CreatorUserRole,
EndUser,
LLMGenerationDetail,
WorkflowNodeExecutionModel,
WorkflowNodeExecutionTriggeredFrom,
)
@ -457,6 +458,113 @@ class SQLAlchemyWorkflowNodeExecutionRepository(WorkflowNodeExecutionRepository)
session.merge(db_model)
session.flush()
# Save LLMGenerationDetail for LLM nodes with successful execution
if (
domain_model.node_type == NodeType.LLM
and domain_model.status == WorkflowNodeExecutionStatus.SUCCEEDED
and domain_model.outputs is not None
):
self._save_llm_generation_detail(session, domain_model)
def _save_llm_generation_detail(self, session, execution: WorkflowNodeExecution) -> None:
"""
Save LLM generation detail for LLM nodes.
Extracts reasoning_content, tool_calls, and sequence from outputs and metadata.
"""
outputs = execution.outputs or {}
metadata = execution.metadata or {}
reasoning_list = self._extract_reasoning(outputs)
tool_calls_list = self._extract_tool_calls(metadata.get(WorkflowNodeExecutionMetadataKey.AGENT_LOG))
if not reasoning_list and not tool_calls_list:
return
sequence = self._build_generation_sequence(outputs.get("text", ""), reasoning_list, tool_calls_list)
self._upsert_generation_detail(session, execution, reasoning_list, tool_calls_list, sequence)
def _extract_reasoning(self, outputs: Mapping[str, Any]) -> list[str]:
"""Extract reasoning_content as a clean list of non-empty strings."""
reasoning_content = outputs.get("reasoning_content")
if isinstance(reasoning_content, str):
trimmed = reasoning_content.strip()
return [trimmed] if trimmed else []
if isinstance(reasoning_content, list):
return [item.strip() for item in reasoning_content if isinstance(item, str) and item.strip()]
return []
def _extract_tool_calls(self, agent_log: Any) -> list[dict[str, str]]:
"""Extract tool call records from agent logs."""
if not agent_log or not isinstance(agent_log, list):
return []
tool_calls: list[dict[str, str]] = []
for log in agent_log:
log_data = log.data if hasattr(log, "data") else (log.get("data", {}) if isinstance(log, dict) else {})
tool_name = log_data.get("tool_name")
if tool_name and str(tool_name).strip():
tool_calls.append(
{
"id": log_data.get("tool_call_id", ""),
"name": tool_name,
"arguments": json.dumps(log_data.get("tool_args", {})),
"result": str(log_data.get("output", "")),
}
)
return tool_calls
def _build_generation_sequence(
self, text: str, reasoning_list: list[str], tool_calls_list: list[dict[str, str]]
) -> list[dict[str, Any]]:
"""Build a simple content/reasoning/tool_call sequence."""
sequence: list[dict[str, Any]] = []
if text:
sequence.append({"type": "content", "start": 0, "end": len(text)})
for index in range(len(reasoning_list)):
sequence.append({"type": "reasoning", "index": index})
for index in range(len(tool_calls_list)):
sequence.append({"type": "tool_call", "index": index})
return sequence
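# Worked example (editorial): for text="Done.", one reasoning string and one
# tool call, this returns
#   [{"type": "content", "start": 0, "end": 5},
#    {"type": "reasoning", "index": 0},
#    {"type": "tool_call", "index": 0}]
# Ordering here is deliberately coarse (content, then reasoning, then tool
# calls), unlike the per-thought interleaving in the EasyUI task pipeline.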
def _upsert_generation_detail(
self,
session,
execution: WorkflowNodeExecution,
reasoning_list: list[str],
tool_calls_list: list[dict[str, str]],
sequence: list[dict[str, Any]],
) -> None:
"""Insert or update LLMGenerationDetail with serialized fields."""
existing = (
session.query(LLMGenerationDetail)
.filter_by(
workflow_run_id=execution.workflow_execution_id,
node_id=execution.node_id,
)
.first()
)
reasoning_json = json.dumps(reasoning_list) if reasoning_list else None
tool_calls_json = json.dumps(tool_calls_list) if tool_calls_list else None
sequence_json = json.dumps(sequence) if sequence else None
if existing:
existing.reasoning_content = reasoning_json
existing.tool_calls = tool_calls_json
existing.sequence = sequence_json
return
generation_detail = LLMGenerationDetail(
tenant_id=self._tenant_id,
app_id=self._app_id,
workflow_run_id=execution.workflow_execution_id,
node_id=execution.node_id,
reasoning_content=reasoning_json,
tool_calls=tool_calls_json,
sequence=sequence_json,
)
session.add(generation_detail)
def get_db_models_by_workflow_run(
self,
workflow_run_id: str,

View File

@ -0,0 +1,47 @@
from .bash.dify_cli import (
DifyCliBinary,
DifyCliConfig,
DifyCliEnvConfig,
DifyCliLocator,
DifyCliToolConfig,
)
from .constants import (
APP_ASSETS_PATH,
APP_ASSETS_ZIP_PATH,
DIFY_CLI_CONFIG_PATH,
DIFY_CLI_PATH,
DIFY_CLI_PATH_PATTERN,
)
from .initializer import AppAssetsInitializer, DifyCliInitializer, SandboxInitializer
from .manager import SandboxManager
from .session import SandboxSession
from .storage import ArchiveSandboxStorage, SandboxStorage
from .utils.debug import sandbox_debug
from .utils.encryption import create_sandbox_config_encrypter, masked_config
from .vm import SandboxBuilder, SandboxType, VMConfig
__all__ = [
"APP_ASSETS_PATH",
"APP_ASSETS_ZIP_PATH",
"DIFY_CLI_CONFIG_PATH",
"DIFY_CLI_PATH",
"DIFY_CLI_PATH_PATTERN",
"AppAssetsInitializer",
"ArchiveSandboxStorage",
"DifyCliBinary",
"DifyCliConfig",
"DifyCliEnvConfig",
"DifyCliInitializer",
"DifyCliLocator",
"DifyCliToolConfig",
"SandboxBuilder",
"SandboxInitializer",
"SandboxManager",
"SandboxSession",
"SandboxStorage",
"SandboxType",
"VMConfig",
"create_sandbox_config_encrypter",
"masked_config",
"sandbox_debug",
]

View File

@ -0,0 +1,15 @@
from .dify_cli import (
DifyCliBinary,
DifyCliConfig,
DifyCliEnvConfig,
DifyCliLocator,
DifyCliToolConfig,
)
__all__ = [
"DifyCliBinary",
"DifyCliConfig",
"DifyCliEnvConfig",
"DifyCliLocator",
"DifyCliToolConfig",
]

View File

@ -0,0 +1,96 @@
from collections.abc import Generator
from typing import Any
from core.tools.__base.tool import Tool
from core.tools.__base.tool_runtime import ToolRuntime
from core.tools.entities.common_entities import I18nObject
from core.tools.entities.tool_entities import (
ToolDescription,
ToolEntity,
ToolIdentity,
ToolInvokeMessage,
ToolParameter,
ToolProviderType,
)
from core.virtual_environment.__base.helpers import submit_command, with_connection
from core.virtual_environment.__base.virtual_environment import VirtualEnvironment
from ..utils.debug import sandbox_debug
COMMAND_TIMEOUT_SECONDS = 60
class SandboxBashTool(Tool):
def __init__(self, sandbox: VirtualEnvironment, tenant_id: str):
self._sandbox = sandbox
entity = ToolEntity(
identity=ToolIdentity(
author="Dify",
name="bash",
label=I18nObject(en_US="Bash", zh_Hans="Bash"),
provider="sandbox",
),
parameters=[
ToolParameter.get_simple_instance(
name="command",
llm_description="The bash command to execute in current working directory",
typ=ToolParameter.ToolParameterType.STRING,
required=True,
),
],
description=ToolDescription(
human=I18nObject(
en_US="Execute bash commands in current working directory",
),
llm="Execute bash commands in current working directory. "
"Use this tool to run shell commands, scripts, or interact with the system. "
"The command will be executed in the current working directory.",
),
)
runtime = ToolRuntime(tenant_id=tenant_id)
super().__init__(entity=entity, runtime=runtime)
def tool_provider_type(self) -> ToolProviderType:
return ToolProviderType.BUILT_IN
def _invoke(
self,
user_id: str,
tool_parameters: dict[str, Any],
conversation_id: str | None = None,
app_id: str | None = None,
message_id: str | None = None,
) -> Generator[ToolInvokeMessage, None, None]:
command = tool_parameters.get("command", "")
if not command:
yield self.create_text_message("Error: No command provided")
return
try:
with with_connection(self._sandbox) as conn:
cmd_list = ["bash", "-c", command]
sandbox_debug("bash_tool", "cmd_list", cmd_list)
future = submit_command(self._sandbox, conn, cmd_list)
timeout = COMMAND_TIMEOUT_SECONDS if COMMAND_TIMEOUT_SECONDS > 0 else None
result = future.result(timeout=timeout)
stdout = result.stdout.decode("utf-8", errors="replace") if result.stdout else ""
stderr = result.stderr.decode("utf-8", errors="replace") if result.stderr else ""
exit_code = result.exit_code
output_parts: list[str] = []
if stdout:
output_parts.append(f"\n{stdout}")
if stderr:
output_parts.append(f"\n{stderr}")
output_parts.append(f"\nCommand exited with code {exit_code}")
yield self.create_text_message("\n".join(output_parts))
except TimeoutError:
yield self.create_text_message(f"Error: Command timed out after {COMMAND_TIMEOUT_SECONDS}s")
except Exception as e:
yield self.create_text_message(f"Error: {e!s}")

View File

@ -0,0 +1,134 @@
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING, Any
from pydantic import BaseModel, Field
from core.model_runtime.utils.encoders import jsonable_encoder
from core.session.cli_api import CliApiSession
from core.tools.entities.tool_entities import ToolParameter, ToolProviderType
from core.virtual_environment.__base.entities import Arch, OperatingSystem
from ..constants import DIFY_CLI_PATH_PATTERN
if TYPE_CHECKING:
from core.tools.__base.tool import Tool
class DifyCliBinary(BaseModel):
operating_system: OperatingSystem = Field(alias="os")
arch: Arch
path: Path
model_config = {
"populate_by_name": True,
"arbitrary_types_allowed": True,
}
class DifyCliLocator:
def __init__(self, root: str | Path | None = None) -> None:
from configs import dify_config
if root is not None:
self._root = Path(root)
elif dify_config.SANDBOX_DIFY_CLI_ROOT:
self._root = Path(dify_config.SANDBOX_DIFY_CLI_ROOT)
else:
api_root = Path(__file__).resolve().parents[3]
self._root = api_root / "bin"
def resolve(self, operating_system: OperatingSystem, arch: Arch) -> DifyCliBinary:
filename = DIFY_CLI_PATH_PATTERN.format(os=operating_system.value, arch=arch.value)
candidate = self._root / filename
if not candidate.is_file():
raise FileNotFoundError(
f"dify CLI binary not found: {candidate}. Configure SANDBOX_DIFY_CLI_ROOT or ensure the file exists."
)
return DifyCliBinary(os=operating_system, arch=arch, path=candidate)
class DifyCliEnvConfig(BaseModel):
files_url: str
cli_api_url: str
cli_api_session_id: str
cli_api_secret: str
class DifyCliToolConfig(BaseModel):
provider_type: str
identity: dict[str, Any]
description: dict[str, Any]
parameters: list[dict[str, Any]]
@classmethod
def transform_provider_type(cls, tool_provider_type: ToolProviderType) -> str:
provider_type = tool_provider_type
match tool_provider_type:
case ToolProviderType.BUILT_IN | ToolProviderType.PLUGIN:
provider_type = "builtin"
case ToolProviderType.MCP | ToolProviderType.WORKFLOW | ToolProviderType.API:
provider_type = provider_type
case _:
raise ValueError(f"Invalid tool provider type: {tool_provider_type}")
return provider_type
@classmethod
def create_from_tool(cls, tool: Tool) -> DifyCliToolConfig:
return cls(
provider_type=cls.transform_provider_type(tool.tool_provider_type()),
identity=to_json(tool.entity.identity),
description=to_json(tool.entity.description),
parameters=[cls.transform_parameter(parameter) for parameter in tool.entity.parameters],
)
@classmethod
def transform_parameter(cls, parameter: ToolParameter) -> dict[str, Any]:
transformed_parameter = to_json(parameter)
transformed_parameter.pop("input_schema", None)
transformed_parameter.pop("form", None)
match parameter.type:
case (
ToolParameter.ToolParameterType.SYSTEM_FILES
| ToolParameter.ToolParameterType.FILE
| ToolParameter.ToolParameterType.FILES
):
return transformed_parameter
case _:
return transformed_parameter
class DifyCliConfig(BaseModel):
env: DifyCliEnvConfig
tools: list[DifyCliToolConfig]
@classmethod
def create(cls, session: CliApiSession, tools: list[Tool]) -> DifyCliConfig:
from configs import dify_config
cli_api_url = dify_config.CLI_API_URL
return cls(
env=DifyCliEnvConfig(
files_url=dify_config.FILES_URL,
cli_api_url=cli_api_url,
cli_api_session_id=session.id,
cli_api_secret=session.secret,
),
tools=[DifyCliToolConfig.create_from_tool(tool) for tool in tools],
)
def to_json(obj: Any) -> dict[str, Any]:
return jsonable_encoder(obj, exclude_unset=True, exclude_defaults=True, exclude_none=True)
__all__ = [
"DifyCliBinary",
"DifyCliConfig",
"DifyCliEnvConfig",
"DifyCliLocator",
"DifyCliToolConfig",
]

View File

@ -0,0 +1,11 @@
from typing import Final
DIFY_CLI_PATH: Final[str] = ".dify/bin/dify"
DIFY_CLI_PATH_PATTERN: Final[str] = "dify-cli-{os}-{arch}"
DIFY_CLI_CONFIG_PATH: Final[str] = ".dify_cli.json"
# App Assets
APP_ASSETS_PATH: Final[str] = "assets"
APP_ASSETS_ZIP_PATH: Final[str] = ".dify/tmp/assets.zip"

View File

@ -0,0 +1,3 @@
from .providers import SandboxProviderApiEntity
__all__ = ["SandboxProviderApiEntity"]

View File

@ -0,0 +1,21 @@
from collections.abc import Mapping
from typing import Any
from pydantic import BaseModel, Field
class SandboxProviderApiEntity(BaseModel):
provider_type: str = Field(..., description="Provider type identifier")
is_system_configured: bool = Field(default=False)
is_tenant_configured: bool = Field(default=False)
is_active: bool = Field(default=False)
config: Mapping[str, Any] = Field(default_factory=dict)
config_schema: list[dict[str, Any]] = Field(default_factory=list)
class SandboxProviderEntity(BaseModel):
id: str = Field(..., description="Provider identifier")
provider_type: str = Field(..., description="Provider type identifier")
is_active: bool = Field(default=False)
config: Mapping[str, Any] = Field(default_factory=dict)
config_schema: list[dict[str, Any]] = Field(default_factory=list)

View File

@ -0,0 +1,9 @@
from .app_assets_initializer import AppAssetsInitializer
from .base import SandboxInitializer
from .dify_cli_initializer import DifyCliInitializer
__all__ = [
"AppAssetsInitializer",
"DifyCliInitializer",
"SandboxInitializer",
]

View File

@ -0,0 +1,87 @@
import logging
from io import BytesIO
from sqlalchemy.orm import Session
from core.virtual_environment.__base.helpers import execute, with_connection
from core.virtual_environment.__base.virtual_environment import VirtualEnvironment
from extensions.ext_database import db
from extensions.ext_storage import storage
from models.app_asset import AppAssets
from ..constants import APP_ASSETS_PATH, APP_ASSETS_ZIP_PATH
from .base import SandboxInitializer
logger = logging.getLogger(__name__)
class AppAssetsInitializer(SandboxInitializer):
def __init__(self, tenant_id: str, app_id: str) -> None:
self._tenant_id = tenant_id
self._app_id = app_id
def initialize(self, env: VirtualEnvironment) -> None:
published = self._get_latest_published()
if not published:
logger.debug("No published assets for app_id=%s, skipping", self._app_id)
return
zip_key = AppAssets.get_published_storage_key(self._tenant_id, self._app_id, published.id)
try:
zip_data = storage.load_once(zip_key)
except Exception:
logger.warning(
"Failed to load assets zip for app_id=%s, key=%s",
self._app_id,
zip_key,
exc_info=True,
)
return
env.upload_file(APP_ASSETS_ZIP_PATH, BytesIO(zip_data))
with with_connection(env) as conn:
execute(
env,
["mkdir", "-p", ".dify/tmp"],
connection=conn,
error_message="Failed to create temp directory",
)
execute(
env,
["mkdir", "-p", APP_ASSETS_PATH],
connection=conn,
error_message="Failed to create assets directory",
)
execute(
env,
["unzip", "-o", APP_ASSETS_ZIP_PATH, "-d", APP_ASSETS_PATH],
connection=conn,
timeout=60,
error_message="Failed to unzip assets",
)
execute(
env,
["rm", "-f", APP_ASSETS_ZIP_PATH],
connection=conn,
error_message="Failed to cleanup temp zip file",
)
logger.info(
"App assets initialized for app_id=%s, published_id=%s",
self._app_id,
published.id,
)
def _get_latest_published(self) -> AppAssets | None:
with Session(db.engine) as session:
return (
session.query(AppAssets)
.filter(
AppAssets.tenant_id == self._tenant_id,
AppAssets.app_id == self._app_id,
AppAssets.version != AppAssets.VERSION_DRAFT,
)
.order_by(AppAssets.created_at.desc())
.first()
)

View File

@ -0,0 +1,8 @@
from abc import ABC, abstractmethod
from core.virtual_environment.__base.virtual_environment import VirtualEnvironment
class SandboxInitializer(ABC):
@abstractmethod
def initialize(self, env: VirtualEnvironment) -> None: ...

View File

@ -0,0 +1,29 @@
import logging
from io import BytesIO
from pathlib import Path
from core.virtual_environment.__base.helpers import execute
from core.virtual_environment.__base.virtual_environment import VirtualEnvironment
from ..bash.dify_cli import DifyCliLocator
from ..constants import DIFY_CLI_PATH
from .base import SandboxInitializer
logger = logging.getLogger(__name__)
class DifyCliInitializer(SandboxInitializer):
def __init__(self, cli_root: str | Path | None = None) -> None:
self._locator = DifyCliLocator(root=cli_root)
def initialize(self, env: VirtualEnvironment) -> None:
binary = self._locator.resolve(env.metadata.os, env.metadata.arch)
env.upload_file(DIFY_CLI_PATH, BytesIO(binary.path.read_bytes()))
execute(
env,
["chmod", "+x", DIFY_CLI_PATH],
timeout=10,
error_message="Failed to mark dify CLI as executable",
)
logger.info("Dify CLI uploaded to sandbox, path=%s", DIFY_CLI_PATH)

api/core/sandbox/manager.py (new file)
View File

@ -0,0 +1,106 @@
from __future__ import annotations
import logging
import threading
from typing import Final
from core.virtual_environment.__base.virtual_environment import VirtualEnvironment
logger = logging.getLogger(__name__)
class SandboxManager:
"""Process-local registry for workflow sandboxes.
Stores `VirtualEnvironment` references keyed by `workflow_execution_id`.
Concurrency: the registry is split into hash shards and each shard is updated with
copy-on-write under a shard lock. Reads are lock-free (snapshot dict) to reduce
contention in hot paths like `get()`.
"""
# FIXME:(sandbox) Prefer a workflow-level context on GraphRuntimeState to store workflow-scoped shared objects.
_NUM_SHARDS: Final[int] = 1024
_SHARD_MASK: Final[int] = _NUM_SHARDS - 1
_shard_locks: Final[tuple[threading.Lock, ...]] = tuple(threading.Lock() for _ in range(_NUM_SHARDS))
_shards: list[dict[str, VirtualEnvironment]] = [{} for _ in range(_NUM_SHARDS)]
@classmethod
def _shard_index(cls, workflow_execution_id: str) -> int:
return hash(workflow_execution_id) & cls._SHARD_MASK
@classmethod
def register(cls, workflow_execution_id: str, sandbox: VirtualEnvironment) -> None:
if not workflow_execution_id:
raise ValueError("workflow_execution_id cannot be empty")
shard_index = cls._shard_index(workflow_execution_id)
with cls._shard_locks[shard_index]:
shard = cls._shards[shard_index]
if workflow_execution_id in shard:
raise RuntimeError(
f"Sandbox already registered for workflow_execution_id={workflow_execution_id}. "
"Call unregister() first if you need to replace it."
)
new_shard = dict(shard)
new_shard[workflow_execution_id] = sandbox
cls._shards[shard_index] = new_shard
logger.debug(
"Registered sandbox for workflow_execution_id=%s, sandbox_id=%s",
workflow_execution_id,
sandbox.metadata.id,
)
@classmethod
def get(cls, workflow_execution_id: str) -> VirtualEnvironment | None:
shard_index = cls._shard_index(workflow_execution_id)
return cls._shards[shard_index].get(workflow_execution_id)
@classmethod
def unregister(cls, workflow_execution_id: str) -> VirtualEnvironment | None:
shard_index = cls._shard_index(workflow_execution_id)
with cls._shard_locks[shard_index]:
shard = cls._shards[shard_index]
sandbox = shard.get(workflow_execution_id)
if sandbox is None:
return None
new_shard = dict(shard)
new_shard.pop(workflow_execution_id, None)
cls._shards[shard_index] = new_shard
logger.debug(
"Unregistered sandbox for workflow_execution_id=%s, sandbox_id=%s",
workflow_execution_id,
sandbox.metadata.id,
)
return sandbox
@classmethod
def has(cls, workflow_execution_id: str) -> bool:
shard_index = cls._shard_index(workflow_execution_id)
return workflow_execution_id in cls._shards[shard_index]
@classmethod
def is_sandbox_runtime(cls, workflow_execution_id: str) -> bool:
return cls.has(workflow_execution_id)
@classmethod
def clear(cls) -> None:
for lock in cls._shard_locks:
lock.acquire()
try:
for i in range(cls._NUM_SHARDS):
cls._shards[i] = {}
logger.debug("Cleared all registered sandboxes")
finally:
for lock in reversed(cls._shard_locks):
lock.release()
@classmethod
def count(cls) -> int:
return sum(len(shard) for shard in cls._shards)
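A brief usage sketch; `vm` stands in for any built VirtualEnvironment instance.
SandboxManager.register("run-123", vm)
assert SandboxManager.get("run-123") is vm     # lock-free read of the shard snapshot
SandboxManager.unregister("run-123")           # returns the vm for release
# Copy-on-write is what makes the unlocked get() safe: writers swap in a new
# dict under the shard lock, so readers never observe a half-updated shard.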

api/core/sandbox/session.py (new file)
View File

@ -0,0 +1,101 @@
from __future__ import annotations
import json
import logging
from io import BytesIO
from types import TracebackType
from typing import TYPE_CHECKING
from core.session.cli_api import CliApiSessionManager
from core.virtual_environment.__base.helpers import execute
from core.virtual_environment.__base.virtual_environment import VirtualEnvironment
from .bash.dify_cli import DifyCliConfig
from .constants import DIFY_CLI_CONFIG_PATH, DIFY_CLI_PATH
from .manager import SandboxManager
from .utils.debug import sandbox_debug
if TYPE_CHECKING:
from core.tools.__base.tool import Tool
from .bash.bash_tool import SandboxBashTool
logger = logging.getLogger(__name__)
class SandboxSession:
def __init__(
self,
*,
workflow_execution_id: str,
tenant_id: str,
user_id: str,
tools: list[Tool],
) -> None:
self._workflow_execution_id = workflow_execution_id
self._tenant_id = tenant_id
self._user_id = user_id
self._tools = tools
self._sandbox: VirtualEnvironment | None = None
self._bash_tool: SandboxBashTool | None = None
self._session_id: str | None = None
def __enter__(self) -> SandboxSession:
sandbox = SandboxManager.get(self._workflow_execution_id)
if sandbox is None:
raise RuntimeError(f"Sandbox not found for workflow_execution_id={self._workflow_execution_id}")
session = CliApiSessionManager().create(tenant_id=self._tenant_id, user_id=self._user_id)
self._session_id = session.id
try:
config = DifyCliConfig.create(session, self._tools)
config_json = json.dumps(config.model_dump(mode="json"), ensure_ascii=False)
sandbox_debug("sandbox", "config_json", config_json)
sandbox.upload_file(DIFY_CLI_CONFIG_PATH, BytesIO(config_json.encode("utf-8")))
execute(
sandbox,
[DIFY_CLI_PATH, "init"],
timeout=30,
error_message="Failed to initialize Dify CLI in sandbox",
)
except Exception:
CliApiSessionManager().delete(session.id)
self._session_id = None
raise
from .bash.bash_tool import SandboxBashTool
self._sandbox = sandbox
self._bash_tool = SandboxBashTool(sandbox=sandbox, tenant_id=self._tenant_id)
return self
def __exit__(
self,
exc_type: type[BaseException] | None,
exc: BaseException | None,
tb: TracebackType | None,
) -> bool:
try:
self.cleanup()
except Exception:
logger.exception("Failed to cleanup SandboxSession")
return False
@property
def bash_tool(self) -> SandboxBashTool:
if self._bash_tool is None:
raise RuntimeError("SandboxSession is not initialized")
return self._bash_tool
def cleanup(self) -> None:
if self._session_id is None:
return
CliApiSessionManager().delete(self._session_id)
logger.debug("Cleaned up SandboxSession session_id=%s", self._session_id)
self._session_id = None
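A hedged sketch of the intended call site. The sandbox must already have been registered for this workflow_execution_id (e.g. by SandboxLayer), and Tool.invoke is assumed from the tool base class rather than shown in this diff.
with SandboxSession(
    workflow_execution_id="run-123",
    tenant_id="t1",
    user_id="u1",
    tools=agent_tools,  # list[Tool] exposed through the CLI
) as session:
    for msg in session.bash_tool.invoke(user_id="u1", tool_parameters={"command": "ls assets"}):
        ...  # stream ToolInvokeMessage results
# On exit, cleanup() deletes the CLI API session even if the body raised.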

View File

@ -0,0 +1,4 @@
from .archive_storage import ArchiveSandboxStorage
from .sandbox_storage import SandboxStorage
__all__ = ["ArchiveSandboxStorage", "SandboxStorage"]

View File

@ -0,0 +1,66 @@
import logging
from io import BytesIO
from core.virtual_environment.__base.helpers import try_execute
from core.virtual_environment.__base.virtual_environment import VirtualEnvironment
from extensions.ext_storage import storage
from .sandbox_storage import SandboxStorage
logger = logging.getLogger(__name__)
ARCHIVE_NAME = "workspace.tar.gz"
WORKSPACE_DIR = "."
class ArchiveSandboxStorage(SandboxStorage):
def __init__(self, tenant_id: str, sandbox_id: str):
self._storage = storage
self._tenant_id = tenant_id
self._sandbox_id = sandbox_id
@property
def _storage_key(self) -> str:
return f"sandbox/{self._tenant_id}/{self._sandbox_id}.tar.gz"
def mount(self, sandbox: VirtualEnvironment) -> bool:
if not self.exists():
logger.debug("No archive found for sandbox %s, skipping mount", self._sandbox_id)
return False
archive_data = self._storage.load_once(self._storage_key)
sandbox.upload_file(ARCHIVE_NAME, BytesIO(archive_data))
result = try_execute(sandbox, ["tar", "-xzf", ARCHIVE_NAME], timeout=60)
if result.is_error:
logger.error("Failed to extract archive: %s", result.error_message)
return False
try_execute(sandbox, ["rm", ARCHIVE_NAME], timeout=10)
logger.info("Mounted archive for sandbox %s", self._sandbox_id)
return True
def unmount(self, sandbox: VirtualEnvironment) -> bool:
result = try_execute(
sandbox,
["tar", "-czf", ARCHIVE_NAME, "-C", WORKSPACE_DIR, "."],
timeout=120,
)
if result.is_error:
logger.error("Failed to create archive: %s", result.error_message)
return False
archive_content = sandbox.download_file(ARCHIVE_NAME)
self._storage.save(self._storage_key, archive_content.getvalue())
logger.info("Unmounted archive for sandbox %s", self._sandbox_id)
return True
def exists(self) -> bool:
return self._storage.exists(self._storage_key)
def delete(self) -> None:
if self.exists():
self._storage.delete(self._storage_key)
logger.info("Deleted archive for sandbox %s", self._sandbox_id)

View File

@ -0,0 +1,21 @@
from abc import ABC, abstractmethod
from core.virtual_environment.__base.virtual_environment import VirtualEnvironment
class SandboxStorage(ABC):
@abstractmethod
def mount(self, sandbox: VirtualEnvironment) -> bool:
"""Load files from storage into VM. Returns True if files were loaded."""
@abstractmethod
def unmount(self, sandbox: VirtualEnvironment) -> bool:
"""Save files from VM to storage. Returns True if files were saved."""
@abstractmethod
def exists(self) -> bool:
"""Check if storage has saved data."""
@abstractmethod
def delete(self) -> None:
"""Delete saved data from storage."""

View File

@ -0,0 +1,2 @@
# Sandbox utilities
# Connection helpers have been moved to core.virtual_environment.helpers

View File

@ -0,0 +1,22 @@
"""Sandbox debug utilities. TODO: Remove this module when sandbox debugging is complete."""
from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
pass
SANDBOX_DEBUG_ENABLED = True
def sandbox_debug(tag: str, message: str, data: Any = None) -> None:
if not SANDBOX_DEBUG_ENABLED:
return
# Lazy import to avoid circular dependency
from core.callback_handler.agent_tool_callback_handler import print_text
print_text(f"\n[{tag}]\n", color="blue")
if data is not None:
print_text(f"{message}: {data}\n", color="blue")
else:
print_text(f"{message}\n", color="blue")

View File

@ -0,0 +1,48 @@
from collections.abc import Mapping
from typing import Any
from core.entities.provider_entities import BasicProviderConfig
from core.helper.provider_cache import ProviderCredentialsCache
from core.helper.provider_encryption import ProviderConfigCache, ProviderConfigEncrypter, create_provider_encrypter
class SandboxProviderConfigCache(ProviderCredentialsCache):
def __init__(self, tenant_id: str, provider_type: str):
super().__init__(tenant_id=tenant_id, provider_type=provider_type)
def _generate_cache_key(self, **kwargs) -> str:
tenant_id = kwargs["tenant_id"]
provider_type = kwargs["provider_type"]
return f"sandbox_config:tenant_id:{tenant_id}:provider_type:{provider_type}"
def create_sandbox_config_encrypter(
tenant_id: str,
config_schema: list[BasicProviderConfig],
provider_type: str,
) -> tuple[ProviderConfigEncrypter, ProviderConfigCache]:
cache = SandboxProviderConfigCache(tenant_id=tenant_id, provider_type=provider_type)
return create_provider_encrypter(tenant_id=tenant_id, config=config_schema, cache=cache)
def masked_config(
schemas: list[BasicProviderConfig],
config: Mapping[str, Any],
) -> Mapping[str, Any]:
masked = dict(config)
configs = {x.name: x for x in schemas}
for key, value in config.items():
schema = configs.get(key)
if not schema:
masked[key] = value
continue
if schema.type == BasicProviderConfig.Type.SECRET_INPUT:
if not isinstance(value, str):
continue
if len(value) <= 4:
masked[key] = "*" * len(value)
else:
masked[key] = value[:2] + "*" * (len(value) - 4) + value[-2:]
else:
masked[key] = value
return masked
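# Worked example (editorial): with a SECRET_INPUT schema entry named "api_key",
#   masked_config(schemas, {"api_key": "sk-abcdef123", "region": "us"})
#   -> {"api_key": "sk********23", "region": "us"}
# Secrets keep their first and last two characters; values of four characters
# or fewer are fully starred. Non-secret keys pass through unchanged.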

api/core/sandbox/vm.py (new file)
View File

@ -0,0 +1,109 @@
"""
Facade module for virtual machine providers.
Provides unified interfaces to access different VM provider implementations
(E2B, Docker, Local) through VMType, VMBuilder, and VMConfig.
"""
from __future__ import annotations
from collections.abc import Mapping, Sequence
from enum import StrEnum
from typing import Any
from configs import dify_config
from core.entities.provider_entities import BasicProviderConfig
from core.virtual_environment.__base.virtual_environment import VirtualEnvironment
from .initializer import SandboxInitializer
class SandboxType(StrEnum):
"""
Sandbox types.
"""
DOCKER = "docker"
E2B = "e2b"
LOCAL = "local"
@classmethod
def get_all(cls) -> list[str]:
"""
Get all available sandbox types.
"""
if dify_config.EDITION == "SELF_HOSTED":
return [p.value for p in cls]
else:
return [p.value for p in cls if p != SandboxType.LOCAL]
def _get_sandbox_class(sandbox_type: SandboxType) -> type[VirtualEnvironment]:
match sandbox_type:
case SandboxType.DOCKER:
from core.virtual_environment.providers.docker_daemon_sandbox import DockerDaemonEnvironment
return DockerDaemonEnvironment
case SandboxType.E2B:
from core.virtual_environment.providers.e2b_sandbox import E2BEnvironment
return E2BEnvironment
case SandboxType.LOCAL:
from core.virtual_environment.providers.local_without_isolation import LocalVirtualEnvironment
return LocalVirtualEnvironment
case _:
raise ValueError(f"Unsupported sandbox type: {sandbox_type}")
class SandboxBuilder:
def __init__(self, tenant_id: str, sandbox_type: SandboxType) -> None:
self._tenant_id = tenant_id
self._sandbox_type = sandbox_type
self._user_id: str | None = None
self._options: dict[str, Any] = {}
self._environments: dict[str, str] = {}
self._initializers: list[SandboxInitializer] = []
def user(self, user_id: str) -> SandboxBuilder:
self._user_id = user_id
return self
def options(self, options: Mapping[str, Any]) -> SandboxBuilder:
self._options = dict(options)
return self
def environments(self, environments: Mapping[str, str]) -> SandboxBuilder:
self._environments = dict(environments)
return self
def initializer(self, initializer: SandboxInitializer) -> SandboxBuilder:
self._initializers.append(initializer)
return self
def initializers(self, initializers: Sequence[SandboxInitializer]) -> SandboxBuilder:
self._initializers.extend(initializers)
return self
def build(self) -> VirtualEnvironment:
vm_class = _get_sandbox_class(self._sandbox_type)
vm = vm_class(
tenant_id=self._tenant_id,
options=self._options,
environments=self._environments,
user_id=self._user_id,
)
for init in self._initializers:
init.initialize(vm)
return vm
@staticmethod
def validate(vm_type: SandboxType, options: Mapping[str, Any]) -> None:
vm_class = _get_sandbox_class(vm_type)
vm_class.validate(options)
class VMConfig:
@staticmethod
def get_schema(vm_type: SandboxType) -> list[BasicProviderConfig]:
return _get_sandbox_class(vm_type).get_config_schema()
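A hedged sketch of the builder flow; the options mapping is provider-specific and the keys shown are purely illustrative.
vm = (
    SandboxBuilder(tenant_id="t1", sandbox_type=SandboxType.DOCKER)
    .user("u1")
    .options({"image": "python:3.12-slim"})        # hypothetical provider option
    .initializer(DifyCliInitializer())
    .initializer(AppAssetsInitializer("t1", "a1"))
    .build()                                        # create VM, then run initializers in order
)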

View File

@ -0,0 +1,11 @@
from .cli_api import CliApiSession, CliApiSessionManager
from .session import BaseSession, RedisSessionStorage, SessionManager, SessionStorage
__all__ = [
"BaseSession",
"CliApiSession",
"CliApiSessionManager",
"RedisSessionStorage",
"SessionManager",
"SessionStorage",
]

View File

@ -0,0 +1,20 @@
import secrets
from typing import Any
from pydantic import Field
from .session import BaseSession, SessionManager
class CliApiSession(BaseSession):
secret: str = Field(default_factory=lambda: secrets.token_urlsafe(32))
class CliApiSessionManager(SessionManager[CliApiSession]):
def __init__(self, ttl: int | None = None):
super().__init__(key_prefix="cli_api_session", session_class=CliApiSession, ttl=ttl)
def create(self, tenant_id: str, user_id: str, context: dict[str, Any] | None = None) -> CliApiSession:
session = CliApiSession(tenant_id=tenant_id, user_id=user_id, context=context or {})
self.save(session)
return session

api/core/session/session.py (new file)
View File

@ -0,0 +1,106 @@
import json
import logging
import uuid
from datetime import UTC, datetime
from typing import Any, Generic, Protocol, TypeVar
from pydantic import BaseModel, Field, ValidationError
from extensions.ext_redis import redis_client
logger = logging.getLogger(__name__)
class SessionStorage(Protocol):
"""Session storage interface."""
def get(self, key: str) -> str | None: ...
def set(self, key: str, value: str, ttl: int) -> None: ...
def delete(self, key: str) -> bool: ...
def exists(self, key: str) -> bool: ...
def refresh_ttl(self, key: str, ttl: int) -> bool: ...
class RedisSessionStorage:
"""Redis storage implementation (default)."""
def get(self, key: str) -> str | None:
result = redis_client.get(key)
if result is None:
return None
return result.decode() if isinstance(result, bytes) else result
def set(self, key: str, value: str, ttl: int) -> None:
redis_client.setex(key, ttl, value)
def delete(self, key: str) -> bool:
return redis_client.delete(key) > 0
def exists(self, key: str) -> bool:
return redis_client.exists(key) > 0
def refresh_ttl(self, key: str, ttl: int) -> bool:
return bool(redis_client.expire(key, ttl))
class BaseSession(BaseModel):
"""Base session model."""
id: str = Field(default_factory=lambda: str(uuid.uuid4()))
tenant_id: str
user_id: str
created_at: datetime = Field(default_factory=lambda: datetime.now(UTC))
updated_at: datetime = Field(default_factory=lambda: datetime.now(UTC))
context: dict[str, Any] = Field(default_factory=dict)
def update_timestamp(self) -> None:
self.updated_at = datetime.now(UTC)
T = TypeVar("T", bound=BaseSession)
class SessionManager(Generic[T]):
"""Generic session manager."""
DEFAULT_TTL = 7200 # 2 hours
def __init__(
self,
key_prefix: str,
session_class: type[T],
storage: SessionStorage | None = None,
ttl: int | None = None,
):
self._key_prefix = key_prefix
self._session_class = session_class
self._storage = storage or RedisSessionStorage()
self._ttl = ttl or self.DEFAULT_TTL
def _get_key(self, session_id: str) -> str:
return f"{self._key_prefix}:{session_id}"
def save(self, session: T) -> None:
session.update_timestamp()
key = self._get_key(session.id)
self._storage.set(key, session.model_dump_json(), self._ttl)
def get(self, session_id: str) -> T | None:
key = self._get_key(session_id)
data = self._storage.get(key)
if data is None:
return None
try:
return self._session_class.model_validate(json.loads(data))
except (json.JSONDecodeError, ValidationError) as e:
logger.warning("Failed to deserialize session %s: %s", session_id, e)
return None
def delete(self, session_id: str) -> bool:
return self._storage.delete(self._get_key(session_id))
def exists(self, session_id: str) -> bool:
return self._storage.exists(self._get_key(session_id))
def refresh_ttl(self, session_id: str) -> bool:
return self._storage.refresh_ttl(self._get_key(session_id), self._ttl)
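
Because SessionStorage is a Protocol, tests can swap Redis out structurally. A minimal in-memory implementation might look like the sketch below; it is not part of this diff, just an illustration of the interface.

import time

class InMemorySessionStorage:
    """Hypothetical test double satisfying the SessionStorage protocol."""

    def __init__(self) -> None:
        self._data: dict[str, tuple[str, float]] = {}  # key -> (value, expiry)

    def get(self, key: str) -> str | None:
        item = self._data.get(key)
        if item is None or item[1] < time.monotonic():
            return None
        return item[0]

    def set(self, key: str, value: str, ttl: int) -> None:
        self._data[key] = (value, time.monotonic() + ttl)

    def delete(self, key: str) -> bool:
        return self._data.pop(key, None) is not None

    def exists(self, key: str) -> bool:
        return self.get(key) is not None

    def refresh_ttl(self, key: str, ttl: int) -> bool:
        item = self._data.get(key)
        if item is None:
            return False
        self._data[key] = (item[0], time.monotonic() + ttl)
        return True

# Usage: SessionManager(key_prefix="test", session_class=BaseSession,
#                       storage=InMemorySessionStorage())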


@ -8,6 +8,7 @@ from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from models.model import File
from core.model_runtime.entities.message_entities import PromptMessageTool
from core.tools.__base.tool_runtime import ToolRuntime
from core.tools.entities.tool_entities import (
ToolEntity,
@ -154,6 +155,60 @@ class Tool(ABC):
return parameters
def to_prompt_message_tool(self) -> PromptMessageTool:
message_tool = PromptMessageTool(
name=self.entity.identity.name,
description=self.entity.description.llm if self.entity.description else "",
parameters={
"type": "object",
"properties": {},
"required": [],
},
)
parameters = self.get_merged_runtime_parameters()
for parameter in parameters:
if parameter.form != ToolParameter.ToolParameterForm.LLM:
continue
parameter_type = parameter.type.as_normal_type()
if parameter.type in {
ToolParameter.ToolParameterType.SYSTEM_FILES,
ToolParameter.ToolParameterType.FILE,
ToolParameter.ToolParameterType.FILES,
}:
# Determine the description based on parameter type
if parameter.type == ToolParameter.ToolParameterType.FILE:
file_format_desc = " Input the file id with format: [File: file_id]."
else:
file_format_desc = "Input the file id with format: [Files: file_id1, file_id2, ...]. "
message_tool.parameters["properties"][parameter.name] = {
"type": "string",
"description": (parameter.llm_description or "") + file_format_desc,
}
continue
enum = []
if parameter.type == ToolParameter.ToolParameterType.SELECT:
enum = [option.value for option in parameter.options] if parameter.options else []
message_tool.parameters["properties"][parameter.name] = (
{
"type": parameter_type,
"description": parameter.llm_description or "",
}
if parameter.input_schema is None
else parameter.input_schema
)
if len(enum) > 0:
message_tool.parameters["properties"][parameter.name]["enum"] = enum
if parameter.required:
message_tool.parameters["required"].append(parameter.name)
return message_tool
def create_image_message(
self,
image: str,
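
Stepping back to the to_prompt_message_tool() addition above: for a hypothetical tool exposing one required LLM-form SELECT parameter, the resulting PromptMessageTool.parameters would look roughly like the dict below (parameter name, description, and option values are illustrative).

# Roughly the schema emitted for one required SELECT parameter named "format":
{
    "type": "object",
    "properties": {
        "format": {
            "type": "string",                     # from parameter.type.as_normal_type()
            "description": "Output format to use.",  # from parameter.llm_description
            "enum": ["json", "text"],             # from parameter.options
        },
    },
    "required": ["format"],
}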


@ -14,23 +14,23 @@ from configs import dify_config
logger = logging.getLogger(__name__)
class OAuthEncryptionError(Exception):
"""OAuth encryption/decryption specific error"""
class EncryptionError(Exception):
"""Encryption/decryption specific error"""
pass
class SystemOAuthEncrypter:
class SystemEncrypter:
"""
A simple OAuth parameters encrypter using AES-CBC encryption.
A simple parameters encrypter using AES-CBC encryption.
This class provides methods to encrypt and decrypt OAuth parameters
This class provides methods to encrypt and decrypt parameters
using AES-CBC mode with a key derived from the application's SECRET_KEY.
"""
def __init__(self, secret_key: str | None = None):
"""
Initialize the OAuth encrypter.
Initialize the encrypter.
Args:
secret_key: Optional secret key. If not provided, uses dify_config.SECRET_KEY
@ -43,19 +43,19 @@ class SystemOAuthEncrypter:
# Generate a fixed 256-bit key using SHA-256
self.key = hashlib.sha256(secret_key.encode()).digest()
def encrypt_oauth_params(self, oauth_params: Mapping[str, Any]) -> str:
def encrypt_params(self, params: Mapping[str, Any]) -> str:
"""
Encrypt OAuth parameters.
Encrypt parameters.
Args:
oauth_params: OAuth parameters dictionary, e.g., {"client_id": "xxx", "client_secret": "xxx"}
params: parameters dictionary, e.g., {"client_id": "xxx", "client_secret": "xxx"}
Returns:
Base64-encoded encrypted string
Raises:
OAuthEncryptionError: If encryption fails
ValueError: If oauth_params is invalid
EncryptionError: If encryption fails
ValueError: If params is invalid
"""
try:
@ -66,7 +66,7 @@ class SystemOAuthEncrypter:
cipher = AES.new(self.key, AES.MODE_CBC, iv)
# Encrypt data
padded_data = pad(TypeAdapter(dict).dump_json(dict(oauth_params)), AES.block_size)
padded_data = pad(TypeAdapter(dict).dump_json(dict(params)), AES.block_size)
encrypted_data = cipher.encrypt(padded_data)
# Combine IV and encrypted data
@ -76,20 +76,20 @@ class SystemOAuthEncrypter:
return base64.b64encode(combined).decode()
except Exception as e:
raise OAuthEncryptionError(f"Encryption failed: {str(e)}") from e
raise EncryptionError(f"Encryption failed: {str(e)}") from e
def decrypt_oauth_params(self, encrypted_data: str) -> Mapping[str, Any]:
def decrypt_params(self, encrypted_data: str) -> Mapping[str, Any]:
"""
Decrypt OAuth parameters.
Decrypt parameters.
Args:
encrypted_data: Base64-encoded encrypted string
Returns:
Decrypted OAuth parameters dictionary
Decrypted parameters dictionary
Raises:
OAuthEncryptionError: If decryption fails
EncryptionError: If decryption fails
ValueError: If encrypted_data is invalid
"""
if not isinstance(encrypted_data, str):
@ -118,70 +118,70 @@ class SystemOAuthEncrypter:
unpadded_data = unpad(decrypted_data, AES.block_size)
# Parse JSON
oauth_params: Mapping[str, Any] = TypeAdapter(Mapping[str, Any]).validate_json(unpadded_data)
params: Mapping[str, Any] = TypeAdapter(Mapping[str, Any]).validate_json(unpadded_data)
if not isinstance(oauth_params, dict):
if not isinstance(params, dict):
raise ValueError("Decrypted data is not a valid dictionary")
return oauth_params
return params
except Exception as e:
raise OAuthEncryptionError(f"Decryption failed: {str(e)}") from e
raise EncryptionError(f"Decryption failed: {str(e)}") from e
# Factory function for creating encrypter instances
def create_system_oauth_encrypter(secret_key: str | None = None) -> SystemOAuthEncrypter:
def create_system_encrypter(secret_key: str | None = None) -> SystemEncrypter:
"""
Create an OAuth encrypter instance.
Create an encrypter instance.
Args:
secret_key: Optional secret key. If not provided, uses dify_config.SECRET_KEY
Returns:
SystemOAuthEncrypter instance
SystemEncrypter instance
"""
return SystemOAuthEncrypter(secret_key=secret_key)
return SystemEncrypter(secret_key=secret_key)
# Global encrypter instance (for backward compatibility)
_oauth_encrypter: SystemOAuthEncrypter | None = None
_encrypter: SystemEncrypter | None = None
def get_system_oauth_encrypter() -> SystemOAuthEncrypter:
def get_system_encrypter() -> SystemEncrypter:
"""
Get the global OAuth encrypter instance.
Get the global encrypter instance.
Returns:
SystemOAuthEncrypter instance
SystemEncrypter instance
"""
global _oauth_encrypter
if _oauth_encrypter is None:
_oauth_encrypter = SystemOAuthEncrypter()
return _oauth_encrypter
global _encrypter
if _encrypter is None:
_encrypter = SystemEncrypter()
return _encrypter
# Convenience functions for backward compatibility
def encrypt_system_oauth_params(oauth_params: Mapping[str, Any]) -> str:
def encrypt_system_params(params: Mapping[str, Any]) -> str:
"""
Encrypt OAuth parameters using the global encrypter.
Encrypt parameters using the global encrypter.
Args:
oauth_params: OAuth parameters dictionary
params: parameters dictionary
Returns:
Base64-encoded encrypted string
"""
return get_system_oauth_encrypter().encrypt_oauth_params(oauth_params)
return get_system_encrypter().encrypt_params(params)
def decrypt_system_oauth_params(encrypted_data: str) -> Mapping[str, Any]:
def decrypt_system_params(encrypted_data: str) -> Mapping[str, Any]:
"""
Decrypt OAuth parameters using the global encrypter.
Decrypt parameters using the global encrypter.
Args:
encrypted_data: Base64-encoded encrypted string
Returns:
Decrypted OAuth parameters dictionary
Decrypted parameters dictionary
"""
return get_system_oauth_encrypter().decrypt_oauth_params(encrypted_data)
return get_system_encrypter().decrypt_params(encrypted_data)
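
A round-trip sketch using the renamed convenience helpers, assuming dify_config.SECRET_KEY is configured; the parameter values are illustrative.

token = encrypt_system_params({"client_id": "abc", "client_secret": "xyz"})
assert decrypt_system_params(token) == {"client_id": "abc", "client_secret": "xyz"}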


@ -0,0 +1,169 @@
import contextlib
import logging
import threading
import time
from collections.abc import Callable
from concurrent.futures import ThreadPoolExecutor
from core.virtual_environment.__base.entities import CommandResult, CommandStatus
from core.virtual_environment.__base.exec import NotSupportedOperationError
from core.virtual_environment.channel.exec import TransportEOFError
from core.virtual_environment.channel.transport import TransportReadCloser, TransportWriteCloser
logger = logging.getLogger(__name__)
class CommandTimeoutError(Exception):
pass
class CommandCancelledError(Exception):
pass
class CommandFuture:
"""
Lightweight future for command execution.
Mirrors concurrent.futures.Future API with 4 essential methods:
result(), done(), cancel(), cancelled().
"""
def __init__(
self,
pid: str,
stdin_transport: TransportWriteCloser,
stdout_transport: TransportReadCloser,
stderr_transport: TransportReadCloser,
poll_status: Callable[[], CommandStatus],
poll_interval: float = 0.1,
):
self._pid = pid
self._stdin_transport = stdin_transport
self._stdout_transport = stdout_transport
self._stderr_transport = stderr_transport
self._poll_status = poll_status
self._poll_interval = poll_interval
self._done_event = threading.Event()
self._lock = threading.Lock()
self._result: CommandResult | None = None
self._exception: BaseException | None = None
self._cancelled = False
self._started = False
def result(self, timeout: float | None = None) -> CommandResult:
"""
Block until command completes and return result.
Args:
timeout: Maximum seconds to wait. None means wait forever.
Raises:
CommandTimeoutError: If timeout exceeded.
CommandCancelledError: If command was cancelled.
"""
self._ensure_started()
if not self._done_event.wait(timeout):
raise CommandTimeoutError(f"Command timed out after {timeout}s")
if self._cancelled:
raise CommandCancelledError("Command was cancelled")
if self._exception is not None:
raise self._exception
assert self._result is not None
return self._result
def done(self) -> bool:
self._ensure_started()
return self._done_event.is_set()
def cancel(self) -> bool:
"""
Attempt to cancel command by closing transports.
Returns True if cancelled, False if already completed.
"""
with self._lock:
if self._done_event.is_set():
return False
self._cancelled = True
self._close_transports()
self._done_event.set()
return True
def cancelled(self) -> bool:
return self._cancelled
def _ensure_started(self) -> None:
with self._lock:
if not self._started:
self._started = True
thread = threading.Thread(target=self._execute, daemon=True)
thread.start()
def _execute(self) -> None:
stdout_buf = bytearray()
stderr_buf = bytearray()
is_combined_stream = self._stdout_transport is self._stderr_transport
try:
with ThreadPoolExecutor(max_workers=2) as executor:
stdout_future = executor.submit(self._drain_transport, self._stdout_transport, stdout_buf)
stderr_future = None
if not is_combined_stream:
stderr_future = executor.submit(self._drain_transport, self._stderr_transport, stderr_buf)
exit_code = self._wait_for_completion()
stdout_future.result()
if stderr_future:
stderr_future.result()
with self._lock:
if not self._cancelled:
self._result = CommandResult(
stdout=bytes(stdout_buf),
stderr=b"" if is_combined_stream else bytes(stderr_buf),
exit_code=exit_code,
pid=self._pid,
)
self._done_event.set()
except Exception as e:
logger.exception("Command execution failed for pid %s", self._pid)
with self._lock:
if not self._cancelled:
self._exception = e
self._done_event.set()
finally:
self._close_transports()
def _wait_for_completion(self) -> int | None:
while not self._cancelled:
try:
status = self._poll_status()
except NotSupportedOperationError:
return None
if status.status == CommandStatus.Status.COMPLETED:
return status.exit_code
time.sleep(self._poll_interval)
return None
def _drain_transport(self, transport: TransportReadCloser, buffer: bytearray) -> None:
try:
while True:
buffer.extend(transport.read(4096))
except TransportEOFError:
pass
except Exception:
logger.exception("Failed reading transport")
def _close_transports(self) -> None:
for transport in (self._stdin_transport, self._stdout_transport, self._stderr_transport):
with contextlib.suppress(Exception):
transport.close()
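
A sketch of the future's lifecycle. The pid, transports, and poll callable come from a provider's execute_command() and are placeholders here; the point is the lazy-start and cancel semantics.

future = CommandFuture(
    pid=pid,
    stdin_transport=stdin_t,
    stdout_transport=stdout_t,
    stderr_transport=stderr_t,
    poll_status=poll,
)
try:
    # The first result()/done() call spawns the daemon thread that drains stdout/stderr.
    result = future.result(timeout=30)
except CommandTimeoutError:
    future.cancel()  # closes the transports and marks the future done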


@ -0,0 +1,91 @@
from collections.abc import Mapping
from enum import StrEnum
from typing import Any
from pydantic import BaseModel, Field
class Arch(StrEnum):
"""
Architecture types for virtual environments.
"""
ARM64 = "arm64"
AMD64 = "amd64"
class OperatingSystem(StrEnum):
"""
Operating system types for virtual environments.
"""
LINUX = "linux"
DARWIN = "darwin"
class Metadata(BaseModel):
"""
Returned metadata about a virtual environment.
"""
id: str = Field(description="The unique identifier of the virtual environment.")
arch: Arch = Field(description="Which architecture was used to create the virtual environment.")
os: OperatingSystem = Field(description="The operating system of the virtual environment.")
store: Mapping[str, Any] = Field(
default_factory=dict, description="The store information of the virtual environment; additional provider data."
)
class ConnectionHandle(BaseModel):
"""
Handle for managing connections to the virtual environment.
"""
id: str = Field(description="The unique identifier of the connection handle.")
class CommandStatus(BaseModel):
"""
Status of a command executed in the virtual environment.
"""
class Status(StrEnum):
RUNNING = "running"
COMPLETED = "completed"
status: Status = Field(description="The status of the command execution.")
exit_code: int | None = Field(description="The return code of the command execution.")
class FileState(BaseModel):
"""
State of a file in the virtual environment.
"""
size: int = Field(description="The size of the file in bytes.")
path: str = Field(description="The path of the file in the virtual environment.")
created_at: int = Field(description="The creation timestamp of the file.")
updated_at: int = Field(description="The last modified timestamp of the file.")
class CommandResult(BaseModel):
"""
Result of a synchronous command execution.
"""
stdout: bytes = Field(description="Standard output content.")
stderr: bytes = Field(description="Standard error content.")
exit_code: int | None = Field(description="Exit code of the command. None if unavailable.")
pid: str = Field(description="Process ID of the executed command.")
@property
def is_error(self) -> bool:
return self.exit_code not in (None, 0) or bool(self.stderr)
@property
def error_message(self) -> str:
return self.stderr.decode("utf-8", errors="replace") if self.stderr else ""
@property
def info_message(self) -> str:
return self.stdout.decode("utf-8", errors="replace") if self.stdout else ""
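
A quick sketch of the CommandResult helpers (values illustrative). Note the design choice: is_error treats any stderr output as a failure even when the exit code is 0 or unavailable.

ok = CommandResult(stdout=b"ok\n", stderr=b"", exit_code=0, pid="1234")
assert not ok.is_error and ok.info_message == "ok\n"

failed = CommandResult(stdout=b"", stderr=b"boom\n", exit_code=1, pid="1235")
assert failed.is_error and failed.error_message == "boom\n"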


@ -0,0 +1,46 @@
from __future__ import annotations
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from core.virtual_environment.__base.entities import CommandResult
class ArchNotSupportedError(Exception):
"""Exception raised when the architecture is not supported."""
pass
class VirtualEnvironmentLaunchFailedError(Exception):
"""Exception raised when launching the virtual environment fails."""
pass
class NotSupportedOperationError(Exception):
"""Exception raised when an operation is not supported."""
pass
class SandboxConfigValidationError(ValueError):
"""Exception raised when sandbox configuration validation fails."""
pass
class CommandExecutionError(Exception):
"""Raised when a command execution fails."""
def __init__(self, message: str, result: CommandResult):
super().__init__(message)
self.result = result
@property
def exit_code(self) -> int | None:
return self.result.exit_code
@property
def stderr(self) -> bytes:
return self.result.stderr


@ -0,0 +1,149 @@
from __future__ import annotations
import contextlib
from collections.abc import Generator, Mapping
from contextlib import contextmanager
from functools import partial
from core.virtual_environment.__base.command_future import CommandFuture
from core.virtual_environment.__base.entities import CommandResult, ConnectionHandle
from core.virtual_environment.__base.exec import CommandExecutionError
from core.virtual_environment.__base.virtual_environment import VirtualEnvironment
@contextmanager
def with_connection(env: VirtualEnvironment) -> Generator[ConnectionHandle, None, None]:
"""Context manager for VirtualEnvironment connection lifecycle.
Automatically establishes and releases connection handles.
Usage:
with with_connection(env) as conn:
future = submit_command(env, conn, ["echo", "hello"])
result = future.result(timeout=10)
"""
connection_handle = env.establish_connection()
try:
yield connection_handle
finally:
with contextlib.suppress(Exception):
env.release_connection(connection_handle)
def submit_command(
env: VirtualEnvironment,
connection: ConnectionHandle,
command: list[str],
environments: Mapping[str, str] | None = None,
*,
cwd: str | None = None,
) -> CommandFuture:
"""Execute a command and return a Future for the result.
High-level interface that handles IO draining internally.
For streaming output, use env.execute_command() instead.
Args:
env: The virtual environment to execute the command in.
connection: The connection handle.
command: Command as list of strings.
environments: Environment variables.
cwd: Working directory for the command. If None, uses the provider's default.
Returns:
CommandFuture that can be used to get result with timeout or cancel.
Example:
with with_connection(env) as conn:
result = submit_command(env, conn, ["ls", "-la"]).result(timeout=30)
"""
pid, stdin_transport, stdout_transport, stderr_transport = env.execute_command(
connection, command, environments, cwd
)
return CommandFuture(
pid=pid,
stdin_transport=stdin_transport,
stdout_transport=stdout_transport,
stderr_transport=stderr_transport,
poll_status=partial(env.get_command_status, connection, pid),
)
def _execute_with_connection(
env: VirtualEnvironment,
conn: ConnectionHandle,
command: list[str],
timeout: float | None,
cwd: str | None,
) -> CommandResult:
"""Internal helper to execute command with given connection."""
future = submit_command(env, conn, command, cwd=cwd)
return future.result(timeout=timeout)
def execute(
env: VirtualEnvironment,
command: list[str],
*,
timeout: float | None = 30,
cwd: str | None = None,
error_message: str = "Command failed",
connection: ConnectionHandle | None = None,
) -> CommandResult:
"""Execute a command with automatic connection management.
Raises CommandExecutionError if the command fails (non-zero exit code).
Args:
env: The virtual environment to execute the command in.
command: The command to execute as a list of strings.
timeout: Maximum time to wait for the command to complete (seconds).
cwd: Working directory for the command.
error_message: Custom error message prefix for failures.
connection: Optional connection handle to reuse. If None, creates and releases a new connection.
Returns:
CommandResult on success.
Raises:
CommandExecutionError: If the command fails.
"""
if connection is not None:
result = _execute_with_connection(env, connection, command, timeout, cwd)
else:
with with_connection(env) as conn:
result = _execute_with_connection(env, conn, command, timeout, cwd)
if result.is_error:
raise CommandExecutionError(f"{error_message}: {result.error_message}", result)
return result
def try_execute(
env: VirtualEnvironment,
command: list[str],
*,
timeout: float | None = 30,
cwd: str | None = None,
connection: ConnectionHandle | None = None,
) -> CommandResult:
"""Execute a command with automatic connection management.
Does not raise on failure - returns the result for caller to handle.
Args:
env: The virtual environment to execute the command in.
command: The command to execute as a list of strings.
timeout: Maximum time to wait for the command to complete (seconds).
cwd: Working directory for the command.
connection: Optional connection handle to reuse. If None, creates and releases a new connection.
Returns:
CommandResult containing stdout, stderr, and exit_code.
"""
if connection is not None:
return _execute_with_connection(env, connection, command, timeout, cwd)
with with_connection(env) as conn:
return _execute_with_connection(env, conn, command, timeout, cwd)
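
Tying the helpers together: a sketch that reuses one connection for two commands. "env" stands for any VirtualEnvironment instance, and the commands are illustrative.

with with_connection(env) as conn:
    # execute() raises CommandExecutionError (exposing .exit_code / .stderr) on failure.
    out = execute(env, ["python", "--version"], timeout=10, connection=conn)
    print(out.info_message)

    # try_execute() never raises; the caller inspects the result.
    probe = try_execute(env, ["which", "uv"], timeout=10, connection=conn)
    if probe.is_error:
        print("uv not found:", probe.error_message)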

Some files were not shown because too many files have changed in this diff.