- Removed the storage proxy controller and its associated endpoints for file download and upload.
- Updated the file controller to use the new storage ticket service for generating download and upload URLs.
- Modified the file presign storage to fall back to ticket-based URLs instead of signed proxy URLs.
- Enhanced unit tests to validate the new ticket generation and retrieval logic.
- Removed unused app asset download and upload endpoints, along with sandbox archive and file download endpoints.
- Updated imports in the file controller to reflect the removal of these endpoints.
- Simplified the generator.py file by consolidating the code context field definition.
- Enhanced the storage layer with a unified presign wrapper for better handling of presigned URLs.
- Moved the import of Queue and Empty from the top of the file into the QueueTransportReadCloser class.
- This change improves encapsulation by scoping the imports to the only place they are used.
- Add BundleManifest with dsl_filename for 100% tree ID restoration
- Implement two-step import flow: prepare (get upload URL) + confirm
- Use sandbox for zip extraction and file upload via presigned URLs
- Store import session in Redis with 1h TTL
- Add SandboxUploadItem for symmetric download/upload API
- Remove legacy source_zip_extractor, inline logic in service
- Update frontend to use new prepare/confirm API flow
Migrate all icon usage in the skill directory from @remixicon/react
and custom SVG components to Tailwind CSS icon classes (i-ri-*, i-custom-*).
Update MenuItem API to accept string class names instead of React.ElementType.
Hardcoded colors (#354052, #676F83, #98A2B3, #155EEF, white) prevent
the Iconify parseColors callback from converting them to currentColor,
causing the icons to render as background-image instead of mask-image
and making text-* color utilities ineffective.
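A conceptual sketch of the fix, not Iconify's actual parseColors API: once the hardcoded fills and strokes are rewritten to currentColor, the icon qualifies as monotone and renders via mask-image, so Tailwind text-* utilities take effect.

```ts
// Conceptual only: rewrite hardcoded fills/strokes to currentColor so the
// icon is treated as monotone (mask-image) instead of a fixed-color image.
function toCurrentColor(svgBody: string): string {
  return svgBody.replace(
    /(fill|stroke)="(#354052|#676F83|#98A2B3|#155EEF|white)"/gi,
    '$1="currentColor"',
  )
}
```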
The toolbar download button now uses the already-fetched download URL
from useQuery (zero extra requests), while tree node downloads keep
using useMutation with React Query-managed isPending state instead of
a hand-rolled useState wrapper.
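A minimal sketch of the two patterns, assuming hypothetical fetchDownloadUrl and triggerDownload helpers:

```ts
import { useMutation, useQuery } from '@tanstack/react-query'

declare function fetchDownloadUrl(id: string): Promise<string> // assumed API client
declare function triggerDownload(url: string): void            // assumed helper

// Toolbar: the URL already sits in the query cache, so the click costs nothing.
function useToolbarDownloadUrl(fileId: string) {
  return useQuery({
    queryKey: ['fileDownloadUrl', fileId],
    queryFn: () => fetchDownloadUrl(fileId),
  })
}

// Tree node: fetch on demand; isPending replaces a hand-rolled useState flag.
function useNodeDownload() {
  return useMutation({
    mutationFn: (nodeId: string) => fetchDownloadUrl(nodeId),
    onSuccess: url => triggerDownload(url),
  })
}
```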
Introduce a dedicated Zustand ArtifactSlice to manage artifact selection
state with mutual exclusion against the main file tree. Artifact files
from the sandbox can now be opened as tabs in the skill editor, rendered
via a lightweight ArtifactContentPanel that reuses ReadOnlyFilePreview.
Backend returns extensions with a leading dot (e.g., `.png` from
`os.path.splitext`), causing binary/media files to be misclassified
as text since they didn't match the dot-free extension lists.
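A small sketch of the normalization, with an illustrative dot-free list:

```ts
const TEXT_EXTENSIONS = new Set(['md', 'txt', 'json', 'csv']) // dot-free list

// Normalize the API's `.png`-style value before consulting the lists.
function normalizeExtension(ext: string): string {
  return ext.replace(/^\./, '').toLowerCase()
}

const isText = TEXT_EXTENSIONS.has(normalizeExtension('.PNG')) // false, as intended
```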
Implement ReadOnlyFilePreview to render sandbox files by type
(code, markdown, image, video, SQLite, unsupported) using existing
skill viewer components with readOnly support. Add
useSandboxFileDownloadUrl and useFetchTextContent hooks for data
fetching, and generalize useFileTypeInfo to accept any file-like
object.
Extract shared empty state card into ArtifactsEmpty component to
deduplicate the no-files and no-selection empty states. Align the
split-panel right-side empty state with the variables tab pattern.
Remove FC type annotations in favor of inline parameter types.
Mark the empty state SearchLinesSparkle icon as aria-hidden for screen
readers. Replace array-index keys with cumulative path keys (O(n) vs
O(n²)) to satisfy react/no-array-index-key and improve key stability.
Replace the minimal centered text card in artifacts tab empty state with
a full-height card layout matching the variables tab, including icon
container (SearchLinesSparkle), title, description, and learn more link.
Replace useClickAway + fixed positioning in file tree context menu with
a floating-ui based hook that provides collision detection (flip/shift),
ARIA role="menu", Escape/outside-click dismiss, and scroll dismiss via
passive capture listener with ref-stabilized callback.
Replaces window.open with the downloadUrl helper from utils/download.ts
to trigger proper browser download behavior via <a download> instead of
opening a new tab that may display garbled content.
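A sketch of what such a helper typically looks like (the actual utils/download.ts may differ):

```ts
// A transient <a download> element asks the browser to save the file instead
// of navigating to it, which window.open cannot guarantee.
export function downloadUrl(url: string, filename?: string) {
  const anchor = document.createElement('a')
  anchor.href = url
  anchor.download = filename ?? ''
  document.body.appendChild(anchor)
  anchor.click()
  anchor.remove()
}
```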
- Added an `enabled` field to `DifyCliToolConfig` and `ToolDependency` to manage tool activation status.
- Updated `DifyCliConfig` to handle tool dependencies more effectively, ensuring only enabled tools are processed.
- Refactored `SkillCompiler` to utilize `tool_id` for better identification of tools and improved handling of disabled tools.
- Introduced a new method `_extract_disabled_tools` in `LLMNode` to streamline the extraction of disabled tools from node data.
- Enhanced metadata parsing to account for tool enablement, improving overall tool management.
Features:
- Add refresh button to clear test run history (data, inputs, node highlights)
- Persist workflowRunningData when closing panel (from previous commit)
Code quality improvements:
- Refactor to declarative pattern: effectiveTab derived from state, not set in effects
- Replace && with ternary operators for conditional rendering (Vercel best practices)
- Fix created_by type: change from string to object to match backend API
- Remove `as any` type assertion, use proper type-safe access
- Title now declaratively shows status based on workflowRunningData presence
Files changed:
- use-workflow-interactions.ts: add handleClearWorkflowRunHistory hook
- workflow-preview.tsx: declarative tab state, clear button, type-safe props
- types.ts: fix created_by type definition
- test files: update mock data to match corrected types
- Introduced a new regex pattern for tool groups to support multiple tool placeholders.
- Updated the DefaultToolResolver to format outputs for specific built-in tools (bash, python).
- Enhanced the SkillCompiler to filter out disabled tools in tool groups, ensuring only enabled tools are rendered.
- Added tests to verify the correct behavior of tool group filtering and rendering.
- Introduced a new boolean field `computer_use` in LLMNodeData to indicate whether the computer use feature should be enabled.
- Updated LLMNode to check the `computer_use` field when determining sandbox usage, ensuring proper error handling if sandbox is not available.
- Removed the obsolete `_has_skill_prompt` method to streamline the code.
Position the clear button relative to the right divider instead of
following the tab labels, ensuring consistent positioning across
different language translations. Also fix tab switching jitter by
setting a fixed header height.
Use shared object reference instead of separate variables to track
upload progress across concurrent Promise.all operations, preventing
progress bar from showing incorrect or regressing values.
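A minimal sketch of the shared-reference idea, with assumed upload and onProgress callbacks:

```ts
type Progress = { done: number; total: number }

async function uploadAll(
  files: File[],
  upload: (f: File) => Promise<void>,
  onProgress: (p: Progress) => void,
) {
  // One shared object mutated by every branch; a captured local copy would
  // let concurrent branches report stale, regressing counts.
  const progress: Progress = { done: 0, total: files.length }
  await Promise.all(files.map(async (file) => {
    await upload(file)
    progress.done += 1
    onProgress(progress)
  }))
}
```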
Introduce prepareSkillUploadFile utility that wraps markdown file content
in a JSON payload format before uploading. This ensures consistent handling
of skill files across file upload, folder upload, and drag-and-drop operations.
Replace simple uploading/success indicator with a full three-state
tooltip (uploading, success, partial_error) that overlays the DropTip
position. Add upload slice to skill editor store and wire progress
tracking into file/folder upload operations.
- add SplitPanel to share left/right shell and narrow menu handling
- reuse InspectHeaderProps for tab header + actions across tabs
- refactor variables/artifacts tabs to plug into shared split layout
- align right-side header/close behavior and consolidate empty/loading flows
Updated multiple modules to utilize gevent for concurrency, ensuring compatibility with gevent-based WSGI servers. This includes replacing threading.Thread and threading.Event with gevent.spawn and gevent.event.Event, respectively, to prevent blocking and improve performance during I/O operations.
- Refactored SandboxBuilder, Sandbox, CommandFuture, and DockerDemuxer to use gevent.
- Added detailed docstrings explaining the changes and benefits of using gevent primitives.
This change enhances the responsiveness and efficiency of the application in a gevent environment.
Consolidate duplicated TabHeader + close button layout (8 occurrences) into a single
InspectLayout wrapper. Replace boolean props with children slots for better composition.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Enhance getFileIconType to accept an extension parameter and cover all
13 FileAppearanceTypeEnum types using an O(1) Map lookup. Update all
call sites to pass the API-provided extension for accurate icon display.
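A sketch of the Map-based dispatch, trimmed to a few of the 13 types for brevity (enum values are assumptions):

```ts
enum FileAppearanceTypeEnum { Image = 'image', Video = 'video', Code = 'code', Other = 'other' }

const EXTENSION_ICON_MAP = new Map<string, FileAppearanceTypeEnum>([
  ['png', FileAppearanceTypeEnum.Image],
  ['mp4', FileAppearanceTypeEnum.Video],
  ['ts', FileAppearanceTypeEnum.Code],
])

function getFileIconType(name: string, extension?: string): FileAppearanceTypeEnum {
  // Prefer the API-provided extension, fall back to the filename suffix.
  const raw = extension ?? name.split('.').pop() ?? ''
  const ext = raw.replace(/^\./, '').toLowerCase()
  return EXTENSION_ICON_MAP.get(ext) ?? FileAppearanceTypeEnum.Other
}
```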
Refactor the variable inspect panel into a tabbed layout with Variables
and Artifacts tabs. Extract variable logic into VariablesTab, add new
ArtifactsTab with sandbox file tree selection and preview pane, and
improve accessibility across tree nodes and interactive elements.
Use useRef to store saveFile reference and remove it from useEffect
dependencies to prevent cleanup from re-triggering on reference changes.
Also normalize metadata before comparison when clearing dirty state to
ensure filtered tools match correctly.
Simpler approach: disable the view picker toggle when preview is running,
preventing users from switching views during active runs.
This replaces the previous mounted ref guard approach (commits a0188bd9b5,
b7f1eb9b7b, 8332f0de2b) which added complexity to handle post-unmount
operations. Disabling the toggle is more direct and follows the KISS principle.
Changes:
- Add disabled prop to ViewPicker based on isResponding state
- Revert mounted ref guards in use-chat-flow-control.ts
- Revert isMountedRef parameter in use-nodes/edges-interactions-without-sync.ts
- Revert defensive type check in markdown-utils.ts (no longer needed)
In React StrictMode (dev mode), effects are run twice to help detect
side effects. The cleanup-only pattern left isMountedRef as false after
StrictMode's simulated unmount-remount cycle, causing stop/cancel
operations to be skipped even when the component was mounted.
Now the effect setup explicitly sets isMountedRef.current = true,
ensuring correct behavior in both development and production.
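A minimal sketch of the corrected pattern:

```ts
import { useEffect, useRef } from 'react'

function useIsMounted() {
  const isMountedRef = useRef(false)
  useEffect(() => {
    // Setting the flag in setup (not only at ref creation) survives
    // StrictMode's simulated unmount/remount cycle in development.
    isMountedRef.current = true
    return () => { isMountedRef.current = false }
  }, [])
  return isMountedRef
}
```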
Related to a9c5201485 - when switching views during active preview run,
the markdown preprocessors could receive non-string content (e.g., frozen
arrays from immer). Returning the original value caused ReactMarkdown to
fail with "Cannot assign to read only property" error.
Now both preprocessLaTeX and preprocessThinkTag return '' for non-string
input, preventing runtime errors during view switches.
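A sketch of the guard (the real preprocessor body is elided):

```ts
function preprocessLaTeX(content: unknown): string {
  if (typeof content !== 'string')
    return '' // frozen arrays etc. would crash ReactMarkdown downstream
  return content // ...the actual LaTeX escaping would follow here
}
```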
When switching from graph view to skill view during an active preview run,
SSE callbacks continue executing and attempt to update ReactFlow node/edge
states. This could cause errors since the component is unmounted.
Add optional `isMountedRef` parameter to `useNodesInteractionsWithoutSync`
and `useEdgesInteractionsWithoutSync` hooks. When provided, operations are
skipped if the component has unmounted, preventing potential errors while
allowing the SSE connection to continue running in the background.
BREAKING CHANGE: `useNodesInteractionsWithoutSync` and
`useEdgesInteractionsWithoutSync` now accept an optional `isMountedRef`
parameter. Existing callers are unaffected as the parameter is optional.
The sidebar layout was broken when ArtifactsSection expanded - it would
squeeze the FileTree and neither area could scroll. This restructures the
layout so each section has its own scroll container with proper height
constraints.
Remove enabled condition so data is fetched when component mounts,
allowing blue dot to show when files exist even before expanding.
TanStack Query handles request deduplication automatically.
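A sketch of the ungated query, assuming a hypothetical fetchArtifactFiles client:

```ts
import { useQuery } from '@tanstack/react-query'

declare function fetchArtifactFiles(appId: string): Promise<unknown[]> // assumed API

// No `enabled` gate: fetch on mount so the indicator can show immediately.
// Identical in-flight requests are deduplicated by queryKey.
function useArtifactFiles(appId: string) {
  return useQuery({
    queryKey: ['artifactFiles', appId],
    queryFn: () => fetchArtifactFiles(appId),
  })
}
```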
- Replace placeholder with functional ArtifactsSection component
- Add ArtifactsTree component for file tree rendering
- Support expand/collapse with lazy loading
- Show blue dot indicator when collapsed with files
- Add empty state card with hint text
- Add download button on file hover
- Add i18n translations (en-US, zh-Hans)
- Added sandbox archive upload and download proxy endpoints with signed URL verification.
- Introduced security helpers for generating and verifying signed URLs.
- Updated file-related API routes to include sandbox archive functionality.
- Refactored app asset storage methods to streamline download/upload URL generation.
Cover Ctrl+S and Cmd+S save triggers, guard clauses for start tab and
null active tab, success/error toast notifications, and fallback
registry integration.
Move global keyboard shortcut handling from component-level hook to
SkillSaveProvider, eliminating duplicate event listener registrations
and race conditions. Delete use-skill-file-save hook as its logic is
now consolidated in the provider with direct store access.
- use cleanup-based save on tab switch with stable fallback snapshots
- add fallback registry for metadata-only autosave consistency
- add autosave/save-manager tests
Centralize file save operations using Context/Provider pattern for better
maintainability. Add auto-save on tab switch, visibility change, page unload,
and component unmount.
The SQLite preview feature requires WebAssembly to run wa-sqlite, but the
CSP policy blocked WebAssembly.instantiate() because the wasm-unsafe-eval
directive was missing from script-src.
- Added a threaded function to handle the deletion of storage files asynchronously after asset removal.
- Updated the asset removal logic to include a call to the new deletion function, improving performance and responsiveness during asset management operations.
- Updated the download script to redirect error output to /dev/null, preventing unnecessary error messages from being displayed.
- Added an explicit exit command to ensure the script terminates correctly after execution.
- Updated storage wrappers to utilize a new base class, StorageWrapper, for better delegation of methods.
- Introduced SilentStorage to handle read operations gracefully by returning empty values instead of raising exceptions.
- Enhanced CachedPresignStorage to support batch caching of download URLs, improving performance.
- Refactored FilePresignStorage to support both presigned URLs and signed proxy URLs for downloads.
- Updated AppAssetService to utilize the new storage structure, ensuring consistent asset management.
- Introduced CachedPresignStorage to cache presigned download URLs, reducing repeated API calls.
- Updated AppAssetService to utilize CachedPresignStorage for improved performance in asset download URL generation.
- Refactored asset builders and packagers to support the new storage mechanism.
- Removed unused AppAssetsAttrsInitializer to streamline initialization processes.
- Added unit tests for CachedPresignStorage to ensure functionality and reliability.
The `/api/system-features` endpoint is required for web app initialization.
Requiring authentication would create a circular dependency (the app cannot
authenticate before it has loaded).
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Change onSuccess to onSettled for upload mutations so the file tree
refreshes regardless of success or failure, ensuring consistency when
the backend creates nodes but the storage upload fails.
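A sketch of the change, with an assumed uploadFile client and query key:

```ts
import { useMutation, useQueryClient } from '@tanstack/react-query'

declare function uploadFile(file: File): Promise<void> // assumed API client

function useUploadFile(appId: string) {
  const queryClient = useQueryClient()
  return useMutation({
    mutationFn: uploadFile,
    // onSettled fires on success and failure alike, so a half-created node
    // (backend row exists, storage upload failed) still appears in the tree.
    onSettled: () => queryClient.invalidateQueries({ queryKey: ['fileTree', appId] }),
  })
}
```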
- Deleted `AppAssetFileResource` class and its associated file upload logic.
- Removed the `create_file` method from `AppAssetService` to streamline asset management.
- Updated `AppAssetBatchUploadResource` for improved readability by condensing method calls.
- Added `RuntimeType` enum to define app runtime types: CLASSIC and SANDBOXED.
- Updated `AppPartial` model to include `runtime_type` field.
- Enhanced `AppListApi` to determine and assign the appropriate runtime type based on sandbox feature availability.
Refactor StartTabContent into separate components following Figma design specs:
- ActionCard: reusable card with icon, title, description
- SectionHeader: title/xl-semi-bold header with description
- CreateImportSection: 3-column grid layout for Create/Import cards
- SkillTemplatesSection: templates area with placeholder
Align styles with Figma: 3-col grid, 16px title, proper spacing and padding.
Add i18n translations for all user-facing text (en-US, zh-Hans).
- Introduced `GetUploadUrlPayload` and `BatchUploadPayload` models for handling file uploads.
- Implemented `AppAssetFileUploadUrlResource` for generating pre-signed upload URLs.
- Added `AppAssetBatchUploadResource` to support batch creation of asset nodes from a tree structure.
- Enhanced `AppAssetService` with methods for obtaining upload URLs and batch creation of assets.
- Removed checksum handling from file creation to streamline the process.
- Add guards in tool-block component to skip metadata read/write when Start tab is active
- Add guard in tool-picker-block to prevent writing tool config to Start tab
- Add guard in use-sync-tree-with-active-tab to skip tree sync for Start tab
- Add START_TAB_ID constant and StartTabItem/StartTabContent components
- Default to Start tab when no file tabs are open
- Optimize zustand selectors to subscribe to specific Map values instead of
entire Map objects, reducing unnecessary re-renders when other tabs change
- Refactor useSkillFileSave to accept precise values instead of Map/Set
- Updated the `get_system_default_config` method to accept a `provider_type` parameter for more precise querying.
- Improved error handling to raise a ValueError if no system default provider is configured for the specified tenant and provider type.
- Added fallback logic to ensure a system default configuration is returned when available.
- Renamed the SandboxManager method for deleting storage to better reflect its purpose.
- Adjusted the WorkflowVariableCollectionApi to utilize the new method name.
- Improved error handling in ArchiveSandboxStorage's delete method to log exceptions during deletion.
Display a notice at the bottom of SQLite table preview when data
is truncated due to PREVIEW_ROW_LIMIT (1000 rows), informing users
that additional rows are not displayed.
- Renamed `build_skill_artifact_set` to `build_skill_bundle` for improved clarity in asset management.
- Updated references in `SkillManager` to reflect the new method name and ensure consistent handling of skill bundles.
- Added `AppAssetsAttrsInitializer` to `SandboxManager` to enhance asset initialization processes.
- Implemented output truncation in `SandboxBashTool` to manage long command outputs effectively.
- Introduced AppBundleService for managing app bundle publishing and importing, integrating workflow and asset services.
- Added methods for exporting app bundles as ZIP files, including DSL and asset management.
- Implemented source zip extraction and validation to enhance asset import processes.
- Refactored asset packaging to utilize AssetZipPackager for improved performance and organization.
- Enhanced error handling for bundle format and security during import operations.
- Replaced SkillArtifactSet with SkillBundle across various components, enhancing the organization of skill dependencies and references.
- Updated SkillManager methods to load and save bundles instead of artifacts, improving clarity in asset management.
- Refactored SkillCompiler to compile skills into bundles, streamlining the dependency resolution process.
- Adjusted DifyCli and SandboxBashSession to utilize ToolDependencies, ensuring consistent handling of tool references.
- Introduced AssetReferences for better management of file dependencies within skill bundles.
- Updated binary files for Dify CLI on various platforms (darwin amd64, darwin arm64, linux amd64, linux arm64).
- Refactored skill compilation in LLMNode to improve clarity and maintainability by explicitly naming parameters and incorporating AppAssets for base path management.
- Minor fix in AppAssetFileTree to remove unnecessary leading slash in path construction.
The `/console/api/system-features` endpoint is required for dashboard initialization. Requiring authentication would create a circular dependency (users cannot log in before the dashboard has loaded).
ref: CVE-2025-63387
Related: #31368
- Introduced SandboxManager.delete_storage method for improved storage management.
- Refactored skill loading and tool artifact handling in DifyCliInitializer and SandboxBashSession.
- Updated LLMNode to extract and compile tool artifacts, enhancing integration with skills.
- Improved attribute management in AttrMap for better error handling and retrieval methods.
- Added AttrMap and AttrKey classes for type-safe attribute storage.
- Implemented AppAssetsAttrs and SkillAttrs for managing application and skill attributes.
- Refactored Sandbox and initializers to utilize the new attribute management system, enhancing modularity and clarity in asset handling.
Optimize folder upload by creating folders at the same depth level in
parallel and uploading all files concurrently. This reduces upload time
from O(n) sequential requests to O(depth) sequential folder-creation
rounds plus a single concurrent file-upload round.
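A sketch of the depth-parallel strategy; createFolder and uploadFiles are assumed stand-ins for the real API calls:

```ts
declare function createFolder(path: string): Promise<void>  // assumed API
declare function uploadFiles(files: File[]): Promise<void>  // assumed API

async function uploadFolderTree(folderPaths: string[], files: File[]) {
  // Group folder paths by depth: ['a', 'a/b', 'c'] -> depth 1: [a, c], depth 2: [a/b].
  const byDepth = new Map<number, string[]>()
  for (const path of folderPaths) {
    const depth = path.split('/').length
    byDepth.set(depth, [...(byDepth.get(depth) ?? []), path])
  }
  // One sequential round per depth, all folders at that depth in parallel:
  // parents always exist before their children are created.
  for (const depth of [...byDepth.keys()].sort((a, b) => a - b))
    await Promise.all(byDepth.get(depth)!.map(createFolder))
  // Every file at once in a final concurrent round.
  await uploadFiles(files)
}
```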
Add overflow-hidden to SkillPageLayout and min-w-0 to flex children
to ensure wide content (like SQLite tables with many columns) scrolls
internally rather than causing the entire page to scroll horizontally.
- Add min-w-0 to flex containers for proper text truncation
- Use w-max on table to ensure columns don't collapse
- Simplify table selector when only one table exists (remove dropdown)
- Introduced threading for loading skills and uploading compiled content to improve performance.
- Added data classes for better structure and clarity in handling loaded and compiled skills.
- Refactored the skill compilation process to separate loading and uploading, enhancing maintainability.
- Changed the command timeout duration from 60 seconds to 1 hour for improved session stability.
- Refactored session cleanup logic to utilize the CLI API session object instead of session ID, enhancing clarity and maintainability.
- Added a keepalive thread to maintain the E2B sandbox timeout, preventing premature termination.
- Introduced a stop event to manage the lifecycle of the keepalive thread.
- Refactored the sandbox initialization to include the new keepalive functionality.
- Enhanced logging to capture failures in refreshing the sandbox timeout.
- Moved sandbox-related classes and functions into a dedicated module for better organization.
- Updated the sandbox initialization process to streamline asset management and environment setup.
- Removed deprecated constants and refactored related code to utilize new sandbox entities.
- Enhanced the workflow context to support sandbox integration, allowing for improved state management during execution.
- Adjusted various components to utilize the new sandbox structure, ensuring compatibility across the application.
Switch from whitelist to blacklist pattern for determining editable files.
Files are now editable unless they are known binary types (audio, archives,
executables, Office documents, fonts, etc.), enabling support for any
runtime-generated text files without needing to add extensions one by one.
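A sketch of the blacklist check, with an illustrative (not exhaustive) extension set:

```ts
// Anything not known-binary is treated as editable, so runtime-generated
// text files work without whitelisting each extension.
const BINARY_EXTENSIONS = new Set([
  'mp3', 'wav', 'zip', 'tar', 'gz', 'exe', 'dll',
  'doc', 'docx', 'xls', 'xlsx', 'ppt', 'pptx', 'ttf', 'woff2',
])

function isEditableFile(extension: string): boolean {
  return !BINARY_EXTENSIONS.has(extension.replace(/^\./, '').toLowerCase())
}
```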
- Convert clickable div to semantic button in artifacts-section
- Add aria-hidden to decorative icons
- Add aria-label to rename inputs and hidden file inputs
- Add i18n keys for artifacts section and rename labels
- Support ignore file extensions (.gitignore, etc.)
- Introduced DraftAppAssetsInitializer for handling draft assets.
- Updated SandboxLayer to conditionally set sandbox ID and storage based on workflow version.
- Improved asset initialization logging and error handling.
- Refactored ArchiveSandboxStorage to support exclusion patterns during archiving.
- Modified command and LLM nodes to retrieve sandbox from workflow context, supporting draft workflows.
Replace event-based auto-expand trigger with Zustand state-driven
approach. Now both external file uploads and internal node drag use
the same isDragOver state as the single source of truth for folder
auto-expand timing (1s blink, 2s expand).
Simplify file drop hooks by removing the unnecessary useUnifiedDrag
wrapper that became redundant after internal node drag was migrated
to react-arborist's built-in system. Now useFolderFileDrop and
useRootFileDrop directly use useFileDrop, reducing code complexity
and eliminating unused treeChildren prop drilling.
Switch from native HTML5 drag to react-arborist's built-in drag system
for internal node drag-and-drop. The HTML5Backend used by react-arborist
was intercepting dragstart events, preventing native drag from working.
- Add onMove callback and disableDrop validation to Tree component
- Sync react-arborist drag state (isDragging, willReceiveDrop) to Zustand
- Simplify use-node-move to only handle API execution
- Update use-unified-drag to only handle external file uploads
- External file drops continue to work via native HTML5 events
Implement unified drag system that supports both internal node moves
and external file uploads with consistent UI feedback. Uses native
HTML5 drag API with shared visual states (isDragOver, isBlinking,
DragActionTooltip showing 'Move to' or 'Upload to').
Move searchTerm from props drilling to zustand store for cleaner
architecture. Remove unnecessary controlled/uncontrolled pattern
and unused debounce logic since search is pure frontend filtering.
- Add fileTreeSearchTerm state to file-tree-slice
- Remove useState and props from main.tsx
- Simplify sidebar-search-add.tsx to read/write store directly
- Add empty state UI with reset filter button
Remove the Office file placeholder that only showed "Preview will be
supported in a future update" without any download option. Office files
(pdf, doc, docx, xls, xlsx, ppt, pptx) now fall through to the generic
"unsupported file" handler which provides a download button.
Removed:
- OfficeFilePlaceholder component
- isOfficeFile function and OFFICE_EXTENSIONS constant
- isOffice flag from useFileTypeInfo hook
- i18n keys for officePlaceholder
This simplifies the file type handling to just three categories:
- Editable: markdown, code, text files → editor
- Previewable: image, video files → media preview
- Everything else: download button
Change useSkillFileData to use isEditable instead of isMediaFile:
- Editable files (markdown, code, text) fetch file content for editing
- Non-editable files (image, video, office, unsupported) fetch download URL
This fixes the download button for unsupported files which was incorrectly
using file content (UTF-8 decoded garbage) instead of the presigned URL.
Extract business logic into dedicated hooks to reduce component complexity:
- useFileTypeInfo: file type detection (markdown, code, image, video, etc.)
- useSkillFileData: data fetching with conditional API calls
- useSkillFileSave: save logic with Ctrl+S keyboard shortcut
Also fix Vercel best practice: use ternary instead of && for conditional rendering.
Previously, media files were fetched via getFileContent API which decodes
binary data as UTF-8, resulting in corrupted strings that cannot be used
as img/video src. Now media files use getFileDownloadUrl API to get a
presigned URL, enabling proper preview of images and videos of any size.
- Add constants.ts with ROOT_ID, CONTEXT_MENU_TYPE, NODE_MENU_TYPE
- Add root utilities to tree-utils.ts (isRootId, toApiParentId, etc.)
- Replace '__root__' with ROOT_ID for consistent root identifier
- Replace inline 'blank'/'root' strings with constants
- Use NodeMenuType for type-safe menu type props
- Remove duplicate ContextMenuType from types.ts, use from constants.ts
Remove redundant createTargetNodeId and use selectedTreeNodeId for both
visual highlight and creation target. This simplifies the state management
by having a single source of truth for tree selection, similar to VSCode's
file explorer behavior where both files and folders can be selected.
Update frontend to use new backend API:
- save_config now accepts optional 'activate' parameter
- activate endpoint now requires 'type' parameter ('system' | 'user')
When switching to managed mode, call activate with type='system' instead
of deleting user config, so custom configurations are preserved for
future use.
- Updated the request parser in SandboxProviderListApi to include an optional 'activate' boolean argument for JSON input.
- This enhancement allows users to specify activation status when configuring sandbox providers.
- Enhanced the SandboxProviderConfigApi to accept an 'activate' argument when saving provider configurations.
- Introduced a new request parser for activating sandbox providers, requiring a 'type' argument.
- Updated the SandboxProviderService to handle the activation state during configuration saving and provider activation.
Split use-file-operations.ts (248 lines) into smaller focused hooks:
- use-create-operations.ts for file/folder creation and upload
- use-modify-operations.ts for rename and delete operations
- use-file-operations.ts now serves as orchestrator maintaining backward compatibility
Extract TreeNodeIcon component from tree-node.tsx for cleaner separation of concerns.
Add brief comments to drag hooks explaining their purpose and relationships.
Add ability to choose between "Managed by Dify" (using system config)
and "Bring Your Own API Key" modes when configuring E2B sandbox provider.
This allows Cloud users to use Dify's pre-configured credentials or
their own E2B account for more control over resources and billing.
When dragging files over a closed folder, the highlight now blinks
during the second half of the 2-second hover period to signal that
the folder is about to expand. This provides better visual feedback
similar to VSCode's drag-and-drop behavior.
- Added DIFY_CLI_ROOT constant for the root directory of Dify CLI.
- Updated DIFY_CLI_PATH and DIFY_CLI_CONFIG_PATH to use absolute paths.
- Modified app asset initialization to create directories under DIFY_CLI_ROOT.
- Enhanced Docker and E2B environment file handling to use workspace paths.
Use Promise.all for concurrent file uploads instead of sequential
processing, improving upload performance for multiple files. Also
add isFileDrag check to handleFolderDragOver for consistency with
other drag handlers.
Enable users to drag files from their system directly into the file tree
to upload them. Files can be dropped on the tree container (uploads to root)
or on specific folders. Hovering over a closed folder for 2 seconds auto-
expands it. Uses Zustand for drag state management instead of React Context
for better performance.
The previous refactor inadvertently passed undefined nodeId for blank
area menus, causing root-level folder creation/upload to fail. This
restores the original behavior by explicitly passing 'root' when the
context menu type is 'blank'.
Consolidate menu components by extending NodeMenu to support a 'root'
type, eliminating the redundant BlankAreaMenu component. This reduces
code duplication and simplifies the context menu logic by storing
isFolder in the context menu state instead of re-querying tree data.
When file content is not in JSON format (e.g., newly uploaded files),
return the raw content instead of empty string to ensure files display
correctly.
Remove types that don't match backend API:
- AppAssetFileContentResponse (unused, had extra metadata field)
- CreateFilePayload (unused, FormData built manually)
- metadata field from UpdateFileContentPayload
Add right-click context menu for file tree blank area with New File,
New Folder, and Upload Files options. Also align search input and
add button styles to match Figma design specs (24px height, 6px radius).
Remove label, description, and icon fields from SandboxProvider type
as they are no longer returned by the backend API. Use i18n translations
to display provider labels instead of relying on API response data.
Use explicit StateCreator<FullStore, [], [], SliceType> pattern instead of
StateCreator<SliceType> for all skill-editor slices. This enables:
- Type-safe cross-slice state access via get()
- Explicit type contracts instead of relying on spread args behavior
- Better maintainability following Lobe-chat's proven pattern
Extract all type definitions to types.ts to avoid circular dependencies.
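A minimal sketch of the slice pattern described above, with assumed slice shapes:

```ts
import type { StateCreator } from 'zustand'

type TabSlice = { activeTabId: string | null; setActiveTab: (id: string) => void }
type DirtySlice = { dirtyFileIds: Set<string> }
type FullStore = TabSlice & DirtySlice

// get() is typed as FullStore, so cross-slice reads are checked by the compiler.
const createTabSlice: StateCreator<FullStore, [], [], TabSlice> = (set, get) => ({
  activeTabId: null,
  setActiveTab: (id) => {
    // Type-safe access to another slice, e.g. warn before leaving a dirty file.
    if (get().dirtyFileIds.has(get().activeTabId ?? ''))
      console.warn('switching away from an unsaved file')
    set({ activeTabId: id })
  },
})
```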
Replace explicit parameter destructuring with spread args pattern to
eliminate `as unknown as` type assertions when composing sub-slices.
This aligns with the pattern used in the main workflow store.
Consolidate file type derivations into a single useMemo with stable
dependencies (currentFileNode?.name and currentFileNode?.extension)
to help React Compiler track stability.
Extract originalContent as a separate variable to avoid property access
in useCallback dependencies, which caused Compiler to infer broader
dependencies than specified.
Wrap isEditable in useMemo to help React Compiler track its stability
and preserve memoization for callbacks that depend on it. Also replace
Record<string, any> with Record<string, unknown> to satisfy no-explicit-any.
Move activeTabId and fileMetadata reads from selector subscriptions to
getState() calls inside the callback. These values were only used in the
insertTools callback, not for rendering, causing unnecessary re-renders
when they changed.
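A sketch of the getState pattern, with an assumed store shape for illustration:

```ts
import { useCallback } from 'react'

// Assumed vanilla-store accessor and state shape, for illustration only.
declare const useWorkflowStore: {
  getState: () => { activeTabId: string | null; fileMetadata: Record<string, unknown> }
}

function useInsertTools() {
  return useCallback((toolName: string) => {
    // Read at call time instead of subscribing: no re-render when these change.
    const { activeTabId, fileMetadata } = useWorkflowStore.getState()
    if (activeTabId)
      console.log('insert', toolName, 'into', activeTabId, fileMetadata[activeTabId])
  }, [])
}
```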
Previously, any edit would mark the file as dirty even if the content
was restored to its original state. Now we compare against the original
content and clear the dirty flag when they match.
Split the monolithic skill-editor-slice.ts into a dedicated directory with
individual slice files (tab, file-tree, dirty, metadata, file-operations-menu)
to improve maintainability and code organization.
Reduce initial bundle size by dynamically importing SkillMain
component. This prevents loading the entire Skill module (including
Monaco and Lexical editors) when users only access the Graph view.
Add search functionality to skill sidebar using react-arborist's built-in
searchTerm and searchMatch props. Search input is debounced at 300ms and
filters tree nodes by name (case-insensitive). Also add success toast for
rename operations.
Replace getAncestorIds(treeData) with node.parent chain traversal
for more efficient ancestor lookup. This avoids re-traversing the
tree data structure and uses react-arborist's built-in parent refs.
Also rename hook to useSyncTreeWithActiveTab for clarity.
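A sketch of the traversal, assuming react-arborist's NodeApi exposes parent and isRoot as described in its docs:

```ts
import type { NodeApi } from 'react-arborist'

// Walk the parent refs instead of re-traversing the whole tree structure.
function getAncestorIds<T>(node: NodeApi<T>): string[] {
  const ids: string[] = []
  let current = node.parent
  while (current && !current.isRoot) {
    ids.push(current.id)
    current = current.parent
  }
  return ids
}
```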
Use split button pattern to separate main content area from more button.
This prevents click events on the more button from bubbling up to the
parent element's click/double-click handlers, which caused unintended
file opening when clicking the menu button multiple times.
Move SkillEditorSlice from injection pattern to core workflow store,
making it available to all workflow contexts (workflow-app, chatflow,
and future rag-pipeline).
- Add createSkillEditorSlice to core createWorkflowStore
- Remove complex type conversion logic from workflow-app/index.tsx
- Remove optional chaining (?.) and non-null assertions (!) from components
- Simplify slice composition with type assertions via unknown
Lexical editor only uses initialConfig.editorState on mount, ignoring
subsequent value prop changes when the component is reused by React.
Adding key={activeTabId} forces React to remount editors when switching
tabs, ensuring correct content is displayed.
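A sketch of the remount-by-key idea; SkillDocEditor stands in for the Lexical-backed editor wrapper:

```tsx
type TabEditorProps = { activeTabId: string; content: string }

declare function SkillDocEditor(props: { value: string }): JSX.Element // assumed

function TabEditor({ activeTabId, content }: TabEditorProps) {
  // A new key forces a fresh mount, so initialConfig.editorState is re-read.
  return <SkillDocEditor key={activeTabId} value={content} />
}
```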
Refactor the skill editor state management from a standalone Zustand store
with Context provider pattern to a slice injection pattern that integrates
with the existing workflow store. This aligns with how rag-pipeline already
injects its slice.
- Remove SkillEditorProvider and SkillEditorContext
- Export createSkillEditorSlice for injection into workflow store
- Update all components to use useStore/useWorkflowStore from workflow store
- Add SkillEditorSliceShape to SliceFromInjection union type
- Use type-safe slice creator args without any types
Merge file-node-menu.tsx and folder-node-menu.tsx into a single
declarative NodeMenu component that uses type prop to determine
menu items. Add cva-based variant support to MenuItem for consistent
destructive styling.
- Use es-toolkit throttle with leading edge to prevent folder toggle
flickering on double-click (3 toggles reduced to 1)
- Add validation in pinTab to check if file exists in openTabIds
- Single-click file in tree opens in preview mode (temporary, replaceable)
- Double-click file opens in pinned mode (permanent)
- Preview tabs display with italic filename
- Editing content auto-converts preview tab to pinned
- Double-clicking preview tab header converts to pinned
- Only one preview tab can exist at a time
Add TreeEditInput component for inline file/folder renaming with keyboard
support (Enter to submit, Escape to cancel). Add TreeGuideLines component
to render vertical indent lines based on node depth for better visual
hierarchy in the tree view.
Reorganize file tree components into dedicated `file-tree` subdirectory
for better code organization.
Extract repeated appId retrieval and tree data fetching patterns into
dedicated hooks (useSkillAssetTreeData, useSkillAssetNodeMap) to reduce
code duplication across 6 components and leverage TanStack Query's
select option for efficient nodeMap computation.
- Convert div with onClick to proper button elements for keyboard access
- Add focus-visible ring styles to all interactive elements
- Add ARIA attributes (role, aria-selected, aria-expanded) to tree nodes
- Add keyboard navigation (Enter/Space) support to tree items
- Mark decorative icons with aria-hidden="true"
- Add missing i18n keys for accessibility labels
- Fix typography: use ellipsis character (…) instead of three dots
- Add useIsMutating hook to track ongoing mutations
- Apply pointer-events-none and opacity-50 when mutating
- Prevents user interaction during file operations
- Move DropTip component outside the tree flex container
- Use Fragment to group tree container, DropTip and context menu
- DropTip is now an independent fixed element at the bottom
- Replace fixed height={1000} with dynamic containerSize.height
- Use useSize hook from ahooks to observe container dimensions
- Fallback to 400px default height for initial render
- Fixes scroll issues when collapsing folders
- Extract useFileOperations hook to hooks/use-file-operations.ts
- Move tree utilities to utils/tree-utils.ts
- Move file utilities to utils/file-utils.ts (renamed from utils.ts)
- Remove unnecessary JSDoc comments throughout components
- Simplify type.ts to only contain local type definitions
- Clean up store/index.ts by removing verbose comments
- Add Rename using react-arborist native inline editing (node.edit())
- Add Delete with Confirm modal and automatic tab cleanup
- Add getAllDescendantFileIds utility for finding files to close on delete
- Add i18n strings for rename/delete operations (en-US, zh-Hans)
Use nuqs to sync graph/skill view selection to URL, enabling
shareable links and browser history navigation. Hoists
SkillEditorProvider to maintain state across view switches.
Move SkillEditorProvider from SkillMain to WorkflowAppWrapper so that
store state persists across view switches between Graph and Skill views.
Also add URL query state for view type using nuqs.
Add right-click context menu and "..." dropdown button for folders in
the file tree, enabling file operations within any folder:
- New File: Create empty file via Blob upload
- New Folder: Create subfolder
- Upload File: Upload multiple files to folder
- Upload Folder: Upload entire folder structure preserving hierarchy
Implementation includes:
- FileOperationsMenu: Shared menu component for both triggers
- FileTreeContextMenu: Right-click menu with absolute positioning
- FileTreeNode: Added context menu and dropdown button for folders
- Store slice for context menu state management
- i18n strings for en-US and zh-Hans
- Replace useRef pattern with useMemo for store creation in context.tsx
- Remove unused extension prop from EditorTabItem
- Fix useMemo dependency warnings in editor-tabs.tsx and skill-doc-editor.tsx
- Add proper OnMount type for Monaco editor instead of any
- Delete unused file-item.tsx and fold-item.tsx components
- Remove unused getExtension and fromOpensObject utilities from type.ts
- Refactor auto-reveal effect in files.tsx for better readability
Implement MVP features for skill editor based on design doc:
- Add Zustand store with Tab, FileTree, and Dirty slices
- Rewrite file tree using react-arborist for virtual scrolling
- Implement Tab↔FileTree sync with auto-reveal on tab activation
- Add upload functionality (new folder, upload file)
- Implement Monaco editor with dirty state tracking and Ctrl+S save
- Add i18n translations (en-US and zh-Hans)
The upload helper was hardcoded to only accept HTTP 201, which broke
PUT requests that return 200. Accepting both follows standard HTTP
semantics, where POST returns 201 Created and PUT returns 200 OK.
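A sketch of the loosened status check:

```ts
// res.ok covers the whole 200-299 range, so both POST 201 and PUT 200 pass.
async function uploadToUrl(url: string, method: 'POST' | 'PUT', body: BodyInit) {
  const res = await fetch(url, { method, body })
  if (!res.ok)
    throw new Error(`upload failed with status ${res.status}`)
  return res
}
```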
- Introduce _app_id attribute to store application ID from system variables
- Add _get_app_id method to retrieve and validate app_id
- Update on_graph_start to log app_id during sandbox initialization
- Integrate ArchiveSandboxStorage for persisting and restoring sandbox files
- Ensure proper error handling for sandbox file operations
- Add AppAssetsInitializer to load published app assets into sandbox
- Refactor VMFactory.create() to VMBuilder with builder pattern
- Extract SandboxInitializer base class and DifyCliInitializer
- Simplify SandboxLayer constructor (remove options/environments params)
- Fix circular import in sandbox module by removing eager SandboxBashTool export
- Update SandboxProviderService to return VMBuilder instead of VirtualEnvironment
- Add helpers.py with connection management utilities:
- with_connection: context manager for connection lifecycle
- submit_command: execute command and return CommandFuture
- execute: run command with auto connection, raise on failure
- try_execute: run command with auto connection, return result
- Add CommandExecutionError to exec.py for typed error handling
with access to exit_code, stderr, and full result
- Remove run_command method from VirtualEnvironment base class
(now available as submit_command helper)
- Update all call sites to use new helper functions:
- sandbox/session.py
- sandbox/storage/archive_storage.py
- sandbox/bash/bash_tool.py
- workflow/nodes/command/node.py
- Add comprehensive unit tests for helpers with connection reuse
- Add useMemo to prevent unnecessary re-renders of context value
- Extract ProviderProps type for better readability
- Convert arrow functions to standard function declarations
- Remove unused versionSupported/sandboxEnabled from hook return type
Extend MCP tool availability context to include sandbox mode check
alongside version support. MCP tools are now blocked when sandbox
is disabled, with appropriate tooltip messages for each blocking
condition.
Move type imports to @/types/sandbox-provider instead of re-exporting
from service file. Remove unnecessary retry:0 options to use React
Query's default retry behavior.
Replace manual fetch calls in use-sandbox-provider.ts with typed ORPC
contracts and client. Adds type definitions to types/sandbox-provider.ts
and registers contracts in the console router for consistent API handling.
Extend variable pattern matching to support both `#` and `@` markers,
with `@` specifically used for agent context variables. Update regex
patterns, text processing logic, and add sub-graph persistence for agent
variable handling.
Rename the constant for clarity and consistency with the new
sandbox-provider tab naming convention. Update all references
across the codebase to use the new constant name.
Update E2B, Daytona, and Docker descriptions with unique copy from design:
- E2B: "E2B Gives AI Agents Secure Computers with Real-World Tools."
- Daytona: "Deploy AI code with confidence using Daytona's lightning-fast infrastructure."
- Docker: "The Easiest Way to Build, Run, and Secure Agents."
The original fix seems correct on its own. However, for chatflows with multiple answer nodes, the `message_replace` command only preserves the output of the last executed answer node.
- Created a new GitHub Actions workflow to automate deployment for the agent development branch.
- Configured the workflow to trigger upon successful completion of the "Build and Push API & Web" workflow.
- Implemented SSH deployment steps using appleboy/ssh-action for secure server updates.
- Enhanced the SandboxManager to use a sharded locking mechanism for improved concurrency and performance.
- Replaced the global lock with shard-specific locks, allowing for lock-free reads and reducing contention.
- Updated methods for registering, retrieving, unregistering, and counting sandboxes to work with the new sharded structure.
- Improved documentation within the class to clarify the purpose and functionality of the sharding approach.
- Simplified the SandboxLayer initialization by removing unused parameters and consolidating sandbox creation logic.
- Integrated SandboxManager for better lifecycle management of sandboxes during workflow execution.
- Updated error handling to ensure proper initialization and cleanup of sandboxes.
- Enhanced CommandNode to retrieve sandboxes from SandboxManager, improving sandbox availability checks.
- Added unit tests to validate the new sandbox management approach and ensure robust error handling.
- Adjusted padding in the LLM panel for better visual alignment.
- Refactored the Max Iterations component to accept a className prop for flexible styling.
- Maintained the structure of advanced settings while ensuring consistent rendering of fields.
- Introduced threading to handle Docker's stdout/stderr streams, improving thread safety and preventing race conditions.
- Replaced buffer-based reading with queue-based reading for stdout and stderr.
- Updated read methods to handle errors and end-of-stream conditions more gracefully.
- Enhanced documentation to reflect changes in the demuxing process.
- Introduced a collapsible section for advanced settings in the LLM panel.
- Added Max Iterations component with conditional rendering based on the new hideMaxIterations prop.
- Updated context field and vision configuration to be part of the advanced settings.
- Added new translation key for advanced settings in the workflow localization file.
- Improved error message handling by assigning the stderr output to a variable for better readability.
- Ensured consistent error reporting when a command fails, maintaining clarity in the output.
- Updated error handling to provide detailed stderr output when a command fails.
- Streamlined working directory and command rendering by combining operations into single lines.
- Simplified the command execution logic by removing unnecessary shell invocations.
- Enhanced working directory validation by directly using the `test` command.
- Improved command parsing with `shlex.split` for better handling of raw commands.
- Added checks to verify the existence of the specified working directory before executing commands.
- Updated command execution logic to conditionally change the working directory if provided.
- Included FIXME comments to address future enhancements for native cwd support in VirtualEnvironment.run_command.
- Added FIXME comments to indicate the need for unifying runtime config checking in AdvancedChatAppGenerator and WorkflowAppGenerator.
- Introduced sandbox management in WorkflowService with proper error handling for sandbox release.
- Enhanced runtime feature handling in the workflow execution process.
- Add _DockerDemuxer to properly separate stdout/stderr from multiplexed stream
- Fix binary header garbage in Docker exec output (tty=False 8-byte header)
- Fix LocalVirtualEnvironment.get_command_status() to use os.WEXITSTATUS()
- Update tests to use Transport API instead of raw file descriptors
- Introduced Command node type in workflow with associated UI components and translations.
- Enhanced SandboxLayer to manage sandbox attachment for Command nodes during execution.
- Updated various components and constants to integrate Command node functionality across the workflow.
- Updated `checkMakeGroupAvailability` to include a check for existing group nodes, preventing group creation if a group node is already selected.
- Modified `useMakeGroupAvailability` and `useNodesInteractions` hooks to incorporate the new group node check, ensuring accurate group creation logic.
- Adjusted UI rendering logic in the workflow panel to conditionally display elements based on node type, specifically for group nodes.
- Merge origin/feat/llm-node-support-tools branch
- Fix unused variable tenant_id in dsl.py
- Add None checks for app and workflow in dsl.py
- Add type ignore for e2b_code_interpreter import
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Enhanced `useNodesInteractions` to better manage group node handlers and connections, ensuring accurate identification of leaf nodes and their branches.
- Updated logic to create handlers based on node connections, differentiating between internal and external connections.
- Refined initial node setup to include target branches for group nodes, improving the overall interaction model for grouped elements.
- Introduced GroupDefault node with metadata and default values for group nodes.
- Enhanced useNodeMetaData hook to handle group node author and description using translations.
- Added translations for group node functionality in English, Japanese, Simplified Chinese, and Traditional Chinese.
- Introduced `createGroupInboundEdges` function to manage edges for group nodes, ensuring proper connections to head nodes.
- Updated edge creation logic to handle group nodes in both inbound and outbound scenarios, including temporary edges.
- Enhanced validation in `useWorkflow` to check connections for group nodes based on their head nodes.
- Refined edge processing in `preprocessNodesAndEdges` to ensure correct handling of source handles for group edges.
- Added support for UI-only group nodes, including custom-group, custom-group-input, and custom-group-exit-port types.
- Enhanced edge interactions to manage temporary edges connected to groups, ensuring corresponding real edges are deleted when temp edges are removed.
- Updated node interaction hooks to restore hidden edges and remove temp edges efficiently.
- Implemented logic for creating and managing group structures, including entry and exit ports, while maintaining execution graph integrity.
- Added `handleUngroup`, `getCanUngroup`, and `getSelectedGroupId` methods to manage ungrouping of selected group nodes.
- Integrated ungrouping logic into the `useShortcuts` hook for keyboard shortcut support (Ctrl + Shift + G).
- Updated UI to include ungroup option in the panel operator popup for group nodes.
- Added translations for the ungroup action in multiple languages.
- Added headNodeIds and leafNodeIds to GroupNodeData to track nodes that receive input and send output outside the group.
- Updated useNodesInteractions hook to include headNodeIds in the group node data.
- Modified isValidConnection logic in useWorkflow to validate connections based on leaf node types for group nodes.
- Enhanced preprocessNodesAndEdges to rebuild temporary edges for group nodes, connecting them to external nodes for visual representation.
- Added alignment buttons for nodes with tooltips in the selection context menu.
- Implemented grouping functionality with a new "Make group" option, including keyboard shortcuts.
- Updated translations for the new grouping feature in multiple languages.
- Refactored node selection logic to improve performance and readability.
- Ensure `EventManager._notify_layers` logs exceptions instead of silently swallowing them
so GraphEngine layer failures surface for debugging
- Introduce unit tests to assert the logger captures the runtime error when collecting events
- Enable the `S110` lint rule to catch `try-except-pass` patterns
- Add proper error logging for existing `try-except-pass` blocks.
This commit:
1. Converts `pause_reason` to `pause_reasons` in `GraphExecution` and related classes, changing the field from a scalar value to a list that can hold multiple `PauseReason` objects so that every pause event is captured.
2. Introduces a new `WorkflowPauseReason` model to record the reasons associated with a specific `WorkflowPause`.
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
The collaboration feature increased workflow operator z-index from z-10 to z-[60].
This caused the AppInfo ContentDialog (z-30) to appear below the operator buttons.
Increased ContentDialog z-index to z-[70] to ensure proper layer hierarchy.
Add isComposing check at the start of handleKeyDown to ignore keyboard events during IME (Chinese/Japanese/Korean) input composition. This follows the existing pattern used in tag-management component and prevents premature form submission when users press Enter to confirm IME candidates.
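A minimal sketch of the guard; submitForm is a hypothetical handler:

```tsx
import type { KeyboardEvent } from 'react'

declare function submitForm(): void // assumed submit handler

function handleKeyDown(e: KeyboardEvent<HTMLInputElement>) {
  // During IME composition, Enter confirms a candidate, not the form.
  if (e.nativeEvent.isComposing)
    return
  if (e.key === 'Enter')
    submitForm()
}
```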
Merged origin/main into feat/collaboration and resolved dependency lock file conflicts by regenerating pnpm-lock.yaml through clean install.
Changes:
- Resolved eslint version differences (9.36.0 vs 9.35.0)
- Updated lock file reflects current dependency resolution
- All other changes from main branch successfully merged
- Add forwardRef support to MentionInput to expose textarea ref
- Auto-focus reply input when thread opens (100ms delay)
- Restore focus after reply submission and edit operations
- Add Esc key handler to close thread with smart guards
- Enhance accessibility with ARIA attributes (dialog, modal, labelledby)
- Improve keyboard navigation and user experience
Implements P0-P3 priorities following WCAG 2.1 AA accessibility standards
Use pre-rendering strategy with CSS visibility control instead of conditional rendering to avoid race condition between React state update and PortalToFollowElem's click-outside detection.
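A sketch of the visibility-based approach:

```tsx
import type { ReactNode } from 'react'

type PanelProps = { open: boolean; children: ReactNode }

// Pre-render and toggle CSS visibility, so the click-outside listener and
// the React state update never race on mount/unmount.
function Panel({ open, children }: PanelProps) {
  return (
    <div className={open ? 'visible' : 'invisible pointer-events-none'}>
      {children}
    </div>
  )
}
```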
- Add compact tooltips to Delete, Resolve, Previous, and Next buttons
- Change delete button hover to red background and text
- Use existing i18n translations for tooltip content
- Update background to blur variant with backdrop filter
- Change border radius from lg to xl (12px)
- Add rounded corners to menu items to prevent hover overflow
Separate loading states to distinguish between different operations:
- activeCommentDetailLoading: loading comment details, delete/resolve operations
- replySubmitting: sending new replies
- replyUpdating: editing existing replies
Changes:
- Add replySubmitting and replyUpdating states to comment store
- Restore full-screen loading overlay for comment detail loading
- Use inline spinner (RiLoader2Line) in send/save buttons for reply operations
- Update loading state usage in handleCommentReply and handleCommentReplyUpdate
- Pass separated loading states from workflow index to CommentThread component
Benefits:
- UI clarity: different loading states have appropriate visual feedback
- Better UX: users can still navigate while sending replies
- Clear separation of concerns: each operation has its own loading state
- Replace inline dropdown with PortalToFollowElem to prevent container overflow
- Use z-[100] for dropdown to ensure proper stacking
- Remove redundant outside click handler (handled by PortalToFollowElem)
- Add scroll event listener to auto-close dropdown when scrolling
- Dropdown now renders via portal outside message container
description: Refactor high-complexity React components in Dify frontend. Use when `pnpm analyze-component --json` shows complexity > 50 or lineCount > 300, when the user asks for code splitting, hook extraction, or complexity reduction, or when `pnpm analyze-component` warns to refactor before testing; avoid for simple/well-structured components, third-party wrappers, or when the user explicitly wants testing without refactoring.
---
# Dify Component Refactoring Skill
Refactor high-complexity React components in the Dify frontend codebase with the patterns and workflow below.
> **Complexity Threshold**: Components with complexity > 50 (measured by `pnpm analyze-component`) should be refactored before testing.
## Quick Reference
### Commands (run from `web/`)
Use paths relative to `web/` (e.g., `app/components/...`).
Use `refactor-component` for refactoring prompts and `analyze-component` for testing prompts and metrics.
description: "Trigger when the user requests a review of frontend files (e.g., `.tsx`, `.ts`, `.js`). Support both pending-change reviews and focused file reviews while applying the checklist rules."
---
# Frontend Code Review
## Intent
Use this skill whenever the user asks to review frontend code (especially `.tsx`, `.ts`, or `.js` files). Support two review modes:
1. **Pending-change review** – inspect staged/working-tree files slated for commit and flag checklist violations before submission.
2. **File-targeted review** – review the specific file(s) the user names and report the relevant checklist findings.
Stick to the checklist below for every applicable file and mode.
## Checklist
See [references/code-quality.md](references/code-quality.md), [references/performance.md](references/performance.md), [references/business-logic.md](references/business-logic.md) for the living checklist split by category—treat it as the canonical set of rules to follow.
Flag each rule violation with urgency metadata so future reviewers can prioritize fixes.
## Review Process
1. Open the relevant component/module. Gather lines that relate to class names, React Flow hooks, prop memoization, and styling.
2. For each rule in the checklist, note where the code deviates and capture a representative snippet.
3. Compose the review section per the template below. Group violations first by **Urgent** flag, then by category order (Code Quality, Performance, Business Logic).
## Required output
When invoked, the response must exactly follow one of the two templates:
### Template A (any findings)
```
# Code review
Found <N> urgent issues that need to be fixed:
## 1 <brief description of bug>
FilePath: <path> line <line>
<relevant code snippet or pointer>
### Suggested fix
<brief description of suggested fix>
---
... (repeat for each urgent issue) ...
Found <M> suggestions for improvement:
## 1 <brief description of suggestion>
FilePath: <path> line <line>
<relevant code snippet or pointer>
### Suggested fix
<brief description of suggested fix>
---
... (repeat for each suggestion) ...
```
If there are no urgent issues, omit that section. If there are no suggestions, omit that section.
If there are more than 10 issues, summarize the count as "10+ urgent issues" or "10+ suggestions" and output only the first 10.
Don't compress the blank lines between sections; keep them as-is for readability.
If you use Template A (i.e., there are issues to fix) and at least one issue requires code changes, append a brief follow-up question after the structured output asking whether the user wants you to apply the suggested fix(es). For example: "Would you like me to use the Suggested fix section to address these issues?"
File path pattern of node components: `web/app/components/workflow/nodes/[nodeName]/node.tsx`
Node components are also used when creating a RAG Pipe from a template, but in that context there is no workflowStore Provider, which results in a blank screen. [This Issue](https://github.com/langgenius/dify/issues/29168) was caused by exactly this.
### Suggested Fix
Use `import { useNodes } from 'reactflow'` instead of `import useNodes from '@/app/components/workflow/store/workflow/use-nodes'`.
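A minimal sketch of the change inside a node component (the component and its JSX are illustrative, not an actual Dify node):

```tsx
// Before: store-backed hook, crashes when no workflowStore Provider exists
// import useNodes from '@/app/components/workflow/store/workflow/use-nodes'

// After: React Flow's own hook, works in any ReactFlowProvider context
import { useNodes } from 'reactflow'

const NodeSummary = () => {
  const nodes = useNodes() // reads nodes from the React Flow instance itself
  return <span>{nodes.length} nodes</span>
}
```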
Ensure conditional CSS is handled via the shared `classNames` helper instead of custom ternaries, string concatenation, or template strings. Centralizing class logic keeps components consistent and easier to maintain.
Favor Tailwind CSS utility classes instead of adding new `.module.css` files unless a Tailwind combination cannot achieve the required styling. Keeping styles in Tailwind improves consistency and reduces maintenance overhead.
Update this file when adding, editing, or removing Code Quality rules so the catalog remains accurate.
## Classname ordering for easy overrides
### Description
When writing components, always place the incoming `className` prop after the component’s own class values so that downstream consumers can override or extend the styling. This keeps your component’s defaults but still lets external callers change or remove specific styles.
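A hedged sketch of this ordering together with the shared `classNames` helper (the import path and the Tailwind tokens are assumptions for illustration):

```tsx
import classNames from '@/utils/classnames' // import path is an assumption

type BadgeProps = {
  active: boolean
  className?: string
}

const Badge = ({ active, className }: BadgeProps) => (
  <span
    className={classNames(
      'rounded-md px-2 py-1 text-xs', // component defaults first
      active && 'text-text-accent', // conditional classes via classNames, not string templates
      className, // incoming className last so callers can override or extend defaults
    )}
  >
    badge
  </span>
)
```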
When rendering React Flow, prefer `useNodes`/`useEdges` for UI consumption and rely on `useStoreApi` inside callbacks that mutate or read node/edge state. Avoid manually pulling Flow data outside of these hooks.
## Complex prop memoization
IsUrgent: True
Category: Performance
### Description
Wrap complex prop values (objects, arrays, maps) in `useMemo` prior to passing them into child components to guarantee stable references and prevent unnecessary renders.
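A minimal sketch (component names are illustrative):

```tsx
import { memo, useMemo } from 'react'

type Config = { nodeId: string, items: string[] }

const Child = memo(({ config }: { config: Config }) => (
  <span>{config.nodeId}: {config.items.length}</span>
))

const ParentPanel = ({ nodeId, items }: { nodeId: string, items: string[] }) => {
  // Without useMemo, a fresh object would be created on every render,
  // defeating Child's memo() and forcing unnecessary re-renders.
  const config = useMemo<Config>(() => ({ nodeId, items }), [nodeId, items])
  return <Child config={config} />
}
```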
Update this file when adding, editing, or removing Performance rules so the catalog remains accurate.
description: Generate Vitest + React Testing Library tests for Dify frontend components, hooks, and utilities. Triggers on testing, spec files, coverage, Vitest, RTL, unit tests, integration tests, or write/review test requests.
---
# Dify Frontend Testing Skill
This skill enables Claude to generate high-quality, comprehensive frontend tests for the Dify project following established conventions and best practices.
> **⚠️ Authoritative Source**: This skill is derived from `web/docs/test.md`. Use Vitest mock/timer APIs (`vi.*`).
## When to Apply This Skill
Apply this skill when the user:
- Asks to **write tests** for a component, hook, or utility
- Asks to **review existing tests** for completeness
- Mentions **Vitest**, **React Testing Library**, **RTL**, or **spec files**
- Requests **test coverage** improvement
- Uses `pnpm analyze-component` output as context
- Mentions **testing**, **unit tests**, or **integration tests** for frontend code
- Wants to understand **testing patterns** in the Dify codebase
**Do NOT apply** when:
- User is asking about backend/API tests (Python/pytest)
- User is asking about E2E tests (Playwright/Cypress)
- User is only asking conceptual questions without code context
- Modules are not mocked automatically. Global mocks live in `web/vitest.setup.ts` (for example `react-i18next`, `next/image`); mock other modules like `ky` or `mime` locally in test files.
See [Zustand Store Testing](#zustand-store-testing) section for full details.
## Mock Placement
| Location | Purpose |
|----------|---------|
| `web/vitest.setup.ts` | Global mocks shared by all tests (`react-i18next`, `next/image`, `zustand`) |
| `web/__mocks__/zustand.ts` | Zustand mock implementation (auto-resets stores after each test) |
| `web/__mocks__/` | Reusable mock factories shared across multiple test files |
| Test file | Test-specific mocks, inline with `vi.mock()` |
Modules are not mocked automatically. Use `vi.mock` in test files, or add global mocks in `web/vitest.setup.ts`.
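For example, a test-local mock of a service module might look like this (the module path and export are illustrative):

```ts
import { vi } from 'vitest'

// vi.mock is hoisted to the top of the test file before imports run
vi.mock('@/service/use-billing', () => ({
  useInvoices: vi.fn(() => ({ data: [], isLoading: false })),
}))
```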
**Note**: Zustand is special - it's globally mocked but you should NOT mock store modules manually. See [Zustand Store Testing](#zustand-store-testing).
## Essential Mocks
### 1. i18n (Auto-loaded via Global Mock)
A global mock is defined in `web/vitest.setup.ts` and is auto-loaded by Vitest setup.
The global mock provides (see the sketch below):
- `useTranslation` - returns translation keys with namespace prefix
- `Trans` component - renders i18nKey and components
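A simplified sketch of what such a global i18n mock can look like (not the exact contents of `web/vitest.setup.ts`):

```ts
import { vi } from 'vitest'

vi.mock('react-i18next', () => ({
  // Return the key itself so assertions can match on translation keys
  useTranslation: () => ({
    t: (key: string) => key,
    i18n: { language: 'en', changeLanguage: vi.fn() },
  }),
  // Render the i18nKey so <Trans> usage stays testable
  Trans: ({ i18nKey }: { i18nKey: string }) => i18nKey,
}))
```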
### ✅ DO
1. **Use real base components** - Import from `@/app/components/base/` directly
1. **Use real project components** - Prefer importing over mocking
1. **Use real Zustand stores** - Set test state via `store.setState()`
1. **Reset mocks in `beforeEach`**, not `afterEach`
1. **Match actual component behavior** in mocks (when mocking is necessary)
1. **Use factory functions** for complex mock data
1. **Import actual types** for type safety
1. **Reset shared mock state** in `beforeEach`
### ❌ DON'T
1. **Don't mock base components** (`Loading`, `Button`, `Tooltip`, etc.)
1. **Don't mock Zustand store modules** - Use real stores with `setState()`
1. Don't mock components you can import directly
1. Don't create overly simplified mocks that miss conditional logic
1. Don't forget to clean up nock after each test
1. Don't use `any` types in mocks without necessity
### Mock Decision Tree
```
Need to use a component in test?
│
├─ Is it from @/app/components/base/*?
│ └─ YES → Import real component, DO NOT mock
│
├─ Is it a project component?
│ └─ YES → Prefer importing real component
│ Only mock if setup is extremely complex
│
├─ Is it an API service (@/service/*)?
│ └─ YES → Mock it
│
├─ Is it a third-party lib with side effects?
│ └─ YES → Mock it (next/navigation, external SDKs)
│
├─ Is it a Zustand store?
│ └─ YES → DO NOT mock the module!
│ Use real store + setState() to set test state
│ (Global mock handles auto-reset)
│
└─ Is it i18n?
└─ YES → Uses shared mock (auto-loaded). Override only for custom translations
```
## Zustand Store Testing
### Global Zustand Mock (Auto-loaded)
Zustand is globally mocked in `web/vitest.setup.ts` following the [official Zustand testing guide](https://zustand.docs.pmnd.rs/guides/testing). The mock in `web/__mocks__/zustand.ts` (sketched below) provides:
- Real store behavior with `getState()`, `setState()`, `subscribe()` methods
- Automatic store reset after each test via `afterEach`
- Proper test isolation between tests
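A simplified sketch of the mock, following the shape of the official guide (not the exact file contents; the real version also handles curried `create()` usage):

```ts
// Simplified sketch of web/__mocks__/zustand.ts
import { act } from '@testing-library/react'
import { afterEach, vi } from 'vitest'
import type * as zustand from 'zustand'

const { create: actualCreate } = await vi.importActual<typeof zustand>('zustand')

// Collect a reset function for every store created during the test run
export const storeResetFns = new Set<() => void>()

export const create = (<T>(stateCreator: zustand.StateCreator<T>) => {
  const store = actualCreate(stateCreator)
  storeResetFns.add(() => store.setState(store.getInitialState(), true))
  return store
}) as typeof zustand.create

// Automatic store reset after each test keeps tests isolated
afterEach(() => {
  act(() => storeResetFns.forEach(reset => reset()))
})
```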
### ✅ Recommended: Use Real Stores (Official Best Practice)
**DO NOT mock store modules manually.** Import and use the real store, then use `setState()` to set test state:
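A minimal, self-contained example (the counter store is illustrative; in real tests, import your actual store module):

```ts
import { act, renderHook } from '@testing-library/react'
import { describe, expect, it } from 'vitest'
import { create } from 'zustand'

// Illustrative store; in real tests, import the actual store instead of mocking it
type CounterState = { count: number, inc: () => void }
const useCounterStore = create<CounterState>(set => ({
  count: 0,
  inc: () => set(s => ({ count: s.count + 1 })),
}))

describe('counter store', () => {
  it('sets test state with setState and reads it through the hook', () => {
    // Arrange: seed state directly on the real store
    act(() => useCounterStore.setState({ count: 41 }))

    const { result } = renderHook(() => useCounterStore())
    act(() => result.current.inc())

    expect(result.current.count).toBe(42)
  })
})
```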
This guide defines the workflow for generating tests, especially for complex components or directories with multiple files.
## Scope Clarification
This guide addresses **multi-file workflow** (how to process multiple test files). For coverage requirements within a single test file, see `web/docs/test.md` § Coverage Goals.
| Scope | Rule |
|-------|------|
| **Single file** | Complete coverage in one generation (100% function, >95% branch) |
| **Multi-file directory** | Process one file at a time, verify each before proceeding |
## ⚠️ Critical Rule: Incremental Approach for Multi-File Testing
When testing a **directory with multiple files**, **NEVER generate all test files at once.** Use an incremental, verify-as-you-go approach.
description: Guide for implementing oRPC contract-first API patterns in Dify frontend. Triggers when creating new API contracts, adding service endpoints, integrating TanStack Query with typed contracts, or migrating legacy service calls to oRPC. Use for all API layer work in web/contract and web/service directories.
---
# oRPC Contract-First Development
## Project Structure
```
web/contract/
├── base.ts # Base contract (inputStructure: 'detailed')
├── router.ts # Router composition & type exports
├── marketplace.ts # Marketplace contracts
└── console/ # Console contracts by domain
├── system.ts
└── billing.ts
```
## Workflow
1. **Create contract** in `web/contract/console/{domain}.ts`
   - Import `base` from `../base` and `type` from `@orpc/contract`
   - Define route with `path`, `method`, `input`, `output`
2. **Register in router** at `web/contract/router.ts`
   - Import directly from domain file (no barrel files)
   - Nest by API prefix: `billing: { invoices, bindPartnerStack }`
3. **Create hooks** in `web/service/use-{domain}.ts`
   - Use `consoleQuery.{group}.{contract}.queryKey()` for query keys
   - Use `consoleClient.{group}.{contract}()` for API calls
## Key Rules
- **Input structure**: Always use `{ params, query?, body? }` format
- **Path params**: Use `{paramName}` in path, match in `params` object
- **Router nesting**: Group by API prefix (e.g., `/billing/*` → `billing: {}`)
- **No barrel files**: Import directly from specific files
- **Types**: Import from `@/types/`, use `type<T>()` helper
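Putting these rules together, a hedged sketch of a contract (the domain, route, and types are illustrative, not an existing endpoint; the exact builder chain should match `web/contract/base.ts`):

```ts
// web/contract/console/billing.ts (illustrative)
import { type } from '@orpc/contract'
import { base } from '../base'

type Invoice = { id: string, amount: number }

// GET /billing/invoices/{invoiceId} using the detailed input structure
export const getInvoice = base
  .route({ path: '/billing/invoices/{invoiceId}', method: 'GET' })
  .input(type<{ params: { invoiceId: string }, query?: { currency?: string } }>())
  .output(type<Invoice>())
```

Per the workflow above, this contract would then be nested under `billing: { getInvoice }` in `web/contract/router.ts` and consumed through `consoleQuery.billing.getInvoice.queryKey()` and `consoleClient.billing.getInvoice()` in a `use-billing.ts` hook.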
# This file defines code ownership for the Dify project.
# Each line is a file pattern followed by one or more owners.
# Owners can be @username, @org/team-name, or email addresses.
# For more information, see: https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners
* @crazywoola @laipz8200 @Yeuoly
# CODEOWNERS file
/.github/CODEOWNERS @laipz8200 @crazywoola
# Docs
/docs/ @crazywoola
# Backend (default owner, more specific rules below will override)
description: Refactor existing code for improved readability and maintainability.
title: "[Chore/Refactor] "
labels:
  - refactor
name: "✨ Refactor or Chore"
description: Refactor existing code or perform maintenance chores to improve readability and reliability.
title: "[Refactor/Chore] "
body:
  - type: checkboxes
    attributes:
@@ -11,7 +9,7 @@ body:
      options:
        - label: I have read the [Contributing Guide](https://github.com/langgenius/dify/blob/main/CONTRIBUTING.md) and [Language Policy](https://github.com/langgenius/dify/issues/1542).
          required: true
        - label: This is only for refactoring, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
        - label: This is only for refactors or chores; if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
          required: true
        - label: I have [searched for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
          required: true
@@ -25,14 +23,14 @@ body:
      id: description
      attributes:
        label: Description
        placeholder: "Describe the refactor you are proposing."
        placeholder: "Describe the refactor or chore you are proposing."
      validations:
        required: true
  - type: textarea
    id: motivation
    attributes:
      label: Motivation
      placeholder: "Explain why this refactor is necessary."
      placeholder: "Explain why this refactor or chore is necessary."
- Run backend CLI commands through `uv run --project api <command>`.
- Before submission, all backend modifications must pass local checks: `make lint`, `make type-check`, and `uv run --project api --dev dev/pytest/pytest_unit_tests.sh`.
- Use Makefile targets for linting and formatting; `make lint` and `make type-check` cover the required checks.
- Integration tests are CI-only and are not expected to run in the local environment.
## Frontend Workflow
```bash
cd web
pnpm lint
pnpm lint:fix
pnpm test
```
- Read `web/AGENTS.md` for details
## Testing & Quality Practices
@@ -39,7 +30,7 @@ pnpm test
## Language Style
- **Python**: Keep type hints on functions and attributes, and implement relevant special methods (e.g., `__repr__`, `__str__`).
- **TypeScript**: Use the strict config, lean on ESLint + Prettier workflows, and avoid `any` types.
- **TypeScript**: Use the strict config, rely on ESLint (`pnpm lint:fix` preferred) plus `pnpm type-check:tsgo`, and avoid `any` types.
For setting up the frontend service, please refer to our comprehensive [guide](https://github.com/langgenius/dify/blob/main/web/README.md) in the `web/README.md` file. This document provides detailed instructions to help you set up the frontend environment properly.
**Testing**: All React components must have comprehensive test coverage. See [web/docs/test.md](https://github.com/langgenius/dify/blob/main/web/docs/test.md) for the canonical frontend testing guidelines and follow every requirement described there.
#### Backend
For setting up the backend service, kindly refer to our detailed [instructions](https://github.com/langgenius/dify/blob/main/api/README.md) in the `api/README.md` file. This document contains step-by-step guidance to help you get the backend up and running smoothly.
<img alt="LFX Active Contributors" src="https://insights.linuxfoundation.org/api/badge/active-contributors?project=langgenius-dify"></a>
</p>
<p align="center">
@@ -133,6 +139,19 @@ Star Dify on GitHub and be instantly notified of new releases.
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Additionally, you might need to make adjustments to the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, please re-run `docker-compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
#### Customizing Suggested Questions
You can now customize the "Suggested Questions After Answer" feature to better fit your use case. For example, to generate longer, more technical questions:
```bash
# In your .env file
SUGGESTED_QUESTIONS_PROMPT='Please help me predict the five most likely technical follow-up questions a developer would ask. Focus on implementation details, best practices, and architecture considerations. Keep each question between 40-60 characters. Output must be JSON array: ["question1","question2","question3","question4","question5"]'
SUGGESTED_QUESTIONS_MAX_TOKENS=512
SUGGESTED_QUESTIONS_TEMPERATURE=0.3
```
See the [Suggested Questions Configuration Guide](docs/suggested-questions-configuration.md) for detailed examples and usage instructions.
### Metrics Monitoring with Grafana
Import the dashboard to Grafana, using Dify's PostgreSQL database as data source, to monitor metrics in granularity of apps, tenants, messages, and more.
# Set to false to export dataset IDs as plain text for easier cross-environment import
DSL_EXPORT_ENCRYPT_DATASET_ID=true
# Suggested Questions After Answer Configuration
# These environment variables allow customization of the suggested questions feature
#
# Custom prompt for generating suggested questions (optional)
# If not set, uses the default prompt that generates 3 questions under 20 characters each
# Example: "Please help me predict the five most likely technical follow-up questions a developer would ask. Focus on implementation details, best practices, and architecture considerations. Keep each question between 40-60 characters. Output must be JSON array: [\"question1\",\"question2\",\"question3\",\"question4\",\"question5\"]"
# SUGGESTED_QUESTIONS_PROMPT=
# Maximum number of tokens for suggested questions generation (default: 256)
# Adjust this value for longer questions or more questions
# SUGGESTED_QUESTIONS_MAX_TOKENS=256
# Temperature for suggested questions generation (default: 0.0)
# Higher values (0.5-1.0) produce more creative questions, lower values (0.0-0.3) produce more focused questions
# SUGGESTED_QUESTIONS_TEMPERATURE=0
# Tenant isolated task queue configuration
TENANT_ISOLATED_TASK_CONCURRENCY=1
# Maximum number of segments for dataset segments API (0 for unlimited)
DATASET_MAX_SEGMENTS_PER_REQUEST=0
# Multimodal knowledgebase limit
SINGLE_CHUNK_ATTACHMENT_LIMIT=10
ATTACHMENT_IMAGE_FILE_SIZE_LIMIT=2
ATTACHMENT_IMAGE_DOWNLOAD_TIMEOUT=60
IMAGE_FILE_BATCH_LIMIT=10
# Maximum allowed CSV file size for annotation import in megabytes
ANNOTATION_IMPORT_FILE_SIZE_LIMIT=2
# Maximum number of annotation records allowed in a single import
ANNOTATION_IMPORT_MAX_RECORDS=10000
# Minimum number of annotation records required in a single import
ANNOTATION_IMPORT_MIN_RECORDS=1
ANNOTATION_IMPORT_RATE_LIMIT_PER_MINUTE=5
ANNOTATION_IMPORT_RATE_LIMIT_PER_HOUR=20
# Maximum number of concurrent annotation import tasks per tenant
ANNOTATION_IMPORT_MAX_CONCURRENT=5
# Sandbox expired records clean configuration
SANDBOX_EXPIRED_RECORDS_CLEAN_GRACEFUL_PERIOD=21
SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_SIZE=1000
SANDBOX_EXPIRED_RECORDS_RETENTION_DAYS=30
SANDBOX_EXPIRED_RECORDS_CLEAN_TASK_LOCK_TTL=90000
# Sandbox Dify CLI configuration
# Directory containing dify CLI binaries (dify-cli-<os>-<arch>). Defaults to api/bin when unset.
SANDBOX_DIFY_CLI_ROOT=
# CLI API URL for sandbox (dify-sandbox or e2b) to call back to Dify API.
# This URL must be accessible from the sandbox environment.
# For local development: use http://localhost:5001 or http://127.0.0.1:5001
# For Docker deployment: use http://api:5001 (internal Docker network)
# For external sandbox (e.g., e2b): use a publicly accessible URL
Start with the section that best matches your need. Each entry lists the problems it solves plus key files/concepts so you know what to expect before opening it.
Before changing any backend code under `api/`, you MUST read the surrounding docstrings and comments. These notes contain required context (invariants, edge cases, trade-offs) and are treated as part of the spec.
In this section, “notes” means module/class/function docstrings plus any relevant paragraph/block comments.
- **Before working**
- Read the notes in the area you’ll touch; treat them as part of the spec.
- If a docstring or comment conflicts with the current code, treat the **code as the single source of truth** and update the docstring or comment to match reality.
- If important intent/invariants/edge cases are missing, add them in the closest docstring or comment (module for overall scope, function for behaviour).
- **During working**
- Keep the notes in sync as you discover constraints, make decisions, or change approach.
- If you move/rename responsibilities across modules/classes, update the affected docstrings and comments so readers can still find the “why” and the invariants.
- Record non-obvious edge cases, trade-offs, and the test/verification plan in the nearest docstring or comment that will stay correct.
- Keep the notes **coherent**: integrate new findings into the relevant docstrings and comments; avoid append-only “recent fix” / changelog-style additions.
- **When finishing**
- Update the notes to reflect what changed, why, and any new edge cases/tests.
- Remove or rewrite any comments that could be mistaken as current guidance but no longer apply.
- Keep docstrings and comments concise and accurate; they are meant to prevent repeated rediscovery.
## Plugin & Extension Development
- **[Plugin Systems](agent_skills/plugin.md)**\
  When to read this:
  - You’re building or debugging a marketplace plugin.
  - You need to know how manifests, providers, daemons, and migrations fit together.\
  What it covers: plugin manifests (`core/plugin/entities/plugin.py`), installation/upgrade flows (`services/plugin/plugin_service.py`, CLI commands), runtime adapters (`core/plugin/impl/*` for tool/model/datasource/trigger/endpoint/agent), daemon coordination (`core/plugin/entities/plugin_daemon.py`), and how provider registries surface capabilities to the rest of the platform.
## Coding Style
This is the default standard for backend code in this repo. Follow it for new code and use it as the checklist when reviewing changes.
- Code should usually include type annotations that match the repo’s current Python version (avoid untyped public APIs and “mystery” values).
- Prefer modern typing forms (e.g. `list[str]`, `dict[str, int]`) and avoid `Any` unless there’s a strong reason.
- For classes, declare member variables at the top of the class body (before `__init__`) so the class shape is obvious at a glance:
```python
from datetime import datetime

class Example:
    created_at: datetime  # member declared before __init__
```
## Additional Notes for Agents
- All skill docs assume you follow the coding style guide—run Ruff/BasedPyright/tests listed there before submitting changes.
- When you cannot find an answer in these briefs, search the codebase using the paths referenced (e.g., `core/plugin/impl/tool.py`, `services/dataset_service.py`).
- If you run into cross-cutting concerns (tenancy, configuration, storage), check the infrastructure guide first; it links to most supporting modules.
- Keep multi-tenancy and configuration central: everything flows through `configs.dify_config` and `tenant_id`.
- When touching plugins or triggers, consult both the system overview and the specialised doc to ensure you adjust lifecycle, storage, and observability consistently.
> [`uv`](https://docs.astral.sh/uv/) is used as the package manager
> for the Dify API backend service.
`uv` and `pnpm` are required to run the setup and development commands below.
The backend requires some middleware, including PostgreSQL, Redis, and Weaviate, which can be started together using `docker-compose`.
### Using scripts (recommended)
The scripts resolve paths relative to their location, so you can run them from anywhere.
1. Run setup (copies env files and installs dependencies).
```bash
./dev/setup
```
1. Review `api/.env`, `web/.env.local`, and `docker/middleware.env` values (see the `SECRET_KEY` note below).
1. Start middleware (PostgreSQL/Redis/Weaviate).
```bash
./dev/start-docker-compose
```
1. Start backend (runs migrations first).
```bash
./dev/start-api
```
1. Start Dify [web](../web) service.
```bash
./dev/start-web
```
1. Set up your application by visiting `http://localhost:3000`.
1. Optional: start the worker service (async tasks, runs from `api`).
```bash
./dev/start-worker
```
1. Optional: start Celery Beat (scheduled tasks).
```bash
./dev/start-beat
```
### Manual commands
<details>
<summary>Show manual setup and run steps</summary>
These commands assume you start from the repository root.
1. Start the docker-compose stack.
The backend requires middleware, including PostgreSQL, Redis, and Weaviate, which can be started together using `docker-compose`.
```bash
cd docker
cp middleware.env.example middleware.env
# change the profile to another vector database if you are not using weaviate
docker compose -f docker-compose.middleware.yaml --profile weaviate -p dify up -d
cd ..
```
1. Copy `.env.example` to `.env` and review the values (see the `SECRET_KEY` note below).
```bash
cd api
cp .env.example .env
```
1. Before the first launch, migrate the database to the latest version.
```bash
uv run flask db upgrade
```
1. Start backend.
```bash
uv run flask run --host 0.0.0.0 --port=5001 --debug
```
1. Install web dependencies.
```bash
cd web
pnpm install
```
1. Start Dify [web](../web) service (in a new terminal).
```bash
cd web
pnpm dev:inspect
```
1. Set up your application by visiting `http://localhost:3000`.
1. Optional: if you need to handle and debug async tasks (e.g. dataset importing and document indexing), start the worker service in a new terminal.
```bash
cd api
uv run celery -A app.celery worker -P threads -c 2 --loglevel INFO -Q dataset,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention
```
1. Optional: to debug the Celery scheduled tasks, start the beat service in another terminal.
```bash
cd api
uv run celery -A app.celery beat
```
</details>
### Environment notes
> [!IMPORTANT]
>
> When the frontend and backend run on different subdomains, set `COOKIE_DOMAIN` to the site’s top-level domain (e.g., `example.com`). The frontend and backend must be under the same top-level domain in order to share authentication cookies.
- Generate a `SECRET_KEY` in the `.env` file.
bash for Linux
```bash
sed -i "/^SECRET_KEY=/c\\SECRET_KEY=$(openssl rand -base64 42)" .env
```
bash for Mac
```bash
secret_key=$(openssl rand -base64 42)
sed -i '' "/^SECRET_KEY=/c\\
SECRET_KEY=${secret_key}" .env
```
## Testing
1. Install dependencies for both the backend and the test environment.
```bash
cd api
uv sync --group dev
```
1. Run the tests locally; mocked system environment variables are defined in the `tool.pytest_env` section of `pyproject.toml` (see [Claude.md](../CLAUDE.md) for more).
```bash
cd api
uv run pytest                          # Run all tests
uv run pytest tests/unit_tests/        # Unit tests only
uv run pytest tests/integration_tests/ # Integration tests
```
- Import `configs.dify_config` for every runtime toggle. Do not read environment variables directly.
- Add new settings to the proper mixin inside `configs/` (deployment, feature, middleware, etc.) so they load through `DifyConfig`.
- Remote overrides come from the optional providers in `configs/remote_settings_sources`; keep defaults in code safe when the value is missing.
- Example: logging pulls targets from `extensions/ext_logging.py`, and model provider URLs are assembled in `services/entities/model_provider_entities.py`.
## Dependencies
- Runtime dependencies live in `[project].dependencies` inside `pyproject.toml`. Optional clients go into the `storage`, `tools`, or `vdb` groups under `[dependency-groups]`.
- Always pin versions and keep the list alphabetised. Shared tooling (lint, typing, pytest) belongs in the `dev` group.
- When code needs a new package, explain why in the PR and run `uv lock` so the lockfile stays current.
## Storage & Files
- Use `extensions.ext_storage.storage` for all blob IO; it already respects the configured backend.
- Convert files for workflows with helpers in `core/file/file_manager.py`; they handle signed URLs and multimodal payloads.
- When writing controller logic, delegate upload quotas and metadata to `services/file_service.py` instead of touching storage directly.
- All outbound HTTP fetches (webhooks, remote files) must go through the SSRF-safe client in `core/helper/ssrf_proxy.py`; it wraps `httpx` with the allow/deny rules configured for the platform.
## Redis & Shared State
- Access Redis through `extensions.ext_redis.redis_client`. For locking, reuse `redis_client.lock`.
- Prefer higher-level helpers when available: rate limits use `libs.helper.RateLimiter`, provider metadata uses caches in `core/helper/provider_cache.py`.
## Models
- SQLAlchemy models sit in `models/` and inherit from the shared declarative `Base` defined in `models/base.py` (metadata configured via `models/engine.py`).
- `models/__init__.py` exposes grouped aggregates: account/tenant models, app and conversation tables, datasets, providers, workflow runs, triggers, etc. Import from there to avoid deep path churn.
- Follow the DDD boundary: persistence objects live in `models/`, repositories under `repositories/` translate them into domain entities, and services consume those repositories.
- When adding a table, create the model class, register it in `models/__init__.py`, wire a repository if needed, and generate an Alembic migration as described below.
## Vector Stores
- Vector client implementations live in `core/rag/datasource/vdb/<provider>`, with a common factory in `core/rag/datasource/vdb/vector_factory.py` and enums in `core/rag/datasource/vdb/vector_type.py`.
- Retrieval pipelines call these providers through `core/rag/datasource/retrieval_service.py` and dataset ingestion flows in `services/dataset_service.py`.
- The CLI helper `flask vdb-migrate` orchestrates bulk migrations using routines in `commands.py`; reuse that pattern when adding new backend transitions.
- To add another store, mirror the provider layout, register it with the factory, and include any schema changes in Alembic migrations.
## Observability & OTEL
- OpenTelemetry settings live under the observability mixin in `configs/observability`. Toggle exporters and sampling via `dify_config`, not ad-hoc env reads.
- HTTP, Celery, Redis, SQLAlchemy, and httpx instrumentation is initialised in `extensions/ext_app_metrics.py` and `extensions/ext_request_logging.py`; reuse these hooks when adding new workers or entrypoints.
- When creating background tasks or external calls, propagate tracing context with helpers in the existing instrumented clients (e.g. use the shared `httpx` session from `core/helper/http_client_pooling.py`).
- If you add a new external integration, ensure spans and metrics are emitted by wiring the appropriate OTEL instrumentation package in `pyproject.toml` and configuring it in `extensions/`.
## Ops Integrations
- Langfuse support and other tracing bridges live under `core/ops/opik_trace`. Config toggles sit in `configs/observability`, while exporters are initialised in the OTEL extensions mentioned above.
- External monitoring services should follow this pattern: keep client code in `core/ops`, expose switches via `dify_config`, and hook initialisation in `extensions/ext_app_metrics.py` or sibling modules.
- Before instrumenting new code paths, check whether existing context helpers (e.g. `extensions/ext_request_logging.py`) already capture the necessary metadata.
## Controllers, Services, Core
- Controllers only parse HTTP input and call a service method. Keep business rules in `services/`.
- Services enforce tenant rules, quotas, and orchestration, then call into `core/` engines (workflow execution, tools, LLMs).
- When adding a new endpoint, search for an existing service to extend before introducing a new layer. Example: workflow APIs pipe through `services/workflow_service.py` into `core/workflow`.
## Plugins, Tools, Providers
- In Dify a plugin is a tenant-installable bundle that declares one or more providers (tool, model, datasource, trigger, endpoint, agent strategy) plus its resource needs and version metadata. The manifest (`core/plugin/entities/plugin.py`) mirrors what you see in the marketplace documentation.
- Installation, upgrades, and migrations are orchestrated by `services/plugin/plugin_service.py` together with helpers such as `services/plugin/plugin_migration.py`.
- Runtime loading happens through the implementations under `core/plugin/impl/*` (tool/model/datasource/trigger/endpoint/agent). These modules normalise plugin providers so that downstream systems (`core/tools/tool_manager.py`, `services/model_provider_service.py`, `services/trigger/*`) can treat builtin and plugin capabilities the same way.
- For remote execution, plugin daemons (`core/plugin/entities/plugin_daemon.py`, `core/plugin/impl/plugin.py`) manage lifecycle hooks, credential forwarding, and background workers that keep plugin processes in sync with the main application.
- Acquire tool implementations through `core/tools/tool_manager.py`; it resolves builtin, plugin, and workflow-as-tool providers uniformly, injecting the right context (tenant, credentials, runtime config).
- To add a new plugin capability, extend the relevant `core/plugin/entities` schema and register the implementation in the matching `core/plugin/impl` module rather than importing the provider directly.
## Async Workloads
See `agent_skills/trigger.md` for more detailed documentation.
- Enqueue background work through `services/async_workflow_service.py`. It routes jobs to the tiered Celery queues defined in `tasks/`.
- Workers boot from `celery_entrypoint.py` and execute functions in `tasks/workflow_execution_tasks.py`, `tasks/trigger_processing_tasks.py`, etc.
- Scheduled workflows poll from `schedule/workflow_schedule_tasks.py`. Follow the same pattern if you need new periodic jobs.
## Database & Migrations
- SQLAlchemy models live under `models/` and map directly to migration files in `migrations/versions`.
- Generate migrations with `uv run --project api flask db revision --autogenerate -m "<summary>"`, then review the diff; never hand-edit the database outside Alembic.
- Apply migrations locally using `uv run --project api flask db upgrade`; production deploys expect the same history.
- If you add tenant-scoped data, confirm the upgrade includes tenant filters or defaults consistent with the service logic touching those tables.
## CLI Commands
- Maintenance commands from `commands.py` are registered on the Flask CLI. Run them via `uv run --project api flask <command>`.
- Use the built-in `db` commands from Flask-Migrate for schema operations (`flask db upgrade`, `flask db stamp`, etc.). Only fall back to custom helpers if you need their extra behaviour.
- Custom entries such as `flask reset-password`, `flask reset-email`, and `flask vdb-migrate` handle self-hosted account recovery and vector database migrations.
- Before adding a new command, check whether an existing service can be reused and ensure the command guards edition-specific behaviour (many enforce `SELF_HOSTED`). Document any additions in the PR.
- Ruff helpers are run directly with `uv`: `uv run --project api --dev ruff format ./api` for formatting and `uv run --project api --dev ruff check ./api` (add `--fix` if you want automatic fixes).
## When You Add Features
- Check for an existing helper or service before writing a new util.
- Uphold tenancy: every service method should receive the tenant ID from controller wrappers such as `controllers/console/wraps.py`.
- Update or create tests alongside behaviour changes (`tests/unit_tests` for fast coverage, `tests/integration_tests` when touching orchestrations).
- Run `uv run --project api --dev ruff check ./api`, `uv run --directory api --dev basedpyright`, and `uv run --project api --dev dev/pytest/pytest_unit_tests.sh` before submitting changes.
A Trigger is a collection of nodes that we call `Start` nodes; the concept of `Start` is the same as `RootNode` in the workflow engine (`core/workflow/graph_engine`). A `Start` node is the entry point of a workflow: every workflow run starts from a `Start` node.
## Trigger nodes
- `UserInput`
- `Trigger Webhook`
- `Trigger Schedule`
- `Trigger Plugin`
### UserInput
Before the `Trigger` concept was introduced, this was called the `Start` node; to avoid confusion, it has been renamed to the `UserInput` node. It has a strong relation with `ServiceAPI` in `controllers/service_api/app`.
1. The `UserInput` node introduces a list of arguments that must be provided by the user; these are ultimately converted into variables in the workflow variable pool.
1. `ServiceAPI` accepts those arguments and passes them through to the `UserInput` node.
1. For the detailed implementation, refer to `core/workflow/nodes/start`.
### Trigger Webhook
Inside the Webhook node, Dify provides a UI panel that lets users define an HTTP manifest (`WebhookData` in `core/workflow/nodes/trigger_webhook/entities.py`). Dify also generates a random webhook ID for each `Trigger Webhook` node; the implementation lives in `core/trigger/utils/endpoint.py`. `webhook-debug` is the debug mode for webhooks; you can find it in `controllers/trigger/webhook.py`.
Finally, requests to the `webhook` endpoint are converted into variables in the workflow variable pool during workflow execution.
### Trigger Schedule
The `Trigger Schedule` node lets users define a schedule that triggers the workflow; the detailed manifest is in `core/workflow/nodes/trigger_schedule/entities.py`. A poller and an executor handle millions of schedules; see `docker/entrypoint.sh` and `schedule/workflow_schedule_task.py` for details.
To achieve this, a `WorkflowSchedulePlan` model was introduced in `models/trigger.py`, and `events/event_handlers/sync_workflow_schedule_when_app_published.py` syncs workflow schedule plans when an app is published.
### Trigger Plugin
The `Trigger Plugin` node lets users define their own distributed trigger plugins. Whenever a request is received, Dify forwards it to the plugin and waits for the parsed variables.
1. Requests are saved to storage by `services/trigger/trigger_request_service.py` and referenced by `TriggerService.process_endpoint` in `services/trigger/trigger_service.py`.
1. Plugins accept those requests and parse variables from them; see `core/plugin/impl/trigger.py` for details.
Dify also introduces a `subscription` concept: an endpoint address from Dify is bound to a third-party webhook service such as `GitHub`, `Slack`, `Linear`, `GoogleDrive`, or `Gmail`. Once a subscription is created, Dify continually receives requests from those platforms and handles them one by one.
## Worker Pool / Async Task
Every event that triggers a new workflow run is handled asynchronously; the unified entrypoint is `AsyncWorkflowService.trigger_workflow_async` in `services/async_workflow_service.py`.
The underlying infrastructure is Celery, already configured in `docker/entrypoint.sh`; the consumers live in `tasks/async_workflow_tasks.py`. Three queues handle different tiers of users: `PROFESSIONAL_QUEUE`, `TEAM_QUEUE`, and `SANDBOX_QUEUE`.
## Debug Strategy
Dify divides users into two groups: builders and end users.
Builders are the users who create workflows; at this stage, debugging a workflow is a critical part of the development process. As the start nodes of workflows, trigger nodes can `listen` to events from `WebhookDebug`, `Schedule`, and `Plugin`; the debugging process is implemented in `DraftWorkflowTriggerNodeApi` in `controllers/console/app/workflow.py`.
A polling process is a combination of single `poll` operations; each `poll` fetches events cached in Redis and returns `None` if no event is found. In more detail, `core/trigger/debug/event_bus.py` handles the polling process, and `core/trigger/debug/event_selectors.py` selects the event poller based on the trigger type.