Compare commits


647 Commits

Author SHA1 Message Date
18a589003e feat(sandbox): enhance sandbox initialization with draft support and asset management
- Introduced DraftAppAssetsInitializer for handling draft assets.
- Updated SandboxLayer to conditionally set sandbox ID and storage based on workflow version.
- Improved asset initialization logging and error handling.
- Refactored ArchiveSandboxStorage to support exclusion patterns during archiving.
- Modified command and LLM nodes to retrieve sandbox from workflow context, supporting draft workflows.
2026-01-20 19:45:04 +08:00
yyh
da6fdc963c Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-20 19:17:51 +08:00
1c76ed2c40 feat(sandbox): draft storage 2026-01-20 18:45:13 +08:00
ceb410fb5c fix: Update archive path for sandbox storage to use a temporary directory 2026-01-20 18:44:19 +08:00
yyh
54921844bb fix(web): disable HTML escaping for form field validation messages (#31292) 2026-01-20 18:43:01 +08:00
yyh
4fa7843050 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-20 18:42:02 +08:00
yyh
3205f98d05 refactor(web): unify auto-expand trigger for drag-and-drop
Replace event-based auto-expand trigger with Zustand state-driven
approach. Now both external file uploads and internal node drag use
the same isDragOver state as the single source of truth for folder
auto-expand timing (1s blink, 2s expand).
2026-01-20 18:10:52 +08:00
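A minimal sketch of the state-driven trigger described in the commit above, assuming a hypothetical Zustand slice and hook (the names, store shape, and timings wiring are illustrative, not the actual Dify code):

```ts
import { useEffect, useState } from 'react'
import { create } from 'zustand'

// Hypothetical drag slice: one shared flag drives both external file drops
// and internal node drags.
type DragState = {
  dragOverFolderId: string | null
  setDragOverFolderId: (id: string | null) => void
}

export const useDragStore = create<DragState>(set => ({
  dragOverFolderId: null,
  setDragOverFolderId: id => set({ dragOverFolderId: id }),
}))

// Derive blink (after 1s) and expand (after 2s) purely from the drag state.
export function useAutoExpand(folderId: string, expand: () => void) {
  const isDragOver = useDragStore(s => s.dragOverFolderId === folderId)
  const [isBlinking, setIsBlinking] = useState(false)

  useEffect(() => {
    if (!isDragOver) {
      setIsBlinking(false)
      return
    }
    const blinkTimer = setTimeout(() => setIsBlinking(true), 1000)
    const expandTimer = setTimeout(() => expand(), 2000)
    return () => {
      clearTimeout(blinkTimer)
      clearTimeout(expandTimer)
    }
  }, [isDragOver, expand])

  return { isDragOver, isBlinking }
}
```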
yyh
0092254007 Revert "refactor(web): remove redundant useUnifiedDrag abstraction layer"
This reverts commit ee91c9d5f1.
2026-01-20 18:09:25 +08:00
yyh
ee91c9d5f1 refactor(web): remove redundant useUnifiedDrag abstraction layer
Simplify file drop hooks by removing the unnecessary useUnifiedDrag
wrapper that became redundant after internal node drag was migrated
to react-arborist's built-in system. Now useFolderFileDrop and
useRootFileDrop directly use useFileDrop, reducing code complexity
and eliminating unused treeChildren prop drilling.
2026-01-20 18:09:08 +08:00
yyh
2151676db1 refactor: use react-arborist built-in drag for internal node moves
Switch from native HTML5 drag to react-arborist's built-in drag system
for internal node drag-and-drop. The HTML5Backend used by react-arborist
was intercepting dragstart events, preventing native drag from working.

- Add onMove callback and disableDrop validation to Tree component
- Sync react-arborist drag state (isDragging, willReceiveDrop) to Zustand
- Simplify use-node-move to only handle API execution
- Update use-unified-drag to only handle external file uploads
- External file drops continue to work via native HTML5 events
2026-01-20 18:09:08 +08:00
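A rough sketch of wiring react-arborist's built-in drag system as described above; the node shape, the moveAssetNode call, and the CSS class are assumptions, not the actual implementation:

```tsx
import { Tree } from 'react-arborist'

type AssetNode = { id: string; name: string; isFolder: boolean; children?: AssetNode[] }

// Illustrative stand-in for the move API call executed by the move hook.
declare function moveAssetNode(args: { ids: string[]; parentId: string | null; index: number }): Promise<void>

export function AssetTree({ data }: { data: AssetNode[] }) {
  return (
    <Tree<AssetNode>
      data={data}
      // Only folders (or the virtual root) may receive a drop.
      disableDrop={({ parentNode }) => !(parentNode.isRoot || parentNode.data.isFolder)}
      // react-arborist resolves the drop target; the callback only runs the API call.
      onMove={async ({ dragIds, parentId, index }) => {
        await moveAssetNode({ ids: dragIds, parentId, index })
      }}
    >
      {({ node, style, dragHandle }) => (
        <div
          ref={dragHandle}
          style={style}
          // node.isDragging / node.willReceiveDrop expose the built-in drag state,
          // which can be mirrored into Zustand for shared visual feedback.
          className={node.willReceiveDrop ? 'is-drop-target' : undefined}
        >
          {node.data.name}
        </div>
      )}
    </Tree>
  )
}
```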
yyh
dc9658b003 perf(web): avoid per-node tree query subscription 2026-01-20 18:09:08 +08:00
yyh
b527921f3f feat: unified drag-and-drop for skill file tree
Implement unified drag system that supports both internal node moves
and external file uploads with consistent UI feedback. Uses native
HTML5 drag API with shared visual states (isDragOver, isBlinking,
DragActionTooltip showing 'Move to' or 'Upload to').
2026-01-20 18:09:08 +08:00
0e66b51ca0 fix: history messages toolcalls 2026-01-20 17:37:23 +08:00
33e96fd11a Merge remote-tracking branch 'origin/feat/support-agent-sandbox' into feat/support-agent-sandbox 2026-01-20 17:07:30 +08:00
2e037014c3 refactor: Replace manual ref syncing with useLatest hook 2026-01-20 17:00:47 +08:00
8c4aaa8286 fix: add message tool call icon 2026-01-20 16:59:53 +08:00
dc8c018e28 refactor: Refactor agent context insertion to use regex 2026-01-20 16:48:05 +08:00
57a8c453b9 fix: Fix variable insertion to only trigger on current line 2026-01-20 16:45:20 +08:00
e5dc56c483 Merge remote-tracking branch 'origin/feat/support-agent-sandbox' into feat/support-agent-sandbox 2026-01-20 16:37:04 +08:00
812df81d92 feat: Add paramKey prop to VariableReferenceFields component 2026-01-20 16:35:52 +08:00
67c29be3c6 fix: message answer include tool result 2026-01-20 16:05:28 +08:00
yyh
cf5e8491df chore: optimize code quality and performance 2026-01-20 15:54:31 +08:00
yyh
53f828f00e feat: paste operation for skill file tree 2026-01-20 15:42:53 +08:00
yyh
357489d444 feat: multi select for file tree & clipboard support 2026-01-20 15:42:53 +08:00
331c65fd1d fix: click file tab caused popup hide 2026-01-20 15:35:08 +08:00
yyh
56b09d9f72 fix: download option trigger open tab 2026-01-20 14:28:05 +08:00
d4ed398e4f fix lint 2026-01-20 14:26:01 +08:00
yyh
951a580907 feat: artifacts section layout 2026-01-20 14:21:31 +08:00
3b72b45319 Merge branch 'feat/support-agent-sandbox' of https://github.com/langgenius/dify into feat/support-agent-sandbox 2026-01-20 14:01:43 +08:00
2650ceb0a6 feat: support picker vars files ui in editor 2026-01-20 14:01:30 +08:00
yyh
c5fc3cc08e revert icons 2026-01-20 14:00:46 +08:00
fdaf471a03 fix: answer node text 2026-01-20 13:59:49 +08:00
27de07e93d chore: fix the llm node memory issue 2026-01-20 13:52:45 +08:00
yyh
8154d0af53 feat: add FolderSpark icon for workflow 2026-01-20 13:51:49 +08:00
yyh
466f76345b feat: add drag action tooltip 2026-01-20 13:50:51 +08:00
3ebe53ada1 ci: label web changes (#31261)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-20 13:46:23 +08:00
yyh
fc83e2b1c4 feat!: file download in skill file tree menu 2026-01-20 13:16:27 +08:00
76b64dda52 test: add tests for dataset list (#31231)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
Co-authored-by: yyh <92089059+lyzno1@users.noreply.github.com>
2026-01-20 13:07:00 +08:00
yyh
552f9a8989 refactor(skill): simplify file tree search state management
Move searchTerm from props drilling to zustand store for cleaner
  architecture. Remove unnecessary controlled/uncontrolled pattern
  and unused debounce logic since search is pure frontend filtering.

  - Add fileTreeSearchTerm state to file-tree-slice
  - Remove useState and props from main.tsx
  - Simplify sidebar-search-add.tsx to read/write store directly
  - Add empty state UI with reset filter button
2026-01-20 12:43:56 +08:00
a715c015e7 chore(web): remove redundant optimizePackageImports config (#31257) 2026-01-20 12:24:16 +08:00
4f5b175e55 fix: emoji icon validate error 2026-01-20 11:09:32 +08:00
45b8d033be chore: init tsslint (#31209)
Co-authored-by: Johnson Chu <johnsoncodehk@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-20 11:08:50 +08:00
13d6923c11 Merge branch 'feat/llm-support-tools' into feat/support-agent-sandbox 2026-01-20 10:27:42 +08:00
1483a51aa1 Merge branch 'feat/pull-a-variable' into feat/support-agent-sandbox 2026-01-20 09:54:41 +08:00
cb51a449d3 fix: correct i18n for stepOne.uploader.tip (#31177) 2026-01-20 09:30:50 +08:00
f5a34e9ee8 feat(skill): skill support 2026-01-20 03:02:34 +08:00
d69e7eb12a fix: Fix variable insertion to only remove @ trigger on current line 2026-01-20 01:32:42 +08:00
c44aaf1883 fix: Fix prompt editor trigger match to use current selection 2026-01-20 00:42:19 +08:00
4b91969d0f refactor: Refactor keyboard navigation in agent and variable lists 2026-01-20 00:41:23 +08:00
92c54d3c9d feat: merge app and meta defaults when creating workflow nodes 2026-01-19 23:56:15 +08:00
yyh
bc9ce23fdc refactor(skill): rename components for semantic clarity
Rename components and reorganize directory structure:
- skill-doc-editor.tsx → file-content-panel.tsx (handles edit/preview/download)
- editor-area.tsx → content-area.tsx
- editor-body.tsx → content-body.tsx
- editor-tabs.tsx → file-tabs.tsx
- editor-tab-item.tsx → file-tab-item.tsx

Create viewer/ directory for non-editor components:
- Move media-file-preview.tsx from editor/ to viewer/
- Move unsupported-file-download.tsx from editor/ to viewer/

This clarifies the distinction between:
- editor/: actual file editors (code, markdown)
- viewer/: preview and download components (media, unsupported files)
2026-01-19 23:50:08 +08:00
yyh
cab33d440b refactor(skill): remove Office file special handling, merge into unsupported
Remove the Office file placeholder that only showed "Preview will be
supported in a future update" without any download option. Office files
(pdf, doc, docx, xls, xlsx, ppt, pptx) now fall through to the generic
"unsupported file" handler which provides a download button.

Removed:
- OfficeFilePlaceholder component
- isOfficeFile function and OFFICE_EXTENSIONS constant
- isOffice flag from useFileTypeInfo hook
- i18n keys for officePlaceholder

This simplifies the file type handling to just three categories:
- Editable: markdown, code, text files → editor
- Previewable: image, video files → media preview
- Everything else: download button
2026-01-19 23:39:32 +08:00
267de1861d perf: reduce input lag in variable pickers 2026-01-19 23:35:45 +08:00
yyh
b3793b0198 fix(skill): use download URL for all non-editable files
Change useSkillFileData to use isEditable instead of isMediaFile:
- Editable files (markdown, code, text) fetch file content for editing
- Non-editable files (image, video, office, unsupported) fetch download URL

This fixes the download button for unsupported files which was incorrectly
using file content (UTF-8 decoded garbage) instead of the presigned URL.
2026-01-19 23:34:56 +08:00
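A simplified sketch of the conditional fetching described above, using TanStack Query's enabled flag; the fetchers and query keys are illustrative stand-ins for the real oRPC-backed hooks:

```ts
import { useQuery } from '@tanstack/react-query'

// Illustrative fetchers -- the real endpoints live behind oRPC contracts.
declare function getFileContent(appId: string, fileId: string): Promise<string>
declare function getFileDownloadUrl(appId: string, fileId: string): Promise<{ url: string }>

export function useSkillFileData(appId: string, fileId: string, isEditable: boolean) {
  // Editable files (markdown, code, text): fetch raw content for the editor.
  const contentQuery = useQuery({
    queryKey: ['app-asset', appId, fileId, 'content'],
    queryFn: () => getFileContent(appId, fileId),
    enabled: isEditable,
  })

  // Everything else (image, video, office, unknown): fetch a presigned download URL
  // instead of UTF-8-decoding binary data.
  const downloadUrlQuery = useQuery({
    queryKey: ['app-asset', appId, fileId, 'download-url'],
    queryFn: () => getFileDownloadUrl(appId, fileId),
    enabled: !isEditable,
  })

  return {
    content: contentQuery.data,
    downloadUrl: downloadUrlQuery.data?.url,
    isLoading: contentQuery.isLoading || downloadUrlQuery.isLoading,
  }
}
```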
yyh
8486c675c8 refactor(skill): extract hooks from skill-doc-editor for better separation
Extract business logic into dedicated hooks to reduce component complexity:
- useFileTypeInfo: file type detection (markdown, code, image, video, etc.)
- useSkillFileData: data fetching with conditional API calls
- useSkillFileSave: save logic with Ctrl+S keyboard shortcut

Also fix Vercel best practice: use ternary instead of && for conditional rendering.
2026-01-19 23:25:48 +08:00
5e49b27dba Merge branch 'zhsama/panel-var-popup' into feat/pull-a-variable 2026-01-19 23:15:01 +08:00
yyh
b6df7b3afe fix(skill): use presigned URL for image/video preview in skill editor
Previously, media files were fetched via getFileContent API which decodes
binary data as UTF-8, resulting in corrupted strings that cannot be used
as img/video src. Now media files use getFileDownloadUrl API to get a
presigned URL, enabling proper preview of images and videos of any size.
2026-01-19 23:15:00 +08:00
6f74a66c8a feat: enable typeahead filtering and keyboard navigation 2026-01-19 23:12:08 +08:00
yyh
31a7db2657 refactor(skill): unify root/blank constants and eliminate magic strings
- Add constants.ts with ROOT_ID, CONTEXT_MENU_TYPE, NODE_MENU_TYPE
- Add root utilities to tree-utils.ts (isRootId, toApiParentId, etc.)
- Replace '__root__' with ROOT_ID for consistent root identifier
- Replace inline 'blank'/'root' strings with constants
- Use NodeMenuType for type-safe menu type props
- Remove duplicate ContextMenuType from types.ts, use from constants.ts
2026-01-19 23:07:49 +08:00
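The constants and root utilities named above might look roughly like this (a sketch; the actual exports in constants.ts and tree-utils.ts may differ):

```ts
// constants.ts
export const ROOT_ID = '__root__'

export const CONTEXT_MENU_TYPE = {
  blank: 'blank',
  node: 'node',
} as const

export const NODE_MENU_TYPE = {
  file: 'file',
  folder: 'folder',
  root: 'root',
} as const

export type NodeMenuType = typeof NODE_MENU_TYPE[keyof typeof NODE_MENU_TYPE]

// tree-utils.ts
export const isRootId = (id: string | null | undefined): boolean =>
  id == null || id === ROOT_ID

// The API expects null (not the synthetic ROOT_ID) for root-level operations.
export const toApiParentId = (id: string | null | undefined): string | null =>
  (id == null || id === ROOT_ID) ? null : id
```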
68fd7c021c feat: Remove allowGraphActions check from retry and error panels 2026-01-19 23:07:32 +08:00
e1e64ae430 feat: code node output initialization and agent placeholder1 2026-01-19 23:06:08 +08:00
yyh
9080607028 refactor(skill): unify tree selection with VSCode-style single state
Remove redundant createTargetNodeId and use selectedTreeNodeId for both
visual highlight and creation target. This simplifies the state management
by having a single source of truth for tree selection, similar to VSCode's
file explorer behavior where both files and folders can be selected.
2026-01-19 22:36:04 +08:00
6e9a5139b4 chore: Remove sonarjs ESLint suppressions and reformat code 2026-01-19 22:31:04 +08:00
f44305af0d feat: add AssembleVariablesAlt icon and integrate into sub-graph components. 2026-01-19 22:31:04 +08:00
yyh
8f4a4214a1 feat(sandbox): preserve user config when switching to system default
Update frontend to use new backend API:
- save_config now accepts optional 'activate' parameter
- activate endpoint now requires 'type' parameter ('system' | 'user')

When switching to managed mode, call activate with type='system' instead
of deleting user config, so custom configurations are preserved for
future use.
2026-01-19 22:27:06 +08:00
yyh
ff210a98db feat(skill): add placeholder for inline tree node input
Display localized placeholder text ("File name" / "Folder name") when
creating new files or folders in the skill editor file tree.
2026-01-19 22:01:31 +08:00
9ad1f30a8c fix(app_asset_service): increase maximum preview content size from 1MB to 5MB 2026-01-19 21:53:48 +08:00
5053fae5b4 fix(app_asset_service): reduce maximum preview content size from 5MB to 1MB 2026-01-19 21:52:18 +08:00
d297167fef feat(sandbox): add optional activate argument to sandbox provider config
- Updated the request parser in SandboxProviderListApi to include an optional 'activate' boolean argument for JSON input.
- This enhancement allows users to specify activation status when configuring sandbox providers.
2026-01-19 21:46:26 +08:00
41aec357b0 feat(sandbox): add activation functionality for sandbox providers
- Enhanced the SandboxProviderConfigApi to accept an 'activate' argument when saving provider configurations.
- Introduced a new request parser for activating sandbox providers, requiring a 'type' argument.
- Updated the SandboxProviderService to handle the activation state during configuration saving and provider activation.
2026-01-19 21:43:03 +08:00
yyh
96da3b9560 fix: migration 2026-01-19 20:13:24 +08:00
yyh
3bb9625ced fix(sandbox): prevent revoking active provider config
Hide revoke button for active providers to avoid "no sandbox provider"
error when user deletes the only available configuration.
2026-01-19 20:09:14 +08:00
1bdc47220b fix: mention graph config don't support structured output 2026-01-19 19:59:19 +08:00
yyh
5aa4088051 fix(sandbox): use deleteConfig when switching to managed mode
Delete user config instead of saving empty config when switching to
managed mode, allowing the system to fall back to system defaults.
2026-01-19 19:51:47 +08:00
yyh
9f444f1f6a refactor(skill): split file operations hook and extract TreeNodeIcon component
Split use-file-operations.ts (248 lines) into smaller focused hooks:
- use-create-operations.ts for file/folder creation and upload
- use-modify-operations.ts for rename and delete operations
- use-file-operations.ts now serves as orchestrator maintaining backward compatibility

Extract TreeNodeIcon component from tree-node.tsx for cleaner separation of concerns.

Add brief comments to drag hooks explaining their purpose and relationships.
2026-01-19 19:13:09 +08:00
49effca35d fix: auto default 2026-01-19 18:41:05 +08:00
yyh
fb28f03155 Merge branch 'feat/support-agent-sandbox' of https://github.com/langgenius/dify into feat/support-agent-sandbox 2026-01-19 18:37:48 +08:00
2afc4704ad chore: add limit to tool param auto 2026-01-19 18:35:57 +08:00
yyh
5496fc014c feat(sandbox): add connect mode selection for E2B provider
Add ability to choose between "Managed by Dify" (using system config)
and "Bring Your Own API Key" modes when configuring E2B sandbox provider.
This allows Cloud users to use Dify's pre-configured credentials or
their own E2B account for more control over resources and billing.
2026-01-19 18:35:53 +08:00
yyh
7756c151ed feat: add VSCode-style blink animation before folder auto-expand
When dragging files over a closed folder, the highlight now blinks
during the second half of the 2-second hover period to signal that
the folder is about to expand. This provides better visual feedback
similar to VSCode's drag-and-drop behavior.
2026-01-19 18:35:26 +08:00
83c458d2fe chore: change tool setting copywriting and fix TS problem 2026-01-19 18:27:33 +08:00
956436b943 feat(sandbox): skill initialize & draft run 2026-01-19 18:15:39 +08:00
3bb9c4b280 feat(constants): introduce DIFY_CLI_ROOT and update paths for Dify CLI and app assets
- Added DIFY_CLI_ROOT constant for the root directory of Dify CLI.
- Updated DIFY_CLI_PATH and DIFY_CLI_CONFIG_PATH to use absolute paths.
- Modified app asset initialization to create directories under DIFY_CLI_ROOT.
- Enhanced Docker and E2B environment file handling to use workspace paths.
2026-01-19 18:15:39 +08:00
c38463c9a9 refactor: reorganize asset-related classes into entities module and remove unused skill and asset files 2026-01-19 18:15:39 +08:00
yyh
fc49592769 Merge branch 'feat/support-agent-sandbox' of https://github.com/langgenius/dify into feat/support-agent-sandbox 2026-01-19 18:07:15 +08:00
6643569efc fix: tool can not auth modal 2026-01-19 18:06:23 +08:00
yyh
fe0ea13f70 perf: parallelize file uploads and add consistent drag validation
Use Promise.all for concurrent file uploads instead of sequential
processing, improving upload performance for multiple files. Also
add isFileDrag check to handleFolderDragOver for consistency with
other drag handlers.
2026-01-19 18:05:59 +08:00
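The sequential-to-parallel change described above amounts to something like the following, with uploadFile standing in for the real upload call:

```ts
declare function uploadFile(file: File, parentId: string | null): Promise<void>

// Before: files uploaded one by one.
async function uploadSequentially(files: File[], parentId: string | null) {
  for (const file of files)
    await uploadFile(file, parentId)
}

// After: all uploads start concurrently; Promise.all rejects on the first failure.
async function uploadConcurrently(files: File[], parentId: string | null) {
  await Promise.all(files.map(file => uploadFile(file, parentId)))
}
```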
yyh
c979b59e1e fix: correct test expectation for model provider setting payload
The test was expecting 'provider' but the actual value passed is
'model-provider' from ACCOUNT_SETTING_TAB.MODEL_PROVIDER constant.
2026-01-19 18:05:59 +08:00
yyh
144ca11c03 refactor file drop handlers into hooks 2026-01-19 18:05:58 +08:00
yyh
a432fa5fcf feat: add external file drag-and-drop upload to file tree
Enable users to drag files from their system directly into the file tree
to upload them. Files can be dropped on the tree container (uploads to root)
or on specific folders. Hovering over a closed folder for 2 seconds auto-
expands it. Uses Zustand for drag state management instead of React Context
for better performance.
2026-01-19 18:05:58 +08:00
dbc70f8f05 feat: add inner graph api 2026-01-19 17:13:07 +08:00
4b67008dba fix: tool not rendered correctly when not blank 2026-01-20 17:01:32 +08:00
f4b683aa2f fix: file write not rendered when not blank 2026-01-19 17:01:32 +08:00
62ac02a568 feat: Download the uploaded files (#31068)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-19 16:48:13 +08:00
yyh
7de6ecdedf fix: lint 2026-01-19 16:35:50 +08:00
bd070857ed fix: fold indent style 2026-01-19 16:34:46 +08:00
yyh
d3d1ba2488 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/core/app/apps/workflow/app_generator.py
2026-01-19 16:33:10 +08:00
2d4289a925 chore: relocate datasets api form (#31224) 2026-01-19 16:15:51 +08:00
eae82b1085 chore: remove sync from left panel tree 2026-01-19 16:11:10 +08:00
88780c7eb7 fix: Revert "fix: fix create app xss issue" (#31219) 2026-01-19 16:07:24 +08:00
0f1db88dcb fix: fix dify-plugin-daemon error message (#31218) 2026-01-19 16:00:44 +08:00
f9fd234cf8 feat: support expand the selected file struct 2026-01-19 15:38:43 +08:00
1dfee05b7e fix: view file popup placement error 2026-01-19 15:25:57 +08:00
dd42e7706a fix: workflow can not init 2026-01-19 15:15:24 +08:00
066d18df7a Merge branch 'main' into feat/pull-a-variable 2026-01-19 15:00:15 +08:00
06f6ded20f fix: Fix assemble variables insertion in prompt editor 2026-01-19 14:59:08 +08:00
3a775fc2bf feat: support choose folders and files 2026-01-19 14:47:57 +08:00
92dbc94f2f test: add unit tests for plugin detail panel components including action lists, strategy lists, and endpoint management (#31053)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
2026-01-19 14:40:32 +08:00
9f09414dbe refactor: make url in email template better (#31166) 2026-01-19 14:28:41 +08:00
yyh
0d5e971a0c fix(skill): pass root nodeId for blank-area context menu
The previous refactor inadvertently passed undefined nodeId for blank
area menus, causing root-level folder creation/upload to fail. This
restores the original behavior by explicitly passing 'root' when the
context menu type is 'blank'.
2026-01-19 14:23:38 +08:00
yyh
9aed4f830f refactor(skill): merge BlankAreaMenu into NodeMenu
Consolidate menu components by extending NodeMenu to support a 'root'
type, eliminating the redundant BlankAreaMenu component. This reduces
code duplication and simplifies the context menu logic by storing
isFolder in the context menu state instead of re-querying tree data.
2026-01-19 14:22:25 +08:00
yyh
5947e04226 feat: decouple create target from tab selection 2026-01-19 14:09:37 +08:00
yyh
611ff05bde feat: sync tree selection with active tab 2026-01-19 14:05:46 +08:00
yyh
0e890e5692 feat: auto pin created editable files 2026-01-19 13:51:08 +08:00
yyh
6584dc2480 feat: inline create nodes in skill file tree 2026-01-19 13:43:29 +08:00
yyh
a922e844eb fix(skill): return raw content as fallback for non-JSON file content
When file content is not in JSON format (e.g., newly uploaded files),
return the raw content instead of empty string to ensure files display
correctly.
2026-01-19 12:55:22 +08:00
b3902374ac chore: drop slow lint rules (#31205) 2026-01-19 12:45:02 +08:00
yyh
4bd05ed96e fix(types): remove unused and misaligned app-asset types
Remove types that don't match backend API:
- AppAssetFileContentResponse (unused, had extra metadata field)
- CreateFilePayload (unused, FormData built manually)
- metadata field from UpdateFileContentPayload
2026-01-19 12:43:44 +08:00
0de32f682a feat(skill): skill parser & packager 2026-01-19 12:41:01 +08:00
245567118c chore: struct to wrap with content 2026-01-19 12:19:40 +08:00
3b225c01da refactor: refactor workflow context (#30607) 2026-01-19 12:18:51 +08:00
yyh
021f055c36 feat(skill-editor): add blank area context menu and align search/add styles
Add right-click context menu for file tree blank area with New File,
New Folder, and Upload Files options. Also align search input and
add button styles to match Figma design specs (24px height, 6px radius).
2026-01-19 11:38:59 +08:00
72ce6ca437 feat: implement workspace permission checks for member invitations an… (#31202) 2026-01-18 19:35:50 -08:00
269c85d5a3 feat: ee workspace permission control (#30841) 2026-01-19 11:06:04 +08:00
b0545635b8 chore: improve clear workflow_run task (#31124)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: hj24 <mambahj24@gmail.com>
2026-01-19 10:58:57 +08:00
yyh
5f707c5585 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-19 10:53:16 +08:00
yyh
232da66b53 chore: update eslint suppressions 2026-01-19 10:51:53 +08:00
yyh
ebeee92e51 fix(sandbox-provider): align frontend types with backend API after refactor
Remove label, description, and icon fields from SandboxProvider type
as they are no longer returned by the backend API. Use i18n translations
to display provider labels instead of relying on API response data.
2026-01-19 10:50:57 +08:00
yyh
f481947b0d Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-19 10:38:36 +08:00
13d648cf7b chore: no custom lint cache location (#31195) 2026-01-19 10:37:49 +08:00
yyh
94ea7031e8 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-19 10:31:54 +08:00
yyh
e8397ae7a8 fix(web): Zustand testing best practices and state read optimization (#31163)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-19 10:31:34 +08:00
yyh
8893913b3a feat: add Vercel React Best Practices skill for Claude Code (#31133) 2026-01-19 10:30:49 +08:00
14f123802d chore: update vite related version (#31180) 2026-01-19 10:28:06 +08:00
yyh
2f081fa6fa refactor(skill-editor): adopt 4-generic StateCreator pattern for type-safe cross-slice access
Use explicit StateCreator<FullStore, [], [], SliceType> pattern instead of
StateCreator<SliceType> for all skill-editor slices. This enables:
- Type-safe cross-slice state access via get()
- Explicit type contracts instead of relying on spread args behavior
- Better maintainability following Lobe-chat's proven pattern

Extract all type definitions to types.ts to avoid circular dependencies.
2026-01-18 13:24:34 +08:00
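A condensed sketch of the 4-generic StateCreator pattern referenced above, with invented slice names; the point is that each slice creator is typed against the full store, so get() can read sibling slices type-safely:

```ts
import { create, type StateCreator } from 'zustand'

type TabSlice = {
  activeTabId: string | null
  setActiveTab: (id: string | null) => void
}

type FileTreeSlice = {
  selectedNodeId: string | null
  selectNode: (id: string | null) => void
}

// The full store is the intersection of all slices.
type SkillEditorStore = TabSlice & FileTreeSlice

// StateCreator<FullStore, [], [], Slice>: set/get are typed against the full
// store, which makes cross-slice reads via get() type-safe and explicit.
const createTabSlice: StateCreator<SkillEditorStore, [], [], TabSlice> = (set, get) => ({
  activeTabId: null,
  setActiveTab: (id) => {
    set({ activeTabId: id })
    // Cross-slice access: keep tree selection in sync with the active tab.
    get().selectNode(id)
  },
})

const createFileTreeSlice: StateCreator<SkillEditorStore, [], [], FileTreeSlice> = set => ({
  selectedNodeId: null,
  selectNode: id => set({ selectedNodeId: id }),
})

export const useSkillEditorStore = create<SkillEditorStore>()((...args) => ({
  ...createTabSlice(...args),
  ...createFileTreeSlice(...args),
}))
```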
yyh
3b27d9e819 refactor(skill-editor): remove type assertions by using spread args pattern
Replace explicit parameter destructuring with spread args pattern to
eliminate `as unknown as` type assertions when composing sub-slices.
This aligns with the pattern used in the main workflow store.
2026-01-18 13:11:06 +08:00
yyh
c0a76220dd fix(skill-editor): resolve React Compiler memoization warnings
Consolidate file type derivations into a single useMemo with stable
dependencies (currentFileNode?.name and currentFileNode?.extension)
to help React Compiler track stability.

Extract originalContent as a separate variable to avoid property access
in useCallback dependencies, which caused Compiler to infer broader
dependencies than specified.
2026-01-17 22:01:33 +08:00
yyh
9d04fb4992 fix(skill-editor): resolve React Compiler memoization warnings
Wrap isEditable in useMemo to help React Compiler track its stability
and preserve memoization for callbacks that depend on it. Also replace
Record<string, any> with Record<string, unknown> to satisfy no-explicit-any.
2026-01-17 21:51:25 +08:00
yyh
02fcf33067 fix(skill-editor): remove unnecessary store subscriptions in tool-picker-block
Move activeTabId and fileMetadata reads from selector subscriptions to
getState() calls inside the callback. These values were only used in the
insertTools callback, not for rendering, causing unnecessary re-renders
when they changed.
2026-01-17 21:47:31 +08:00
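The re-render optimisation described above, sketched against a generic Zustand store (store and field names are illustrative): values needed only inside the callback are read with getState() at call time instead of being subscribed to:

```ts
import { useCallback } from 'react'
import { create } from 'zustand'

type EditorState = {
  activeTabId: string | null
  fileMetadata: Record<string, unknown>
  insertText: (tabId: string, text: string) => void
}

const useEditorStore = create<EditorState>(set => ({
  activeTabId: null,
  fileMetadata: {},
  insertText: () => {}, // illustrative no-op
}))

export function useInsertTools() {
  // Before: const activeTabId = useEditorStore(s => s.activeTabId) re-rendered
  // this component on every tab change, even though the value is only needed
  // when the callback actually fires.
  return useCallback((toolName: string) => {
    // Read at call time instead of subscribing -- no re-render when these change.
    const { activeTabId, fileMetadata, insertText } = useEditorStore.getState()
    if (!activeTabId || !fileMetadata[activeTabId])
      return
    insertText(activeTabId, `@${toolName}`)
  }, [])
}
```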
7b66bbc35a chore: introduce bulk-suppressions and multithread linting (#31157) 2026-01-17 19:51:56 +08:00
yyh
bbf1247f80 fix(skill-editor): compare content with original to determine dirty state
Previously, any edit would mark the file as dirty even if the content
was restored to its original state. Now we compare against the original
content and clear the dirty flag when they match.
2026-01-17 17:52:00 +08:00
77366f33a4 feat(web): add loading indicators for infinite scroll pagination (#31110)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Stephen Zhou <38493346+hyoban@users.noreply.github.com>
2026-01-17 17:36:07 +08:00
yyh
e3b0918dd9 test(web): add global zustand mock for tests (#31149) 2026-01-17 17:29:13 +08:00
yyh
b82b73ef94 refactor(skill-editor): split slice into separate files for better organization
Split the monolithic skill-editor-slice.ts into a dedicated directory with
individual slice files (tab, file-tree, dirty, metadata, file-operations-menu)
to improve maintainability and code organization.
2026-01-17 17:28:25 +08:00
yyh
15d6f60f25 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-17 17:03:32 +08:00
yyh
ad8c5f5452 perf: lazy load SkillMain component using next/dynamic
Reduce initial bundle size by dynamically importing SkillMain
component. This prevents loading the entire Skill module (including
Monaco and Lexical editors) when users only access the Graph view.
2026-01-16 21:31:56 +08:00
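The lazy-loading change above boils down to something like this; the import path and loading fallback are placeholders:

```tsx
import dynamic from 'next/dynamic'
import type { ReactElement } from 'react'

// Monaco + Lexical only load when the Skill view is actually opened,
// keeping them out of the Graph view's initial bundle.
const SkillMain = dynamic(() => import('./skill/main'), {
  ssr: false,
  loading: () => <div className='p-4 text-sm'>Loading…</div>,
})

declare function GraphMain(): ReactElement

export function WorkflowView({ view }: { view: 'graph' | 'skill' }) {
  return view === 'skill' ? <SkillMain /> : <GraphMain />
}
```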
721d82b91a refactor(sandbox): modify sandbox provider configuration by adding 'configure_type' column and updating unique constraints 2026-01-16 19:02:16 +08:00
0c62c39a1d Merge branch 'zhsama/assemble-var-input' into feat/pull-a-variable 2026-01-16 18:54:53 +08:00
8d643e4b85 feat: add assemble variables icon 2026-01-16 18:45:28 +08:00
d542a74733 feat: panel ui 2026-01-16 18:39:13 +08:00
16078a9df6 refactor(sandbox): update DifyCliLocator path resolution and enhance sandbox provider configuration logic 2026-01-16 18:37:43 +08:00
0bd17c6d0f refactor(sandbox): sandbox provider system default configuration 2026-01-16 18:22:44 +08:00
77401e6f5c feat: optimize variable picker styling and optimize agent nodes 2026-01-16 18:21:43 +08:00
8b42435f7a feat: support set default value when choose tool 2026-01-16 18:16:01 +08:00
fad6fa141d chore: improve accessibility for learn more link (#31120)
Co-authored-by: khmandarrin <jeong-ga-eun@jeong-ga-eun-ui-MacBookAir.local>
2026-01-16 18:12:07 +08:00
30821fd26c chore: Update outdated GitHub Actions versions (#31114) 2026-01-16 17:56:55 +08:00
3147e850be fix: click tool not show current 2026-01-16 17:52:40 +08:00
1a9fdd9a65 refactor: migrate tag list API query parameters to Pydantic (#31097)
Co-authored-by: fghpdf <fghpdf@users.noreply.github.com>
2026-01-16 17:49:52 +08:00
0b33381efb feat: support save settings 2026-01-16 17:44:40 +08:00
de610cbf39 fix: call get_text_content() instead of casting to str (#31121)
Signed-off-by: Stream <Stream_2@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-16 18:41:00 +09:00
yyh
ee7a9a34e0 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-16 17:25:19 +08:00
148f92f92d fix: allow all fields and not allow model set to auto 2026-01-16 17:20:11 +08:00
4ee49552ce feat: add prompt variable message 2026-01-16 17:10:18 +08:00
40caaaab23 Merge branch 'zhsama/assemble-var-input' into feat/pull-a-variable 2026-01-16 17:04:18 +08:00
1bc1c04be5 feat: add assemble variables entry 2026-01-16 17:03:22 +08:00
18abc66585 feat: add context file support 2026-01-16 17:01:44 +08:00
f79df6982d feat: support setting show on click 2026-01-16 16:58:58 +08:00
yyh
6903c31b84 fix(search-input): retain focus after clearing input (#31107) 2026-01-16 16:22:14 +08:00
e85e31773a Merge branch 'zhsama/llm-warning-ui' into feat/pull-a-variable 2026-01-16 16:22:07 +08:00
e5336a2d75 Use warning token borders for mentions 2026-01-16 15:09:42 +08:00
649283df09 fix: not popup and use new setting 2026-01-16 15:09:25 +08:00
7222a896d8 Align warning styles for agent mentions 2026-01-16 15:01:11 +08:00
b5712bf8b0 Merge branch 'zhsama/agent-at-nodes' into feat/pull-a-variable 2026-01-16 14:47:37 +08:00
yyh
06b6625c01 feat(skill): implement file tree search with debounced filtering
Add search functionality to skill sidebar using react-arborist's built-in
searchTerm and searchMatch props. Search input is debounced at 300ms and
filters tree nodes by name (case-insensitive). Also add success toast for
rename operations.
2026-01-16 14:44:44 +08:00
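A sketch of the search wiring described above, using react-arborist's searchTerm/searchMatch props plus a small local debounce (everything outside react-arborist is illustrative):

```tsx
import { useEffect, useState } from 'react'
import { Tree } from 'react-arborist'

type AssetNode = { id: string; name: string; children?: AssetNode[] }

// Minimal debounce hook: the returned value trails the input by 300ms.
function useDebouncedValue<T>(value: T, delayMs: number): T {
  const [debounced, setDebounced] = useState(value)
  useEffect(() => {
    const timer = setTimeout(() => setDebounced(value), delayMs)
    return () => clearTimeout(timer)
  }, [value, delayMs])
  return debounced
}

export function SearchableFileTree({ data }: { data: AssetNode[] }) {
  const [input, setInput] = useState('')
  const searchTerm = useDebouncedValue(input, 300)

  return (
    <>
      <input value={input} onChange={e => setInput(e.target.value)} placeholder='Search files…' />
      <Tree<AssetNode>
        data={data}
        searchTerm={searchTerm}
        // Case-insensitive match on the node name; react-arborist filters the
        // visible rows based on this predicate.
        searchMatch={(node, term) => node.data.name.toLowerCase().includes(term.toLowerCase())}
      >
        {({ node, style, dragHandle }) => (
          <div ref={dragHandle} style={style}>{node.data.name}</div>
        )}
      </Tree>
    </>
  )
}
```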
7bc2e33e83 Merge remote-tracking branch 'origin/feat/pull-a-variable' into feat/pull-a-variable 2026-01-16 14:43:31 +08:00
eb4f57fb8b chore: split tool config 2026-01-16 14:39:33 +08:00
b2cc9b255d chore: Update coding agent workflow for backend (#31093) 2026-01-16 14:28:47 +08:00
yyh
0f5d3f38da refactor(skill): use node.parent chain for ancestor traversal
Replace getAncestorIds(treeData) with node.parent chain traversal
for more efficient ancestor lookup. This avoids re-traversing the
tree data structure and uses react-arborist's built-in parent refs.

Also rename hook to useSyncTreeWithActiveTab for clarity.
2026-01-16 14:27:21 +08:00
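The ancestor traversal described above can be done directly on react-arborist's NodeApi, roughly as follows (the reveal helper is an assumed usage, not the actual hook):

```ts
import type { NodeApi } from 'react-arborist'

type AssetNode = { id: string; name: string; children?: AssetNode[] }

// Walk the parent chain instead of re-traversing the raw tree data.
export function getAncestorIds(node: NodeApi<AssetNode>): string[] {
  const ids: string[] = []
  let current = node.parent
  while (current && !current.isRoot) {
    ids.push(current.id)
    current = current.parent
  }
  return ids
}

// Typical use: reveal the file for the active tab by opening its ancestors.
export function revealNode(node: NodeApi<AssetNode>) {
  for (const ancestorId of getAncestorIds(node))
    node.tree.open(ancestorId)
}
```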
e9f0e1e839 fix(web): replace Response.json with legacy Response constructor for pre-Chrome 105 compatibility (#31091) (#31095)
Co-authored-by: Xiaoba Yu <xb1823725853@gmail.com>
2026-01-16 14:26:23 +08:00
yyh
76da178cc1 refactor(skill): extract tree node handlers into reusable hooks
Extract complex event handling and side effects from file tree components
into dedicated hooks for better separation of concerns and reusability.
2026-01-16 14:15:21 +08:00
yyh
38a2d2fe68 fix(skill): isolate more button click from tree node click handling
Use split button pattern to separate main content area from more button.
This prevents click events on the more button from bubbling up to the
parent element's click/double-click handlers, which caused unintended
file opening when clicking the menu button multiple times.
2026-01-16 14:07:07 +08:00
yyh
9397ba5bd2 refactor: move skill store to workflow/store/ 2026-01-16 13:51:50 +08:00
yyh
7093962f30 refactor(skill): move skill editor slice to core workflow store
Move SkillEditorSlice from injection pattern to core workflow store,
making it available to all workflow contexts (workflow-app, chatflow,
and future rag-pipeline).

- Add createSkillEditorSlice to core createWorkflowStore
- Remove complex type conversion logic from workflow-app/index.tsx
- Remove optional chaining (?.) and non-null assertions (!) from components
- Simplify slice composition with type assertions via unknown
2026-01-16 13:51:50 +08:00
yyh
7022e4b9ca fix(skill): add key prop to editors to fix content sync on tab switch
Lexical editor only uses initialConfig.editorState on mount, ignoring
subsequent value prop changes when the component is reused by React.
Adding key={activeTabId} forces React to remount editors when switching
tabs, ensuring correct content is displayed.
2026-01-16 13:51:50 +08:00
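The fix above is the standard remount-via-key pattern in React; a sketch with placeholder editor components:

```tsx
import type { ReactElement } from 'react'

type EditorProps = { value: string; onChange: (value: string) => void }

declare function MarkdownEditor(props: EditorProps): ReactElement
declare function CodeEditor(props: EditorProps): ReactElement

export function ContentBody({
  activeTabId,
  isMarkdown,
  content,
  onChange,
}: {
  activeTabId: string
  isMarkdown: boolean
  content: string
  onChange: (value: string) => void
}) {
  // Lexical only reads initialConfig.editorState on mount, so a changed key
  // forces a remount when the active tab switches instead of reusing the
  // previous tab's editor instance with stale content.
  return isMarkdown
    ? <MarkdownEditor key={activeTabId} value={content} onChange={onChange} />
    : <CodeEditor key={activeTabId} value={content} onChange={onChange} />
}
```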
yyh
b8d67a42bd refactor(skill): migrate skill editor store to workflow store slice injection
Refactor the skill editor state management from a standalone Zustand store
with Context provider pattern to a slice injection pattern that integrates
with the existing workflow store. This aligns with how rag-pipeline already
injects its slice.

- Remove SkillEditorProvider and SkillEditorContext
- Export createSkillEditorSlice for injection into workflow store
- Update all components to use useStore/useWorkflowStore from workflow store
- Add SkillEditorSliceShape to SliceFromInjection union type
- Use type-safe slice creator args without any types
2026-01-16 13:51:49 +08:00
yyh
106cb8e373 refactor(skill): unify node menu components with cva variants
Merge file-node-menu.tsx and folder-node-menu.tsx into a single
declarative NodeMenu component that uses type prop to determine
menu items. Add cva-based variant support to MenuItem for consistent
destructive styling.
2026-01-16 13:51:49 +08:00
9492eda5ef chore: tool format and render problem 2026-01-16 13:50:20 +08:00
cd497a8c52 fix(web): use portal for variable picker in code editor (Fixes #31063) (#31066) 2026-01-16 13:31:57 +08:00
7aab4529e6 chore: lint for state hooks (#31088) 2026-01-16 11:58:28 +08:00
a7826d9ea4 feat: agent add context 2026-01-16 11:47:55 +08:00
E.G
4bff0cd0ab fix: resolve 'Expand all chunks' button not working (#31074)
Co-authored-by: GlobalStar117 <GlobalStar117@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: crazywoola <427733928@qq.com>
2026-01-16 11:34:42 +08:00
64ddcc8960 chore: fix choose provider id 2026-01-16 11:31:03 +08:00
yyh
c7bca6a3fb fix(skill): restore auto-pin on edit behavior (VS Code style) 2026-01-16 11:26:13 +08:00
yyh
f1ce933b33 fix(skill): address code review issues for tab management
1. Add confirmation dialog when closing dirty tabs
2. Fix file double-click race condition with useDelayedClick hook
3. Fix previewTabId orphan state in closeTab
4. Remove auto-pin on every keystroke (VS Code behavior)
5. Extract shared MenuItem component to eliminate duplication
6. Make nodeId optional when node is provided (reduce props drilling)
2026-01-16 11:20:49 +08:00
yyh
17990512ce fix(skill): add throttle to folder toggle and validate pinTab
- Use es-toolkit throttle with leading edge to prevent folder toggle
  flickering on double-click (3 toggles reduced to 1)
- Add validation in pinTab to check if file exists in openTabIds
2026-01-16 11:20:49 +08:00
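A rough sketch of the throttled toggle and the pinTab guard described above, assuming es-toolkit's edges option and an illustrative tab model (the 300 ms window is an assumption):

```ts
import { useMemo } from 'react'
import { throttle } from 'es-toolkit'

export function useThrottledToggle(toggle: () => void) {
  // Leading-edge throttle: the first click toggles immediately and the rapid
  // follow-up clicks of a double-click are swallowed, so the folder no longer
  // flickers open/closed/open.
  return useMemo(
    () => throttle(toggle, 300, { edges: ['leading'] }),
    [toggle],
  )
}

// pinTab validation: only pin tabs that are actually open (illustrative shape).
export function pinTab(openTabIds: string[], pinnedTabIds: string[], fileId: string): string[] {
  if (!openTabIds.includes(fileId) || pinnedTabIds.includes(fileId))
    return pinnedTabIds
  return [...pinnedTabIds, fileId]
}
```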
yyh
a30fb5909b feat(skill): implement VS Code-style preview/pinned tab management
- Single-click file in tree opens in preview mode (temporary, replaceable)
- Double-click file opens in pinned mode (permanent)
- Preview tabs display with italic filename
- Editing content auto-converts preview tab to pinned
- Double-clicking preview tab header converts to pinned
- Only one preview tab can exist at a time
2026-01-16 11:20:49 +08:00
3dea5adf5c fix: change caused problem 2026-01-16 11:00:56 +08:00
yyh
5aca563a01 fix: migrations 2026-01-16 10:26:53 +08:00
yyh
bf1ebcdf8f Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-16 10:05:12 +08:00
yyh
3252748345 feat(skill): add oRPC contract and hook for file download URL
Add frontend oRPC integration for the existing backend download URL
endpoint to enable file downloads from the asset tree.
2026-01-16 09:55:17 +08:00
c98870c3f4 refactor: always preserve marketplace search state in URL (#31069)
Co-authored-by: Stephen Zhou <38493346+hyoban@users.noreply.github.com>
2026-01-16 08:52:53 +09:00
72eb29c01b fix: fix duplicate agent context warnings in tool node 2026-01-16 00:42:42 +08:00
0f3156dfbe fix: list multiple @mentions 2026-01-16 00:19:28 +08:00
b21875eaaf fix: simplify @llm warning 2026-01-16 00:08:51 +08:00
2591615a3c Merge branch 'zhsama/agent-at-nodes' into feat/pull-a-variable 2026-01-15 23:51:35 +08:00
691554ad1c feat: display @agent references 2026-01-15 23:32:14 +08:00
f43fde5797 feat: Enhance context variable handling for Agent and LLM nodes 2026-01-15 23:26:19 +08:00
b06c7c8f33 ci: disable limit annotation (#31072) 2026-01-15 23:04:26 +08:00
1a2fce7055 ci: eslint annotation (#31056) 2026-01-15 21:49:46 +08:00
yyh
783cdb1357 feat(skill): add inline rename and guide lines to file tree
Add TreeEditInput component for inline file/folder renaming with keyboard
support (Enter to submit, Escape to cancel). Add TreeGuideLines component
to render vertical indent lines based on node depth for better visual
hierarchy in the tree view.

Reorganize file tree components into dedicated `file-tree` subdirectory
for better code organization.
2026-01-15 21:30:02 +08:00
yyh
2de17cb1a4 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-15 20:47:34 +08:00
yyh
3b6946d3da refactor(skill): centralize asset tree data fetching with custom hooks
Extract repeated appId retrieval and tree data fetching patterns into
dedicated hooks (useSkillAssetTreeData, useSkillAssetNodeMap) to reduce
code duplication across 6 components and leverage TanStack Query's
select option for efficient nodeMap computation.
2026-01-15 19:45:33 +08:00
yyh
b8adc8f498 fix(web): memoize skill sidebar menu offset 2026-01-15 19:45:32 +08:00
yyh
ca7c4d2c86 fix(skill): improve accessibility for file tree and tabs
- Convert div with onClick to proper button elements for keyboard access
- Add focus-visible ring styles to all interactive elements
- Add ARIA attributes (role, aria-selected, aria-expanded) to tree nodes
- Add keyboard navigation (Enter/Space) support to tree items
- Mark decorative icons with aria-hidden="true"
- Add missing i18n keys for accessibility labels
- Fix typography: use ellipsis character (…) instead of three dots
2026-01-15 19:45:32 +08:00
d8bafb0d1c refactor(app-asset): remove deprecated file download resource and streamline download URL handling with pre-signed storage 2026-01-15 19:28:15 +08:00
cd0724b827 refactor(app-asset-service): remove unused signed proxy URL generation and improve error handling for download URL 2026-01-15 19:28:15 +08:00
yyh
6e66e2591b feat(skill): disable file tree during mutations
- Add useIsMutating hook to track ongoing mutations
- Apply pointer-events-none and opacity-50 when mutating
- Prevents user interaction during file operations
2026-01-15 18:14:10 +08:00
yyh
fd0556909f fix(skill): default folders to collapsed state on load
- Add openByDefault={false} to Tree component
- react-arborist defaults openByDefault to true, causing all folders
  to be expanded on page refresh
2026-01-15 18:05:42 +08:00
yyh
ac2120da1e refactor(skill): separate DropTip from tree container
- Move DropTip component outside the tree flex container
- Use Fragment to group tree container, DropTip and context menu
- DropTip is now an independent fixed element at the bottom
2026-01-15 18:05:42 +08:00
yyh
f3904a7e39 fix(skill): use dynamic height for file tree to fix scroll issues
- Replace fixed height={1000} with dynamic containerSize.height
- Use useSize hook from ahooks to observe container dimensions
- Fallback to 400px default height for initial render
- Fixes scroll issues when collapsing folders
2026-01-15 18:05:42 +08:00
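A sketch of the dynamic-height fix above using ahooks' useSize, with the 400px fallback from the commit message and openByDefault={false} from the adjacent commit:

```tsx
import { useRef } from 'react'
import { useSize } from 'ahooks'
import { Tree } from 'react-arborist'

type AssetNode = { id: string; name: string; children?: AssetNode[] }

export function FileTreePanel({ data }: { data: AssetNode[] }) {
  const containerRef = useRef<HTMLDivElement>(null)
  // useSize re-renders the component when the observed element's size changes.
  const containerSize = useSize(containerRef)

  return (
    <div ref={containerRef} className='h-full'>
      <Tree<AssetNode>
        data={data}
        // Dynamic height keeps the virtualized list in sync with the panel,
        // fixing scroll glitches when folders collapse; 400px covers the first render.
        height={containerSize?.height ?? 400}
        openByDefault={false}
      >
        {({ node, style, dragHandle }) => (
          <div ref={dragHandle} style={style}>{node.data.name}</div>
        )}
      </Tree>
    </div>
  )
}
```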
yyh
b3923ec3ca fix: translations 2026-01-15 18:05:41 +08:00
9ffdad6465 fix: click tool inner caused blur 2026-01-15 17:58:38 +08:00
f247ebfbe1 feat: Await sub-graph save before syncing workflow draft 2026-01-15 17:53:28 +08:00
lif
2b021e8752 fix: remove hardcoded 48-character limit from text inputs (#30156)
Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2026-01-15 17:43:00 +08:00
yyh
713e040481 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-15 17:26:58 +08:00
yyh
f58f36fc8f feat(skill): add file right-click/more menu and refactor naming
- Add right-click context menu and '...' more button for files
  - Files now support Rename and Delete operations
  - Created file-node-menu.tsx for file-specific menu

- Refactor component naming for consistency
  - file-item-menu.tsx -> file-node-menu.tsx (unify 'node' terminology)
  - file-operations-menu.tsx -> folder-node-menu.tsx (clarify folder menu)
  - file-tree-context-menu.tsx -> tree-context-menu.tsx (simplify)
  - file-tree-node.tsx -> tree-node.tsx (simplify)
  - files.tsx -> file-tree.tsx (more descriptive)
  - Renamed internal components: FileTreeNode -> TreeNode, Files -> FileTree

- Add context menu node highlight
  - When right-clicking a node, it now shows hover highlight
  - Subscribed to contextMenu.nodeId in TreeNode component
2026-01-15 17:26:12 +08:00
195cd2c898 chore: show line numbers to skill editor 2026-01-15 17:21:12 +08:00
6bb09dc58c feat(app-assets): add file download functionality with pre-signed URLs and enhance asset management 2026-01-15 17:20:10 +08:00
33f3374ea6 refactor(sandbox): simplify sandbox_layer by removing ArchiveSandboxStorage and updating event handling 2026-01-15 17:20:10 +08:00
41baaca21d feat(sandbox): integrate ArchiveSandboxStorage into AdvancedChat and Workflow app generators 2026-01-15 17:20:10 +08:00
d650cde323 feat: skill editor choose tool 2026-01-15 17:16:01 +08:00
d641c845dd feat: Pass workflow draft sync callback to sub-graph 2026-01-15 17:12:30 +08:00
yyh
e651c6cacf fix: css 2026-01-15 16:45:40 +08:00
2e10d67610 perf: Replace topOffset prop with withHeader in Panel component 2026-01-15 16:44:15 +08:00
yyh
eab395f58a refactor: sync file tree open state 2026-01-15 16:39:22 +08:00
yyh
2f92957e15 fix: css 2026-01-15 16:14:51 +08:00
e89d4e14ea Merge branch 'main' into feat/pull-a-variable 2026-01-15 16:14:15 +08:00
5525f63032 refactor: sub-graph panel use shared Panel component 2026-01-15 16:12:39 +08:00
yyh
7bc1390366 feat(skill-editor): enhance + button with full operations and smart target folder
- Refactor sidebar-search-add to reuse useFileOperations hook
- Add getTargetFolderIdFromSelection utility for smart folder targeting
- Expand + button menu: New File, New Folder, Upload File, Upload Folder
- Target folder based on selection: file's parent, folder itself, or root
2026-01-15 16:10:01 +08:00
e91fb94d0e chore: placeholder 2026-01-15 16:08:26 +08:00
yyh
5c03a2e251 refactor(skill-editor): extract hooks and utils into separate directories
- Extract useFileOperations hook to hooks/use-file-operations.ts
- Move tree utilities to utils/tree-utils.ts
- Move file utilities to utils/file-utils.ts (renamed from utils.ts)
- Remove unnecessary JSDoc comments throughout components
- Simplify type.ts to only contain local type definitions
- Clean up store/index.ts by removing verbose comments
2026-01-15 16:00:42 +08:00
yyh
1741fcf84d feat(skill-editor): add rename and delete operations for folder context menu
- Add Rename using react-arborist native inline editing (node.edit())
- Add Delete with Confirm modal and automatic tab cleanup
- Add getAllDescendantFileIds utility for finding files to close on delete
- Add i18n strings for rename/delete operations (en-US, zh-Hans)
2026-01-15 16:00:41 +08:00
yyh
52215e9166 fix(prompt-editor): show border on hover for better scroll boundary visibility
Add hover state border to prompt editor so users can see the boundary
while scrolling even when the editor is not focused.
2026-01-15 16:00:41 +08:00
4cfc135652 feat: prompt editor support line num 2026-01-15 15:56:49 +08:00
8ee643e88d fix: fix variable inspect panel width in subgraphs 2026-01-15 15:55:55 +08:00
4a197b9458 fix: fix log updated_at is refreshed (#31045) 2026-01-15 15:42:46 +08:00
772ff636ec feat: credential sync fix for enterprise edition (#30626) 2026-01-14 23:33:24 -08:00
ab1c5a2027 refactor: remove manual set query logic (#31039) 2026-01-15 15:25:43 +08:00
33e99f069b fix: message clean service ut (#31038) 2026-01-15 15:13:25 +08:00
yyh
ff632bf9b8 feat(workflow): persist view tab state to URL search params
Use nuqs to sync graph/skill view selection to URL, enabling
shareable links and browser history navigation. Hoists
SkillEditorProvider to maintain state across view switches.
2026-01-15 15:09:36 +08:00
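The URL-synced view state described above, sketched with nuqs; the parameter name 'view' and the value set are assumptions:

```ts
import { parseAsStringLiteral, useQueryState } from 'nuqs'

const VIEW_TYPES = ['graph', 'skill'] as const

export function useWorkflowViewTab() {
  // Syncs ?view=graph|skill to the URL, so the selection survives refresh,
  // participates in browser history, and can be shared as a link.
  const [view, setView] = useQueryState(
    'view',
    parseAsStringLiteral(VIEW_TYPES).withDefault('graph'),
  )
  return { view, setView }
}
```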
yyh
ce9ed88b03 refactor(skill-editor): hoist SkillEditorProvider for state persistence
Move SkillEditorProvider from SkillMain to WorkflowAppWrapper so that
store state persists across view switches between Graph and Skill views.
Also add URL query state for view type using nuqs.
2026-01-15 15:09:12 +08:00
yyh
e6a4a08120 refactor(skill-editor): simplify code by extracting MenuItem component and removing dead code
- Extract reusable MenuItem component for menu buttons in FileOperationsMenu
- Remove unused handleUploadFileClick/handleUploadFolderClick callbacks
- Remove unused handleDropdownClose callback, inline directly
- Remove unused _fileId parameter from revealFile function
- Simplify toOpensObject using Object.fromEntries
2026-01-15 15:05:43 +08:00
yyh
388ee087c0 feat(skill-editor): add folder context menu with file operations
Add right-click context menu and "..." dropdown button for folders in
the file tree, enabling file operations within any folder:

- New File: Create empty file via Blob upload
- New Folder: Create subfolder
- Upload File: Upload multiple files to folder
- Upload Folder: Upload entire folder structure preserving hierarchy

Implementation includes:
- FileOperationsMenu: Shared menu component for both triggers
- FileTreeContextMenu: Right-click menu with absolute positioning
- FileTreeNode: Added context menu and dropdown button for folders
- Store slice for context menu state management
- i18n strings for en-US and zh-Hans
2026-01-15 14:56:31 +08:00
2fb8883918 feat: split different filetypes 2026-01-15 14:53:00 +08:00
yyh
28ccd42a1c refactor(skill-editor): simplify SkillEditorProvider
Remove verbose comments and appId reset logic since parent component
remounts on appId change. Consolidate imports and use function declaration.
2026-01-15 14:10:41 +08:00
52af829f1f refactor: enhance clean messages task (#29638)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: 非法操作 <hjlarry@163.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-15 14:03:17 +08:00
yyh
fcd814a2c3 refactor(skill-editor): simplify state management and remove dead code
- Replace useRef pattern with useMemo for store creation in context.tsx
- Remove unused extension prop from EditorTabItem
- Fix useMemo dependency warnings in editor-tabs.tsx and skill-doc-editor.tsx
- Add proper OnMount type for Monaco editor instead of any
- Delete unused file-item.tsx and fold-item.tsx components
- Remove unused getExtension and fromOpensObject utilities from type.ts
- Refactor auto-reveal effect in files.tsx for better readability
2026-01-15 14:02:15 +08:00
yyh
fe17cbc1a8 feat(skill-editor): implement file tree, tab management, and dirty state tracking
Implement MVP features for skill editor based on design doc:
- Add Zustand store with Tab, FileTree, and Dirty slices
- Rewrite file tree using react-arborist for virtual scrolling
- Implement Tab↔FileTree sync with auto-reveal on tab activation
- Add upload functionality (new folder, upload file)
- Implement Monaco editor with dirty state tracking and Ctrl+S save
- Add i18n translations (en-US and zh-Hans)
2026-01-15 13:53:19 +08:00
63b3e71909 refactor(sandbox): redesign sandbox_layer & reorganize import paths 2026-01-15 13:22:49 +08:00
c1c8b6af44 chore: remove duplicate secret field in CliApiSession 2026-01-15 12:10:53 +08:00
0ef8b5a0ca chore: bump version to 1.11.4 (#30961) 2026-01-15 11:36:15 +08:00
3bd434ddf2 chore: ui enhancement 2026-01-15 11:35:48 +08:00
834a5df580 fix: switch zindex 2026-01-15 11:31:08 +08:00
e40c2354d5 chore: remove useless props 2026-01-15 11:24:59 +08:00
b0eca12d88 feat: tabs 2026-01-15 11:22:43 +08:00
2bfc54314e feat: single run add opentelemetry (#31020) 2026-01-15 11:10:55 +08:00
yyh
3a86983207 refactor(web): nest sandbox provider contracts 2026-01-15 11:04:43 +08:00
f461ddeb7e missing files 2026-01-15 11:04:15 +08:00
7b534baf15 chore: file type utils 2026-01-15 11:02:07 +08:00
74d8bdd3a7 chore: search ui 2026-01-15 11:02:07 +08:00
yyh
657739d48b Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/models/model.py
#	web/contract/router.ts
2026-01-15 10:59:45 +08:00
bdd8d5b470 test: add unit tests for PluginPage and related components (#30908)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
2026-01-15 10:56:02 +08:00
yyh
f8b27dd662 fix(web): accept 2xx status codes in upload function for HTTP semantics
The upload helper was hardcoded to only accept HTTP 201, which broke
PUT requests that return 200. This aligns with standard HTTP semantics
where POST returns 201 Created and PUT returns 200 OK.
2026-01-15 10:54:42 +08:00
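The status-code fix described above, in essence; a simplified XHR-based upload helper, not the actual Dify helper:

```ts
export function upload(url: string, method: 'POST' | 'PUT', body: FormData): Promise<unknown> {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest()
    xhr.open(method, url)
    xhr.onload = () => {
      // Before: only 201 was accepted, which broke PUT responses returning 200.
      // Any 2xx now counts as success, matching standard HTTP semantics.
      if (xhr.status >= 200 && xhr.status < 300)
        resolve(xhr.response)
      else
        reject(new Error(`Upload failed with status ${xhr.status}`))
    }
    xhr.onerror = () => reject(new Error('Upload failed due to a network error'))
    xhr.send(body)
  })
}
```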
4955de5905 fix: validation error when uploading images with None URL values (#31012)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2026-01-15 10:54:10 +08:00
yyh
3bee2ee067 refactor(contract): restructure console contracts with nested billing module (#30999) 2026-01-15 10:41:18 +08:00
328897f81c build: require node 24.13.0 (#30945) 2026-01-15 10:38:55 +08:00
ab078380a3 feat(web): refactor documents component structure and enhance functionality (#30854)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
2026-01-15 10:33:58 +08:00
a33ac77a22 feat: implement document creation pipeline with multi-step wizard and datasource management (#30843)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
2026-01-15 10:33:48 +08:00
d3923e7b56 refactor: port AppAnnotationHitHistory (#30922)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-15 10:14:55 +08:00
2f633de45e refactor: port TenantCreditPool (#30926)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-15 10:14:15 +08:00
98c88cec34 refactor: delete_endpoint should be idempotent (#30954) 2026-01-15 10:10:10 +08:00
c6999fb5be fix: fix plugin edit endpoint app disappear (#30951) 2026-01-15 10:09:57 +08:00
f7f9a08fa5 refactor: port TidbAuthBinding (#31006)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-01-15 10:07:02 +08:00
5008f5e89b fix: Use raw SQL UPDATE to set read status without triggering updated… (#31015) 2026-01-15 09:51:44 +08:00
yyh
18c7f4698a feat(web): add oRPC contracts and service hooks for app asset API
- Add TypeScript types for app asset management (types/app-asset.ts)
- Add oRPC contract definitions with nested router pattern (contract/console/app-asset.ts)
- Add React Query hooks for all asset operations (service/use-app-asset.ts)
- Integrate app asset contracts into console router

Endpoints covered: tree, createFolder, createFile, getFileContent,
updateFileContent, deleteNode, renameNode, moveNode, reorderNode, publish
2026-01-15 09:50:05 +08:00
ccb337e8eb fix: Sync extractor prompt template with tool input text 2026-01-15 04:09:35 +08:00
1ff677c300 refactor: Remove unused sub-graph persistence and initialization hooks.
Simplified sub-graph store by removing unused state fields and setters.
2026-01-15 04:08:42 +08:00
04145b19a1 refactor: refactor prompt template processing logic 2026-01-15 01:14:46 +08:00
6cb8d03bf6 feat(sandbox): enhance SandboxLayer with app_id handling and storage integration
- Introduce _app_id attribute to store application ID from system variables
- Add _get_app_id method to retrieve and validate app_id
- Update on_graph_start to log app_id during sandbox initialization
- Integrate ArchiveSandboxStorage for persisting and restoring sandbox files
- Ensure proper error handling for sandbox file operations
2026-01-15 00:28:41 +08:00
94ff904a04 feat(sandbox): add AppAssetsInitializer and refactor VMFactory to VMBuilder
- Add AppAssetsInitializer to load published app assets into sandbox
- Refactor VMFactory.create() to VMBuilder with builder pattern
- Extract SandboxInitializer base class and DifyCliInitializer
- Simplify SandboxLayer constructor (remove options/environments params)
- Fix circular import in sandbox module by removing eager SandboxBashTool export
- Update SandboxProviderService to return VMBuilder instead of VirtualEnvironment
2026-01-15 00:13:52 +08:00
a0c388f283 refactor(sandbox): extract connection helpers and move run_command to helper module
- Add helpers.py with connection management utilities:
    - with_connection: context manager for connection lifecycle
    - submit_command: execute command and return CommandFuture
    - execute: run command with auto connection, raise on failure
    - try_execute: run command with auto connection, return result

  - Add CommandExecutionError to exec.py for typed error handling
    with access to exit_code, stderr, and full result

  - Remove run_command method from VirtualEnvironment base class
    (now available as submit_command helper)

  - Update all call sites to use new helper functions:
    - sandbox/session.py
    - sandbox/storage/archive_storage.py
    - sandbox/bash/bash_tool.py
    - workflow/nodes/command/node.py

  - Add comprehensive unit tests for helpers with connection reuse
2026-01-15 00:13:52 +08:00
56e537786f feat: Update LLM context selector styling 2026-01-14 23:30:12 +08:00
810f9eaaad feat: Enhance sub-graph components with context handling and variable management 2026-01-14 23:23:09 +08:00
1dd89a02ea fix: fix missing id and message_id (#31008) 2026-01-14 23:26:17 +09:00
yyh
31427e9c42 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 21:15:23 +08:00
yyh
384b99435b Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/.env.example
#	api/uv.lock
2026-01-14 21:14:36 +08:00
5bf4114d6f fix: increase name length limit in ExternalDatasetCreatePayload (#31000)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Asuka Minato <i@asukaminato.eu.org>
2026-01-14 22:13:53 +09:00
yyh
a56e94ba8e feat: add .agent/skills symlink and orpc-contract-first skill (#30968) 2026-01-14 21:13:14 +08:00
425d182f21 refactor: move app_asset_tree module and update imports in app_asset and app_asset_service 2026-01-14 20:31:40 +08:00
4394ba1fe1 feat(skill): implement app asset management features including folder and file operations, error handling, and database migration for app asset drafts 2026-01-14 20:25:17 +08:00
11f1782df0 fix: correct API Extension documentation link (#30962) 2026-01-14 21:21:15 +09:00
8cf5d9a6a1 fix: fix Cannot destructure property 'name' of 'value' as it is undef… (#30991) 2026-01-14 19:30:47 +08:00
0ec2b12e65 feat: allow pass hostname in docker env (#30975) 2026-01-14 19:30:37 +08:00
4828348532 feat: Add structured output to sub-graph LLM nodes 2026-01-14 17:25:06 +08:00
f33b1a3332 fix: redirect after login (#30985) 2026-01-14 17:20:49 +08:00
be5a4cf5e3 temp fix: tab change caused the nodes to be emptied 2026-01-14 17:20:40 +08:00
08026f7399 fix(deps): security updates for pdfminer.six, authlib, werkzeug, aiohttp and others (#30976)
Signed-off-by: kenwoodjw <blackxin55+@gmail.com>
2026-01-14 17:03:46 +08:00
yyh
18e051bd66 chore(web): remove unused demo service component (#30979) 2026-01-14 17:03:35 +08:00
yyh
d17a92f713 refactor(web): split sandbox provider contracts into separate file
Move sandbox provider related contracts from contract/console.ts
to contract/console/sandbox-provider.ts for better organization
2026-01-14 16:46:04 +08:00
5ac2230c5d feat: sandbox storage 2026-01-14 16:31:24 +08:00
ab531d946e feat: add main skill struct 2026-01-14 16:28:14 +08:00
1a8fd08563 chore: add list define and mock data 2026-01-14 16:28:14 +08:00
yyh
c6ddf89980 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 16:24:47 +08:00
yyh
42f991dbef chore(web): disable Serwist dev logs (#30980) 2026-01-14 16:23:58 +08:00
yyh
71c39ae583 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 16:23:57 +08:00
yyh
b1b2c9636f fix(web): preserve HTTP method in ORPC fetchCompat mode (#30971)
Co-authored-by: Stephen Zhou <38493346+hyoban@users.noreply.github.com>
2026-01-14 16:18:12 +08:00
yyh
7209ef4aa7 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 16:16:28 +08:00
6b55e6781f feat: graph skill main struct 2026-01-14 15:41:02 +08:00
c8c048c3a3 perf: Optimize sub-graph store selectors and layout 2026-01-14 15:39:21 +08:00
01f17b7ddc refactor(http_request_node): apply DI for http request node (#30509)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-14 14:19:48 +08:00
yyh
4887c9ea6f refactor(web): simplify MCP tool availability context and hook
- Add useMemo to prevent unnecessary re-renders of context value
- Extract ProviderProps type for better readability
- Convert arrow functions to standard function declarations
- Remove unused versionSupported/sandboxEnabled from hook return type
2026-01-14 14:15:07 +08:00
495d575ebc feat: add assemble variable builder api 2026-01-14 14:12:36 +08:00
yyh
18170a1de5 feat(web): add sandbox mode check for MCP tool availability
Extend MCP tool availability context to include sandbox mode check
alongside version support. MCP tools are now blocked when sandbox
is disabled, with appropriate tooltip messages for each blocking
condition.
2026-01-14 14:01:56 +08:00
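The commit above describes gating MCP tools on both version support and sandbox state, with tooltip copy per blocking reason. A minimal TypeScript sketch of what such a combined availability context might look like follows; the hook names, context shape, and blocking-reason values are illustrative assumptions, not the actual implementation.

```typescript
// Hypothetical sketch: fold version support and sandbox state into one
// availability value that downstream tool pickers read from context.
import { createContext, useContext, useMemo } from 'react'

type McpToolAvailability = {
  available: boolean
  // Drives the tooltip copy when MCP tools are blocked (names assumed).
  blockedReason?: 'versionUnsupported' | 'sandboxDisabled'
}

export const McpToolAvailabilityContext = createContext<McpToolAvailability>({ available: true })

export function useMcpToolAvailabilityValue(versionSupported: boolean, sandboxEnabled: boolean) {
  // Memoized so consumers don't re-render when unrelated parent state changes.
  return useMemo<McpToolAvailability>(() => {
    if (!versionSupported)
      return { available: false, blockedReason: 'versionUnsupported' }
    if (!sandboxEnabled)
      return { available: false, blockedReason: 'sandboxDisabled' }
    return { available: true }
  }, [versionSupported, sandboxEnabled])
}

export const useMcpToolAvailability = () => useContext(McpToolAvailabilityContext)
```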
yyh
7ce144f493 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-14 13:40:39 +08:00
yyh
14b2e5bd0d refactor(web): MCP tool availability to context-based version gating (#30955) 2026-01-14 13:40:16 +08:00
d095bd413b fix: fix LOOP_CHILDREN_Z_INDEX (#30719) 2026-01-14 10:22:31 +08:00
3473ff7ad1 fix: use Factory to create repository in Aliyun Trace (#30899) 2026-01-14 10:21:46 +08:00
138c56bd6e fix(logstore): prevent SQL injection, fix serialization issues, and optimize initialization (#30697) 2026-01-14 10:21:26 +08:00
c327d0bb44 fix: Correction to the full name of Volc TOS (#30741) 2026-01-14 10:11:30 +08:00
e4b97fba29 chore(deps): bump azure-core from 1.36.0 to 1.38.0 in /api (#30941)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-14 10:10:49 +08:00
yyh
2279b605c6 refactor: import SandboxProvider type from @/types and remove retry:0
Move type imports to @/types/sandbox-provider instead of re-exporting
from service file. Remove unnecessary retry:0 options to use React
Query's default retry behavior.
2026-01-14 10:10:04 +08:00
7f9884e7a1 feat: Add option to delete or keep API keys when uninstalling plugin (#28201)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
2026-01-14 10:09:30 +08:00
yyh
3b78f9c2a5 refactor: migrate sandbox-provider API to ORPC
Replace manual fetch calls in use-sandbox-provider.ts with typed ORPC
contracts and client. Adds type definitions to types/sandbox-provider.ts
and registers contracts in the console router for consistent API handling.
2026-01-14 10:07:27 +08:00
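For readers unfamiliar with the pattern, here is a minimal sketch of what a hook could look like after this migration, assuming the `consoleQuery`/`consoleClient` helpers described in the orpc-contract-first notes later in this diff; the contract group name and import path are assumptions.

```typescript
// Illustrative sketch only — not the actual use-sandbox-provider.ts.
import { useQuery } from '@tanstack/react-query'
// Assumed import path; the project exposes a typed ORPC client and query helpers.
import { consoleClient, consoleQuery } from '@/service/client'

export const useSandboxProviderList = () => {
  return useQuery({
    // Query key derived from the contract keeps cache keys consistent.
    queryKey: consoleQuery.sandboxProvider.list.queryKey(),
    // The call is typed end-to-end by the contract's input/output schemas.
    queryFn: () => consoleClient.sandboxProvider.list({ params: {} }),
  })
}
```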
e389cd1665 chore(deps): bump filelock from 3.20.0 to 3.20.3 in /api (#30939)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-14 09:56:02 +08:00
yyh
7c029ce808 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox
# Conflicts:
#	api/services/workflow_service.py
2026-01-14 09:54:07 +08:00
87f348a0de feat: change param to pydantic model (#30870) 2026-01-14 09:46:41 +08:00
b9052bc244 feat: add sub-graph config panel with variable selection and null
handling
2026-01-14 03:22:42 +08:00
b7025ad9d6 feat: change sub-graph prompt handling to use user role 2026-01-13 23:23:18 +08:00
c5482c2503 Merge branch 'main' into feat/pull-a-variable 2026-01-13 22:57:27 +08:00
d394adfaf7 feat: Fix prompt template handling for Jinja2 edition type 2026-01-13 22:57:05 +08:00
bc771d9c50 feat: Add onSave prop to SubGraph components for draft sync 2026-01-13 22:51:29 +08:00
206706987d refactor(variables): clarify base vs union type naming (#30634)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-13 23:39:34 +09:00
91da784f84 refactor: init orpc contract (#30885)
Co-authored-by: yyh <yuanyouhuilyz@gmail.com>
2026-01-13 23:38:28 +09:00
a129e684cc feat: inject traceparent in enterprise api (#30895)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-13 23:37:39 +09:00
96ec176b83 feat: sub-graph to use dynamic node generation 2026-01-13 22:28:30 +08:00
fe07c810ba fix: fix instance is not bind to session (#30913) 2026-01-13 21:15:21 +08:00
f57d2ef31f refactor: refactor workflow nodes state sync and extractor node
lifecycle
2026-01-13 18:37:23 +08:00
f28ded8455 feat(agent-sandbox): new tool resolver and bash execution implementation 2026-01-13 18:16:48 +08:00
e80bc78780 fix: clear mock llm node functions 2026-01-13 17:57:02 +08:00
a22cc5bc5e chore: Bump Dify version to 1.11.3 (#30903) 2026-01-13 17:49:13 +08:00
yyh
c6ba51127f fix(sandbox-provider): allow admin role to manage sandbox providers
Change permission check from isCurrentWorkspaceOwner to
isCurrentWorkspaceManager so both owner and admin roles can
configure sandbox providers.
2026-01-13 17:17:36 +08:00
ddbbddbd14 refactor: Update variable syntax to support agent context markers
Extend variable pattern matching to support both `#` and `@` markers,
with `@` specifically used for agent context variables. Update regex
patterns, text processing logic, and add sub-graph persistence for agent
variable handling.
2026-01-13 17:13:45 +08:00
yyh
1fbdf6b465 refactor(web): setup status caching (#30798) 2026-01-13 16:59:49 +08:00
9b961fb41e feat: structured output support file type 2026-01-13 16:48:01 +08:00
1db995be0d Merge branch 'main' into feat/llm-support-tools 2026-01-13 16:46:03 +08:00
yyh
5675a44ffd fix(sandbox-provider): use Loading component and add daytona doc link
- Replace hardcoded "Loading..." text with Loading component
- Add daytona documentation link to PROVIDER_DOC_LINKS
2026-01-13 16:37:58 +08:00
yyh
48295e5161 refactor(sandbox-provider): extract shared constants and remove redundant cache invalidation
- Extract PROVIDER_ICONS and PROVIDER_DESCRIPTION_KEYS to constants.ts
- Create shared ProviderIcon component with size and withBorder props
- Remove manual invalidateList() calls from config-modal and switch-modal
  (mutations already invalidate cache in onSuccess)
- Remove unused useInvalidSandboxProviderList hook
2026-01-13 16:18:08 +08:00
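A rough sketch of the shared icon component extracted here; the `size` and `withBorder` props come from the commit message, while the icon mapping, class names, and file layout are assumptions.

```tsx
// Illustrative sketch only; not the real component.
import { PROVIDER_ICONS } from './constants' // assumed: provider name -> icon URL

type ProviderIconProps = {
  provider: keyof typeof PROVIDER_ICONS
  size?: number
  withBorder?: boolean
}

export const ProviderIcon = ({ provider, size = 24, withBorder = false }: ProviderIconProps) => (
  <img
    src={PROVIDER_ICONS[provider]}
    alt={String(provider)}
    width={size}
    height={size}
    className={withBorder ? 'rounded-md border' : 'rounded-md'}
  />
)
```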
4f79d09d7b chore: change the DSL design 2026-01-13 16:10:18 +08:00
491e1fd6a4 chore: case insensitive email (#29978)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: -LAN- <laipz8200@outlook.com>
2026-01-13 15:42:44 +08:00
0e33dfb5c2 fix: In the LLM model in dify, when a message is added, the first cli… (#29540)
Co-authored-by: 青枕 <qingzhen.ww@alibaba-inc.com>
2026-01-13 15:42:32 +08:00
lif
ea708e7a32 fix(web): add null check for SSE stream bufferObj to prevent TypeError (#30131)
Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-13 15:40:43 +08:00
c09e29c3f8 chore: rename the migration file (#30893) 2026-01-13 15:26:41 +08:00
2d53ba8671 fix: fix object value is optional should skip validate (#30894) 2026-01-13 15:21:06 +08:00
dbed937fc6 Merge remote-tracking branch 'origin/feat/pull-a-variable' into feat/pull-a-variable 2026-01-13 15:17:24 +08:00
yyh
ffc39b0235 refactor: rename ACCOUNT_SETTING_TAB.PROVIDER to MODEL_PROVIDER
Rename the constant for clarity and consistency with the new
sandbox-provider tab naming convention. Update all references
across the codebase to use the new constant name.
2026-01-13 15:07:04 +08:00
yyh
f72f58dbc4 fix: loading state 2026-01-13 14:38:19 +08:00
yyh
9d0f4a2152 fix(sandbox-provider): prevent permission hint flash on page load
Use strict equality check to only show no-permission message when
isCurrentWorkspaceOwner is explicitly false, not undefined.
2026-01-13 14:23:52 +08:00
yyh
1ed4ab4299 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-13 14:19:04 +08:00
969c96b070 feat: add stream response 2026-01-13 14:13:43 +08:00
yyh
3f69d348a1 chore: add translations 2026-01-13 14:05:41 +08:00
yyh
63fff151c7 fix: provider card style 2026-01-13 13:50:28 +08:00
yyh
9920e0b89a fix(sandbox-provider): hide config controls in read-only mode
Hide config button, divider, and enable button for non-owner users.
Adjust right padding to 24px in read-only mode for proper alignment.
2026-01-13 13:32:18 +08:00
yyh
3042f29c15 fix(sandbox-provider): update switch modal warning style to match design
Replace yellow warning box with red text for destructive emphasis.
Bold the provider name in confirmation text using Trans component.
2026-01-13 13:23:03 +08:00
yyh
99273e1118 style: provider card 2026-01-13 13:18:09 +08:00
9be863fefa fix: missing content if assistant message with tool_calls (#30083)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-13 12:46:33 +08:00
8f43629cd8 fix(amplitude): update sessionReplaySampleRate default value to 0.5 (#30880)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
2026-01-13 12:26:50 +08:00
9ee71902c1 fix: fix formatNumber accuracy (#30877) 2026-01-13 11:51:15 +08:00
yyh
041dbd482d fix(sandbox-provider): use i18n for provider card descriptions
Use PROVIDER_DESCRIPTION_KEYS mapping to display localized descriptions
instead of raw backend data, ensuring descriptions match Figma design.
2026-01-13 11:43:49 +08:00
yyh
b4aa1de10a fix(sandbox-provider): update provider descriptions to match Figma design
Update E2B, Daytona, and Docker descriptions with unique copy from design:
- E2B: "E2B Gives AI Agents Secure Computers with Real-World Tools."
- Daytona: "Deploy AI code with confidence using Daytona's lightning-fast infrastructure."
- Docker: "The Easiest Way to Build, Run, and Secure Agents."
2026-01-13 11:41:20 +08:00
yyh
c5a9b98cbe refactor(sandbox-provider): add centralized query keys management
Add sandboxProviderQueryKeys object for type-safe and maintainable
query key management, following the pattern used in use-common.ts.
2026-01-13 11:39:01 +08:00
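A minimal sketch of the kind of centralized query-key object the commit above describes; the key structure shown here is assumed, not copied from the code.

```typescript
// Illustrative sketch: one namespace object instead of ad-hoc string arrays.
export const sandboxProviderQueryKeys = {
  all: ['sandbox-provider'] as const,
  list: () => [...sandboxProviderQueryKeys.all, 'list'] as const,
  detail: (providerName: string) =>
    [...sandboxProviderQueryKeys.all, 'detail', providerName] as const,
}

// A mutation's onSuccess can then invalidate the whole namespace:
// queryClient.invalidateQueries({ queryKey: sandboxProviderQueryKeys.all })
```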
yyh
21f47fbe58 fix(sandbox-provider): fix config modal header spacing and icon style
- Use custom header with 8px gap between title and subtitle
- Fix icon overflow-clip for proper border-radius
2026-01-13 11:12:51 +08:00
yyh
49f115dce3 fix(sandbox-provider): fix config modal subtitle icon to fill container 2026-01-13 11:11:03 +08:00
yyh
a81d0327d2 feat(sandbox-provider): update UI to match Figma design
- Update settings icon to RiEqualizer2Line
- Add 4px rounded container for provider icons in config modal
- Update section titles to uppercase style
- Change switch modal confirm button to warning variant
- Add i18n keys for setAsActive, readDocLink, securityTip
2026-01-13 11:04:11 +08:00
yyh
9eafe982ee fix: migration 2026-01-13 10:21:38 +08:00
yyh
a46bfdd0fc Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-13 10:15:59 +08:00
a012c87445 fix: entrypoint.sh overrides NEXT_PUBLIC_TEXT_GENERATION_TIMEOUT_MS when TEXT_GENERATION_TIMEOUT_MS is unset (#30864) (#30865) 2026-01-13 10:12:51 +08:00
450578d4c0 feat(ops): set root span kind for AliyunTrace to enable service-level metrics aggregation (#30728)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-13 10:12:00 +08:00
837237aa6d fix: use node factory for single-step workflow nodes (#30859) 2026-01-13 10:11:18 +08:00
16f26c4f99 feat(cli_api): implement CLI API for external sandbox interactions, including session management and request handling 2026-01-12 20:57:07 +08:00
03e0c4c617 feat: Add VarKindType parameter mention to mixed variable text input 2026-01-12 20:08:41 +08:00
47790b49d4 fix: Fix agent context variable insertion to preserve existing text 2026-01-12 18:12:06 +08:00
b25b069917 fix: refine agent variable logic 2026-01-12 18:12:06 +08:00
bb190f9610 feat: add mention type variable 2026-01-12 17:40:37 +08:00
d65ae68668 Merge branch 'main' into feat/pull-a-variable
# Conflicts:
#	.nvmrc
2026-01-12 17:15:56 +08:00
f625350439 refactor: Refactor agent variable handling in mixed variable text input 2026-01-12 17:05:00 +08:00
f4e8f64bf7 refactor: Change sub-graph output handling from skip to default 2026-01-12 17:04:13 +08:00
42fd0a0a62 refactor(sandbox): simplify command execution by using shlex for command parsing and improve output formatting 2026-01-12 16:35:09 +08:00
b63dfbf654 fix(api): defer streaming response until referenced variables are updated (#30832)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-12 16:23:18 +08:00
51ea87ab85 feat: clear free plan workflow run logs (#29494)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2026-01-12 15:57:40 +08:00
b78439b334 refactor(llm): update model features handling and change agent strategy to FUNCTION_CALLING 2026-01-12 15:52:26 +08:00
00698e41b7 build: limit esbuild, glob, docker base version to avoid cve (#30848) 2026-01-12 15:33:20 +08:00
df938a4543 ci: add HITL test env deployment action (#30846) 2026-01-12 15:07:53 +08:00
1082d73355 refactor(sandbox): remove unused SANDBOX_WORK_DIR constant and update bash command descriptions for clarity 2026-01-12 15:02:30 +08:00
d91087492d Refactor sub-graph components structure 2026-01-12 15:00:41 +08:00
cab7cd37b8 feat: Add sub-graph component for workflow 2026-01-12 14:56:53 +08:00
201a18d6ba refactor(virtual_environment): add cwd parameter to execute_command method across all providers for improved command execution context 2026-01-12 14:20:03 +08:00
f990f4a8d4 refactor(sandbox): update DIFY_CLI_PATH and DIFY_CLI_CONFIG_PATH to use SANDBOX_WORK_DIR and enhance error handling in SandboxSession 2026-01-12 14:07:54 +08:00
aa5e37f2db Merge branch 'main' into feat/llm-support-tools 2026-01-12 13:42:58 +08:00
e7c89b6153 refactor(sandbox): update imports and remove unused bash tool files, adjust DIFY_CLI_CONFIG_PATH 2026-01-12 13:36:19 +08:00
yyh
9161936f41 refactor(web): extract isServer/isClient utility & upgrade Node.js to 22.12.0 (#30803)
Co-authored-by: Stephen Zhou <38493346+hyoban@users.noreply.github.com>
2026-01-12 12:57:43 +08:00
f9a21b56ab feat: add block-no-verify hook for Claude Code (#30839) 2026-01-12 12:56:05 +08:00
220e1df847 docs(web): add corepack recommendation (#30837)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-12 12:44:30 +08:00
8cfdde594c chore(deps-dev): bump tos from 2.7.2 to 2.9.0 in /api (#30834)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 12:44:21 +08:00
31a8fd810c chore(deps-dev): bump @storybook/react from 9.1.13 to 9.1.17 in /web (#30833)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 12:44:11 +08:00
3e49d6b900 refactor: using initializer to replace hardcoded dify cli initialization 2026-01-12 12:13:56 +08:00
8aaff7fec1 refactor(sandbox): move VMFactory and related classes, update imports to reflect new structure 2026-01-12 12:01:21 +08:00
9fad97ec9b fix: drop useless pyrefly in ci (#30826)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2026-01-12 09:45:49 +08:00
0c2729d9b3 fix: fix refresh token deadlock (#30828) 2026-01-12 09:35:31 +08:00
51ac23c9f1 refactor(sandbox): reorganize sandbox-related imports and rename SandboxFactory to VMFactory for clarity 2026-01-12 02:07:31 +08:00
9dd0361d0e refactor: rename new runtime as sandbox feature 2026-01-12 01:53:39 +08:00
3d2840edb6 feat: sandbox session and dify cli 2026-01-12 01:49:08 +08:00
ce0a59b60d feat: add os field to virtual environment 2026-01-12 01:26:55 +08:00
2d8acf92f0 refactor(sandbox): remove Chinese translation for bash command execution description in SandboxBashTool 2026-01-12 01:16:53 +08:00
bc2ffa39fc refactor(sandbox): remove unused bash tool methods and streamline sandbox session handling in LLMNode 2026-01-12 00:09:40 +08:00
390c805ef4 feat(sandbox): implement sandbox runtime checks and integrate bash tool invocation in LLMNode 2026-01-11 22:56:05 +08:00
a2e03b811e fix: Broken import in .storybook/preview.tsx (#30812) 2026-01-10 19:49:23 +08:00
1e10bf525c refactor(models): Refine MessageAgentThought SQLAlchemy typing (#27749)
Co-authored-by: Asuka Minato <i@asukaminato.eu.org>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-10 17:17:45 +09:00
8b1af36d94 feat(web): migrate PWA to Serwist (#30808) 2026-01-10 17:16:18 +09:00
5b753dfd6e fix(sandbox): update FIXME comments to specify sandbox context for runtime config checks 2026-01-09 18:12:36 +08:00
5c8b80b01a feat(app): update default runtime mode and adjust runtime selection component styling 2026-01-09 18:12:36 +08:00
95d62039b1 feat(ui): change runtime selection component 2026-01-09 18:12:36 +08:00
78acfb0040 feat(sandbox): add command to setup system-level sandbox provider configuration 2026-01-09 18:12:35 +08:00
eb821efda7 refactor(encryption): update encryption utility references and clean up sandbox provider service logic 2026-01-09 18:12:35 +08:00
925825a41b refactor(encryption): using oauth encryption as a general encryption util. 2026-01-09 18:12:34 +08:00
f925266c1b Merge branch 'main' into feat/pull-a-variable 2026-01-09 16:20:55 +08:00
07ff8df58d Merge branch 'main' into feat/support-agent-sandbox 2026-01-09 16:20:33 +08:00
0711dd4159 feat: enhance start node object value check (#30732) 2026-01-09 16:13:17 +08:00
ae0a26f5b6 revert: "fix: fix assign value stand as default (#30651)" (#30717)
The original fix seems correct on its own. However, for chatflows with multiple answer nodes, the `message_replace` command only preserves the output of the last executed answer node.
2026-01-09 16:08:24 +08:00
0a0f02c0c6 chore(migrations): re-arrange migration of "add llm generation details table" 2026-01-09 15:55:25 +08:00
d2f41ae9ef Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2026-01-09 15:37:29 +08:00
5a4f5f54a7 chore: apply ruff 2026-01-09 14:47:21 +08:00
eabfa8f3af fix(migrations): update down_revision for sandbox_providers migration 2026-01-09 14:45:56 +08:00
d4432ed80f refactor: marketplace state management (#30702)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-09 14:31:24 +08:00
1557f48740 Merge branch 'feat/agent-node-v2' into feat/support-agent-sandbox 2026-01-09 14:19:27 +08:00
lif
9d9f027246 fix(web): invalidate app list cache after deleting app from detail page (#30751)
Signed-off-by: majiayu000 <1835304752@qq.com>
2026-01-09 14:08:37 +08:00
77f097ce76 fix: fix enhance app mode check (#30758) 2026-01-09 14:07:40 +08:00
7843afc91c feat(workflows): add agent-dev deploy workflow (#30774) 2026-01-09 13:55:49 +08:00
00d787a75b feat(workflows): add deployment workflow for agent development
- Created a new GitHub Actions workflow to automate deployment for the agent development branch.
- Configured the workflow to trigger upon successful completion of the "Build and Push API & Web" workflow.
- Implemented SSH deployment steps using appleboy/ssh-action for secure server updates.
2026-01-09 13:11:37 +08:00
3b454fa95a refactor(sandbox-manager): implement sharded locking for sandbox management
- Enhanced the SandboxManager to use a sharded locking mechanism for improved concurrency and performance.
- Replaced the global lock with shard-specific locks, allowing for lock-free reads and reducing contention.
- Updated methods for registering, retrieving, unregistering, and counting sandboxes to work with the new sharded structure.
- Improved documentation within the class to clarify the purpose and functionality of the sharding approach.
2026-01-09 12:13:41 +08:00
0da4d64d38 feat(sandbox-layer): refactor sandbox management and integrate with SandboxManager
- Simplified the SandboxLayer initialization by removing unused parameters and consolidating sandbox creation logic.
- Integrated SandboxManager for better lifecycle management of sandboxes during workflow execution.
- Updated error handling to ensure proper initialization and cleanup of sandboxes.
- Enhanced CommandNode to retrieve sandboxes from SandboxManager, improving sandbox availability checks.
- Added unit tests to validate the new sandbox management approach and ensure robust error handling.
2026-01-09 11:23:03 +08:00
98df99b0ca feat(embedding-process): implement embedding process components and polling logic (#30622)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
2026-01-09 10:21:27 +08:00
9848823dcd feat: implement step two of dataset creation with comprehensive UI components and hooks (#30681)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
2026-01-09 10:21:18 +08:00
6e2cf23a73 Merge branch 'main' into feat/pull-a-variable 2026-01-09 02:49:47 +08:00
5ad2385799 chore(i18n): sync translations with en-US (#30750)
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
2026-01-08 22:53:04 +08:00
yyh
7774a1312e fix(ci): use repository_dispatch for i18n sync workflow (#30744) 2026-01-08 21:28:49 +08:00
8b0bc6937d feat: enhance component picker and workflow variable block functionality 2026-01-08 18:17:09 +08:00
872fd98eda Merge remote-tracking branch 'origin/feat/pull-a-variable' into feat/pull-a-variable 2026-01-08 18:16:29 +08:00
91d44719f4 fix(web): resolve chat message loading race conditions and infinite loops (#30695)
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2026-01-08 18:05:32 +08:00
5bcd3b6fe6 feat: add mention node executor 2026-01-08 17:36:21 +08:00
b2cbeeae92 fix(web): restrict postMessage targetOrigin from wildcard to specific origins (#30690)
Co-authored-by: XW <wei.xu1@wiz.ai>
2026-01-08 17:23:27 +08:00
1aed585a19 feat: enhance agent integration in prompt editor and mixed-variable text input 2026-01-08 17:02:35 +08:00
831eba8b1c feat: update agent functionality in mixed-variable text input 2026-01-08 16:59:09 +08:00
b09a831d15 feat: add tenant_id support to Sandbox and VirtualEnvironment initialization 2026-01-08 16:19:29 +08:00
4d3d8b35d9 Merge branch 'main' into feat/llm-node-support-tools 2026-01-08 14:28:13 +08:00
c323028179 feat: llm node support tools 2026-01-08 14:27:37 +08:00
94dbda503f refactor(llm-panel): update layout and enhance Max Iterations component
- Adjusted padding in the LLM panel for better visual alignment.
- Refactored the Max Iterations component to accept a className prop for flexible styling.
- Maintained the structure of advanced settings while ensuring consistent rendering of fields.
2026-01-08 14:15:58 +08:00
beefff3d48 feat(docker-demuxer): implement producer-consumer pattern for stream demultiplexing
- Introduced threading to handle Docker's stdout/stderr streams, improving thread safety and preventing race conditions.
- Replaced buffer-based reading with queue-based reading for stdout and stderr.
- Updated read methods to handle errors and end-of-stream conditions more gracefully.
- Enhanced documentation to reflect changes in the demuxing process.
2026-01-08 14:15:41 +08:00
cd1af04dee feat: model total credits (#30727)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
Co-authored-by: Stephen Zhou <38493346+hyoban@users.noreply.github.com>
2026-01-08 14:11:44 +08:00
fe0802262c feat: credit pool (#30720)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-08 13:17:30 +08:00
c2e5081437 feat(llm-panel): collapse panel with advanced settings and max iterations
- Introduced a collapsible section for advanced settings in the LLM panel.
- Added Max Iterations component with conditional rendering based on the new hideMaxIterations prop.
- Updated context field and vision configuration to be part of the advanced settings.
- Added new translation key for advanced settings in the workflow localization file.
2026-01-08 12:16:18 +08:00
786c3e4137 chore: apply ruff 2026-01-08 11:14:44 +08:00
0d33714f28 fix(command-node): enhance error message formatting in command execution
- Improved error message handling by assigning the stderr output to a variable for better readability.
- Ensured consistent error reporting when a command fails, maintaining clarity in the output.
2026-01-08 11:14:37 +08:00
1fbba38436 fix(command-node): improve error reporting in command execution
- Updated error handling to provide detailed stderr output when a command fails.
- Streamlined working directory and command rendering by combining operations into single lines.
2026-01-08 11:14:23 +08:00
15c3d712d3 feat: sandbox provider configuration 2026-01-08 11:04:12 +08:00
5b01f544d1 refactor(command-node): streamline command execution and directory checks
- Simplified the command execution logic by removing unnecessary shell invocations.
- Enhanced working directory validation by directly using the `test` command.
- Improved command parsing with `shlex.split` for better handling of raw commands.
2026-01-08 11:04:11 +08:00
c5b99ebd17 fix: web app login code encrypt (#30705) 2026-01-07 18:04:42 -08:00
adaf0e32c0 feat: add decryption decorators for password and code fields in webapp (#30704) 2026-01-08 10:03:39 +08:00
27a803a6f0 fix(web): resolve key-value input box height inconsistency on focus/blur (#30715) (#30716) 2026-01-08 09:54:27 +08:00
yyh
25ff4ae5da fix(i18n): resolve Claude Code sandbox path issues in workflow (#30710) 2026-01-08 09:53:32 +08:00
8b8e521c4e Merge branch 'main' into feat/pull-a-variable 2026-01-07 22:11:05 +08:00
7ccf858ce6 fix(workflow): pass correct user_from/invoke_from into graph init (#30637) 2026-01-07 21:47:23 +08:00
885f226f77 refactor: split changes for api/controllers/console/workspace/trigger… (#30627)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-07 21:18:02 +08:00
yyh
a422908efd feat(i18n): Migrate translation workflow to Claude Code GitHub Actions (#30692)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-07 21:17:50 +08:00
d8a0291382 refactor(web): remove unused type alias VoiceLanguageKey (#30694)
Co-authored-by: fghpdf <fghpdf@users.noreply.github.com>
2026-01-07 21:15:43 +08:00
fe4c591cfd feat(daytona-environment): enhance command management with threading support and default API URL 2026-01-07 18:47:22 +08:00
0cd613ae52 fix(docker-daemon): update default Docker socket to use Unix socket 2026-01-07 18:35:49 +08:00
0082f468b4 Refactor code structure for improved readability and maintainability 2026-01-07 18:33:13 +08:00
eec57e84e4 Merge branch 'main' into feat/agent-node-v2 2026-01-07 17:34:23 +08:00
70149ea05e Merge branch 'main' into feat/llm-node-support-tools 2026-01-07 16:29:47 +08:00
1d93f41fcf feat: llm node support tools 2026-01-07 16:28:41 +08:00
cd0f41a3e0 fix(command-node): improve working directory handling in CommandNode
- Added checks to verify the existence of the specified working directory before executing commands.
- Updated command execution logic to conditionally change the working directory if provided.
- Included FIXME comments to address future enhancements for native cwd support in VirtualEnvironment.run_command.
2026-01-07 15:30:59 +08:00
094c9fd802 fix: command node single debug run
- Added FIXME comments to indicate the need for unifying runtime config checking in AdvancedChatAppGenerator and WorkflowAppGenerator.
- Introduced sandbox management in WorkflowService with proper error handling for sandbox release.
- Enhanced runtime feature handling in the workflow execution process.
2026-01-07 15:22:12 +08:00
1584a78fc9 chore: add model name in detail 2026-01-07 15:05:18 +08:00
187bfafe8b fix: fix assign value stand as default (#30651)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-07 14:54:11 +08:00
666640f7d5 refactor: remove unnecessary type: ignore from rag_pipeline_fields.py (#30666)
Co-authored-by: fghpdf <fghpdf@users.noreply.github.com>
2026-01-07 14:40:35 +08:00
yyh
160b4d194b fix: signin page stuck on loading when refresh token valid but access token expired (#30675)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 14:20:38 +08:00
88248ad2d3 feat: add node level memory 2026-01-07 13:57:55 +08:00
e335cd0ef4 refactor(web): remove useMixedTranslation, better resource loading (#30630)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 13:20:09 +08:00
1a203031e0 fix(virtual-env): fix Docker stdout/stderr demuxing and exit code parsing
- Add _DockerDemuxer to properly separate stdout/stderr from multiplexed stream
- Fix binary header garbage in Docker exec output (tty=False 8-byte header)
- Fix LocalVirtualEnvironment.get_command_status() to use os.WEXITSTATUS()
- Update tests to use Transport API instead of raw file descriptors
2026-01-07 12:20:07 +08:00
05c3344554 feat: future interface for easy way to use VM.execute_command 2026-01-07 11:57:00 +08:00
888be71639 feat: command node output variables 2026-01-07 11:15:52 +08:00
yyh
357548ca07 chore: rename ralph-wiggum plugin to ralph-loop (#30664)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 10:25:52 +08:00
ace8ad429f fix: fix not record access token (#30654) 2026-01-07 10:19:14 +08:00
93faa672cc fix: add DB_TYPE environment variable to unit tests (#30660)
Co-authored-by: fghpdf <fghpdf@users.noreply.github.com>
2026-01-07 10:16:17 +08:00
yyh
9c6c2a3c14 chore: add skill creator for create agent skills (#30652)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-07 10:07:35 +08:00
3902929d9f feat: new runtime options 2026-01-07 00:01:55 +08:00
4f0fb6df2b chore: use from __future__ import annotations (#30254)
Co-authored-by: Dev <dev@Devs-MacBook-Pro-4.local>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Asuka Minato <i@asukaminato.eu.org>
Co-authored-by: crazywoola <100913391+crazywoola@users.noreply.github.com>
2026-01-06 23:57:20 +09:00
0294555893 refactor: port api/fields/file_fields.py (#30638) 2026-01-06 22:55:58 +08:00
55de731f9c refactor(api): clarify published RAG pipeline invoke naming (#30644) 2026-01-06 23:48:06 +09:00
760a739e91 Merge branch 'main' into feat/grouping-branching
# Conflicts:
#	web/package.json
2026-01-06 22:00:01 +08:00
9b128048c4 refactor: restructure DatasetCard component for improved readability and maintainability (#30617)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
2026-01-06 21:57:21 +08:00
f57aa08a3f fix: flask db check fails due to nullable mismatch between migrations and models (#30474)
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Maries <xh001x@hotmail.com>
2026-01-06 20:23:59 +08:00
yyh
44d7aaaf33 fix: prevent empty state flash and add skeleton loading for app list (#30616)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-06 20:19:22 +08:00
yyh
7beed12eab refactor(web): migrate legacy forms to TanStack Form (#30631) 2026-01-06 20:18:27 +08:00
1c7c475c43 feat: add Command node support
- Introduced Command node type in workflow with associated UI components and translations.
- Enhanced SandboxLayer to manage sandbox attachment for Command nodes during execution.
- Updated various components and constants to integrate Command node functionality across the workflow.
2026-01-06 19:30:38 +08:00
64bfcbc4a9 feat: implement dataset creation step one with preview functionality (#30507)
Co-authored-by: CodingOnStar <hanxujiang@dify.ai>
2026-01-06 18:59:18 +08:00
2cc89d30db feat: use more universal C.UTF-8 instead of en_US.UTF-8 (#30621) 2026-01-06 16:39:04 +08:00
cef7fd484b chore: add trace metadata and streaming icon 2026-01-06 16:30:33 +08:00
caabca3f02 feat: sandbox layer for workflow execution 2026-01-06 15:47:20 +08:00
yyh
5661f821c3 chore: bump pnpm version in packageManager (#30605) 2026-01-06 15:24:25 +08:00
1f5d744cc2 fix(db): parameterize sessionmaker with Session (#30612) 2026-01-06 15:23:50 +08:00
68d68a46a0 refactor: generate_url to support scenario to build url (#30598) 2026-01-06 14:53:38 +08:00
d12b91a01a refactor(api): inject sessionmaker into conversation variable updater (#30609) 2026-01-06 14:52:59 +08:00
lif
f3ca8be9f9 refactor: clean type: ignore comments in login.py and template_transformer.py (#30510)
Signed-off-by: majiayu000 <1835304752@qq.com>
2026-01-06 14:33:27 +08:00
4f74e90f51 fix: _model_to_insertion_dict missing id (#30603) 2026-01-06 14:13:29 +08:00
d6e9c3310f feat: Add conversation variable persistence layer (#30531) 2026-01-06 14:05:33 +08:00
b2124a7358 feat: init rsc support for translation (#30596) 2026-01-06 13:23:03 +08:00
89463cc11d fix: allow unauthenticated CORS preflight for embedded bots (#30587)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-06 11:40:34 +08:00
114a34e008 fix: correct docx hyperlink extraction (#30360) 2026-01-06 11:24:26 +08:00
f320fd5f95 refactor: port controllers/console/app/app.py (#30522)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-06 10:12:52 +08:00
061d552928 feat: unified management stop event (#30479) 2026-01-06 10:12:05 +08:00
eccf79a710 chore: remove unused link icon type (#30469) 2026-01-06 10:10:06 +08:00
7e3bfb9250 refactor: split changes for api/controllers/console/datasets/hit_test… (#30581) 2026-01-06 10:08:09 +08:00
yyh
f14c3ce15e fix: system model selector loading state flash (#30572) 2026-01-06 10:07:42 +08:00
c0331b23a9 refactor: split changes for api/controllers/web/conversation.py (#30582)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-06 10:06:48 +08:00
ce87371bef refactor: split changes for api/controllers/web/saved_message.py (#30583)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-06 10:06:21 +08:00
615c313f80 fix(api): refactors the SQL LIKE pattern escaping logic to use a centralized utility function, ensuring consistent and secure handling of special characters across all database queries. (#30450)
Signed-off-by: NeatGuyCoding <15627489+NeatGuyCoding@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-06 09:56:30 +08:00
d92c476388 feat(workflow): enhance group node availability checks
- Updated `checkMakeGroupAvailability` to include a check for existing group nodes, preventing group creation if a group node is already selected.
- Modified `useMakeGroupAvailability` and `useNodesInteractions` hooks to incorporate the new group node check, ensuring accurate group creation logic.
- Adjusted UI rendering logic in the workflow panel to conditionally display elements based on node type, specifically for group nodes.
2026-01-06 02:07:13 +08:00
de6262784c chore: Harden API image Node.js runtime install (#30497) 2026-01-05 21:19:26 +09:00
36b7075cf4 Merge feat/llm-node-support-tools and fix type errors
- Merge origin/feat/llm-node-support-tools branch
- Fix unused variable tenant_id in dsl.py
- Add None checks for app and workflow in dsl.py
- Add type ignore for e2b_code_interpreter import

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 18:32:15 +08:00
f3761c26e9 Merge remote-tracking branch 'origin/main' into feat/llm-node-support-tools 2026-01-05 18:17:05 +08:00
43daf4f82c refactor: rename construct_environment method to _construct_environment for consistency across virtual environment providers 2026-01-05 18:13:13 +08:00
932be0ad64 feat: session management for InnerAPI&VM 2026-01-05 18:13:13 +08:00
9012dced6a feat(workflow): improve group node interaction handling
- Enhanced `useNodesInteractions` to better manage group node handlers and connections, ensuring accurate identification of leaf nodes and their branches.
- Updated logic to create handlers based on node connections, differentiating between internal and external connections.
- Refined initial node setup to include target branches for group nodes, improving the overall interaction model for grouped elements.
2026-01-05 17:42:31 +08:00
a9e2c05a10 feat(graph-engine): add command to update variables at runtime (#30563)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-05 16:47:34 +08:00
6f8bd58e19 feat(graph-engine): make layer runtime state non-null and bound early (#30552)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-05 16:43:42 +08:00
50bed78d7a feat(workflow): add group node support and translations
- Introduced GroupDefault node with metadata and default values for group nodes.
- Enhanced useNodeMetaData hook to handle group node author and description using translations.
- Added translations for group node functionality in English, Japanese, Simplified Chinese, and Traditional Chinese.
2026-01-05 16:29:00 +08:00
591ca05c84 feat(logstore): make graph field optional via env variable LOGSTORE… (#30554)
Co-authored-by: 阿永 <ayong.dy@alibaba-inc.com>
2026-01-05 16:12:41 +08:00
a72044aa86 chore: fix lint in i18n (#30571) 2026-01-05 16:12:12 +08:00
34f3b288a7 chore(docker): update nltk data download process to include unstructured download_nltk_packages (#28876) 2026-01-05 15:50:33 +08:00
a99ac3fe0d refactor(models): Add mapped type hints to MessageAnnotation (#27751) 2026-01-05 15:50:03 +08:00
52149c0d9b chore(web): add ESLint rules for i18n JSON validation (#30491) 2026-01-05 15:49:31 +08:00
631f999f65 refactor: use contains_any instead of Chaining where = where | f (#30559)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-05 15:48:31 +08:00
60250355cb feat(workflow): enhance group edge management and validation
- Introduced `createGroupInboundEdges` function to manage edges for group nodes, ensuring proper connections to head nodes.
- Updated edge creation logic to handle group nodes in both inbound and outbound scenarios, including temporary edges.
- Enhanced validation in `useWorkflow` to check connections for group nodes based on their head nodes.
- Refined edge processing in `preprocessNodesAndEdges` to ensure correct handling of source handles for group edges.
2026-01-05 15:48:26 +08:00
be3ef9f050 fix: #30511 [Bug] knowledge_retrieval_node fails when using Rerank Model: "Working outside of application context" and add regression test (#30549)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-05 15:02:21 +08:00
75afc2dc0e chore: update packageManager version in package.json to pnpm@10.27.0 2026-01-05 14:42:48 +08:00
93a85ae98a chore(deps): bump @amplitude/analytics-browser from 2.31.4 to 2.33.1 in /web (#30538)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-05 15:05:04 +09:00
e3e19c437a fix: fix db env not work (#30541) 2026-01-05 11:10:45 +08:00
693daea474 fix: INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH settings (#30463) 2026-01-05 11:10:04 +08:00
bc317a0009 feat: return data_source_info and data_source_detail_dict (#29912)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
2026-01-05 11:04:03 +08:00
c158dfa198 fix: support to change NEXT_PUBLIC_BASE_PATH env using --build-arg in docker build (#29836)
Co-authored-by: root <root@KIMI-DESKTOP-01.mchrcloud.com>
2026-01-05 11:03:12 +08:00
79913590ae fix(api): surface subscription deletion errors to users (#30333) 2026-01-05 11:02:04 +08:00
f1fff0a243 fix: fix WorkflowExecution.outputs containing non-JSON-serializable o… (#30464) 2026-01-05 10:57:23 +08:00
4bb08b93d7 chore: update dockerignore (#30460) 2026-01-05 10:55:14 +08:00
d0564ac63c feat: add flask command file-usage (#30500) 2026-01-05 10:52:21 +08:00
eb321ad614 chore: Add a new rule for import lint (#30526) 2026-01-05 10:48:14 +08:00
7128d71cf7 chore(deps-dev): bump intersystems-irispython from 5.3.0 to 5.3.1 in /api (#30540)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-05 10:47:39 +08:00
95edbad1c7 refactor(workflow): add Jinja2 renderer abstraction for template transform (#30535)
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-01-05 10:46:37 +08:00
154abdd915 chore: Update PR template lint command (#30533) 2026-01-05 10:46:01 +08:00
225b13da93 Merge branch 'main' into feat/grouping-branching 2026-01-04 21:56:13 +08:00
37c748192d feat(workflow): implement UI-only group functionality
- Added support for UI-only group nodes, including custom-group, custom-group-input, and custom-group-exit-port types.
- Enhanced edge interactions to manage temporary edges connected to groups, ensuring corresponding real edges are deleted when temp edges are removed.
- Updated node interaction hooks to restore hidden edges and remove temp edges efficiently.
- Implemented logic for creating and managing group structures, including entry and exit ports, while maintaining execution graph integrity.
2026-01-04 21:54:15 +08:00
b7a2957340 feat(workflow): implement ungroup functionality for group nodes
- Added `handleUngroup`, `getCanUngroup`, and `getSelectedGroupId` methods to manage ungrouping of selected group nodes.
- Integrated ungrouping logic into the `useShortcuts` hook for keyboard shortcut support (Ctrl + Shift + G).
- Updated UI to include ungroup option in the panel operator popup for group nodes.
- Added translations for the ungroup action in multiple languages.
2026-01-04 21:40:34 +08:00
a6ce6a249b feat(workflow): refine strokeDasharray logic for temporary edges 2026-01-04 20:59:33 +08:00
8834e6e531 feat(workflow): enhance group node functionality with head and leaf node tracking
- Added headNodeIds and leafNodeIds to GroupNodeData to track nodes that receive input and send output outside the group.
- Updated useNodesInteractions hook to include headNodeIds in the group node data.
- Modified isValidConnection logic in useWorkflow to validate connections based on leaf node types for group nodes.
- Enhanced preprocessNodesAndEdges to rebuild temporary edges for group nodes, connecting them to external nodes for visual representation.
2026-01-04 20:45:42 +08:00
04f40303fd Merge branch 'main' into feat/llm-node-support-tools 2026-01-04 18:04:42 +08:00
ececc5ec2c feat: llm node support tools 2026-01-04 18:03:47 +08:00
81547c5981 feat: add tests for QueueTransportReadCloser to handle blocking reads and first chunk returns 2026-01-04 17:58:04 +08:00
a911b268aa feat: improve read behavior in QueueTransportReadCloser to handle initial data wait and subsequent immediate returns 2026-01-04 17:58:04 +08:00
39010fd153 Merge branch 'refs/heads/main' into feat/grouping-branching 2026-01-04 17:25:18 +08:00
dc8a618b6a feat: add think start end tag 2026-01-04 11:09:43 +08:00
f3e7fea628 feat: add tool call time 2026-01-04 10:29:02 +08:00
926349b1f8 feat: transform tool file message for external access 2026-01-02 15:23:16 +08:00
ec29c24916 feat: enhance QueueTransportReadCloser to handle reading with available data and improve EOF handling 2026-01-02 15:03:17 +08:00
3842eade67 feat: add API endpoint to fetch list of available tools and corresponding request model 2026-01-02 15:00:42 +08:00
bd338a9043 Merge branch 'main' into feat/grouping-branching 2026-01-02 01:34:02 +08:00
cf7e2d5d75 feat: add unit tests for transport classes including queue, pipe, and socket transports 2026-01-01 18:57:03 +08:00
2673fe05a5 feat: introduce TransportEOFError for handling closed transport scenarios and update transport classes to raise it 2026-01-01 18:46:08 +08:00
180fdffab1 feat: update E2BEnvironment options to include default template, list file depth, and API URL 2025-12-31 18:29:22 +08:00
62e422f75a feat: add NotSupportedOperationError and update E2BEnvironment to raise it for unsupported command status retrieval 2025-12-31 18:09:14 +08:00
41565e91ed feat: add support for passing environment variables to E2B sandbox 2025-12-31 18:07:43 +08:00
c9610e9949 feat: implement transport abstractions for virtual environments and add E2B environment provider 2025-12-31 17:51:38 +08:00
29dc083d8d feat: enhance DockerDaemonEnvironment with options handling and default values 2025-12-31 16:19:47 +08:00
39d6383474 Merge branch 'main' into feat/grouping-branching 2025-12-30 22:01:20 +08:00
f679065d2c feat: extend construct_environment method to accept environments parameter in virtual environment classes 2025-12-30 21:07:16 +08:00
0a97e87a8e docs: clarify usage of close() method in PipeTransport docstring 2025-12-30 20:58:51 +08:00
4d81455a83 fix: correct PipeTransport file descriptor assignments and architecture matching case sensitivity 2025-12-30 20:54:39 +08:00
39091fe4df feat: enhance command execution and status retrieval in virtual environments with transport abstractions 2025-12-30 19:37:30 +08:00
bac5245cd0 Merge remote-tracking branch 'origin/main' into feat/support-agent-sandbox 2025-12-30 19:11:29 +08:00
274f9a3f32 Refactor code structure for improved readability and maintainability 2025-12-30 16:31:34 +08:00
a513ab9a59 feat: implement DSL prediction API and virtual environment base classes 2025-12-30 15:24:54 +08:00
e83635ee5a Merge branch 'main' into feat/llm-node-support-tools 2025-12-30 11:47:54 +08:00
d79372a46d Merge branch 'main' into feat/llm-node-support-tools 2025-12-30 11:47:26 +08:00
bbd11c9e89 feat: llm node support tools 2025-12-30 10:40:01 +08:00
152fd52cd7 [autofix.ci] apply automated fixes 2025-12-30 02:23:25 +00:00
ccabdbc83b Merge branch 'main' into feat/agent-node-v2 2025-12-30 10:20:42 +08:00
56c8221b3f chore: remove frontend changes 2025-12-30 10:19:40 +08:00
add8980790 add missing translation 2025-12-30 10:06:49 +08:00
5157e1a96c Merge branch 'main' into feat/grouping-branching 2025-12-29 23:33:28 +08:00
d132abcdb4 merge main 2025-12-29 15:55:45 +08:00
d60348572e feat: llm node support tools 2025-12-29 14:55:26 +08:00
f55faae31b chore: strip reasoning from chatflow answers and persist generation details 2025-12-25 13:59:38 +08:00
0cff94d90e Merge branch 'main' into feat/llm-node-support-tools 2025-12-25 13:45:49 +08:00
7fc25cafb2 feat: basic app add thought field 2025-12-25 10:28:21 +08:00
a7859de625 feat: llm node support tools 2025-12-24 14:15:55 +08:00
4bb76acc37 Merge branch 'main' into feat/grouping-branching 2025-12-23 23:56:26 +08:00
b513933040 Merge branch 'main' into feat/grouping-branching
# Conflicts:
#	web/app/components/workflow/block-icon.tsx
#	web/app/components/workflow/hooks/use-nodes-interactions.ts
#	web/app/components/workflow/index.tsx
#	web/app/components/workflow/nodes/components.ts
#	web/app/components/workflow/selection-contextmenu.tsx
#	web/app/components/workflow/utils/workflow-init.ts
2025-12-23 23:55:21 +08:00
18ea9d3f18 feat: Add GROUP node type and update node configuration filtering in Graph class 2025-12-23 20:44:36 +08:00
7b660a9ebc feat: Simplify edge creation for group nodes in useNodesInteractions hook 2025-12-23 17:12:09 +08:00
783a49bd97 feat: Refactor group node edge creation logic in useNodesInteractions hook 2025-12-23 16:44:11 +08:00
d3c6b09354 feat: Implement group node edge handling in useNodesInteractions hook 2025-12-23 16:37:42 +08:00
3d61496d25 feat: Enhance CustomGroupNode with exit ports and visual indicators 2025-12-23 15:36:53 +08:00
16bff9e82f Merge branch 'refs/heads/main' into feat/grouping-branching 2025-12-23 15:27:54 +08:00
22f25731e8 refactor: streamline edge building and node filtering in workflow graph 2025-12-22 18:59:08 +08:00
035f51ad58 Merge branch 'main' into feat/grouping-branching 2025-12-22 18:18:37 +08:00
e9795bd772 feat: refine workflow graph processing to exclude additional UI-only node types 2025-12-22 18:17:25 +08:00
93b516a4ec feat: add UI-only group node types and enhance workflow graph processing 2025-12-22 17:35:33 +08:00
fc9d5b2a62 feat: implement group node functionality and enhance grouping interactions 2025-12-19 15:17:45 +08:00
e3bfb95c52 feat: implement grouping availability checks in selection context menu 2025-12-18 17:11:34 +08:00
047ea8c143 chore: improve type checking 2025-12-18 10:09:31 +08:00
752cb9e4f4 feat: enhance selection context menu with alignment options and grouping functionality
- Added alignment buttons for nodes with tooltips in the selection context menu.
- Implemented grouping functionality with a new "Make group" option, including keyboard shortcuts.
- Updated translations for the new grouping feature in multiple languages.
- Refactored node selection logic to improve performance and readability.
2025-12-17 19:52:02 +08:00
f54b9b12b0 feat: add process data 2025-12-17 17:34:02 +08:00
cb99b8f04d chore: handle migrations 2025-12-17 15:59:09 +08:00
7c03bcba2b Merge branch 'main' into feat/agent-node-v2 2025-12-17 15:55:27 +08:00
92fa7271ed refactor(llm node): remove unused args 2025-12-17 15:42:23 +08:00
d3486cab31 refactor(llm node): tool call tool result entity 2025-12-17 10:30:21 +08:00
dd0a870969 Merge branch 'main' into feat/agent-node-v2 2025-12-16 15:17:29 +08:00
0c4c268003 chore: fix ci issues 2025-12-16 15:14:42 +08:00
ff57848268 [autofix.ci] apply automated fixes 2025-12-15 07:29:20 +00:00
d223fee9b9 Merge branch 'main' into feat/agent-node-v2 2025-12-15 15:26:48 +08:00
ad18d084f3 feat: add sequence output variable. 2025-12-15 14:59:06 +08:00
9941d1f160 feat: add llm log metadata 2025-12-15 14:18:53 +08:00
13fa56b5b1 feat: add tracing metadata 2025-12-12 16:24:49 +08:00
9ce48b4dc4 fix: llm generation variable 2025-12-12 11:08:49 +08:00
abb2b860f2 chore: remove unused changes 2025-12-10 15:04:19 +08:00
930c36e757 fix: llm detail store 2025-12-09 20:56:54 +08:00
2d2ce5df85 feat: generation stream output. 2025-12-09 16:22:17 +08:00
2b23c43434 feat: add agent package 2025-12-09 11:36:47 +08:00
1393 changed files with 122912 additions and 17446 deletions

.agent/skills Symbolic link
View File

@@ -0,0 +1 @@
../.claude/skills

View File

@@ -1,9 +1,20 @@
{
"hooks": {
"PreToolUse": [
{
"matcher": "Bash",
"hooks": [
{
"type": "command",
"command": "npx -y block-no-verify@1.1.1"
}
]
}
]
},
"enabledPlugins": {
"feature-dev@claude-plugins-official": true,
"context7@claude-plugins-official": true,
"typescript-lsp@claude-plugins-official": true,
"pyright-lsp@claude-plugins-official": true,
"ralph-wiggum@claude-plugins-official": true
"ralph-loop@claude-plugins-official": true
}
}

View File

@@ -83,6 +83,9 @@ vi.mock('next/navigation', () => ({
usePathname: () => '/test',
}))
// ✅ Zustand stores: Use real stores (auto-mocked globally)
// Set test state with: useAppStore.setState({ ... })
// Shared state for mocks (if needed)
let mockSharedState = false
@@ -296,7 +299,7 @@ For each test file generated, aim for:
For more detailed information, refer to:
- `references/workflow.md` - **Incremental testing workflow** (MUST READ for multi-file testing)
- `references/mocking.md` - Mock patterns and best practices
- `references/mocking.md` - Mock patterns, Zustand store testing, and best practices
- `references/async-testing.md` - Async operations and API calls
- `references/domain-components.md` - Workflow, Dataset, Configuration testing
- `references/common-patterns.md` - Frequently used testing patterns

View File

@@ -37,16 +37,36 @@ Only mock these categories:
1. **Third-party libraries with side effects** - `next/navigation`, external SDKs
1. **i18n** - Always mock to return keys
### Zustand Stores - DO NOT Mock Manually
**Zustand is globally mocked** in `web/vitest.setup.ts`. Use real stores with `setState()`:
```typescript
// ✅ CORRECT: Use real store, set test state
import { useAppStore } from '@/app/components/app/store'
useAppStore.setState({ appDetail: { id: 'test', name: 'Test' } })
render(<MyComponent />)
// ❌ WRONG: Don't mock the store module
vi.mock('@/app/components/app/store', () => ({ ... }))
```
See [Zustand Store Testing](#zustand-store-testing) section for full details.
## Mock Placement
| Location | Purpose |
|----------|---------|
| `web/vitest.setup.ts` | Global mocks shared by all tests (for example `react-i18next`, `next/image`) |
| `web/vitest.setup.ts` | Global mocks shared by all tests (`react-i18next`, `next/image`, `zustand`) |
| `web/__mocks__/zustand.ts` | Zustand mock implementation (auto-resets stores after each test) |
| `web/__mocks__/` | Reusable mock factories shared across multiple test files |
| Test file | Test-specific mocks, inline with `vi.mock()` |
Modules are not mocked automatically. Use `vi.mock` in test files, or add global mocks in `web/vitest.setup.ts`.
**Note**: Zustand is special - it's globally mocked but you should NOT mock store modules manually. See [Zustand Store Testing](#zustand-store-testing).
## Essential Mocks
### 1. i18n (Auto-loaded via Global Mock)
@@ -276,6 +296,7 @@ const renderWithQueryClient = (ui: React.ReactElement) => {
1. **Use real base components** - Import from `@/app/components/base/` directly
1. **Use real project components** - Prefer importing over mocking
1. **Use real Zustand stores** - Set test state via `store.setState()`
1. **Reset mocks in `beforeEach`**, not `afterEach`
1. **Match actual component behavior** in mocks (when mocking is necessary)
1. **Use factory functions** for complex mock data
@@ -285,6 +306,7 @@ const renderWithQueryClient = (ui: React.ReactElement) => {
### ❌ DON'T
1. **Don't mock base components** (`Loading`, `Button`, `Tooltip`, etc.)
1. **Don't mock Zustand store modules** - Use real stores with `setState()`
1. Don't mock components you can import directly
1. Don't create overly simplified mocks that miss conditional logic
1. Don't forget to clean up nock after each test
@@ -308,10 +330,151 @@ Need to use a component in test?
├─ Is it a third-party lib with side effects?
│ └─ YES → Mock it (next/navigation, external SDKs)
├─ Is it a Zustand store?
│ └─ YES → DO NOT mock the module!
│ Use real store + setState() to set test state
│ (Global mock handles auto-reset)
└─ Is it i18n?
└─ YES → Uses shared mock (auto-loaded). Override only for custom translations
```
## Zustand Store Testing
### Global Zustand Mock (Auto-loaded)
Zustand is globally mocked in `web/vitest.setup.ts` following the [official Zustand testing guide](https://zustand.docs.pmnd.rs/guides/testing). The mock in `web/__mocks__/zustand.ts` provides:
- Real store behavior with `getState()`, `setState()`, `subscribe()` methods
- Automatic store reset after each test via `afterEach`
- Proper test isolation between tests
### ✅ Recommended: Use Real Stores (Official Best Practice)
**DO NOT mock store modules manually.** Import and use the real store, then use `setState()` to set test state:
```typescript
// ✅ CORRECT: Use real store with setState
import { useAppStore } from '@/app/components/app/store'
describe('MyComponent', () => {
it('should render app details', () => {
// Arrange: Set test state via setState
useAppStore.setState({
appDetail: {
id: 'test-app',
name: 'Test App',
mode: 'chat',
},
})
// Act
render(<MyComponent />)
// Assert
expect(screen.getByText('Test App')).toBeInTheDocument()
// Can also verify store state directly
expect(useAppStore.getState().appDetail?.name).toBe('Test App')
})
// No cleanup needed - global mock auto-resets after each test
})
```
### ❌ Avoid: Manual Store Module Mocking
Manual mocking conflicts with the global Zustand mock and loses store functionality:
```typescript
// ❌ WRONG: Don't mock the store module
vi.mock('@/app/components/app/store', () => ({
useStore: (selector) => mockSelector(selector), // Missing getState, setState!
}))
// ❌ WRONG: This conflicts with global zustand mock
vi.mock('@/app/components/workflow/store', () => ({
useWorkflowStore: vi.fn(() => mockState),
}))
```
**Problems with manual mocking:**
1. Loses `getState()`, `setState()`, `subscribe()` methods
1. Conflicts with global Zustand mock behavior
1. Requires manual maintenance of store API
1. Tests don't reflect actual store behavior
### When Manual Store Mocking is Necessary
In rare cases where the store has complex initialization or side effects, you can mock it, but ensure you provide the full store API:
```typescript
// If you MUST mock (rare), include full store API
const mockStore = {
appDetail: { id: 'test', name: 'Test' },
setAppDetail: vi.fn(),
}
vi.mock('@/app/components/app/store', () => ({
useStore: Object.assign(
(selector: (state: typeof mockStore) => unknown) => selector(mockStore),
{
getState: () => mockStore,
setState: vi.fn(),
subscribe: vi.fn(),
},
),
}))
```
### Store Testing Decision Tree
```
Need to test a component using Zustand store?
├─ Can you use the real store?
│ └─ YES → Use real store + setState (RECOMMENDED)
│ useAppStore.setState({ ... })
├─ Does the store have complex initialization/side effects?
│ └─ YES → Consider mocking, but include full API
│ (getState, setState, subscribe)
└─ Are you testing the store itself (not a component)?
└─ YES → Test store directly with getState/setState
const store = useMyStore
store.setState({ count: 0 })
store.getState().increment()
expect(store.getState().count).toBe(1)
```
### Example: Testing Store Actions
```typescript
import { useCounterStore } from '@/stores/counter'
describe('Counter Store', () => {
it('should increment count', () => {
// Initial state (auto-reset by global mock)
expect(useCounterStore.getState().count).toBe(0)
// Call action
useCounterStore.getState().increment()
// Verify state change
expect(useCounterStore.getState().count).toBe(1)
})
it('should reset to initial state', () => {
// Set some state
useCounterStore.setState({ count: 100 })
expect(useCounterStore.getState().count).toBe(100)
// After this test, global mock will reset to initial state
})
})
```
## Factory Function Pattern
```typescript
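// A hedged sketch of the factory pattern referenced above; the AppDetail shape
// and field values are illustrative assumptions, not an existing project type.
type AppDetail = {
  id: string
  name: string
  mode: 'chat' | 'workflow'
}

export const createMockAppDetail = (overrides: Partial<AppDetail> = {}): AppDetail => ({
  id: 'test-app',
  name: 'Test App',
  mode: 'chat',
  ...overrides,
})

// Usage: build variations without repeating the full object in every test.
const workflowApp = createMockAppDetail({ mode: 'workflow' })
```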


@ -0,0 +1,46 @@
---
name: orpc-contract-first
description: Guide for implementing oRPC contract-first API patterns in Dify frontend. Triggers when creating new API contracts, adding service endpoints, integrating TanStack Query with typed contracts, or migrating legacy service calls to oRPC. Use for all API layer work in web/contract and web/service directories.
---
# oRPC Contract-First Development
## Project Structure
```
web/contract/
├── base.ts # Base contract (inputStructure: 'detailed')
├── router.ts # Router composition & type exports
├── marketplace.ts # Marketplace contracts
└── console/ # Console contracts by domain
├── system.ts
└── billing.ts
```
## Workflow
1. **Create contract** in `web/contract/console/{domain}.ts`
- Import `base` from `../base` and `type` from `@orpc/contract`
- Define route with `path`, `method`, `input`, `output`
2. **Register in router** at `web/contract/router.ts`
- Import directly from domain file (no barrel files)
- Nest by API prefix: `billing: { invoices, bindPartnerStack }`
3. **Create hooks** in `web/service/use-{domain}.ts`
- Use `consoleQuery.{group}.{contract}.queryKey()` for query keys
- Use `consoleClient.{group}.{contract}()` for API calls
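As a hedged end-to-end sketch of these three steps for a hypothetical `GET /billing/invoices` endpoint (the exact oRPC builder calls, import paths, and the `Invoice` shape are assumptions; follow `base.ts` and the existing domain files for the authoritative patterns):
```typescript
// web/contract/console/billing.ts — contract sketch
import { type } from '@orpc/contract'
import { base } from '../base'

type Invoice = { id: string; amount: number }

export const invoices = base
  .route({ path: '/billing/invoices', method: 'GET' })
  .input(type<{ params: Record<string, never> }>())
  .output(type<{ items: Invoice[] }>())

// web/contract/router.ts — register under the billing prefix
// consoleRouterContract = { billing: { invoices, ... } }

// web/service/use-billing.ts — hook sketch
import { useQuery } from '@tanstack/react-query'
import { consoleClient, consoleQuery } from './client' // import path is an assumption

export const useInvoices = () =>
  useQuery({
    queryKey: consoleQuery.billing.invoices.queryKey(),
    queryFn: () => consoleClient.billing.invoices({ params: {} }),
  })
```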
## Key Rules
- **Input structure**: Always use `{ params, query?, body? }` format
- **Path params**: Use `{paramName}` in path, match in `params` object
- **Router nesting**: Group by API prefix (e.g., `/billing/*` → `billing: {}`)
- **No barrel files**: Import directly from specific files
- **Types**: Import from `@/types/`, use `type<T>()` helper
## Type Export
```typescript
export type ConsoleInputs = InferContractRouterInputs<typeof consoleRouterContract>
```


@ -0,0 +1,355 @@
---
name: skill-creator
description: Guide for creating effective skills. This skill should be used when users want to create a new skill (or update an existing skill) that extends Claude's capabilities with specialized knowledge, workflows, or tool integrations.
---
# Skill Creator
This skill provides guidance for creating effective skills.
## About Skills
Skills are modular, self-contained packages that extend Claude's capabilities by providing
specialized knowledge, workflows, and tools. Think of them as "onboarding guides" for specific
domains or tasks—they transform Claude from a general-purpose agent into a specialized agent
equipped with procedural knowledge that no model can fully possess.
### What Skills Provide
1. Specialized workflows - Multi-step procedures for specific domains
2. Tool integrations - Instructions for working with specific file formats or APIs
3. Domain expertise - Company-specific knowledge, schemas, business logic
4. Bundled resources - Scripts, references, and assets for complex and repetitive tasks
## Core Principles
### Concise is Key
The context window is a public good. Skills share the context window with everything else Claude needs: system prompt, conversation history, other Skills' metadata, and the actual user request.
**Default assumption: Claude is already very smart.** Only add context Claude doesn't already have. Challenge each piece of information: "Does Claude really need this explanation?" and "Does this paragraph justify its token cost?"
Prefer concise examples over verbose explanations.
### Set Appropriate Degrees of Freedom
Match the level of specificity to the task's fragility and variability:
**High freedom (text-based instructions)**: Use when multiple approaches are valid, decisions depend on context, or heuristics guide the approach.
**Medium freedom (pseudocode or scripts with parameters)**: Use when a preferred pattern exists, some variation is acceptable, or configuration affects behavior.
**Low freedom (specific scripts, few parameters)**: Use when operations are fragile and error-prone, consistency is critical, or a specific sequence must be followed.
Think of Claude as exploring a path: a narrow bridge with cliffs needs specific guardrails (low freedom), while an open field allows many routes (high freedom).
### Anatomy of a Skill
Every skill consists of a required SKILL.md file and optional bundled resources:
```
skill-name/
├── SKILL.md (required)
│ ├── YAML frontmatter metadata (required)
│ │ ├── name: (required)
│ │ └── description: (required)
│ └── Markdown instructions (required)
└── Bundled Resources (optional)
├── scripts/ - Executable code (Python/Bash/etc.)
├── references/ - Documentation intended to be loaded into context as needed
└── assets/ - Files used in output (templates, icons, fonts, etc.)
```
#### SKILL.md (required)
Every SKILL.md consists of:
- **Frontmatter** (YAML): Contains `name` and `description` fields. These are the only fields Claude reads to determine when the skill gets used, so it is very important to be clear and comprehensive about what the skill is and when it should be used.
- **Body** (Markdown): Instructions and guidance for using the skill. Only loaded AFTER the skill triggers (if at all).
#### Bundled Resources (optional)
##### Scripts (`scripts/`)
Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten.
- **When to include**: When the same code is being rewritten repeatedly or deterministic reliability is needed
- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Claude for patching or environment-specific adjustments
##### References (`references/`)
Documentation and reference material intended to be loaded as needed into context to inform Claude's process and thinking.
- **When to include**: For documentation that Claude should reference while working
- **Examples**: `references/finance.md` for financial schemas, `references/mnda.md` for company NDA template, `references/policies.md` for company policies, `references/api_docs.md` for API specifications
- **Use cases**: Database schemas, API documentation, domain knowledge, company policies, detailed workflow guides
- **Benefits**: Keeps SKILL.md lean, loaded only when Claude determines it's needed
- **Best practice**: If files are large (>10k words), include grep search patterns in SKILL.md
- **Avoid duplication**: Information should live in either SKILL.md or references files, not both. Prefer references files for detailed information unless it's truly core to the skill—this keeps SKILL.md lean while making information discoverable without hogging the context window. Keep only essential procedural instructions and workflow guidance in SKILL.md; move detailed reference material, schemas, and examples to references files.
##### Assets (`assets/`)
Files not intended to be loaded into context, but rather used within the output Claude produces.
- **When to include**: When the skill needs files that will be used in the final output
- **Examples**: `assets/logo.png` for brand assets, `assets/slides.pptx` for PowerPoint templates, `assets/frontend-template/` for HTML/React boilerplate, `assets/font.ttf` for typography
- **Use cases**: Templates, images, icons, boilerplate code, fonts, sample documents that get copied or modified
- **Benefits**: Separates output resources from documentation, enables Claude to use files without loading them into context
#### What Not to Include in a Skill
A skill should only contain essential files that directly support its functionality. Do NOT create extraneous documentation or auxiliary files, including:
- README.md
- INSTALLATION_GUIDE.md
- QUICK_REFERENCE.md
- CHANGELOG.md
- etc.
The skill should only contain the information needed for an AI agent to do the job at hand. It should not contain auxiliary context about the process that went into creating it, setup and testing procedures, user-facing documentation, etc. Creating additional documentation files just adds clutter and confusion.
### Progressive Disclosure Design Principle
Skills use a three-level loading system to manage context efficiently:
1. **Metadata (name + description)** - Always in context (~100 words)
2. **SKILL.md body** - When skill triggers (<5k words)
3. **Bundled resources** - As needed by Claude (unlimited, because scripts can be executed without being loaded into the context window)
#### Progressive Disclosure Patterns
Keep SKILL.md body to the essentials and under 500 lines to minimize context bloat. Split content into separate files when approaching this limit. When splitting out content into other files, it is very important to reference them from SKILL.md and describe clearly when to read them, to ensure the reader of the skill knows they exist and when to use them.
**Key principle:** When a skill supports multiple variations, frameworks, or options, keep only the core workflow and selection guidance in SKILL.md. Move variant-specific details (patterns, examples, configuration) into separate reference files.
**Pattern 1: High-level guide with references**
```markdown
# PDF Processing
## Quick start
Extract text with pdfplumber:
[code example]
## Advanced features
- **Form filling**: See [FORMS.md](FORMS.md) for complete guide
- **API reference**: See [REFERENCE.md](REFERENCE.md) for all methods
- **Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
```
Claude loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.
**Pattern 2: Domain-specific organization**
For Skills with multiple domains, organize content by domain to avoid loading irrelevant context:
```
bigquery-skill/
├── SKILL.md (overview and navigation)
└── reference/
├── finance.md (revenue, billing metrics)
├── sales.md (opportunities, pipeline)
├── product.md (API usage, features)
└── marketing.md (campaigns, attribution)
```
When a user asks about sales metrics, Claude only reads sales.md.
Similarly, for skills supporting multiple frameworks or variants, organize by variant:
```
cloud-deploy/
├── SKILL.md (workflow + provider selection)
└── references/
├── aws.md (AWS deployment patterns)
├── gcp.md (GCP deployment patterns)
└── azure.md (Azure deployment patterns)
```
When the user chooses AWS, Claude only reads aws.md.
**Pattern 3: Conditional details**
Show basic content, link to advanced content:
```markdown
# DOCX Processing
## Creating documents
Use docx-js for new documents. See [DOCX-JS.md](DOCX-JS.md).
## Editing documents
For simple edits, modify the XML directly.
**For tracked changes**: See [REDLINING.md](REDLINING.md)
**For OOXML details**: See [OOXML.md](OOXML.md)
```
Claude reads REDLINING.md or OOXML.md only when the user needs those features.
**Important guidelines:**
- **Avoid deeply nested references** - Keep references one level deep from SKILL.md. All reference files should link directly from SKILL.md.
- **Structure longer reference files** - For files longer than 100 lines, include a table of contents at the top so Claude can see the full scope when previewing.
## Skill Creation Process
Skill creation involves these steps:
1. Understand the skill with concrete examples
2. Plan reusable skill contents (scripts, references, assets)
3. Initialize the skill (run init_skill.py)
4. Edit the skill (implement resources and write SKILL.md)
5. Package the skill (run package_skill.py)
6. Iterate based on real usage
Follow these steps in order, skipping only if there is a clear reason why they are not applicable.
### Step 1: Understanding the Skill with Concrete Examples
Skip this step only when the skill's usage patterns are already clearly understood. It remains valuable even when working with an existing skill.
To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.
For example, when building an image-editor skill, relevant questions include:
- "What functionality should the image-editor skill support? Editing, rotating, anything else?"
- "Can you give some examples of how this skill would be used?"
- "I can imagine users asking for things like 'Remove the red-eye from this image' or 'Rotate this image'. Are there other ways you imagine this skill being used?"
- "What would a user say that should trigger this skill?"
To avoid overwhelming users, don't ask too many questions in a single message. Start with the most important questions and follow up as needed.
Conclude this step when there is a clear sense of the functionality the skill should support.
### Step 2: Planning the Reusable Skill Contents
To turn concrete examples into an effective skill, analyze each example by:
1. Considering how to execute on the example from scratch
2. Identifying what scripts, references, and assets would be helpful when executing these workflows repeatedly
Example: When building a `pdf-editor` skill to handle queries like "Help me rotate this PDF," the analysis shows:
1. Rotating a PDF requires re-writing the same code each time
2. A `scripts/rotate_pdf.py` script would be helpful to store in the skill
Example: When designing a `frontend-webapp-builder` skill for queries like "Build me a todo app" or "Build me a dashboard to track my steps," the analysis shows:
1. Writing a frontend webapp requires the same boilerplate HTML/React each time
2. An `assets/hello-world/` template containing the boilerplate HTML/React project files would be helpful to store in the skill
Example: When building a `big-query` skill to handle queries like "How many users have logged in today?" the analysis shows:
1. Querying BigQuery requires re-discovering the table schemas and relationships each time
2. A `references/schema.md` file documenting the table schemas would be helpful to store in the skill
To establish the skill's contents, analyze each concrete example to create a list of the reusable resources to include: scripts, references, and assets.
### Step 3: Initializing the Skill
At this point, it is time to actually create the skill.
Skip this step only if the skill being developed already exists, and iteration or packaging is needed. In this case, continue to the next step.
When creating a new skill from scratch, always run the `init_skill.py` script. The script conveniently generates a new template skill directory that automatically includes everything a skill requires, making the skill creation process much more efficient and reliable.
Usage:
```bash
scripts/init_skill.py <skill-name> --path <output-directory>
```
The script:
- Creates the skill directory at the specified path
- Generates a SKILL.md template with proper frontmatter and TODO placeholders
- Creates example resource directories: `scripts/`, `references/`, and `assets/`
- Adds example files in each directory that can be customized or deleted
After initialization, customize or remove the generated SKILL.md and example files as needed.
### Step 4: Edit the Skill
When editing the (newly-generated or existing) skill, remember that the skill is being created for another instance of Claude to use. Include information that would be beneficial and non-obvious to Claude. Consider what procedural knowledge, domain-specific details, or reusable assets would help another Claude instance execute these tasks more effectively.
#### Learn Proven Design Patterns
Consult these helpful guides based on your skill's needs:
- **Multi-step processes**: See references/workflows.md for sequential workflows and conditional logic
- **Specific output formats or quality standards**: See references/output-patterns.md for template and example patterns
These files contain established best practices for effective skill design.
#### Start with Reusable Skill Contents
To begin implementation, start with the reusable resources identified above: `scripts/`, `references/`, and `assets/` files. Note that this step may require user input. For example, when implementing a `brand-guidelines` skill, the user may need to provide brand assets or templates to store in `assets/`, or documentation to store in `references/`.
Added scripts must be tested by actually running them to ensure there are no bugs and that the output matches what is expected. If there are many similar scripts, only a representative sample needs to be tested to ensure confidence that they all work while balancing time to completion.
Any example files and directories not needed for the skill should be deleted. The initialization script creates example files in `scripts/`, `references/`, and `assets/` to demonstrate structure, but most skills won't need all of them.
#### Update SKILL.md
**Writing Guidelines:** Always use imperative/infinitive form.
##### Frontmatter
Write the YAML frontmatter with `name` and `description`:
- `name`: The skill name
- `description`: This is the primary triggering mechanism for your skill, and helps Claude understand when to use the skill.
- Include both what the Skill does and specific triggers/contexts for when to use it.
- Include all "when to use" information here - Not in the body. The body is only loaded after triggering, so "When to Use This Skill" sections in the body are not helpful to Claude.
- Example description for a `docx` skill: "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. Use when Claude needs to work with professional documents (.docx files) for: (1) Creating new documents, (2) Modifying or editing content, (3) Working with tracked changes, (4) Adding comments, or any other document tasks"
Do not include any other fields in YAML frontmatter.
##### Body
Write instructions for using the skill and its bundled resources.
### Step 5: Packaging a Skill
Once development of the skill is complete, it must be packaged into a distributable .skill file that gets shared with the user. The packaging process automatically validates the skill first to ensure it meets all requirements:
```bash
scripts/package_skill.py <path/to/skill-folder>
```
Optional output directory specification:
```bash
scripts/package_skill.py <path/to/skill-folder> ./dist
```
The packaging script will:
1. **Validate** the skill automatically, checking:
- YAML frontmatter format and required fields
- Skill naming conventions and directory structure
- Description completeness and quality
- File organization and resource references
2. **Package** the skill if validation passes, creating a .skill file named after the skill (e.g., `my-skill.skill`) that includes all files and maintains the proper directory structure for distribution. The .skill file is a zip file with a .skill extension.
If validation fails, the script will report the errors and exit without creating a package. Fix any validation errors and run the packaging command again.
### Step 6: Iterate
After testing the skill, users may request improvements. Often this happens right after using the skill, with fresh context of how the skill performed.
**Iteration workflow:**
1. Use the skill on real tasks
2. Notice struggles or inefficiencies
3. Identify how SKILL.md or bundled resources should be updated
4. Implement changes and test again


@ -0,0 +1,86 @@
# Output Patterns
Use these patterns when skills need to produce consistent, high-quality output.
## Template Pattern
Provide templates for output format. Match the level of strictness to your needs.
**For strict requirements (like API responses or data formats):**
```markdown
## Report structure
ALWAYS use this exact template structure:
# [Analysis Title]
## Executive summary
[One-paragraph overview of key findings]
## Key findings
- Finding 1 with supporting data
- Finding 2 with supporting data
- Finding 3 with supporting data
## Recommendations
1. Specific actionable recommendation
2. Specific actionable recommendation
```
**For flexible guidance (when adaptation is useful):**
```markdown
## Report structure
Here is a sensible default format, but use your best judgment:
# [Analysis Title]
## Executive summary
[Overview]
## Key findings
[Adapt sections based on what you discover]
## Recommendations
[Tailor to the specific context]
Adjust sections as needed for the specific analysis type.
```
## Examples Pattern
For skills where output quality depends on seeing examples, provide input/output pairs:
````markdown
## Commit message format
Generate commit messages following these examples:
**Example 1:**
Input: Added user authentication with JWT tokens
Output:
```
feat(auth): implement JWT-based authentication
Add login endpoint and token validation middleware
```
**Example 2:**
Input: Fixed bug where dates displayed incorrectly in reports
Output:
```
fix(reports): correct date formatting in timezone conversion
Use UTC timestamps consistently across report generation
```
Follow this style: type(scope): brief description, then detailed explanation.
````
Examples help Claude understand the desired style and level of detail more clearly than descriptions alone.


@ -0,0 +1,28 @@
# Workflow Patterns
## Sequential Workflows
For complex tasks, break operations into clear, sequential steps. It is often helpful to give Claude an overview of the process towards the beginning of SKILL.md:
```markdown
Filling a PDF form involves these steps:
1. Analyze the form (run analyze_form.py)
2. Create field mapping (edit fields.json)
3. Validate mapping (run validate_fields.py)
4. Fill the form (run fill_form.py)
5. Verify output (run verify_output.py)
```
## Conditional Workflows
For tasks with branching logic, guide Claude through decision points:
```markdown
1. Determine the modification type:
**Creating new content?** → Follow "Creation workflow" below
**Editing existing content?** → Follow "Editing workflow" below
2. Creation workflow: [steps]
3. Editing workflow: [steps]
```


@ -0,0 +1,300 @@
#!/usr/bin/env python3
"""
Skill Initializer - Creates a new skill from template
Usage:
init_skill.py <skill-name> --path <path>
Examples:
init_skill.py my-new-skill --path skills/public
init_skill.py my-api-helper --path skills/private
init_skill.py custom-skill --path /custom/location
"""
import sys
from pathlib import Path
SKILL_TEMPLATE = """---
name: {skill_name}
description: [TODO: Complete and informative explanation of what the skill does and when to use it. Include WHEN to use this skill - specific scenarios, file types, or tasks that trigger it.]
---
# {skill_title}
## Overview
[TODO: 1-2 sentences explaining what this skill enables]
## Structuring This Skill
[TODO: Choose the structure that best fits this skill's purpose. Common patterns:
**1. Workflow-Based** (best for sequential processes)
- Works well when there are clear step-by-step procedures
- Example: DOCX skill with "Workflow Decision Tree" → "Reading" → "Creating" → "Editing"
- Structure: ## Overview → ## Workflow Decision Tree → ## Step 1 → ## Step 2...
**2. Task-Based** (best for tool collections)
- Works well when the skill offers different operations/capabilities
- Example: PDF skill with "Quick Start" → "Merge PDFs" → "Split PDFs" → "Extract Text"
- Structure: ## Overview → ## Quick Start → ## Task Category 1 → ## Task Category 2...
**3. Reference/Guidelines** (best for standards or specifications)
- Works well for brand guidelines, coding standards, or requirements
- Example: Brand styling with "Brand Guidelines" → "Colors" → "Typography" → "Features"
- Structure: ## Overview → ## Guidelines → ## Specifications → ## Usage...
**4. Capabilities-Based** (best for integrated systems)
- Works well when the skill provides multiple interrelated features
- Example: Product Management with "Core Capabilities" → numbered capability list
- Structure: ## Overview → ## Core Capabilities → ### 1. Feature → ### 2. Feature...
Patterns can be mixed and matched as needed. Most skills combine patterns (e.g., start with task-based, add workflow for complex operations).
Delete this entire "Structuring This Skill" section when done - it's just guidance.]
## [TODO: Replace with the first main section based on chosen structure]
[TODO: Add content here. See examples in existing skills:
- Code samples for technical skills
- Decision trees for complex workflows
- Concrete examples with realistic user requests
- References to scripts/templates/references as needed]
## Resources
This skill includes example resource directories that demonstrate how to organize different types of bundled resources:
### scripts/
Executable code (Python/Bash/etc.) that can be run directly to perform specific operations.
**Examples from other skills:**
- PDF skill: `fill_fillable_fields.py`, `extract_form_field_info.py` - utilities for PDF manipulation
- DOCX skill: `document.py`, `utilities.py` - Python modules for document processing
**Appropriate for:** Python scripts, shell scripts, or any executable code that performs automation, data processing, or specific operations.
**Note:** Scripts may be executed without loading into context, but can still be read by Claude for patching or environment adjustments.
### references/
Documentation and reference material intended to be loaded into context to inform Claude's process and thinking.
**Examples from other skills:**
- Product management: `communication.md`, `context_building.md` - detailed workflow guides
- BigQuery: API reference documentation and query examples
- Finance: Schema documentation, company policies
**Appropriate for:** In-depth documentation, API references, database schemas, comprehensive guides, or any detailed information that Claude should reference while working.
### assets/
Files not intended to be loaded into context, but rather used within the output Claude produces.
**Examples from other skills:**
- Brand styling: PowerPoint template files (.pptx), logo files
- Frontend builder: HTML/React boilerplate project directories
- Typography: Font files (.ttf, .woff2)
**Appropriate for:** Templates, boilerplate code, document templates, images, icons, fonts, or any files meant to be copied or used in the final output.
---
**Any unneeded directories can be deleted.** Not every skill requires all three types of resources.
"""
EXAMPLE_SCRIPT = '''#!/usr/bin/env python3
"""
Example helper script for {skill_name}
This is a placeholder script that can be executed directly.
Replace with actual implementation or delete if not needed.
Example real scripts from other skills:
- pdf/scripts/fill_fillable_fields.py - Fills PDF form fields
- pdf/scripts/convert_pdf_to_images.py - Converts PDF pages to images
"""
def main():
print("This is an example script for {skill_name}")
# TODO: Add actual script logic here
# This could be data processing, file conversion, API calls, etc.
if __name__ == "__main__":
main()
'''
EXAMPLE_REFERENCE = """# Reference Documentation for {skill_title}
This is a placeholder for detailed reference documentation.
Replace with actual reference content or delete if not needed.
Example real reference docs from other skills:
- product-management/references/communication.md - Comprehensive guide for status updates
- product-management/references/context_building.md - Deep-dive on gathering context
- bigquery/references/ - API references and query examples
## When Reference Docs Are Useful
Reference docs are ideal for:
- Comprehensive API documentation
- Detailed workflow guides
- Complex multi-step processes
- Information too lengthy for main SKILL.md
- Content that's only needed for specific use cases
## Structure Suggestions
### API Reference Example
- Overview
- Authentication
- Endpoints with examples
- Error codes
- Rate limits
### Workflow Guide Example
- Prerequisites
- Step-by-step instructions
- Common patterns
- Troubleshooting
- Best practices
"""
EXAMPLE_ASSET = """# Example Asset File
This placeholder represents where asset files would be stored.
Replace with actual asset files (templates, images, fonts, etc.) or delete if not needed.
Asset files are NOT intended to be loaded into context, but rather used within
the output Claude produces.
Example asset files from other skills:
- Brand guidelines: logo.png, slides_template.pptx
- Frontend builder: hello-world/ directory with HTML/React boilerplate
- Typography: custom-font.ttf, font-family.woff2
- Data: sample_data.csv, test_dataset.json
## Common Asset Types
- Templates: .pptx, .docx, boilerplate directories
- Images: .png, .jpg, .svg, .gif
- Fonts: .ttf, .otf, .woff, .woff2
- Boilerplate code: Project directories, starter files
- Icons: .ico, .svg
- Data files: .csv, .json, .xml, .yaml
Note: This is a text placeholder. Actual assets can be any file type.
"""
def title_case_skill_name(skill_name):
"""Convert hyphenated skill name to Title Case for display."""
return " ".join(word.capitalize() for word in skill_name.split("-"))
def init_skill(skill_name, path):
"""
Initialize a new skill directory with template SKILL.md.
Args:
skill_name: Name of the skill
path: Path where the skill directory should be created
Returns:
Path to created skill directory, or None if error
"""
# Determine skill directory path
skill_dir = Path(path).resolve() / skill_name
# Check if directory already exists
if skill_dir.exists():
print(f"❌ Error: Skill directory already exists: {skill_dir}")
return None
# Create skill directory
try:
skill_dir.mkdir(parents=True, exist_ok=False)
print(f"✅ Created skill directory: {skill_dir}")
except Exception as e:
print(f"❌ Error creating directory: {e}")
return None
# Create SKILL.md from template
skill_title = title_case_skill_name(skill_name)
skill_content = SKILL_TEMPLATE.format(skill_name=skill_name, skill_title=skill_title)
skill_md_path = skill_dir / "SKILL.md"
try:
skill_md_path.write_text(skill_content)
print("✅ Created SKILL.md")
except Exception as e:
print(f"❌ Error creating SKILL.md: {e}")
return None
# Create resource directories with example files
try:
# Create scripts/ directory with example script
scripts_dir = skill_dir / "scripts"
scripts_dir.mkdir(exist_ok=True)
example_script = scripts_dir / "example.py"
example_script.write_text(EXAMPLE_SCRIPT.format(skill_name=skill_name))
example_script.chmod(0o755)
print("✅ Created scripts/example.py")
# Create references/ directory with example reference doc
references_dir = skill_dir / "references"
references_dir.mkdir(exist_ok=True)
example_reference = references_dir / "api_reference.md"
example_reference.write_text(EXAMPLE_REFERENCE.format(skill_title=skill_title))
print("✅ Created references/api_reference.md")
# Create assets/ directory with example asset placeholder
assets_dir = skill_dir / "assets"
assets_dir.mkdir(exist_ok=True)
example_asset = assets_dir / "example_asset.txt"
example_asset.write_text(EXAMPLE_ASSET)
print("✅ Created assets/example_asset.txt")
except Exception as e:
print(f"❌ Error creating resource directories: {e}")
return None
# Print next steps
print(f"\n✅ Skill '{skill_name}' initialized successfully at {skill_dir}")
print("\nNext steps:")
print("1. Edit SKILL.md to complete the TODO items and update the description")
print("2. Customize or delete the example files in scripts/, references/, and assets/")
print("3. Run the validator when ready to check the skill structure")
return skill_dir
def main():
if len(sys.argv) < 4 or sys.argv[2] != "--path":
print("Usage: init_skill.py <skill-name> --path <path>")
print("\nSkill name requirements:")
print(" - Hyphen-case identifier (e.g., 'data-analyzer')")
print(" - Lowercase letters, digits, and hyphens only")
print(" - Max 40 characters")
print(" - Must match directory name exactly")
print("\nExamples:")
print(" init_skill.py my-new-skill --path skills/public")
print(" init_skill.py my-api-helper --path skills/private")
print(" init_skill.py custom-skill --path /custom/location")
sys.exit(1)
skill_name = sys.argv[1]
path = sys.argv[3]
print(f"🚀 Initializing skill: {skill_name}")
print(f" Location: {path}")
print()
result = init_skill(skill_name, path)
if result:
sys.exit(0)
else:
sys.exit(1)
if __name__ == "__main__":
main()


@ -0,0 +1,110 @@
#!/usr/bin/env python3
"""
Skill Packager - Creates a distributable .skill file of a skill folder
Usage:
python utils/package_skill.py <path/to/skill-folder> [output-directory]
Example:
python utils/package_skill.py skills/public/my-skill
python utils/package_skill.py skills/public/my-skill ./dist
"""
import sys
import zipfile
from pathlib import Path
from quick_validate import validate_skill
def package_skill(skill_path, output_dir=None):
"""
Package a skill folder into a .skill file.
Args:
skill_path: Path to the skill folder
output_dir: Optional output directory for the .skill file (defaults to current directory)
Returns:
Path to the created .skill file, or None if error
"""
skill_path = Path(skill_path).resolve()
# Validate skill folder exists
if not skill_path.exists():
print(f"❌ Error: Skill folder not found: {skill_path}")
return None
if not skill_path.is_dir():
print(f"❌ Error: Path is not a directory: {skill_path}")
return None
# Validate SKILL.md exists
skill_md = skill_path / "SKILL.md"
if not skill_md.exists():
print(f"❌ Error: SKILL.md not found in {skill_path}")
return None
# Run validation before packaging
print("🔍 Validating skill...")
valid, message = validate_skill(skill_path)
if not valid:
print(f"❌ Validation failed: {message}")
print(" Please fix the validation errors before packaging.")
return None
print(f"{message}\n")
# Determine output location
skill_name = skill_path.name
if output_dir:
output_path = Path(output_dir).resolve()
output_path.mkdir(parents=True, exist_ok=True)
else:
output_path = Path.cwd()
skill_filename = output_path / f"{skill_name}.skill"
# Create the .skill file (zip format)
try:
with zipfile.ZipFile(skill_filename, "w", zipfile.ZIP_DEFLATED) as zipf:
# Walk through the skill directory
for file_path in skill_path.rglob("*"):
if file_path.is_file():
# Calculate the relative path within the zip
arcname = file_path.relative_to(skill_path.parent)
zipf.write(file_path, arcname)
print(f" Added: {arcname}")
print(f"\n✅ Successfully packaged skill to: {skill_filename}")
return skill_filename
except Exception as e:
print(f"❌ Error creating .skill file: {e}")
return None
def main():
if len(sys.argv) < 2:
print("Usage: python utils/package_skill.py <path/to/skill-folder> [output-directory]")
print("\nExample:")
print(" python utils/package_skill.py skills/public/my-skill")
print(" python utils/package_skill.py skills/public/my-skill ./dist")
sys.exit(1)
skill_path = sys.argv[1]
output_dir = sys.argv[2] if len(sys.argv) > 2 else None
print(f"📦 Packaging skill: {skill_path}")
if output_dir:
print(f" Output directory: {output_dir}")
print()
result = package_skill(skill_path, output_dir)
if result:
sys.exit(0)
else:
sys.exit(1)
if __name__ == "__main__":
main()


@ -0,0 +1,97 @@
#!/usr/bin/env python3
"""
Quick validation script for skills - minimal version
"""
import sys
import os
import re
import yaml
from pathlib import Path
def validate_skill(skill_path):
"""Basic validation of a skill"""
skill_path = Path(skill_path)
# Check SKILL.md exists
skill_md = skill_path / "SKILL.md"
if not skill_md.exists():
return False, "SKILL.md not found"
# Read and validate frontmatter
content = skill_md.read_text()
if not content.startswith("---"):
return False, "No YAML frontmatter found"
# Extract frontmatter
match = re.match(r"^---\n(.*?)\n---", content, re.DOTALL)
if not match:
return False, "Invalid frontmatter format"
frontmatter_text = match.group(1)
# Parse YAML frontmatter
try:
frontmatter = yaml.safe_load(frontmatter_text)
if not isinstance(frontmatter, dict):
return False, "Frontmatter must be a YAML dictionary"
except yaml.YAMLError as e:
return False, f"Invalid YAML in frontmatter: {e}"
# Define allowed properties
ALLOWED_PROPERTIES = {"name", "description", "license", "allowed-tools", "metadata"}
# Check for unexpected properties (excluding nested keys under metadata)
unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES
if unexpected_keys:
return False, (
f"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. "
f"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}"
)
# Check required fields
if "name" not in frontmatter:
return False, "Missing 'name' in frontmatter"
if "description" not in frontmatter:
return False, "Missing 'description' in frontmatter"
# Extract name for validation
name = frontmatter.get("name", "")
if not isinstance(name, str):
return False, f"Name must be a string, got {type(name).__name__}"
name = name.strip()
if name:
# Check naming convention (hyphen-case: lowercase with hyphens)
if not re.match(r"^[a-z0-9-]+$", name):
return False, f"Name '{name}' should be hyphen-case (lowercase letters, digits, and hyphens only)"
if name.startswith("-") or name.endswith("-") or "--" in name:
return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"
# Check name length (max 64 characters per spec)
if len(name) > 64:
return False, f"Name is too long ({len(name)} characters). Maximum is 64 characters."
# Extract and validate description
description = frontmatter.get("description", "")
if not isinstance(description, str):
return False, f"Description must be a string, got {type(description).__name__}"
description = description.strip()
if description:
# Check for angle brackets
if "<" in description or ">" in description:
return False, "Description cannot contain angle brackets (< or >)"
# Check description length (max 1024 characters per spec)
if len(description) > 1024:
return False, f"Description is too long ({len(description)} characters). Maximum is 1024 characters."
return True, "Skill is valid!"
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: python quick_validate.py <skill_directory>")
sys.exit(1)
valid, message = validate_skill(sys.argv[1])
print(message)
sys.exit(0 if valid else 1)

File diff suppressed because it is too large


@ -0,0 +1,125 @@
---
name: vercel-react-best-practices
description: React and Next.js performance optimization guidelines from Vercel Engineering. This skill should be used when writing, reviewing, or refactoring React/Next.js code to ensure optimal performance patterns. Triggers on tasks involving React components, Next.js pages, data fetching, bundle optimization, or performance improvements.
license: MIT
metadata:
author: vercel
version: "1.0.0"
---
# Vercel React Best Practices
Comprehensive performance optimization guide for React and Next.js applications, maintained by Vercel. Contains 45 rules across 8 categories, prioritized by impact to guide automated refactoring and code generation.
## When to Apply
Reference these guidelines when:
- Writing new React components or Next.js pages
- Implementing data fetching (client or server-side)
- Reviewing code for performance issues
- Refactoring existing React/Next.js code
- Optimizing bundle size or load times
## Rule Categories by Priority
| Priority | Category | Impact | Prefix |
|----------|----------|--------|--------|
| 1 | Eliminating Waterfalls | CRITICAL | `async-` |
| 2 | Bundle Size Optimization | CRITICAL | `bundle-` |
| 3 | Server-Side Performance | HIGH | `server-` |
| 4 | Client-Side Data Fetching | MEDIUM-HIGH | `client-` |
| 5 | Re-render Optimization | MEDIUM | `rerender-` |
| 6 | Rendering Performance | MEDIUM | `rendering-` |
| 7 | JavaScript Performance | LOW-MEDIUM | `js-` |
| 8 | Advanced Patterns | LOW | `advanced-` |
## Quick Reference
### 1. Eliminating Waterfalls (CRITICAL)
- `async-defer-await` - Move await into branches where actually used
- `async-parallel` - Use Promise.all() for independent operations
- `async-dependencies` - Use better-all for partial dependencies
- `async-api-routes` - Start promises early, await late in API routes
- `async-suspense-boundaries` - Use Suspense to stream content
### 2. Bundle Size Optimization (CRITICAL)
- `bundle-barrel-imports` - Import directly, avoid barrel files
- `bundle-dynamic-imports` - Use next/dynamic for heavy components
- `bundle-defer-third-party` - Load analytics/logging after hydration
- `bundle-conditional` - Load modules only when feature is activated
- `bundle-preload` - Preload on hover/focus for perceived speed
### 3. Server-Side Performance (HIGH)
- `server-cache-react` - Use React.cache() for per-request deduplication
- `server-cache-lru` - Use LRU cache for cross-request caching
- `server-serialization` - Minimize data passed to client components
- `server-parallel-fetching` - Restructure components to parallelize fetches
- `server-after-nonblocking` - Use after() for non-blocking operations
### 4. Client-Side Data Fetching (MEDIUM-HIGH)
- `client-swr-dedup` - Use SWR for automatic request deduplication
- `client-event-listeners` - Deduplicate global event listeners
### 5. Re-render Optimization (MEDIUM)
- `rerender-defer-reads` - Don't subscribe to state only used in callbacks
- `rerender-memo` - Extract expensive work into memoized components
- `rerender-dependencies` - Use primitive dependencies in effects
- `rerender-derived-state` - Subscribe to derived booleans, not raw values
- `rerender-functional-setstate` - Use functional setState for stable callbacks
- `rerender-lazy-state-init` - Pass function to useState for expensive values
- `rerender-transitions` - Use startTransition for non-urgent updates
### 6. Rendering Performance (MEDIUM)
- `rendering-animate-svg-wrapper` - Animate div wrapper, not SVG element
- `rendering-content-visibility` - Use content-visibility for long lists
- `rendering-hoist-jsx` - Extract static JSX outside components
- `rendering-svg-precision` - Reduce SVG coordinate precision
- `rendering-hydration-no-flicker` - Use inline script for client-only data
- `rendering-activity` - Use Activity component for show/hide
- `rendering-conditional-render` - Use ternary, not && for conditionals
### 7. JavaScript Performance (LOW-MEDIUM)
- `js-batch-dom-css` - Group CSS changes via classes or cssText
- `js-index-maps` - Build Map for repeated lookups
- `js-cache-property-access` - Cache object properties in loops
- `js-cache-function-results` - Cache function results in module-level Map
- `js-cache-storage` - Cache localStorage/sessionStorage reads
- `js-combine-iterations` - Combine multiple filter/map into one loop
- `js-length-check-first` - Check array length before expensive comparison
- `js-early-exit` - Return early from functions
- `js-hoist-regexp` - Hoist RegExp creation outside loops
- `js-min-max-loop` - Use loop for min/max instead of sort
- `js-set-map-lookups` - Use Set/Map for O(1) lookups
- `js-tosorted-immutable` - Use toSorted() for immutability
### 8. Advanced Patterns (LOW)
- `advanced-event-handler-refs` - Store event handlers in refs
- `advanced-use-latest` - useLatest for stable callback refs
## How to Use
Read individual rule files for detailed explanations and code examples:
```
rules/async-parallel.md
rules/bundle-barrel-imports.md
rules/_sections.md
```
Each rule file contains:
- Brief explanation of why it matters
- Incorrect code example with explanation
- Correct code example with explanation
- Additional context and references
## Full Compiled Document
For the complete guide with all rules expanded: `AGENTS.md`


@ -0,0 +1,55 @@
---
title: Store Event Handlers in Refs
impact: LOW
impactDescription: stable subscriptions
tags: advanced, hooks, refs, event-handlers, optimization
---
## Store Event Handlers in Refs
Store callbacks in refs when used in effects that shouldn't re-subscribe on callback changes.
**Incorrect (re-subscribes on every render):**
```tsx
function useWindowEvent(event: string, handler: (e: Event) => void) {
useEffect(() => {
window.addEventListener(event, handler)
return () => window.removeEventListener(event, handler)
}, [event, handler])
}
```
**Correct (stable subscription):**
```tsx
function useWindowEvent(event: string, handler: (e: Event) => void) {
const handlerRef = useRef(handler)
useEffect(() => {
handlerRef.current = handler
}, [handler])
useEffect(() => {
    const listener = (e: Event) => handlerRef.current(e)
window.addEventListener(event, listener)
return () => window.removeEventListener(event, listener)
}, [event])
}
```
**Alternative: use `useEffectEvent` if you're on the latest React:**
```tsx
import { useEffectEvent } from 'react'
function useWindowEvent(event: string, handler: (e: Event) => void) {
const onEvent = useEffectEvent(handler)
useEffect(() => {
window.addEventListener(event, onEvent)
return () => window.removeEventListener(event, onEvent)
}, [event])
}
```
`useEffectEvent` provides a cleaner API for the same pattern: it creates a stable function reference that always calls the latest version of the handler.


@ -0,0 +1,49 @@
---
title: useLatest for Stable Callback Refs
impact: LOW
impactDescription: prevents effect re-runs
tags: advanced, hooks, useLatest, refs, optimization
---
## useLatest for Stable Callback Refs
Access latest values in callbacks without adding them to dependency arrays. Prevents effect re-runs while avoiding stale closures.
**Implementation:**
```typescript
function useLatest<T>(value: T) {
const ref = useRef(value)
useLayoutEffect(() => {
ref.current = value
}, [value])
return ref
}
```
**Incorrect (effect re-runs on every callback change):**
```tsx
function SearchInput({ onSearch }: { onSearch: (q: string) => void }) {
const [query, setQuery] = useState('')
useEffect(() => {
const timeout = setTimeout(() => onSearch(query), 300)
return () => clearTimeout(timeout)
}, [query, onSearch])
}
```
**Correct (stable effect, fresh callback):**
```tsx
function SearchInput({ onSearch }: { onSearch: (q: string) => void }) {
const [query, setQuery] = useState('')
const onSearchRef = useLatest(onSearch)
useEffect(() => {
const timeout = setTimeout(() => onSearchRef.current(query), 300)
return () => clearTimeout(timeout)
}, [query])
}
```


@ -0,0 +1,38 @@
---
title: Prevent Waterfall Chains in API Routes
impact: CRITICAL
impactDescription: 2-10× improvement
tags: api-routes, server-actions, waterfalls, parallelization
---
## Prevent Waterfall Chains in API Routes
In API routes and Server Actions, start independent operations immediately, even if you don't await them yet.
**Incorrect (config waits for auth, data waits for both):**
```typescript
export async function GET(request: Request) {
const session = await auth()
const config = await fetchConfig()
const data = await fetchData(session.user.id)
return Response.json({ data, config })
}
```
**Correct (auth and config start immediately):**
```typescript
export async function GET(request: Request) {
const sessionPromise = auth()
const configPromise = fetchConfig()
const session = await sessionPromise
const [config, data] = await Promise.all([
configPromise,
fetchData(session.user.id)
])
return Response.json({ data, config })
}
```
For operations with more complex dependency chains, use `better-all` to automatically maximize parallelism (see Dependency-Based Parallelization).


@ -0,0 +1,80 @@
---
title: Defer Await Until Needed
impact: HIGH
impactDescription: avoids blocking unused code paths
tags: async, await, conditional, optimization
---
## Defer Await Until Needed
Move `await` operations into the branches where they're actually used to avoid blocking code paths that don't need them.
**Incorrect (blocks both branches):**
```typescript
async function handleRequest(userId: string, skipProcessing: boolean) {
const userData = await fetchUserData(userId)
if (skipProcessing) {
// Returns immediately but still waited for userData
return { skipped: true }
}
// Only this branch uses userData
return processUserData(userData)
}
```
**Correct (only blocks when needed):**
```typescript
async function handleRequest(userId: string, skipProcessing: boolean) {
if (skipProcessing) {
// Returns immediately without waiting
return { skipped: true }
}
// Fetch only when needed
const userData = await fetchUserData(userId)
return processUserData(userData)
}
```
**Another example (early return optimization):**
```typescript
// Incorrect: always fetches permissions
async function updateResource(resourceId: string, userId: string) {
const permissions = await fetchPermissions(userId)
const resource = await getResource(resourceId)
if (!resource) {
return { error: 'Not found' }
}
if (!permissions.canEdit) {
return { error: 'Forbidden' }
}
return await updateResourceData(resource, permissions)
}
// Correct: fetches only when needed
async function updateResource(resourceId: string, userId: string) {
const resource = await getResource(resourceId)
if (!resource) {
return { error: 'Not found' }
}
const permissions = await fetchPermissions(userId)
if (!permissions.canEdit) {
return { error: 'Forbidden' }
}
return await updateResourceData(resource, permissions)
}
```
This optimization is especially valuable when the skipped branch is frequently taken, or when the deferred operation is expensive.


@ -0,0 +1,36 @@
---
title: Dependency-Based Parallelization
impact: CRITICAL
impactDescription: 2-10× improvement
tags: async, parallelization, dependencies, better-all
---
## Dependency-Based Parallelization
For operations with partial dependencies, use `better-all` to maximize parallelism. It automatically starts each task at the earliest possible moment.
**Incorrect (profile waits for config unnecessarily):**
```typescript
const [user, config] = await Promise.all([
fetchUser(),
fetchConfig()
])
const profile = await fetchProfile(user.id)
```
**Correct (config and profile run in parallel):**
```typescript
import { all } from 'better-all'
const { user, config, profile } = await all({
async user() { return fetchUser() },
async config() { return fetchConfig() },
async profile() {
return fetchProfile((await this.$.user).id)
}
})
```
Reference: [https://github.com/shuding/better-all](https://github.com/shuding/better-all)


@ -0,0 +1,28 @@
---
title: Promise.all() for Independent Operations
impact: CRITICAL
impactDescription: 2-10× improvement
tags: async, parallelization, promises, waterfalls
---
## Promise.all() for Independent Operations
When async operations have no interdependencies, execute them concurrently using `Promise.all()`.
**Incorrect (sequential execution, 3 round trips):**
```typescript
const user = await fetchUser()
const posts = await fetchPosts()
const comments = await fetchComments()
```
**Correct (parallel execution, 1 round trip):**
```typescript
const [user, posts, comments] = await Promise.all([
fetchUser(),
fetchPosts(),
fetchComments()
])
```


@ -0,0 +1,99 @@
---
title: Strategic Suspense Boundaries
impact: HIGH
impactDescription: faster initial paint
tags: async, suspense, streaming, layout-shift
---
## Strategic Suspense Boundaries
Instead of awaiting data in async components before returning JSX, use Suspense boundaries to show the wrapper UI faster while data loads.
**Incorrect (wrapper blocked by data fetching):**
```tsx
async function Page() {
const data = await fetchData() // Blocks entire page
return (
<div>
<div>Sidebar</div>
<div>Header</div>
<div>
<DataDisplay data={data} />
</div>
<div>Footer</div>
</div>
)
}
```
The entire layout waits for data even though only the middle section needs it.
**Correct (wrapper shows immediately, data streams in):**
```tsx
function Page() {
return (
<div>
<div>Sidebar</div>
<div>Header</div>
<div>
<Suspense fallback={<Skeleton />}>
<DataDisplay />
</Suspense>
</div>
<div>Footer</div>
</div>
)
}
async function DataDisplay() {
const data = await fetchData() // Only blocks this component
return <div>{data.content}</div>
}
```
Sidebar, Header, and Footer render immediately. Only DataDisplay waits for data.
**Alternative (share promise across components):**
```tsx
function Page() {
// Start fetch immediately, but don't await
const dataPromise = fetchData()
return (
<div>
<div>Sidebar</div>
<div>Header</div>
<Suspense fallback={<Skeleton />}>
<DataDisplay dataPromise={dataPromise} />
<DataSummary dataPromise={dataPromise} />
</Suspense>
<div>Footer</div>
</div>
)
}
function DataDisplay({ dataPromise }: { dataPromise: Promise<Data> }) {
const data = use(dataPromise) // Unwraps the promise
return <div>{data.content}</div>
}
function DataSummary({ dataPromise }: { dataPromise: Promise<Data> }) {
const data = use(dataPromise) // Reuses the same promise
return <div>{data.summary}</div>
}
```
Both components share the same promise, so only one fetch occurs. Layout renders immediately while both components wait together.
**When NOT to use this pattern:**
- Critical data needed for layout decisions (affects positioning)
- SEO-critical content above the fold
- Small, fast queries where suspense overhead isn't worth it
- When you want to avoid layout shift (loading → content jump)
**Trade-off:** Faster initial paint vs potential layout shift. Choose based on your UX priorities.


@ -0,0 +1,59 @@
---
title: Avoid Barrel File Imports
impact: CRITICAL
impactDescription: 200-800ms import cost, slow builds
tags: bundle, imports, tree-shaking, barrel-files, performance
---
## Avoid Barrel File Imports
Import directly from source files instead of barrel files to avoid loading thousands of unused modules. **Barrel files** are entry points that re-export multiple modules (e.g., `index.js` that does `export * from './module'`).
Popular icon and component libraries can have **up to 10,000 re-exports** in their entry file. For many React packages, **it takes 200-800ms just to import them**, affecting both development speed and production cold starts.
**Why tree-shaking doesn't help:** When a library is marked as external (not bundled), the bundler can't optimize it. If you bundle it to enable tree-shaking, builds become substantially slower analyzing the entire module graph.
**Incorrect (imports entire library):**
```tsx
import { Check, X, Menu } from 'lucide-react'
// Loads 1,583 modules, takes ~2.8s extra in dev
// Runtime cost: 200-800ms on every cold start
import { Button, TextField } from '@mui/material'
// Loads 2,225 modules, takes ~4.2s extra in dev
```
**Correct (imports only what you need):**
```tsx
import Check from 'lucide-react/dist/esm/icons/check'
import X from 'lucide-react/dist/esm/icons/x'
import Menu from 'lucide-react/dist/esm/icons/menu'
// Loads only 3 modules (~2KB vs ~1MB)
import Button from '@mui/material/Button'
import TextField from '@mui/material/TextField'
// Loads only what you use
```
**Alternative (Next.js 13.5+):**
```js
// next.config.js - use optimizePackageImports
module.exports = {
experimental: {
optimizePackageImports: ['lucide-react', '@mui/material']
}
}
// Then you can keep the ergonomic barrel imports:
import { Check, X, Menu } from 'lucide-react'
// Automatically transformed to direct imports at build time
```
Direct imports provide 15-70% faster dev boot, 28% faster builds, 40% faster cold starts, and significantly faster HMR.
Libraries commonly affected: `lucide-react`, `@mui/material`, `@mui/icons-material`, `@tabler/icons-react`, `react-icons`, `@headlessui/react`, `@radix-ui/react-*`, `lodash`, `ramda`, `date-fns`, `rxjs`, `react-use`.
Reference: [How we optimized package imports in Next.js](https://vercel.com/blog/how-we-optimized-package-imports-in-next-js)

View File

@ -0,0 +1,31 @@
---
title: Conditional Module Loading
impact: HIGH
impactDescription: loads large data only when needed
tags: bundle, conditional-loading, lazy-loading
---
## Conditional Module Loading
Load large data or modules only when a feature is activated.
**Example (lazy-load animation frames):**
```tsx
function AnimationPlayer({ enabled, setEnabled }: { enabled: boolean; setEnabled: React.Dispatch<React.SetStateAction<boolean>> }) {
const [frames, setFrames] = useState<Frame[] | null>(null)
useEffect(() => {
if (enabled && !frames && typeof window !== 'undefined') {
import('./animation-frames.js')
.then(mod => setFrames(mod.frames))
.catch(() => setEnabled(false))
}
}, [enabled, frames, setEnabled])
if (!enabled) return null
if (!frames) return <Skeleton />
return <Canvas frames={frames} />
}
```
The `typeof window !== 'undefined'` check prevents bundling this module for SSR, optimizing server bundle size and build speed.

View File

@ -0,0 +1,49 @@
---
title: Defer Non-Critical Third-Party Libraries
impact: MEDIUM
impactDescription: loads after hydration
tags: bundle, third-party, analytics, defer
---
## Defer Non-Critical Third-Party Libraries
Analytics, logging, and error tracking don't block user interaction. Load them after hydration.
**Incorrect (blocks initial bundle):**
```tsx
import { Analytics } from '@vercel/analytics/react'
export default function RootLayout({ children }) {
return (
<html>
<body>
{children}
<Analytics />
</body>
</html>
)
}
```
**Correct (loads after hydration):**
```tsx
import dynamic from 'next/dynamic'
const Analytics = dynamic(
() => import('@vercel/analytics/react').then(m => m.Analytics),
{ ssr: false }
)
export default function RootLayout({ children }) {
return (
<html>
<body>
{children}
<Analytics />
</body>
</html>
)
}
```

View File

@ -0,0 +1,35 @@
---
title: Dynamic Imports for Heavy Components
impact: CRITICAL
impactDescription: directly affects TTI and LCP
tags: bundle, dynamic-import, code-splitting, next-dynamic
---
## Dynamic Imports for Heavy Components
Use `next/dynamic` to lazy-load large components not needed on initial render.
**Incorrect (Monaco bundles with main chunk ~300KB):**
```tsx
import { MonacoEditor } from './monaco-editor'
function CodePanel({ code }: { code: string }) {
return <MonacoEditor value={code} />
}
```
**Correct (Monaco loads on demand):**
```tsx
import dynamic from 'next/dynamic'
const MonacoEditor = dynamic(
() => import('./monaco-editor').then(m => m.MonacoEditor),
{ ssr: false }
)
function CodePanel({ code }: { code: string }) {
return <MonacoEditor value={code} />
}
```

View File

@ -0,0 +1,50 @@
---
title: Preload Based on User Intent
impact: MEDIUM
impactDescription: reduces perceived latency
tags: bundle, preload, user-intent, hover
---
## Preload Based on User Intent
Preload heavy bundles before they're needed to reduce perceived latency.
**Example (preload on hover/focus):**
```tsx
function EditorButton({ onClick }: { onClick: () => void }) {
const preload = () => {
if (typeof window !== 'undefined') {
void import('./monaco-editor')
}
}
return (
<button
onMouseEnter={preload}
onFocus={preload}
onClick={onClick}
>
Open Editor
</button>
)
}
```
**Example (preload when feature flag is enabled):**
```tsx
function FlagsProvider({ children, flags }: Props) {
useEffect(() => {
if (flags.editorEnabled && typeof window !== 'undefined') {
void import('./monaco-editor').then(mod => mod.init())
}
}, [flags.editorEnabled])
return <FlagsContext.Provider value={flags}>
{children}
</FlagsContext.Provider>
}
```
The `typeof window !== 'undefined'` check prevents bundling preloaded modules for SSR, optimizing server bundle size and build speed.

View File

@ -0,0 +1,74 @@
---
title: Deduplicate Global Event Listeners
impact: LOW
impactDescription: single listener for N components
tags: client, swr, event-listeners, subscription
---
## Deduplicate Global Event Listeners
Use `useSWRSubscription()` to share global event listeners across component instances.
**Incorrect (N instances = N listeners):**
```tsx
function useKeyboardShortcut(key: string, callback: () => void) {
useEffect(() => {
const handler = (e: KeyboardEvent) => {
if (e.metaKey && e.key === key) {
callback()
}
}
window.addEventListener('keydown', handler)
return () => window.removeEventListener('keydown', handler)
}, [key, callback])
}
```
When using the `useKeyboardShortcut` hook multiple times, each instance will register a new listener.
**Correct (N instances = 1 listener):**
```tsx
import useSWRSubscription from 'swr/subscription'
// Module-level Map to track callbacks per key
const keyCallbacks = new Map<string, Set<() => void>>()
function useKeyboardShortcut(key: string, callback: () => void) {
// Register this callback in the Map
useEffect(() => {
if (!keyCallbacks.has(key)) {
keyCallbacks.set(key, new Set())
}
keyCallbacks.get(key)!.add(callback)
return () => {
const set = keyCallbacks.get(key)
if (set) {
set.delete(callback)
if (set.size === 0) {
keyCallbacks.delete(key)
}
}
}
}, [key, callback])
useSWRSubscription('global-keydown', () => {
const handler = (e: KeyboardEvent) => {
if (e.metaKey && keyCallbacks.has(e.key)) {
keyCallbacks.get(e.key)!.forEach(cb => cb())
}
}
window.addEventListener('keydown', handler)
return () => window.removeEventListener('keydown', handler)
})
}
function Profile() {
// Multiple shortcuts will share the same listener
useKeyboardShortcut('p', () => { /* ... */ })
useKeyboardShortcut('k', () => { /* ... */ })
// ...
}
```

View File

@ -0,0 +1,71 @@
---
title: Version and Minimize localStorage Data
impact: MEDIUM
impactDescription: prevents schema conflicts, reduces storage size
tags: client, localStorage, storage, versioning, data-minimization
---
## Version and Minimize localStorage Data
Add version prefix to keys and store only needed fields. Prevents schema conflicts and accidental storage of sensitive data.
**Incorrect:**
```typescript
// No version, stores everything, no error handling
localStorage.setItem('userConfig', JSON.stringify(fullUserObject))
const data = localStorage.getItem('userConfig')
```
**Correct:**
```typescript
const VERSION = 'v2'
function saveConfig(config: { theme: string; language: string }) {
try {
localStorage.setItem(`userConfig:${VERSION}`, JSON.stringify(config))
} catch {
// Throws in incognito/private browsing, quota exceeded, or disabled
}
}
function loadConfig() {
try {
const data = localStorage.getItem(`userConfig:${VERSION}`)
return data ? JSON.parse(data) : null
} catch {
return null
}
}
// Migration from v1 to v2
function migrate() {
try {
const v1 = localStorage.getItem('userConfig:v1')
if (v1) {
const old = JSON.parse(v1)
saveConfig({ theme: old.darkMode ? 'dark' : 'light', language: old.lang })
localStorage.removeItem('userConfig:v1')
}
} catch {}
}
```
**Store minimal fields from server responses:**
```typescript
// User object has 20+ fields, only store what UI needs
function cachePrefs(user: FullUser) {
try {
localStorage.setItem('prefs:v1', JSON.stringify({
theme: user.preferences.theme,
notifications: user.preferences.notifications
}))
} catch {}
}
```
**Always wrap in try-catch:** `getItem()` and `setItem()` throw in incognito/private browsing (Safari, Firefox), when quota exceeded, or when disabled.
**Benefits:** Schema evolution via versioning, reduced storage size, prevents storing tokens/PII/internal flags.

View File

@ -0,0 +1,48 @@
---
title: Use Passive Event Listeners for Scrolling Performance
impact: MEDIUM
impactDescription: eliminates scroll delay caused by event listeners
tags: client, event-listeners, scrolling, performance, touch, wheel
---
## Use Passive Event Listeners for Scrolling Performance
Add `{ passive: true }` to touch and wheel event listeners to enable immediate scrolling. Browsers normally wait for listeners to finish to check if `preventDefault()` is called, causing scroll delay.
**Incorrect:**
```typescript
useEffect(() => {
const handleTouch = (e: TouchEvent) => console.log(e.touches[0].clientX)
const handleWheel = (e: WheelEvent) => console.log(e.deltaY)
document.addEventListener('touchstart', handleTouch)
document.addEventListener('wheel', handleWheel)
return () => {
document.removeEventListener('touchstart', handleTouch)
document.removeEventListener('wheel', handleWheel)
}
}, [])
```
**Correct:**
```typescript
useEffect(() => {
const handleTouch = (e: TouchEvent) => console.log(e.touches[0].clientX)
const handleWheel = (e: WheelEvent) => console.log(e.deltaY)
document.addEventListener('touchstart', handleTouch, { passive: true })
document.addEventListener('wheel', handleWheel, { passive: true })
return () => {
document.removeEventListener('touchstart', handleTouch)
document.removeEventListener('wheel', handleWheel)
}
}, [])
```
**Use passive when:** tracking/analytics, logging, any listener that doesn't call `preventDefault()`.
**Don't use passive when:** implementing custom swipe gestures, custom zoom controls, or any listener that needs `preventDefault()`.
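For contrast, a minimal sketch of a custom horizontal-swipe handler where passive must stay off because the listener calls `preventDefault()` (the 10px threshold and `containerRef` are illustrative assumptions):
```typescript
useEffect(() => {
  const el = containerRef.current
  if (!el) return
  let startX = 0
  const onTouchStart = (e: TouchEvent) => {
    startX = e.touches[0].clientX
  }
  const onTouchMove = (e: TouchEvent) => {
    const deltaX = e.touches[0].clientX - startX
    // Take over scrolling once a horizontal swipe is detected
    if (Math.abs(deltaX) > 10) e.preventDefault()
  }
  // touchstart never calls preventDefault(), so it can be passive
  el.addEventListener('touchstart', onTouchStart, { passive: true })
  // touchmove must be non-passive for preventDefault() to work
  el.addEventListener('touchmove', onTouchMove, { passive: false })
  return () => {
    el.removeEventListener('touchstart', onTouchStart)
    el.removeEventListener('touchmove', onTouchMove)
  }
}, [])
```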

View File

@ -0,0 +1,56 @@
---
title: Use SWR for Automatic Deduplication
impact: MEDIUM-HIGH
impactDescription: automatic deduplication
tags: client, swr, deduplication, data-fetching
---
## Use SWR for Automatic Deduplication
SWR enables request deduplication, caching, and revalidation across component instances.
**Incorrect (no deduplication, each instance fetches):**
```tsx
function UserList() {
const [users, setUsers] = useState([])
useEffect(() => {
fetch('/api/users')
.then(r => r.json())
.then(setUsers)
}, [])
}
```
**Correct (multiple instances share one request):**
```tsx
import useSWR from 'swr'
function UserList() {
const { data: users } = useSWR('/api/users', fetcher)
}
```
**For immutable data:**
```tsx
import useSWRImmutable from 'swr/immutable'
function StaticContent() {
const { data } = useSWRImmutable('/api/config', fetcher)
}
```
**For mutations:**
```tsx
import useSWRMutation from 'swr/mutation'
function UpdateButton() {
const { trigger } = useSWRMutation('/api/user', updateUser)
return <button onClick={() => trigger()}>Update</button>
}
```
Reference: [https://swr.vercel.app](https://swr.vercel.app)

View File

@ -0,0 +1,57 @@
---
title: Batch DOM CSS Changes
impact: MEDIUM
impactDescription: reduces reflows/repaints
tags: javascript, dom, css, performance, reflow
---
## Batch DOM CSS Changes
Avoid interleaving style writes with layout reads. When you read a layout property (like `offsetWidth`, `getBoundingClientRect()`, or `getComputedStyle()`) between style changes, the browser is forced to trigger a synchronous reflow.
**Incorrect (interleaved reads and writes force reflows):**
```typescript
function updateElementStyles(element: HTMLElement) {
element.style.width = '100px'
const width = element.offsetWidth // Forces reflow
element.style.height = '200px'
const height = element.offsetHeight // Forces another reflow
}
```
**Correct (batch writes, then read once):**
```typescript
function updateElementStyles(element: HTMLElement) {
// Batch all writes together
element.style.width = '100px'
element.style.height = '200px'
element.style.backgroundColor = 'blue'
element.style.border = '1px solid black'
// Read after all writes are done (single reflow)
const { width, height } = element.getBoundingClientRect()
}
```
**Better: use CSS classes**
```css
.highlighted-box {
width: 100px;
height: 200px;
background-color: blue;
border: 1px solid black;
}
```
```typescript
function updateElementStyles(element: HTMLElement) {
element.classList.add('highlighted-box')
const { width, height } = element.getBoundingClientRect()
}
```
Prefer CSS classes over inline styles when possible. CSS files are cached by the browser, and classes provide better separation of concerns and are easier to maintain.

View File

@ -0,0 +1,80 @@
---
title: Cache Repeated Function Calls
impact: MEDIUM
impactDescription: avoid redundant computation
tags: javascript, cache, memoization, performance
---
## Cache Repeated Function Calls
Use a module-level Map to cache function results when the same function is called repeatedly with the same inputs during render.
**Incorrect (redundant computation):**
```typescript
function ProjectList({ projects }: { projects: Project[] }) {
return (
<div>
{projects.map(project => {
// slugify() called 100+ times for same project names
const slug = slugify(project.name)
return <ProjectCard key={project.id} slug={slug} />
})}
</div>
)
}
```
**Correct (cached results):**
```typescript
// Module-level cache
const slugifyCache = new Map<string, string>()
function cachedSlugify(text: string): string {
if (slugifyCache.has(text)) {
return slugifyCache.get(text)!
}
const result = slugify(text)
slugifyCache.set(text, result)
return result
}
function ProjectList({ projects }: { projects: Project[] }) {
return (
<div>
{projects.map(project => {
// Computed only once per unique project name
const slug = cachedSlugify(project.name)
return <ProjectCard key={project.id} slug={slug} />
})}
</div>
)
}
```
**Simpler pattern for single-value functions:**
```typescript
let isLoggedInCache: boolean | null = null
function isLoggedIn(): boolean {
if (isLoggedInCache !== null) {
return isLoggedInCache
}
isLoggedInCache = document.cookie.includes('auth=')
return isLoggedInCache
}
// Clear cache when auth changes
function onAuthChange() {
isLoggedInCache = null
}
```
Use a Map (not a hook) so it works everywhere: utilities, event handlers, not just React components.
Reference: [How we made the Vercel Dashboard twice as fast](https://vercel.com/blog/how-we-made-the-vercel-dashboard-twice-as-fast)

View File

@ -0,0 +1,28 @@
---
title: Cache Property Access in Loops
impact: LOW-MEDIUM
impactDescription: reduces lookups
tags: javascript, loops, optimization, caching
---
## Cache Property Access in Loops
Cache object property lookups in hot paths.
**Incorrect (3 lookups × N iterations):**
```typescript
for (let i = 0; i < arr.length; i++) {
process(obj.config.settings.value)
}
```
**Correct (1 lookup total):**
```typescript
const value = obj.config.settings.value
const len = arr.length
for (let i = 0; i < len; i++) {
process(value)
}
```

View File

@ -0,0 +1,70 @@
---
title: Cache Storage API Calls
impact: LOW-MEDIUM
impactDescription: reduces expensive I/O
tags: javascript, localStorage, storage, caching, performance
---
## Cache Storage API Calls
`localStorage`, `sessionStorage`, and `document.cookie` are synchronous and expensive. Cache reads in memory.
**Incorrect (reads storage on every call):**
```typescript
function getTheme() {
return localStorage.getItem('theme') ?? 'light'
}
// Called 10 times = 10 storage reads
```
**Correct (Map cache):**
```typescript
const storageCache = new Map<string, string | null>()
function getLocalStorage(key: string) {
if (!storageCache.has(key)) {
storageCache.set(key, localStorage.getItem(key))
}
return storageCache.get(key)
}
function setLocalStorage(key: string, value: string) {
localStorage.setItem(key, value)
storageCache.set(key, value) // keep cache in sync
}
```
Use a Map (not a hook) so it works everywhere: utilities, event handlers, not just React components.
**Cookie caching:**
```typescript
let cookieCache: Record<string, string> | null = null
function getCookie(name: string) {
if (!cookieCache) {
cookieCache = Object.fromEntries(
document.cookie.split('; ').map(c => c.split('='))
)
}
return cookieCache[name]
}
```
**Important (invalidate on external changes):**
If storage can change externally (another tab, server-set cookies), invalidate cache:
```typescript
window.addEventListener('storage', (e) => {
if (e.key) storageCache.delete(e.key)
})
document.addEventListener('visibilitychange', () => {
if (document.visibilityState === 'visible') {
storageCache.clear()
}
})
```

View File

@ -0,0 +1,32 @@
---
title: Combine Multiple Array Iterations
impact: LOW-MEDIUM
impactDescription: reduces iterations
tags: javascript, arrays, loops, performance
---
## Combine Multiple Array Iterations
Multiple `.filter()` or `.map()` calls iterate the array multiple times. Combine into one loop.
**Incorrect (3 iterations):**
```typescript
const admins = users.filter(u => u.isAdmin)
const testers = users.filter(u => u.isTester)
const inactive = users.filter(u => !u.isActive)
```
**Correct (1 iteration):**
```typescript
const admins: User[] = []
const testers: User[] = []
const inactive: User[] = []
for (const user of users) {
if (user.isAdmin) admins.push(user)
if (user.isTester) testers.push(user)
if (!user.isActive) inactive.push(user)
}
```

View File

@ -0,0 +1,50 @@
---
title: Early Return from Functions
impact: LOW-MEDIUM
impactDescription: avoids unnecessary computation
tags: javascript, functions, optimization, early-return
---
## Early Return from Functions
Return early once the result is determined, skipping unnecessary processing.
**Incorrect (processes all items even after finding answer):**
```typescript
function validateUsers(users: User[]) {
let hasError = false
let errorMessage = ''
for (const user of users) {
if (!user.email) {
hasError = true
errorMessage = 'Email required'
}
if (!user.name) {
hasError = true
errorMessage = 'Name required'
}
// Continues checking all users even after error found
}
return hasError ? { valid: false, error: errorMessage } : { valid: true }
}
```
**Correct (returns immediately on first error):**
```typescript
function validateUsers(users: User[]) {
for (const user of users) {
if (!user.email) {
return { valid: false, error: 'Email required' }
}
if (!user.name) {
return { valid: false, error: 'Name required' }
}
}
return { valid: true }
}
```

View File

@ -0,0 +1,45 @@
---
title: Hoist RegExp Creation
impact: LOW-MEDIUM
impactDescription: avoids recreation
tags: javascript, regexp, optimization, memoization
---
## Hoist RegExp Creation
Don't create RegExp inside render. Hoist to module scope or memoize with `useMemo()`.
**Incorrect (new RegExp every render):**
```tsx
function Highlighter({ text, query }: Props) {
const regex = new RegExp(`(${query})`, 'gi')
const parts = text.split(regex)
return <>{parts.map((part, i) => ...)}</>
}
```
**Correct (memoize or hoist):**
```tsx
// Static pattern: hoist to module scope
const EMAIL_REGEX = /^[^\s@]+@[^\s@]+\.[^\s@]+$/
function Highlighter({ text, query }: Props) {
const regex = useMemo(
() => new RegExp(`(${escapeRegex(query)})`, 'gi'),
[query]
)
const parts = text.split(regex)
return <>{parts.map((part, i) => ...)}</>
}
```
**Warning (global regex has mutable state):**
Global regex (`/g`) has mutable `lastIndex` state:
```typescript
const regex = /foo/g
regex.test('foo') // true, lastIndex = 3
regex.test('foo') // false, lastIndex = 0
```
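If a global regex must be reused, one mitigation (a sketch) is to reset `lastIndex` before each use, or simply drop the `g` flag when you only need a boolean test:
```typescript
const GLOBAL_RE = /foo/g

function containsFoo(text: string): boolean {
  GLOBAL_RE.lastIndex = 0 // reset mutable state before reuse
  return GLOBAL_RE.test(text)
}

// Without the g flag there is no lastIndex state to worry about
const SIMPLE_RE = /foo/
SIMPLE_RE.test('foo') // true
SIMPLE_RE.test('foo') // true
```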

View File

@ -0,0 +1,37 @@
---
title: Build Index Maps for Repeated Lookups
impact: LOW-MEDIUM
impactDescription: 1M ops to 2K ops
tags: javascript, map, indexing, optimization, performance
---
## Build Index Maps for Repeated Lookups
Multiple `.find()` calls by the same key should use a Map.
**Incorrect (O(n) per lookup):**
```typescript
function processOrders(orders: Order[], users: User[]) {
return orders.map(order => ({
...order,
user: users.find(u => u.id === order.userId)
}))
}
```
**Correct (O(1) per lookup):**
```typescript
function processOrders(orders: Order[], users: User[]) {
const userById = new Map(users.map(u => [u.id, u]))
return orders.map(order => ({
...order,
user: userById.get(order.userId)
}))
}
```
Build map once (O(n)), then all lookups are O(1).
For 1000 orders × 1000 users: 1M ops → 2K ops.

View File

@ -0,0 +1,49 @@
---
title: Early Length Check for Array Comparisons
impact: MEDIUM-HIGH
impactDescription: avoids expensive operations when lengths differ
tags: javascript, arrays, performance, optimization, comparison
---
## Early Length Check for Array Comparisons
When comparing arrays with expensive operations (sorting, deep equality, serialization), check lengths first. If lengths differ, the arrays cannot be equal.
In real-world applications, this optimization is especially valuable when the comparison runs in hot paths (event handlers, render loops).
**Incorrect (always runs expensive comparison):**
```typescript
function hasChanges(current: string[], original: string[]) {
// Always sorts and joins, even when lengths differ
return current.sort().join() !== original.sort().join()
}
```
Two O(n log n) sorts run even when `current.length` is 5 and `original.length` is 100. There is also the overhead of joining the arrays and comparing the resulting strings, and `.sort()` mutates both input arrays.
**Correct (O(1) length check first):**
```typescript
function hasChanges(current: string[], original: string[]) {
// Early return if lengths differ
if (current.length !== original.length) {
return true
}
// Only sort when lengths match
const currentSorted = current.toSorted()
const originalSorted = original.toSorted()
for (let i = 0; i < currentSorted.length; i++) {
if (currentSorted[i] !== originalSorted[i]) {
return true
}
}
return false
}
```
This new approach is more efficient because:
- It avoids the overhead of sorting and joining the arrays when lengths differ
- It avoids consuming memory for the joined strings (especially important for large arrays)
- It avoids mutating the original arrays
- It returns early when a difference is found

View File

@ -0,0 +1,82 @@
---
title: Use Loop for Min/Max Instead of Sort
impact: LOW
impactDescription: O(n) instead of O(n log n)
tags: javascript, arrays, performance, sorting, algorithms
---
## Use Loop for Min/Max Instead of Sort
Finding the smallest or largest element only requires a single pass through the array. Sorting is wasteful and slower.
**Incorrect (O(n log n) - sort to find latest):**
```typescript
interface Project {
id: string
name: string
updatedAt: number
}
function getLatestProject(projects: Project[]) {
const sorted = [...projects].sort((a, b) => b.updatedAt - a.updatedAt)
return sorted[0]
}
```
Sorts the entire array just to find the maximum value.
**Incorrect (O(n log n) - sort for oldest and newest):**
```typescript
function getOldestAndNewest(projects: Project[]) {
const sorted = [...projects].sort((a, b) => a.updatedAt - b.updatedAt)
return { oldest: sorted[0], newest: sorted[sorted.length - 1] }
}
```
Still sorts unnecessarily when only min/max are needed.
**Correct (O(n) - single loop):**
```typescript
function getLatestProject(projects: Project[]) {
if (projects.length === 0) return null
let latest = projects[0]
for (let i = 1; i < projects.length; i++) {
if (projects[i].updatedAt > latest.updatedAt) {
latest = projects[i]
}
}
return latest
}
function getOldestAndNewest(projects: Project[]) {
if (projects.length === 0) return { oldest: null, newest: null }
let oldest = projects[0]
let newest = projects[0]
for (let i = 1; i < projects.length; i++) {
if (projects[i].updatedAt < oldest.updatedAt) oldest = projects[i]
if (projects[i].updatedAt > newest.updatedAt) newest = projects[i]
}
return { oldest, newest }
}
```
Single pass through the array, no copying, no sorting.
**Alternative (Math.min/Math.max for small arrays):**
```typescript
const numbers = [5, 2, 8, 1, 9]
const min = Math.min(...numbers)
const max = Math.max(...numbers)
```
This works for small arrays, but for very large arrays it can be slower or throw, because the spread operator passes every element as a function argument. The maximum array length is approximately 124,000 in Chrome 143 and 638,000 in Safari 18; exact numbers may vary - see [the fiddle](https://jsfiddle.net/qw1jabsx/4/). Use the loop approach for reliability.

View File

@ -0,0 +1,24 @@
---
title: Use Set/Map for O(1) Lookups
impact: LOW-MEDIUM
impactDescription: O(n) to O(1)
tags: javascript, set, map, data-structures, performance
---
## Use Set/Map for O(1) Lookups
Convert arrays to Set/Map for repeated membership checks.
**Incorrect (O(n) per check):**
```typescript
const allowedIds = ['a', 'b', 'c', ...]
items.filter(item => allowedIds.includes(item.id))
```
**Correct (O(1) per check):**
```typescript
const allowedIds = new Set(['a', 'b', 'c', ...])
items.filter(item => allowedIds.has(item.id))
```

View File

@ -0,0 +1,57 @@
---
title: Use toSorted() Instead of sort() for Immutability
impact: MEDIUM-HIGH
impactDescription: prevents mutation bugs in React state
tags: javascript, arrays, immutability, react, state, mutation
---
## Use toSorted() Instead of sort() for Immutability
`.sort()` mutates the array in place, which can cause bugs with React state and props. Use `.toSorted()` to create a new sorted array without mutation.
**Incorrect (mutates original array):**
```typescript
function UserList({ users }: { users: User[] }) {
// Mutates the users prop array!
const sorted = useMemo(
() => users.sort((a, b) => a.name.localeCompare(b.name)),
[users]
)
return <div>{sorted.map(renderUser)}</div>
}
```
**Correct (creates new array):**
```typescript
function UserList({ users }: { users: User[] }) {
// Creates new sorted array, original unchanged
const sorted = useMemo(
() => users.toSorted((a, b) => a.name.localeCompare(b.name)),
[users]
)
return <div>{sorted.map(renderUser)}</div>
}
```
**Why this matters in React:**
1. Props/state mutations break React's immutability model - React expects props and state to be treated as read-only
2. Causes stale closure bugs - Mutating arrays inside closures (callbacks, effects) can lead to unexpected behavior
**Browser support (fallback for older browsers):**
`.toSorted()` is available in all modern browsers (Chrome 110+, Safari 16+, Firefox 115+, Node.js 20+). For older environments, use spread operator:
```typescript
// Fallback for older browsers
const sorted = [...items].sort((a, b) => a.value - b.value)
```
**Other immutable array methods:**
- `.toSorted()` - immutable sort
- `.toReversed()` - immutable reverse
- `.toSpliced()` - immutable splice
- `.with()` - immutable element replacement
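A minimal sketch of these methods with illustrative values:
```typescript
const nums = [3, 1, 2]

const sorted = nums.toSorted()       // [1, 2, 3]
const reversed = nums.toReversed()   // [2, 1, 3]
const spliced = nums.toSpliced(1, 1) // [3, 2] (one element removed at index 1)
const replaced = nums.with(0, 9)     // [9, 1, 2]
// nums is still [3, 1, 2] after all four calls
```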

View File

@ -0,0 +1,26 @@
---
title: Use Activity Component for Show/Hide
impact: MEDIUM
impactDescription: preserves state/DOM
tags: rendering, activity, visibility, state-preservation
---
## Use Activity Component for Show/Hide
Use React's `<Activity>` to preserve state/DOM for expensive components that frequently toggle visibility.
**Usage:**
```tsx
import { Activity } from 'react'
function Dropdown({ isOpen }: Props) {
return (
<Activity mode={isOpen ? 'visible' : 'hidden'}>
<ExpensiveMenu />
</Activity>
)
}
```
Compared with unmounting, this avoids expensive remounts and preserves component state between toggles.

View File

@ -0,0 +1,47 @@
---
title: Animate SVG Wrapper Instead of SVG Element
impact: LOW
impactDescription: enables hardware acceleration
tags: rendering, svg, css, animation, performance
---
## Animate SVG Wrapper Instead of SVG Element
Many browsers don't have hardware acceleration for CSS3 animations on SVG elements. Wrap SVG in a `<div>` and animate the wrapper instead.
**Incorrect (animating SVG directly - no hardware acceleration):**
```tsx
function LoadingSpinner() {
return (
<svg
className="animate-spin"
width="24"
height="24"
viewBox="0 0 24 24"
>
<circle cx="12" cy="12" r="10" stroke="currentColor" />
</svg>
)
}
```
**Correct (animating wrapper div - hardware accelerated):**
```tsx
function LoadingSpinner() {
return (
<div className="animate-spin">
<svg
width="24"
height="24"
viewBox="0 0 24 24"
>
<circle cx="12" cy="12" r="10" stroke="currentColor" />
</svg>
</div>
)
}
```
This applies to all CSS transforms and transitions (`transform`, `opacity`, `translate`, `scale`, `rotate`). The wrapper div allows browsers to use GPU acceleration for smoother animations.

View File

@ -0,0 +1,40 @@
---
title: Use Explicit Conditional Rendering
impact: LOW
impactDescription: prevents rendering 0 or NaN
tags: rendering, conditional, jsx, falsy-values
---
## Use Explicit Conditional Rendering
Use explicit ternary operators (`? :`) instead of `&&` for conditional rendering when the condition can be `0`, `NaN`, or other falsy values that render.
**Incorrect (renders "0" when count is 0):**
```tsx
function Badge({ count }: { count: number }) {
return (
<div>
{count && <span className="badge">{count}</span>}
</div>
)
}
// When count = 0, renders: <div>0</div>
// When count = 5, renders: <div><span class="badge">5</span></div>
```
**Correct (renders nothing when count is 0):**
```tsx
function Badge({ count }: { count: number }) {
return (
<div>
{count > 0 ? <span className="badge">{count}</span> : null}
</div>
)
}
// When count = 0, renders: <div></div>
// When count = 5, renders: <div><span class="badge">5</span></div>
```

View File

@ -0,0 +1,38 @@
---
title: CSS content-visibility for Long Lists
impact: HIGH
impactDescription: faster initial render
tags: rendering, css, content-visibility, long-lists
---
## CSS content-visibility for Long Lists
Apply `content-visibility: auto` to defer off-screen rendering.
**CSS:**
```css
.message-item {
content-visibility: auto;
contain-intrinsic-size: 0 80px;
}
```
**Example:**
```tsx
function MessageList({ messages }: { messages: Message[] }) {
return (
<div className="overflow-y-auto h-screen">
{messages.map(msg => (
<div key={msg.id} className="message-item">
<Avatar user={msg.author} />
<div>{msg.content}</div>
</div>
))}
</div>
)
}
```
For 1000 messages, browser skips layout/paint for ~990 off-screen items (10× faster initial render).

View File

@ -0,0 +1,46 @@
---
title: Hoist Static JSX Elements
impact: LOW
impactDescription: avoids re-creation
tags: rendering, jsx, static, optimization
---
## Hoist Static JSX Elements
Extract static JSX outside components to avoid re-creation.
**Incorrect (recreates element every render):**
```tsx
function LoadingSkeleton() {
return <div className="animate-pulse h-20 bg-gray-200" />
}
function Container() {
return (
<div>
{loading && <LoadingSkeleton />}
</div>
)
}
```
**Correct (reuses same element):**
```tsx
const loadingSkeleton = (
<div className="animate-pulse h-20 bg-gray-200" />
)
function Container() {
return (
<div>
{loading && loadingSkeleton}
</div>
)
}
```
This is especially helpful for large and static SVG nodes, which can be expensive to recreate on every render.
**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, the compiler automatically hoists static JSX elements and optimizes component re-renders, making manual hoisting unnecessary.

View File

@ -0,0 +1,82 @@
---
title: Prevent Hydration Mismatch Without Flickering
impact: MEDIUM
impactDescription: avoids visual flicker and hydration errors
tags: rendering, ssr, hydration, localStorage, flicker
---
## Prevent Hydration Mismatch Without Flickering
When rendering content that depends on client-side storage (localStorage, cookies), avoid both SSR breakage and post-hydration flickering by injecting a synchronous script that updates the DOM before React hydrates.
**Incorrect (breaks SSR):**
```tsx
function ThemeWrapper({ children }: { children: ReactNode }) {
// localStorage is not available on server - throws error
const theme = localStorage.getItem('theme') || 'light'
return (
<div className={theme}>
{children}
</div>
)
}
```
Server-side rendering will fail because `localStorage` is undefined.
**Incorrect (visual flickering):**
```tsx
function ThemeWrapper({ children }: { children: ReactNode }) {
const [theme, setTheme] = useState('light')
useEffect(() => {
// Runs after hydration - causes visible flash
const stored = localStorage.getItem('theme')
if (stored) {
setTheme(stored)
}
}, [])
return (
<div className={theme}>
{children}
</div>
)
}
```
Component first renders with default value (`light`), then updates after hydration, causing a visible flash of incorrect content.
**Correct (no flicker, no hydration mismatch):**
```tsx
function ThemeWrapper({ children }: { children: ReactNode }) {
return (
<>
<div id="theme-wrapper">
{children}
</div>
<script
dangerouslySetInnerHTML={{
__html: `
(function() {
try {
var theme = localStorage.getItem('theme') || 'light';
var el = document.getElementById('theme-wrapper');
if (el) el.className = theme;
} catch (e) {}
})();
`,
}}
/>
</>
)
}
```
The inline script executes synchronously before showing the element, ensuring the DOM already has the correct value. No flickering, no hydration mismatch.
This pattern is especially useful for theme toggles, user preferences, authentication states, and any client-only data that should render immediately without flashing default values.

View File

@ -0,0 +1,28 @@
---
title: Optimize SVG Precision
impact: LOW
impactDescription: reduces file size
tags: rendering, svg, optimization, svgo
---
## Optimize SVG Precision
Reduce SVG coordinate precision to decrease file size. The optimal precision depends on the viewBox size, but reducing it is usually worth considering.
**Incorrect (excessive precision):**
```svg
<path d="M 10.293847 20.847362 L 30.938472 40.192837" />
```
**Correct (1 decimal place):**
```svg
<path d="M 10.3 20.8 L 30.9 40.2" />
```
**Automate with SVGO:**
```bash
npx svgo --precision=1 --multipass icon.svg
```

View File

@ -0,0 +1,39 @@
---
title: Defer State Reads to Usage Point
impact: MEDIUM
impactDescription: avoids unnecessary subscriptions
tags: rerender, searchParams, localStorage, optimization
---
## Defer State Reads to Usage Point
Don't subscribe to dynamic state (searchParams, localStorage) if you only read it inside callbacks.
**Incorrect (subscribes to all searchParams changes):**
```tsx
function ShareButton({ chatId }: { chatId: string }) {
const searchParams = useSearchParams()
const handleShare = () => {
const ref = searchParams.get('ref')
shareChat(chatId, { ref })
}
return <button onClick={handleShare}>Share</button>
}
```
**Correct (reads on demand, no subscription):**
```tsx
function ShareButton({ chatId }: { chatId: string }) {
const handleShare = () => {
const params = new URLSearchParams(window.location.search)
const ref = params.get('ref')
shareChat(chatId, { ref })
}
return <button onClick={handleShare}>Share</button>
}
```

View File

@ -0,0 +1,45 @@
---
title: Narrow Effect Dependencies
impact: LOW
impactDescription: minimizes effect re-runs
tags: rerender, useEffect, dependencies, optimization
---
## Narrow Effect Dependencies
Specify primitive dependencies instead of objects to minimize effect re-runs.
**Incorrect (re-runs on any user field change):**
```tsx
useEffect(() => {
console.log(user.id)
}, [user])
```
**Correct (re-runs only when id changes):**
```tsx
useEffect(() => {
console.log(user.id)
}, [user.id])
```
**For derived state, compute outside effect:**
```tsx
// Incorrect: runs on width=767, 766, 765...
useEffect(() => {
if (width < 768) {
enableMobileMode()
}
}, [width])
// Correct: runs only on boolean transition
const isMobile = width < 768
useEffect(() => {
if (isMobile) {
enableMobileMode()
}
}, [isMobile])
```

View File

@ -0,0 +1,29 @@
---
title: Subscribe to Derived State
impact: MEDIUM
impactDescription: reduces re-render frequency
tags: rerender, derived-state, media-query, optimization
---
## Subscribe to Derived State
Subscribe to derived boolean state instead of continuous values to reduce re-render frequency.
**Incorrect (re-renders on every pixel change):**
```tsx
function Sidebar() {
const width = useWindowWidth() // updates continuously
const isMobile = width < 768
return <nav className={isMobile ? 'mobile' : 'desktop'} />
}
```
**Correct (re-renders only when boolean changes):**
```tsx
function Sidebar() {
const isMobile = useMediaQuery('(max-width: 767px)')
return <nav className={isMobile ? 'mobile' : 'desktop'} />
}
```
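If the project doesn't already have a `useMediaQuery` hook, a minimal sketch built on `window.matchMedia` and `useSyncExternalStore` (the `false` server snapshot is an assumption about the default markup):
```tsx
import { useCallback, useSyncExternalStore } from 'react'

function useMediaQuery(query: string): boolean {
  const subscribe = useCallback((onChange: () => void) => {
    const mql = window.matchMedia(query)
    mql.addEventListener('change', onChange)
    return () => mql.removeEventListener('change', onChange)
  }, [query])

  return useSyncExternalStore(
    subscribe,
    () => window.matchMedia(query).matches, // client snapshot
    () => false // server snapshot (assumed non-mobile default)
  )
}
```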

View File

@ -0,0 +1,74 @@
---
title: Use Functional setState Updates
impact: MEDIUM
impactDescription: prevents stale closures and unnecessary callback recreations
tags: react, hooks, useState, useCallback, callbacks, closures
---
## Use Functional setState Updates
When updating state based on the current state value, use the functional update form of setState instead of directly referencing the state variable. This prevents stale closures, eliminates unnecessary dependencies, and creates stable callback references.
**Incorrect (requires state as dependency):**
```tsx
function TodoList() {
const [items, setItems] = useState(initialItems)
// Callback must depend on items, recreated on every items change
const addItems = useCallback((newItems: Item[]) => {
setItems([...items, ...newItems])
}, [items]) // ❌ items dependency causes recreations
// Risk of stale closure if dependency is forgotten
const removeItem = useCallback((id: string) => {
setItems(items.filter(item => item.id !== id))
}, []) // ❌ Missing items dependency - will use stale items!
return <ItemsEditor items={items} onAdd={addItems} onRemove={removeItem} />
}
```
The first callback is recreated every time `items` changes, which can cause child components to re-render unnecessarily. The second callback has a stale closure bug—it will always reference the initial `items` value.
**Correct (stable callbacks, no stale closures):**
```tsx
function TodoList() {
const [items, setItems] = useState(initialItems)
// Stable callback, never recreated
const addItems = useCallback((newItems: Item[]) => {
setItems(curr => [...curr, ...newItems])
}, []) // ✅ No dependencies needed
// Always uses latest state, no stale closure risk
const removeItem = useCallback((id: string) => {
setItems(curr => curr.filter(item => item.id !== id))
}, []) // ✅ Safe and stable
return <ItemsEditor items={items} onAdd={addItems} onRemove={removeItem} />
}
```
**Benefits:**
1. **Stable callback references** - Callbacks don't need to be recreated when state changes
2. **No stale closures** - Always operates on the latest state value
3. **Fewer dependencies** - Simplifies dependency arrays and reduces memory leaks
4. **Prevents bugs** - Eliminates the most common source of React closure bugs
**When to use functional updates:**
- Any setState that depends on the current state value
- Inside useCallback/useMemo when state is needed
- Event handlers that reference state
- Async operations that update state
**When direct updates are fine:**
- Setting state to a static value: `setCount(0)`
- Setting state from props/arguments only: `setName(newName)`
- State doesn't depend on previous value
**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, the compiler can automatically optimize some cases, but functional updates are still recommended for correctness and to prevent stale closure bugs.

View File

@ -0,0 +1,58 @@
---
title: Use Lazy State Initialization
impact: MEDIUM
impactDescription: wasted computation on every render
tags: react, hooks, useState, performance, initialization
---
## Use Lazy State Initialization
Pass a function to `useState` for expensive initial values. Without the function form, the initializer runs on every render even though the value is only used once.
**Incorrect (runs on every render):**
```tsx
function FilteredList({ items }: { items: Item[] }) {
// buildSearchIndex() runs on EVERY render, even after initialization
const [searchIndex, setSearchIndex] = useState(buildSearchIndex(items))
const [query, setQuery] = useState('')
// When query changes, buildSearchIndex runs again unnecessarily
return <SearchResults index={searchIndex} query={query} />
}
function UserProfile() {
// JSON.parse runs on every render
const [settings, setSettings] = useState(
JSON.parse(localStorage.getItem('settings') || '{}')
)
return <SettingsForm settings={settings} onChange={setSettings} />
}
```
**Correct (runs only once):**
```tsx
function FilteredList({ items }: { items: Item[] }) {
// buildSearchIndex() runs ONLY on initial render
const [searchIndex, setSearchIndex] = useState(() => buildSearchIndex(items))
const [query, setQuery] = useState('')
return <SearchResults index={searchIndex} query={query} />
}
function UserProfile() {
// JSON.parse runs only on initial render
const [settings, setSettings] = useState(() => {
const stored = localStorage.getItem('settings')
return stored ? JSON.parse(stored) : {}
})
return <SettingsForm settings={settings} onChange={setSettings} />
}
```
Use lazy initialization when computing initial values from localStorage/sessionStorage, building data structures (indexes, maps), reading from the DOM, or performing heavy transformations.
For simple primitives (`useState(0)`), direct references (`useState(props.value)`), or cheap literals (`useState({})`), the function form is unnecessary.

View File

@ -0,0 +1,44 @@
---
title: Extract to Memoized Components
impact: MEDIUM
impactDescription: enables early returns
tags: rerender, memo, useMemo, optimization
---
## Extract to Memoized Components
Extract expensive work into memoized components to enable early returns before computation.
**Incorrect (computes avatar even when loading):**
```tsx
function Profile({ user, loading }: Props) {
const avatar = useMemo(() => {
const id = computeAvatarId(user)
return <Avatar id={id} />
}, [user])
if (loading) return <Skeleton />
return <div>{avatar}</div>
}
```
**Correct (skips computation when loading):**
```tsx
const UserAvatar = memo(function UserAvatar({ user }: { user: User }) {
const id = useMemo(() => computeAvatarId(user), [user])
return <Avatar id={id} />
})
function Profile({ user, loading }: Props) {
if (loading) return <Skeleton />
return (
<div>
<UserAvatar user={user} />
</div>
)
}
```
**Note:** If your project has [React Compiler](https://react.dev/learn/react-compiler) enabled, manual memoization with `memo()` and `useMemo()` is not necessary. The compiler automatically optimizes re-renders.

View File

@ -0,0 +1,40 @@
---
title: Use Transitions for Non-Urgent Updates
impact: MEDIUM
impactDescription: maintains UI responsiveness
tags: rerender, transitions, startTransition, performance
---
## Use Transitions for Non-Urgent Updates
Mark frequent, non-urgent state updates as transitions to maintain UI responsiveness.
**Incorrect (blocks UI on every scroll):**
```tsx
function ScrollTracker() {
const [scrollY, setScrollY] = useState(0)
useEffect(() => {
const handler = () => setScrollY(window.scrollY)
window.addEventListener('scroll', handler, { passive: true })
return () => window.removeEventListener('scroll', handler)
}, [])
}
```
**Correct (non-blocking updates):**
```tsx
import { startTransition } from 'react'
function ScrollTracker() {
const [scrollY, setScrollY] = useState(0)
useEffect(() => {
const handler = () => {
startTransition(() => setScrollY(window.scrollY))
}
window.addEventListener('scroll', handler, { passive: true })
return () => window.removeEventListener('scroll', handler)
}, [])
}
```

View File

@ -0,0 +1,73 @@
---
title: Use after() for Non-Blocking Operations
impact: MEDIUM
impactDescription: faster response times
tags: server, async, logging, analytics, side-effects
---
## Use after() for Non-Blocking Operations
Use Next.js's `after()` to schedule work that should execute after a response is sent. This prevents logging, analytics, and other side effects from blocking the response.
**Incorrect (blocks response):**
```tsx
import { logUserAction } from '@/app/utils'
export async function POST(request: Request) {
// Perform mutation
await updateDatabase(request)
// Logging blocks the response
const userAgent = request.headers.get('user-agent') || 'unknown'
await logUserAction({ userAgent })
return new Response(JSON.stringify({ status: 'success' }), {
status: 200,
headers: { 'Content-Type': 'application/json' }
})
}
```
**Correct (non-blocking):**
```tsx
import { after } from 'next/server'
import { headers, cookies } from 'next/headers'
import { logUserAction } from '@/app/utils'
export async function POST(request: Request) {
// Perform mutation
await updateDatabase(request)
// Log after response is sent
after(async () => {
const userAgent = (await headers()).get('user-agent') || 'unknown'
const sessionCookie = (await cookies()).get('session-id')?.value || 'anonymous'
logUserAction({ sessionCookie, userAgent })
})
return new Response(JSON.stringify({ status: 'success' }), {
status: 200,
headers: { 'Content-Type': 'application/json' }
})
}
```
The response is sent immediately while logging happens in the background.
**Common use cases:**
- Analytics tracking
- Audit logging
- Sending notifications
- Cache invalidation
- Cleanup tasks
**Important notes:**
- `after()` runs even if the response fails or redirects
- Works in Server Actions, Route Handlers, and Server Components
Reference: [https://nextjs.org/docs/app/api-reference/functions/after](https://nextjs.org/docs/app/api-reference/functions/after)

View File

@ -0,0 +1,41 @@
---
title: Cross-Request LRU Caching
impact: HIGH
impactDescription: caches across requests
tags: server, cache, lru, cross-request
---
## Cross-Request LRU Caching
`React.cache()` only works within one request. For data shared across sequential requests (user clicks button A then button B), use an LRU cache.
**Implementation:**
```typescript
import { LRUCache } from 'lru-cache'
const cache = new LRUCache<string, any>({
max: 1000,
ttl: 5 * 60 * 1000 // 5 minutes
})
export async function getUser(id: string) {
const cached = cache.get(id)
if (cached) return cached
const user = await db.user.findUnique({ where: { id } })
cache.set(id, user)
return user
}
// Request 1: DB query, result cached
// Request 2: cache hit, no DB query
```
Use when sequential user actions hit multiple endpoints needing the same data within seconds.
**With Vercel's [Fluid Compute](https://vercel.com/docs/fluid-compute):** LRU caching is especially effective because multiple concurrent requests can share the same function instance and cache. This means the cache persists across requests without needing external storage like Redis.
**In traditional serverless:** Each invocation runs in isolation, so consider Redis for cross-process caching.
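A minimal sketch of that Redis variant, assuming `ioredis` and reusing the `db` client from the example above (key naming and TTL are illustrative):
```typescript
import Redis from 'ioredis'

const redis = new Redis(process.env.REDIS_URL!)
const TTL_SECONDS = 5 * 60 // 5 minutes

export async function getUser(id: string) {
  const key = `user:${id}`
  const cached = await redis.get(key)
  if (cached) return JSON.parse(cached)

  const user = await db.user.findUnique({ where: { id } })
  if (user) await redis.set(key, JSON.stringify(user), 'EX', TTL_SECONDS)
  return user
}
```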
Reference: [https://github.com/isaacs/node-lru-cache](https://github.com/isaacs/node-lru-cache)

View File

@ -0,0 +1,76 @@
---
title: Per-Request Deduplication with React.cache()
impact: MEDIUM
impactDescription: deduplicates within request
tags: server, cache, react-cache, deduplication
---
## Per-Request Deduplication with React.cache()
Use `React.cache()` for server-side request deduplication. Authentication and database queries benefit most.
**Usage:**
```typescript
import { cache } from 'react'
export const getCurrentUser = cache(async () => {
const session = await auth()
if (!session?.user?.id) return null
return await db.user.findUnique({
where: { id: session.user.id }
})
})
```
Within a single request, multiple calls to `getCurrentUser()` execute the query only once.
**Avoid inline objects as arguments:**
`React.cache()` compares arguments with `Object.is`, which for objects means reference equality. Inline objects create a new reference on every call, preventing cache hits.
**Incorrect (always cache miss):**
```typescript
const getUser = cache(async (params: { uid: number }) => {
return await db.user.findUnique({ where: { id: params.uid } })
})
// Each call creates new object, never hits cache
getUser({ uid: 1 })
getUser({ uid: 1 }) // Cache miss, runs query again
```
**Correct (cache hit):**
```typescript
const getUser = cache(async (uid: number) => {
return await db.user.findUnique({ where: { id: uid } })
})
// Primitive args use value equality
getUser(1)
getUser(1) // Cache hit, returns cached result
```
If you must pass objects, pass the same reference:
```typescript
const params = { uid: 1 }
getUser(params) // Query runs
getUser(params) // Cache hit (same reference)
```
**Next.js-Specific Note:**
In Next.js, the `fetch` API is automatically extended with request memoization. Requests with the same URL and options are automatically deduplicated within a single request, so you don't need `React.cache()` for `fetch` calls. However, `React.cache()` is still essential for other async tasks:
- Database queries (Prisma, Drizzle, etc.)
- Heavy computations
- Authentication checks
- File system operations
- Any non-fetch async work
Use `React.cache()` to deduplicate these operations across your component tree.
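For example, a minimal sketch applying the same pattern to a file-system read (the config path is illustrative):
```typescript
import { cache } from 'react'
import { readFile } from 'node:fs/promises'

// Read and parse the file at most once per request,
// no matter how many components call it
export const getSiteConfig = cache(async () => {
  const raw = await readFile('./config/site.json', 'utf8')
  return JSON.parse(raw)
})
```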
Reference: [React.cache documentation](https://react.dev/reference/react/cache)

View File

@ -0,0 +1,83 @@
---
title: Parallel Data Fetching with Component Composition
impact: CRITICAL
impactDescription: eliminates server-side waterfalls
tags: server, rsc, parallel-fetching, composition
---
## Parallel Data Fetching with Component Composition
React Server Components render parent-first, so a child's data fetch cannot start until its parent has finished awaiting its own. Restructure with composition to parallelize data fetching.
**Incorrect (Sidebar waits for Page's fetch to complete):**
```tsx
export default async function Page() {
const header = await fetchHeader()
return (
<div>
<div>{header}</div>
<Sidebar />
</div>
)
}
async function Sidebar() {
const items = await fetchSidebarItems()
return <nav>{items.map(renderItem)}</nav>
}
```
**Correct (both fetch simultaneously):**
```tsx
async function Header() {
const data = await fetchHeader()
return <div>{data}</div>
}
async function Sidebar() {
const items = await fetchSidebarItems()
return <nav>{items.map(renderItem)}</nav>
}
export default function Page() {
return (
<div>
<Header />
<Sidebar />
</div>
)
}
```
**Alternative with children prop:**
```tsx
async function Header() {
const data = await fetchHeader()
return <div>{data}</div>
}
async function Sidebar() {
const items = await fetchSidebarItems()
return <nav>{items.map(renderItem)}</nav>
}
function Layout({ children }: { children: ReactNode }) {
return (
<div>
<Header />
{children}
</div>
)
}
export default function Page() {
return (
<Layout>
<Sidebar />
</Layout>
)
}
```

View File

@ -0,0 +1,38 @@
---
title: Minimize Serialization at RSC Boundaries
impact: HIGH
impactDescription: reduces data transfer size
tags: server, rsc, serialization, props
---
## Minimize Serialization at RSC Boundaries
The React Server/Client boundary serializes all object properties into strings and embeds them in the HTML response and subsequent RSC requests. This serialized data directly impacts page weight and load time, so **size matters a lot**. Only pass fields that the client actually uses.
**Incorrect (serializes all 50 fields):**
```tsx
async function Page() {
const user = await fetchUser() // 50 fields
return <Profile user={user} />
}
'use client'
function Profile({ user }: { user: User }) {
return <div>{user.name}</div> // uses 1 field
}
```
**Correct (serializes only 1 field):**
```tsx
async function Page() {
const user = await fetchUser()
return <Profile name={user.name} />
}
'use client'
function Profile({ name }: { name: string }) {
return <div>{name}</div>
}
```

3
.github/labeler.yml vendored Normal file
View File

@ -0,0 +1,3 @@
web:
- changed-files:
- any-glob-to-any-file: 'web/**'

View File

@ -20,4 +20,4 @@
- [x] I understand that this PR may be closed in case there was no previous discussion or issues. (This doesn't apply to typos!)
- [x] I've added a test for each change that was introduced, and I tried as much as possible to make a single atomic change.
- [x] I've updated the documentation accordingly.
- [x] I ran `dev/reformat`(backend) and `cd web && npx lint-staged`(frontend) to appease the lint gods
- [x] I ran `make lint` and `make type-check` (backend) and `cd web && npx lint-staged` (frontend) to appease the lint gods

View File

@ -39,12 +39,6 @@ jobs:
- name: Install dependencies
run: uv sync --project api --dev
- name: Run pyrefly check
run: |
cd api
uv add --dev pyrefly
uv run pyrefly check || true
- name: Run dify config tests
run: uv run --project api dev/pytest/pytest_config_tests.py

View File

@ -16,14 +16,14 @@ jobs:
- name: Check Docker Compose inputs
id: docker-compose-changes
uses: tj-actions/changed-files@v46
uses: tj-actions/changed-files@v47
with:
files: |
docker/generate_docker_compose
docker/.env.example
docker/docker-compose-template.yaml
docker/docker-compose.yaml
- uses: actions/setup-python@v5
- uses: actions/setup-python@v6
with:
python-version: "3.11"
@ -82,6 +82,6 @@ jobs:
# mdformat breaks YAML front matter in markdown files. Add --exclude for directories containing YAML front matter.
- name: mdformat
run: |
uvx --python 3.13 mdformat . --exclude ".claude/skills/**/SKILL.md"
uvx --python 3.13 mdformat . --exclude ".claude/skills/**"
- uses: autofix-ci/action@635ffb0c9798bd160680f18fd73371e355b85f27

View File

@ -112,7 +112,7 @@ jobs:
context: "web"
steps:
- name: Download digests
uses: actions/download-artifact@v4
uses: actions/download-artifact@v7
with:
path: /tmp/digests
pattern: digests-${{ matrix.context }}-*

View File

@ -1,4 +1,4 @@
name: Deploy Trigger Dev
name: Deploy Agent Dev
permissions:
contents: read
@ -7,7 +7,7 @@ on:
workflow_run:
workflows: ["Build and Push API & Web"]
branches:
- "deploy/trigger-dev"
- "deploy/agent-dev"
types:
- completed
@ -16,12 +16,12 @@ jobs:
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'deploy/trigger-dev'
github.event.workflow_run.head_branch == 'deploy/agent-dev'
steps:
- name: Deploy to server
uses: appleboy/ssh-action@v0.1.8
uses: appleboy/ssh-action@v1
with:
host: ${{ secrets.TRIGGER_SSH_HOST }}
host: ${{ secrets.AGENT_DEV_SSH_HOST }}
username: ${{ secrets.SSH_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |

View File

@ -16,7 +16,7 @@ jobs:
github.event.workflow_run.head_branch == 'deploy/dev'
steps:
- name: Deploy to server
uses: appleboy/ssh-action@v0.1.8
uses: appleboy/ssh-action@v1
with:
host: ${{ secrets.SSH_HOST }}
username: ${{ secrets.SSH_USER }}

29
.github/workflows/deploy-hitl.yml vendored Normal file
View File

@ -0,0 +1,29 @@
name: Deploy HITL
on:
workflow_run:
workflows: ["Build and Push API & Web"]
branches:
- "feat/hitl-frontend"
- "feat/hitl-backend"
types:
- completed
jobs:
deploy:
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
(
github.event.workflow_run.head_branch == 'feat/hitl-frontend' ||
github.event.workflow_run.head_branch == 'feat/hitl-backend'
)
steps:
- name: Deploy to server
uses: appleboy/ssh-action@v1
with:
host: ${{ secrets.HITL_SSH_HOST }}
username: ${{ secrets.SSH_USER }}
key: ${{ secrets.SSH_PRIVATE_KEY }}
script: |
${{ vars.SSH_SCRIPT || secrets.SSH_SCRIPT }}

14
.github/workflows/labeler.yml vendored Normal file
View File

@ -0,0 +1,14 @@
name: "Pull Request Labeler"
on:
pull_request_target:
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@v6
with:
sync-labels: true

View File

@ -18,7 +18,7 @@ jobs:
pull-requests: write
steps:
- uses: actions/stale@v5
- uses: actions/stale@v10
with:
days-before-issue-stale: 15
days-before-issue-close: 3

View File

@ -65,6 +65,9 @@ jobs:
defaults:
run:
working-directory: ./web
permissions:
checks: write
pull-requests: read
steps:
- name: Checkout code
@ -90,7 +93,7 @@ jobs:
uses: actions/setup-node@v6
if: steps.changed-files.outputs.any_changed == 'true'
with:
node-version: 22
node-version: 24
cache: pnpm
cache-dependency-path: ./web/pnpm-lock.yaml
@ -103,7 +106,21 @@ jobs:
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
run: |
pnpm run lint
pnpm run lint:ci
# pnpm run lint:report
# continue-on-error: true
# - name: Annotate Code
# if: steps.changed-files.outputs.any_changed == 'true' && github.event_name == 'pull_request'
# uses: DerLev/eslint-annotations@51347b3a0abfb503fc8734d5ae31c4b151297fae
# with:
# eslint-report: web/eslint_report.json
# github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Web tsslint
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
run: pnpm run lint:tss
- name: Web type check
if: steps.changed-files.outputs.any_changed == 'true'
@ -115,11 +132,6 @@ jobs:
working-directory: ./web
run: pnpm run knip
- name: Web build check
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
run: pnpm run build
superlinter:
name: SuperLinter
runs-on: ubuntu-latest

View File

@ -16,10 +16,6 @@ jobs:
name: unit test for Node.js SDK
runs-on: ubuntu-latest
strategy:
matrix:
node-version: [16, 18, 20, 22]
defaults:
run:
working-directory: sdks/nodejs-client
@ -29,10 +25,10 @@ jobs:
with:
persist-credentials: false
- name: Use Node.js ${{ matrix.node-version }}
- name: Use Node.js
uses: actions/setup-node@v6
with:
node-version: ${{ matrix.node-version }}
node-version: 24
cache: ''
cache-dependency-path: 'pnpm-lock.yaml'

View File

@ -1,94 +0,0 @@
name: Translate i18n Files Based on English
on:
push:
branches: [main]
paths:
- 'web/i18n/en-US/*.json'
workflow_dispatch:
permissions:
contents: write
pull-requests: write
jobs:
check-and-update:
if: github.repository == 'langgenius/dify'
runs-on: ubuntu-latest
defaults:
run:
working-directory: web
steps:
# Keep use old checkout action version for https://github.com/peter-evans/create-pull-request/issues/4272
- uses: actions/checkout@v4
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Check for file changes in i18n/en-US
id: check_files
run: |
# Skip check for manual trigger, translate all files
if [ "${{ github.event_name }}" == "workflow_dispatch" ]; then
echo "FILES_CHANGED=true" >> $GITHUB_ENV
echo "FILE_ARGS=" >> $GITHUB_ENV
echo "Manual trigger: translating all files"
else
git fetch origin "${{ github.event.before }}" || true
git fetch origin "${{ github.sha }}" || true
changed_files=$(git diff --name-only "${{ github.event.before }}" "${{ github.sha }}" -- 'i18n/en-US/*.json')
echo "Changed files: $changed_files"
if [ -n "$changed_files" ]; then
echo "FILES_CHANGED=true" >> $GITHUB_ENV
file_args=""
for file in $changed_files; do
filename=$(basename "$file" .json)
file_args="$file_args --file $filename"
done
echo "FILE_ARGS=$file_args" >> $GITHUB_ENV
echo "File arguments: $file_args"
else
echo "FILES_CHANGED=false" >> $GITHUB_ENV
fi
fi
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
package_json_file: web/package.json
run_install: false
- name: Set up Node.js
if: env.FILES_CHANGED == 'true'
uses: actions/setup-node@v6
with:
node-version: 'lts/*'
cache: pnpm
cache-dependency-path: ./web/pnpm-lock.yaml
- name: Install dependencies
if: env.FILES_CHANGED == 'true'
working-directory: ./web
run: pnpm install --frozen-lockfile
- name: Generate i18n translations
if: env.FILES_CHANGED == 'true'
working-directory: ./web
run: pnpm run i18n:gen ${{ env.FILE_ARGS }}
- name: Create Pull Request
if: env.FILES_CHANGED == 'true'
uses: peter-evans/create-pull-request@v6
with:
token: ${{ secrets.GITHUB_TOKEN }}
commit-message: 'chore(i18n): update translations based on en-US changes'
title: 'chore(i18n): translate i18n files based on en-US changes'
body: |
This PR was automatically created to update i18n translation files based on changes in en-US locale.
**Triggered by:** ${{ github.sha }}
**Changes included:**
- Updated translation files for all locales
branch: chore/automated-i18n-updates-${{ github.sha }}
delete-branch: true

View File

@ -0,0 +1,421 @@
name: Translate i18n Files with Claude Code
# Note: claude-code-action doesn't support push events directly.
# Push events are handled by trigger-i18n-sync.yml which sends repository_dispatch.
# See: https://github.com/langgenius/dify/issues/30743
on:
repository_dispatch:
types: [i18n-sync]
workflow_dispatch:
inputs:
files:
description: 'Specific files to translate (space-separated, e.g., "app common"). Leave empty for all files.'
required: false
type: string
languages:
description: 'Specific languages to translate (space-separated, e.g., "zh-Hans ja-JP"). Leave empty for all supported languages.'
required: false
type: string
mode:
description: 'Sync mode: incremental (only changes) or full (re-check all keys)'
required: false
default: 'incremental'
type: choice
options:
- incremental
- full
permissions:
contents: write
pull-requests: write
jobs:
translate:
if: github.repository == 'langgenius/dify'
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
- name: Configure Git
run: |
git config --global user.name "github-actions[bot]"
git config --global user.email "github-actions[bot]@users.noreply.github.com"
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
package_json_file: web/package.json
run_install: false
- name: Set up Node.js
uses: actions/setup-node@v6
with:
node-version: 24
cache: pnpm
cache-dependency-path: ./web/pnpm-lock.yaml
- name: Detect changed files and generate diff
id: detect_changes
run: |
if [ "${{ github.event_name }}" == "workflow_dispatch" ]; then
# Manual trigger
if [ -n "${{ github.event.inputs.files }}" ]; then
echo "CHANGED_FILES=${{ github.event.inputs.files }}" >> $GITHUB_OUTPUT
else
# Get all JSON files in en-US directory
files=$(ls web/i18n/en-US/*.json 2>/dev/null | xargs -n1 basename | sed 's/.json$//' | tr '\n' ' ')
echo "CHANGED_FILES=$files" >> $GITHUB_OUTPUT
fi
echo "TARGET_LANGS=${{ github.event.inputs.languages }}" >> $GITHUB_OUTPUT
echo "SYNC_MODE=${{ github.event.inputs.mode || 'incremental' }}" >> $GITHUB_OUTPUT
# For manual trigger with incremental mode, get diff from last commit
# For full mode, we'll do a complete check anyway
if [ "${{ github.event.inputs.mode }}" == "full" ]; then
echo "Full mode: will check all keys" > /tmp/i18n-diff.txt
echo "DIFF_AVAILABLE=false" >> $GITHUB_OUTPUT
else
git diff HEAD~1..HEAD -- 'web/i18n/en-US/*.json' > /tmp/i18n-diff.txt 2>/dev/null || echo "" > /tmp/i18n-diff.txt
if [ -s /tmp/i18n-diff.txt ]; then
echo "DIFF_AVAILABLE=true" >> $GITHUB_OUTPUT
else
echo "DIFF_AVAILABLE=false" >> $GITHUB_OUTPUT
fi
fi
elif [ "${{ github.event_name }}" == "repository_dispatch" ]; then
# Triggered by push via trigger-i18n-sync.yml workflow
# Validate required payload fields
if [ -z "${{ github.event.client_payload.changed_files }}" ]; then
echo "Error: repository_dispatch payload missing required 'changed_files' field" >&2
exit 1
fi
echo "CHANGED_FILES=${{ github.event.client_payload.changed_files }}" >> $GITHUB_OUTPUT
echo "TARGET_LANGS=" >> $GITHUB_OUTPUT
echo "SYNC_MODE=${{ github.event.client_payload.sync_mode || 'incremental' }}" >> $GITHUB_OUTPUT
# Decode the base64-encoded diff from the trigger workflow
if [ -n "${{ github.event.client_payload.diff_base64 }}" ]; then
if ! echo "${{ github.event.client_payload.diff_base64 }}" | base64 -d > /tmp/i18n-diff.txt 2>&1; then
echo "Warning: Failed to decode base64 diff payload" >&2
echo "" > /tmp/i18n-diff.txt
echo "DIFF_AVAILABLE=false" >> $GITHUB_OUTPUT
elif [ -s /tmp/i18n-diff.txt ]; then
echo "DIFF_AVAILABLE=true" >> $GITHUB_OUTPUT
else
echo "DIFF_AVAILABLE=false" >> $GITHUB_OUTPUT
fi
else
echo "" > /tmp/i18n-diff.txt
echo "DIFF_AVAILABLE=false" >> $GITHUB_OUTPUT
fi
else
echo "Unsupported event type: ${{ github.event_name }}"
exit 1
fi
# Truncate diff if too large (keep first 50KB)
if [ -f /tmp/i18n-diff.txt ]; then
head -c 50000 /tmp/i18n-diff.txt > /tmp/i18n-diff-truncated.txt
mv /tmp/i18n-diff-truncated.txt /tmp/i18n-diff.txt
fi
echo "Detected files: $(cat $GITHUB_OUTPUT | grep CHANGED_FILES || echo 'none')"
- name: Run Claude Code for Translation Sync
if: steps.detect_changes.outputs.CHANGED_FILES != ''
uses: anthropics/claude-code-action@v1
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
github_token: ${{ secrets.GITHUB_TOKEN }}
prompt: |
You are a professional i18n synchronization engineer for the Dify project.
Your task is to keep all language translations in sync with the English source (en-US).
## CRITICAL TOOL RESTRICTIONS
- Use **Read** tool to read files (NOT cat or bash)
- Use **Edit** tool to modify JSON files (NOT node, jq, or bash scripts)
- Use **Bash** ONLY for: git commands, gh commands, pnpm commands
- Run bash commands ONE BY ONE, never combine with && or ||
- NEVER use `$()` command substitution - it's not supported. Split into separate commands instead.
## WORKING DIRECTORY & ABSOLUTE PATHS
Claude Code sandbox working directory may vary. Always use absolute paths:
- For pnpm: `pnpm --dir ${{ github.workspace }}/web <command>`
- For git: `git -C ${{ github.workspace }} <command>`
- For gh: `gh --repo ${{ github.repository }} <command>`
- For file paths: `${{ github.workspace }}/web/i18n/`
## EFFICIENCY RULES
- **ONE Edit per language file** - batch all key additions into a single Edit
- Insert new keys at the beginning of JSON (after `{`), lint:fix will sort them
- Translate ALL keys for a language mentally first, then do ONE Edit
## Context
- Changed/target files: ${{ steps.detect_changes.outputs.CHANGED_FILES }}
- Target languages (empty means all supported): ${{ steps.detect_changes.outputs.TARGET_LANGS }}
- Sync mode: ${{ steps.detect_changes.outputs.SYNC_MODE }}
- Translation files are located in: ${{ github.workspace }}/web/i18n/{locale}/{filename}.json
- Language configuration is in: ${{ github.workspace }}/web/i18n-config/languages.ts
- Git diff is available: ${{ steps.detect_changes.outputs.DIFF_AVAILABLE }}
## CRITICAL DESIGN: Verify First, Then Sync
You MUST follow this three-phase approach:
═══════════════════════════════════════════════════════════════
║ PHASE 1: VERIFY - Analyze and Generate Change Report ║
═══════════════════════════════════════════════════════════════
### Step 1.1: Analyze Git Diff (for incremental mode)
Use the Read tool to read `/tmp/i18n-diff.txt` to see the git diff.
Parse the diff to categorize changes:
- Lines with `+` (not `+++`): Added or modified values
- Lines with `-` (not `---`): Removed or old values
- Identify specific keys for each category:
* ADD: Keys that appear only in `+` lines (new keys)
* UPDATE: Keys that appear in both `-` and `+` lines (value changed)
* DELETE: Keys that appear only in `-` lines (removed keys)
### Step 1.2: Read Language Configuration
Use the Read tool to read `${{ github.workspace }}/web/i18n-config/languages.ts`.
Extract all languages with `supported: true`.
### Step 1.3: Run i18n:check for Each Language
```bash
pnpm --dir ${{ github.workspace }}/web install --frozen-lockfile
```
```bash
pnpm --dir ${{ github.workspace }}/web run i18n:check
```
This will report:
- Missing keys (need to ADD)
- Extra keys (need to DELETE)
### Step 1.4: Generate Change Report
Create a structured report identifying:
```
╔══════════════════════════════════════════════════════════════╗
║ I18N SYNC CHANGE REPORT ║
╠══════════════════════════════════════════════════════════════╣
║ Files to process: [list] ║
║ Languages to sync: [list] ║
╠══════════════════════════════════════════════════════════════╣
║ ADD (New Keys): ║
║ - [filename].[key]: "English value" ║
║ ... ║
╠══════════════════════════════════════════════════════════════╣
║ UPDATE (Modified Keys - MUST re-translate): ║
║ - [filename].[key]: "Old value" → "New value" ║
║ ... ║
╠══════════════════════════════════════════════════════════════╣
║ DELETE (Extra Keys): ║
║ - [language]/[filename].[key] ║
║ ... ║
╚══════════════════════════════════════════════════════════════╝
```
**IMPORTANT**: For UPDATE detection, compare git diff to find keys where
the English value changed. These MUST be re-translated even if target
language already has a translation (it's now stale!).
═══════════════════════════════════════════════════════════════
║ PHASE 2: SYNC - Execute Changes Based on Report ║
═══════════════════════════════════════════════════════════════
### Step 2.1: Process ADD Operations (BATCH per language file)
**CRITICAL WORKFLOW for efficiency:**
1. First, translate ALL new keys for ALL languages mentally
2. Then, for EACH language file, do ONE Edit operation:
- Read the file once
- Insert ALL new keys at the beginning (right after the opening `{`)
- Don't worry about alphabetical order - lint:fix will sort them later
Example Edit (adding 3 keys to zh-Hans/app.json):
```
old_string: '{\n "accessControl"'
new_string: '{\n "newKey1": "translation1",\n "newKey2": "translation2",\n "newKey3": "translation3",\n "accessControl"'
```
**IMPORTANT**:
- ONE Edit per language file (not one Edit per key!)
- Always use the Edit tool. NEVER use bash scripts, node, or jq.
### Step 2.2: Process UPDATE Operations
**IMPORTANT: Special handling for zh-Hans and ja-JP**
If zh-Hans or ja-JP files were ALSO modified in the same push:
- Run: `git -C ${{ github.workspace }} diff HEAD~1 --name-only` and check for zh-Hans or ja-JP files
- If found, it means someone manually translated them. Apply these rules:
1. **Missing keys**: Still ADD them (completeness required)
2. **Existing translations**: Compare with the NEW English value:
- If translation is **completely wrong** or **unrelated** → Update it
- If translation is **roughly correct** (captures the meaning) → Keep it, respect manual work
- When in doubt, **keep the manual translation**
Example:
- English changed: "Save" → "Save Changes"
- Manual translation: "保存更改" → Keep it (correct meaning)
- Manual translation: "删除" → Update it (completely wrong)
For other languages:
Use Edit tool to replace the old value with the new translation.
You can batch multiple updates in one Edit if they are adjacent.
### Step 2.3: Process DELETE Operations
For extra keys reported by i18n:check:
- Run: `pnpm --dir ${{ github.workspace }}/web run i18n:check --auto-remove`
- Or manually remove from target language JSON files
## Translation Guidelines
- PRESERVE all placeholders exactly as-is:
- `{{variable}}` - Mustache interpolation
- `${variable}` - Template literal
- `<tag>content</tag>` - HTML tags
- `_one`, `_other` - Pluralization suffixes (these are KEY suffixes, not values)
- Use appropriate language register (formal/informal) based on existing translations
- Match existing translation style in each language
- Technical terms: check existing conventions per language
- For CJK languages: no spaces between characters unless necessary
- For RTL languages (ar-TN, fa-IR): ensure proper text handling
## Output Format Requirements
- Alphabetical key ordering (if original file uses it)
- 2-space indentation
- Trailing newline at end of file
- Valid JSON (use proper escaping for special characters)
═══════════════════════════════════════════════════════════════
║ PHASE 3: RE-VERIFY - Confirm All Issues Resolved ║
═══════════════════════════════════════════════════════════════
### Step 3.1: Run Lint Fix (IMPORTANT!)
```bash
pnpm --dir ${{ github.workspace }}/web lint:fix --quiet -- 'i18n/**/*.json'
```
This ensures:
- JSON keys are sorted alphabetically (jsonc/sort-keys rule)
- Valid i18n keys (dify-i18n/valid-i18n-keys rule)
- No extra keys (dify-i18n/no-extra-keys rule)
### Step 3.2: Run Final i18n Check
```bash
pnpm --dir ${{ github.workspace }}/web run i18n:check
```
### Step 3.3: Fix Any Remaining Issues
If check reports issues:
- Go back to PHASE 2 for unresolved items
- Repeat until check passes
### Step 3.4: Generate Final Summary
```
╔══════════════════════════════════════════════════════════════╗
║ SYNC COMPLETED SUMMARY ║
╠══════════════════════════════════════════════════════════════╣
║ Language │ Added │ Updated │ Deleted │ Status ║
╠══════════════════════════════════════════════════════════════╣
║ zh-Hans │ 5 │ 2 │ 1 │ ✓ Complete ║
║ ja-JP │ 5 │ 2 │ 1 │ ✓ Complete ║
║ ... │ ... │ ... │ ... │ ... ║
╠══════════════════════════════════════════════════════════════╣
║ i18n:check │ PASSED - All keys in sync ║
╚══════════════════════════════════════════════════════════════╝
```
## Mode-Specific Behavior
**SYNC_MODE = "incremental"** (default):
- Focus on keys identified from git diff
- Also check i18n:check output for any missing/extra keys
- Efficient for small changes
**SYNC_MODE = "full"**:
- Compare ALL keys between en-US and each language
- Run i18n:check to identify all discrepancies
- Use for first-time sync or fixing historical issues
## Important Notes
1. Always run i18n:check BEFORE and AFTER making changes
2. The check script is the source of truth for missing/extra keys
3. For UPDATE scenario: git diff is the source of truth for changed values
4. Create a single commit with all translation changes
5. If any translation fails, continue with others and report failures
═══════════════════════════════════════════════════════════════
║ PHASE 4: COMMIT AND CREATE PR ║
═══════════════════════════════════════════════════════════════
After all translations are complete and verified:
### Step 4.1: Check for changes
```bash
git -C ${{ github.workspace }} status --porcelain
```
If there are changes:
### Step 4.2: Create a new branch and commit
Run these git commands ONE BY ONE (not combined with &&).
**IMPORTANT**: Do NOT use `$()` command substitution. Use two separate commands:
1. First, get the timestamp:
```bash
date +%Y%m%d-%H%M%S
```
(Note the output, e.g., "20260115-143052")
2. Then create branch using the timestamp value:
```bash
git -C ${{ github.workspace }} checkout -b chore/i18n-sync-20260115-143052
```
(Replace "20260115-143052" with the actual timestamp from step 1)
3. Stage changes:
```bash
git -C ${{ github.workspace }} add web/i18n/
```
4. Commit:
```bash
git -C ${{ github.workspace }} commit -m "chore(i18n): sync translations with en-US - Mode: ${{ steps.detect_changes.outputs.SYNC_MODE }}"
```
5. Push:
```bash
git -C ${{ github.workspace }} push origin HEAD
```
### Step 4.3: Create Pull Request
```bash
gh pr create --repo ${{ github.repository }} --title "chore(i18n): sync translations with en-US" --body "## Summary
This PR was automatically generated to sync i18n translation files.
### Changes
- Mode: ${{ steps.detect_changes.outputs.SYNC_MODE }}
- Files processed: ${{ steps.detect_changes.outputs.CHANGED_FILES }}
### Verification
- [x] \`i18n:check\` passed
- [x] \`lint:fix\` applied
🤖 Generated with Claude Code GitHub Action" --base main
```
claude_args: |
--max-turns 150
--allowedTools "Read,Write,Edit,Bash(git *),Bash(git:*),Bash(gh *),Bash(gh:*),Bash(pnpm *),Bash(pnpm:*),Bash(date *),Bash(date:*),Glob,Grep"

66
.github/workflows/trigger-i18n-sync.yml vendored Normal file
View File

@ -0,0 +1,66 @@
name: Trigger i18n Sync on Push
# This workflow bridges the push event to repository_dispatch
# because claude-code-action doesn't support push events directly.
# See: https://github.com/langgenius/dify/issues/30743
on:
push:
branches: [main]
paths:
- 'web/i18n/en-US/*.json'
permissions:
contents: write
jobs:
trigger:
if: github.repository == 'langgenius/dify'
runs-on: ubuntu-latest
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Detect changed files and generate diff
id: detect
run: |
BEFORE_SHA="${{ github.event.before }}"
# Handle edge case: force push may have null/zero SHA
if [ -z "$BEFORE_SHA" ] || [ "$BEFORE_SHA" = "0000000000000000000000000000000000000000" ]; then
BEFORE_SHA="HEAD~1"
fi
# Detect changed i18n files
changed=$(git diff --name-only "$BEFORE_SHA" "${{ github.sha }}" -- 'web/i18n/en-US/*.json' 2>/dev/null | xargs -n1 basename 2>/dev/null | sed 's/.json$//' | tr '\n' ' ' || echo "")
echo "changed_files=$changed" >> $GITHUB_OUTPUT
# Generate diff for context
git diff "$BEFORE_SHA" "${{ github.sha }}" -- 'web/i18n/en-US/*.json' > /tmp/i18n-diff.txt 2>/dev/null || echo "" > /tmp/i18n-diff.txt
# Truncate if too large (keep first 50KB to match receiving workflow)
head -c 50000 /tmp/i18n-diff.txt > /tmp/i18n-diff-truncated.txt
mv /tmp/i18n-diff-truncated.txt /tmp/i18n-diff.txt
# Base64 encode the diff for safe JSON transport (portable, single-line)
diff_base64=$(base64 < /tmp/i18n-diff.txt | tr -d '\n')
echo "diff_base64=$diff_base64" >> $GITHUB_OUTPUT
if [ -n "$changed" ]; then
echo "has_changes=true" >> $GITHUB_OUTPUT
echo "Detected changed files: $changed"
else
echo "has_changes=false" >> $GITHUB_OUTPUT
echo "No i18n changes detected"
fi
- name: Trigger i18n sync workflow
if: steps.detect.outputs.has_changes == 'true'
uses: peter-evans/repository-dispatch@v3
with:
token: ${{ secrets.GITHUB_TOKEN }}
event-type: i18n-sync
client-payload: '{"changed_files": "${{ steps.detect.outputs.changed_files }}", "diff_base64": "${{ steps.detect.outputs.diff_base64 }}", "sync_mode": "incremental", "trigger_sha": "${{ github.sha }}"}'

View File

@ -31,7 +31,7 @@ jobs:
- name: Setup Node.js
uses: actions/setup-node@v6
with:
node-version: 22
node-version: 24
cache: pnpm
cache-dependency-path: ./web/pnpm-lock.yaml
@ -366,3 +366,48 @@ jobs:
path: web/coverage
retention-days: 30
if-no-files-found: error
web-build:
name: Web Build
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./web
steps:
- name: Checkout code
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@v47
with:
files: |
web/**
.github/workflows/web-tests.yml
- name: Install pnpm
uses: pnpm/action-setup@v4
with:
package_json_file: web/package.json
run_install: false
- name: Setup NodeJS
uses: actions/setup-node@v6
if: steps.changed-files.outputs.any_changed == 'true'
with:
node-version: 24
cache: pnpm
cache-dependency-path: ./web/pnpm-lock.yaml
- name: Web dependencies
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
run: pnpm install --frozen-lockfile
- name: Web build check
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
run: pnpm run build

1
.gitignore vendored
View File

@ -209,6 +209,7 @@ api/.vscode
.history
.idea/
web/migration/
# pnpm
/.pnpm-store

1
.nvmrc
View File

@ -1 +0,0 @@
22.11.0

View File

@ -12,12 +12,8 @@ The codebase is split into:
## Backend Workflow
- Read `api/AGENTS.md` for details
- Run backend CLI commands through `uv run --project api <command>`.
- Before submission, all backend modifications must pass local checks: `make lint`, `make type-check`, and `uv run --project api --dev dev/pytest/pytest_unit_tests.sh`.
- Use Makefile targets for linting and formatting; `make lint` and `make type-check` cover the required checks.
- Integration tests are CI-only and are not expected to run in the local environment.
## Frontend Workflow

View File

@ -60,9 +60,11 @@ check:
@echo "✅ Code check complete"
lint:
@echo "🔧 Running ruff format, check with fixes, and import linter..."
@uv run --project api --dev sh -c 'ruff format ./api && ruff check --fix ./api'
@echo "🔧 Running ruff format, check with fixes, import linter, and dotenv-linter..."
@uv run --project api --dev ruff format ./api
@uv run --project api --dev ruff check --fix ./api
@uv run --directory api --dev lint-imports
@uv run --project api --dev dotenv-linter ./api/.env.example ./web/.env.example
@echo "✅ Linting complete"
type-check:
@ -72,7 +74,12 @@ type-check:
test:
@echo "🧪 Running backend unit tests..."
@uv run --project api --dev dev/pytest/pytest_unit_tests.sh
@if [ -n "$(TARGET_TESTS)" ]; then \
echo "Target: $(TARGET_TESTS)"; \
uv run --project api --dev pytest $(TARGET_TESTS); \
else \
uv run --project api --dev dev/pytest/pytest_unit_tests.sh; \
fi
@echo "✅ Tests complete"
# Build Docker images
@ -122,9 +129,9 @@ help:
@echo "Backend Code Quality:"
@echo " make format - Format code with ruff"
@echo " make check - Check code with ruff"
@echo " make lint - Format and fix code with ruff"
@echo " make lint - Format, fix, and lint code (ruff, imports, dotenv)"
@echo " make type-check - Run type checking with basedpyright"
@echo " make test - Run backend unit tests"
@echo " make test - Run backend unit tests (or TARGET_TESTS=./api/tests/<target_tests>)"
@echo ""
@echo "Docker Build Targets:"
@echo " make build-web - Build web Docker image"

0
agent-notes/.gitkeep Normal file
View File

View File

@ -417,6 +417,8 @@ SMTP_USERNAME=123
SMTP_PASSWORD=abc
SMTP_USE_TLS=true
SMTP_OPPORTUNISTIC_TLS=false
# Optional: override the local hostname used for SMTP HELO/EHLO
SMTP_LOCAL_HOSTNAME=
# Sendgid configuration
SENDGRID_API_KEY=
# Sentry configuration
@ -575,6 +577,10 @@ LOGSTORE_DUAL_WRITE_ENABLED=false
# Enable dual-read fallback to SQL database when LogStore returns no results (default: true)
# Useful for migration scenarios where historical data exists only in SQL database
LOGSTORE_DUAL_READ_ENABLED=true
# Control flag for whether to write the `graph` field to LogStore.
# If LOGSTORE_ENABLE_PUT_GRAPH_FIELD is "true", write the full `graph` field;
# otherwise write an empty {} instead. Defaults to writing the `graph` field.
LOGSTORE_ENABLE_PUT_GRAPH_FIELD=true
# Celery beat configuration
CELERY_BEAT_SCHEDULER_TIME=1
@ -585,6 +591,7 @@ ENABLE_CLEAN_UNUSED_DATASETS_TASK=false
ENABLE_CREATE_TIDB_SERVERLESS_TASK=false
ENABLE_UPDATE_TIDB_SERVERLESS_STATUS_TASK=false
ENABLE_CLEAN_MESSAGES=false
ENABLE_WORKFLOW_RUN_CLEANUP_TASK=false
ENABLE_MAIL_CLEAN_DOCUMENT_NOTIFY_TASK=false
ENABLE_DATASETS_QUEUE_MONITOR=false
ENABLE_CHECK_UPGRADABLE_PLUGIN_TASK=true
@ -708,3 +715,14 @@ ANNOTATION_IMPORT_MAX_CONCURRENT=5
SANDBOX_EXPIRED_RECORDS_CLEAN_GRACEFUL_PERIOD=21
SANDBOX_EXPIRED_RECORDS_CLEAN_BATCH_SIZE=1000
SANDBOX_EXPIRED_RECORDS_RETENTION_DAYS=30
# Sandbox Dify CLI configuration
# Directory containing dify CLI binaries (dify-cli-<os>-<arch>). Defaults to api/bin when unset.
SANDBOX_DIFY_CLI_ROOT=
# CLI API URL for sandbox (dify-sandbox or e2b) to call back to Dify API.
# This URL must be accessible from the sandbox environment.
# For local development: use http://localhost:5001 or http://127.0.0.1:5001
# For Docker deployment: use http://api:5001 (internal Docker network)
# For external sandbox (e.g., e2b): use a publicly accessible URL
CLI_API_URL=http://localhost:5001

View File

@ -3,9 +3,11 @@ root_packages =
core
configs
controllers
extensions
models
tasks
services
include_external_packages = True
[importlinter:contract:workflow]
name = Workflow
@ -33,6 +35,28 @@ ignore_imports =
core.workflow.nodes.loop.loop_node -> core.workflow.graph
core.workflow.nodes.loop.loop_node -> core.workflow.graph_engine.command_channels
[importlinter:contract:workflow-infrastructure-dependencies]
name = Workflow Infrastructure Dependencies
type = forbidden
source_modules =
core.workflow
forbidden_modules =
extensions.ext_database
extensions.ext_redis
allow_indirect_imports = True
ignore_imports =
core.workflow.nodes.agent.agent_node -> extensions.ext_database
core.workflow.nodes.datasource.datasource_node -> extensions.ext_database
core.workflow.nodes.knowledge_index.knowledge_index_node -> extensions.ext_database
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> extensions.ext_database
core.workflow.nodes.llm.file_saver -> extensions.ext_database
core.workflow.nodes.llm.llm_utils -> extensions.ext_database
core.workflow.nodes.llm.node -> extensions.ext_database
core.workflow.nodes.tool.tool_node -> extensions.ext_database
core.workflow.graph_engine.command_channels.redis_channel -> extensions.ext_redis
core.workflow.graph_engine.manager -> extensions.ext_redis
core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node -> extensions.ext_redis
[importlinter:contract:rsc]
name = RSC
type = layers

View File

@ -1,62 +1,236 @@
# Agent Skill Index
# API Agent Guide
## Agent Notes (must-check)
Before you start work on any backend file under `api/`, you MUST check whether a related note exists under:
- `agent-notes/<same-relative-path-as-target-file>.md`
Rules:
- **Path mapping**: for a target file `<path>/<name>.py`, the note must be `agent-notes/<path>/<name>.py.md` (same folder structure, same filename, plus `.md`).
- **Before working**:
- If the note exists, read it first and follow any constraints/decisions recorded there.
- If the note conflicts with the current code, or references an "origin" file/path that has been deleted, renamed, or migrated, treat the **code as the single source of truth** and update the note to match reality.
- If the note does not exist, create it with a short architecture/intent summary and any relevant invariants/edge cases.
- **During working**:
- Keep the note in sync as you discover constraints, make decisions, or change approach.
- If you move/rename a file, migrate its note to the new mapped path (and fix any outdated references inside the note).
- Record non-obvious edge cases, trade-offs, and the test/verification plan as you go (not just at the end).
- Keep notes **coherent**: integrate new findings into the relevant sections and rewrite for clarity; avoid append-only “recent fix” / changelog-style additions unless the note is explicitly intended to be a changelog.
- **When finishing work**:
- Update the related note(s) to reflect what changed, why, and any new edge cases/tests.
- If a file is deleted, remove or clearly deprecate the corresponding note so it cannot be mistaken as current guidance.
- Keep notes concise and accurate; they are meant to prevent repeated rediscovery.
## Skill Index
Start with the section that best matches your need. Each entry lists the problems it solves plus key files/concepts so you know what to expect before opening it.
______________________________________________________________________
### Platform Foundations
## Platform Foundations
- **[Infrastructure Overview](agent_skills/infra.md)**\
When to read this:
#### [Infrastructure Overview](agent_skills/infra.md)
- **When to read this**
- You need to understand where a feature belongs in the architecture.
- You're wiring storage, Redis, vector stores, or OTEL.
- You're about to add CLI commands or async jobs.\
What it covers: configuration stack (`configs/app_config.py`, remote settings), storage entry points (`extensions/ext_storage.py`, `core/file/file_manager.py`), Redis conventions (`extensions/ext_redis.py`), plugin runtime topology, vector-store factory (`core/rag/datasource/vdb/*`), observability hooks, SSRF proxy usage, and core CLI commands.
- You're about to add CLI commands or async jobs.
- **What it covers**
- Configuration stack (`configs/app_config.py`, remote settings)
- Storage entry points (`extensions/ext_storage.py`, `core/file/file_manager.py`)
- Redis conventions (`extensions/ext_redis.py`)
- Plugin runtime topology
- Vector-store factory (`core/rag/datasource/vdb/*`)
- Observability hooks
- SSRF proxy usage
- Core CLI commands
- **[Coding Style](agent_skills/coding_style.md)**\
When to read this:
### Plugin & Extension Development
- You're writing or reviewing backend code and need the authoritative checklist.
- You're unsure about Pydantic validators, SQLAlchemy session usage, or logging patterns.
- You want the exact lint/type/test commands used in PRs.\
Includes: Ruff & BasedPyright commands, no-annotation policy, session examples (`with Session(db.engine, ...)`), `@field_validator` usage, logging expectations, and the rule set for file size, helpers, and package management.
______________________________________________________________________
## Plugin & Extension Development
- **[Plugin Systems](agent_skills/plugin.md)**\
When to read this:
#### [Plugin Systems](agent_skills/plugin.md)
- **When to read this**
- You're building or debugging a marketplace plugin.
- You need to know how manifests, providers, daemons, and migrations fit together.\
What it covers: plugin manifests (`core/plugin/entities/plugin.py`), installation/upgrade flows (`services/plugin/plugin_service.py`, CLI commands), runtime adapters (`core/plugin/impl/*` for tool/model/datasource/trigger/endpoint/agent), daemon coordination (`core/plugin/entities/plugin_daemon.py`), and how provider registries surface capabilities to the rest of the platform.
- You need to know how manifests, providers, daemons, and migrations fit together.
- **What it covers**
- Plugin manifests (`core/plugin/entities/plugin.py`)
- Installation/upgrade flows (`services/plugin/plugin_service.py`, CLI commands)
- Runtime adapters (`core/plugin/impl/*` for tool/model/datasource/trigger/endpoint/agent)
- Daemon coordination (`core/plugin/entities/plugin_daemon.py`)
- How provider registries surface capabilities to the rest of the platform
- **[Plugin OAuth](agent_skills/plugin_oauth.md)**\
When to read this:
#### [Plugin OAuth](agent_skills/plugin_oauth.md)
- **When to read this**
- You must integrate OAuth for a plugin or datasource.
- You're handling credential encryption or refresh flows.\
Topics: credential storage, encryption helpers (`core/helper/provider_encryption.py`), OAuth client bootstrap (`services/plugin/oauth_service.py`, `services/plugin/plugin_parameter_service.py`), and how console/API layers expose the flows.
- You're handling credential encryption or refresh flows.
- **Topics**
- Credential storage
- Encryption helpers (`core/helper/provider_encryption.py`)
- OAuth client bootstrap (`services/plugin/oauth_service.py`, `services/plugin/plugin_parameter_service.py`)
- How console/API layers expose the flows
______________________________________________________________________
### Workflow Entry & Execution
## Workflow Entry & Execution
#### [Trigger Concepts](agent_skills/trigger.md)
- **[Trigger Concepts](agent_skills/trigger.md)**\
When to read this:
- **When to read this**
- You're debugging why a workflow didn't start.
- You're adding a new trigger type or hook.
- You need to trace async execution, draft debugging, or webhook/schedule pipelines.\
Details: Start-node taxonomy, webhook & schedule internals (`core/workflow/nodes/trigger_*`, `services/trigger/*`), async orchestration (`services/async_workflow_service.py`, Celery queues), debug event bus, and storage/logging interactions.
- You need to trace async execution, draft debugging, or webhook/schedule pipelines.
- **Details**
- Start-node taxonomy
- Webhook & schedule internals (`core/workflow/nodes/trigger_*`, `services/trigger/*`)
- Async orchestration (`services/async_workflow_service.py`, Celery queues)
- Debug event bus
- Storage/logging interactions
______________________________________________________________________
## General Reminders
## Additional Notes for Agents
- All skill docs assume you follow the coding style guide—run Ruff/BasedPyright/tests listed there before submitting changes.
- All skill docs assume you follow the coding style rules below—run the lint/type/test commands before submitting changes.
- When you cannot find an answer in these briefs, search the codebase using the paths referenced (e.g., `core/plugin/impl/tool.py`, `services/dataset_service.py`).
- If you run into cross-cutting concerns (tenancy, configuration, storage), check the infrastructure guide first; it links to most supporting modules.
- Keep multi-tenancy and configuration central: everything flows through `configs.dify_config` and `tenant_id`.
- When touching plugins or triggers, consult both the system overview and the specialised doc to ensure you adjust lifecycle, storage, and observability consistently.
## Coding Style
This is the default standard for backend code in this repo. Follow it for new code and use it as the checklist when reviewing changes.
### Linting & Formatting
- Use Ruff for formatting and linting (follow `.ruff.toml`).
- Keep each line under 120 characters (including spaces).
### Naming Conventions
- Use `snake_case` for variables and functions.
- Use `PascalCase` for classes.
- Use `UPPER_CASE` for constants.
### Typing & Class Layout
- Code should usually include type annotations that match the repo's current Python version (avoid untyped public APIs and “mystery” values).
- Prefer modern typing forms (e.g. `list[str]`, `dict[str, int]`) and avoid `Any` unless there's a strong reason.
- For classes, declare member variables at the top of the class body (before `__init__`) so the class shape is obvious at a glance:
```python
from datetime import datetime
class Example:
user_id: str
created_at: datetime
def __init__(self, user_id: str, created_at: datetime) -> None:
self.user_id = user_id
self.created_at = created_at
```
### General Rules
- Use Pydantic v2 conventions.
- Use `uv` for Python package management in this repo (usually with `--project api`).
- Prefer simple functions over small “utility classes” for lightweight helpers.
- Avoid implementing dunder methods unless it's clearly needed and matches existing patterns.
- Never start long-running services as part of agent work (`uv run app.py`, `flask run`, etc.); running tests is allowed.
- Keep files below ~800 lines; split when necessary.
- Keep code readable and explicit—avoid clever hacks.
### Architecture & Boundaries
- Mirror the layered architecture: controller → service → core/domain.
- Reuse existing helpers in `core/`, `services/`, and `libs/` before creating new abstractions.
- Optimise for observability: deterministic control flow, clear logging, actionable errors.
### Logging & Errors
- Never use `print`; use a module-level logger:
- `logger = logging.getLogger(__name__)`
- Include tenant/app/workflow identifiers in log context when relevant.
- Raise domain-specific exceptions (`services/errors`, `core/errors`) and translate them into HTTP responses in controllers (see the sketch after this list).
- Log retryable events at `warning`, terminal failures at `error`.
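A minimal sketch of these rules; the domain error and lookup are stand-ins defined inline (real ones live under `services/errors` / `core/errors` and the service layer):
```python
import logging

from werkzeug.exceptions import NotFound

logger = logging.getLogger(__name__)  # module-level logger; never print()


class DocumentNotFoundError(Exception):
    """Stand-in for a domain-specific error from services/errors."""


def load_document(tenant_id: str, document_id: str) -> dict:
    raise DocumentNotFoundError()  # placeholder for the real service lookup


def get_document_or_404(tenant_id: str, document_id: str) -> dict:
    try:
        return load_document(tenant_id, document_id)
    except DocumentNotFoundError:
        # Recoverable condition: log at warning and carry identifiers in the context.
        logger.warning("document not found, tenant_id=%s document_id=%s", tenant_id, document_id)
        # The controller layer translates domain errors into HTTP responses.
        raise NotFound("Document not found.")
    except Exception:
        # Terminal, unexpected failures log at error before propagating.
        logger.exception("failed to load document, tenant_id=%s document_id=%s", tenant_id, document_id)
        raise
```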
### SQLAlchemy Patterns
- Models inherit from `models.base.TypeBase`; do not create ad-hoc metadata or engines.
- Open sessions with context managers:
```python
from sqlalchemy.orm import Session
with Session(db.engine, expire_on_commit=False) as session:
stmt = select(Workflow).where(
Workflow.id == workflow_id,
Workflow.tenant_id == tenant_id,
)
workflow = session.execute(stmt).scalar_one_or_none()
```
- Prefer SQLAlchemy expressions; avoid raw SQL unless necessary.
- Always scope queries by `tenant_id` and protect write paths with safeguards (`FOR UPDATE`, row counts, etc.); see the locking sketch after this list.
- Introduce repository abstractions only for very large tables (e.g., workflow executions) or when alternative storage strategies are required.
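A minimal sketch of the guarded, tenant-scoped write path; the model import path and the mutated column are assumptions for illustration only:
```python
from sqlalchemy import select
from sqlalchemy.orm import sessionmaker

from extensions.ext_database import db
from models.workflow import Workflow  # model path assumed for this sketch


def mark_workflow(tenant_id: str, workflow_id: str) -> None:
    # sessionmaker(...).begin() commits on success and rolls back on error.
    with sessionmaker(db.engine, expire_on_commit=False).begin() as session:
        stmt = (
            select(Workflow)
            .where(
                Workflow.id == workflow_id,
                Workflow.tenant_id == tenant_id,  # always tenant-scoped
            )
            .with_for_update()  # row lock guards the write path
        )
        workflow = session.execute(stmt).scalar_one_or_none()
        if workflow is None:
            raise ValueError(f"workflow {workflow_id} not found for tenant {tenant_id}")
        workflow.marked_comment = "reviewed"  # hypothetical column, shown only to illustrate the guarded write
```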
### Storage & External I/O
- Access storage via `extensions.ext_storage.storage`.
- Use `core.helper.ssrf_proxy` for outbound HTTP fetches.
- Background tasks that touch storage must be idempotent, and should log relevant object identifiers.
### Pydantic Usage
- Define DTOs with Pydantic v2 models and forbid extras by default.
- Use `@field_validator` / `@model_validator` for domain rules.
Example:
```python
from pydantic import BaseModel, ConfigDict, HttpUrl, field_validator
class TriggerConfig(BaseModel):
endpoint: HttpUrl
secret: str
model_config = ConfigDict(extra="forbid")
@field_validator("secret")
def ensure_secret_prefix(cls, value: str) -> str:
if not value.startswith("dify_"):
raise ValueError("secret must start with dify_")
return value
```
### Generics & Protocols
- Use `typing.Protocol` to define behavioural contracts (e.g., cache interfaces).
- Apply generics (`TypeVar`, `Generic`) for reusable utilities like caches or providers (see the sketch after this list).
- Validate dynamic inputs at runtime when generics cannot enforce safety alone.
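A minimal sketch of a protocol-backed generic cache; the names are invented for this example and do not refer to an existing Dify interface:
```python
from typing import Generic, Protocol, TypeVar

K = TypeVar("K")
V = TypeVar("V")


class Cache(Protocol[K, V]):
    """Behavioural contract: anything exposing get/set can serve as a cache."""

    def get(self, key: K) -> V | None: ...

    def set(self, key: K, value: V) -> None: ...


class InMemoryCache(Generic[K, V]):
    _data: dict[K, V]

    def __init__(self) -> None:
        self._data = {}

    def get(self, key: K) -> V | None:
        return self._data.get(key)

    def set(self, key: K, value: V) -> None:
        self._data[key] = value


def warm_cache(cache: Cache[str, str], values: dict[str, str]) -> None:
    # Depends only on the protocol, not on a concrete cache implementation.
    for key, value in values.items():
        cache.set(key, value)
```
Callers type against `Cache[...]`, so an in-memory, Redis-backed, or test-double implementation can be swapped without touching call sites.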
### Tooling & Checks
Quick checks while iterating:
- Format: `make format`
- Lint (includes auto-fix): `make lint`
- Type check: `make type-check`
- Targeted tests: `make test TARGET_TESTS=./api/tests/<target_tests>`
Before opening a PR / submitting:
- `make lint`
- `make type-check`
- `make test`
### Controllers & Services
- Controllers: parse input via Pydantic, invoke services, return serialised responses; no business logic.
- Services: coordinate repositories, providers, background tasks; keep side effects explicit.
- Document non-obvious behaviour with concise comments.
### Miscellaneous
- Use `configs.dify_config` for configuration—never read environment variables directly.
- Maintain tenant awareness end-to-end; `tenant_id` must flow through every layer touching shared resources.
- Queue async work through `services/async_workflow_service`; implement tasks under `tasks/` with explicit queue selection.
- Keep experimental scripts under `dev/`; do not ship them in production builds.

View File

@ -50,16 +50,33 @@ WORKDIR /app/api
# Create non-root user
ARG dify_uid=1001
ARG NODE_MAJOR=22
ARG NODE_PACKAGE_VERSION=22.21.0-1nodesource1
ARG NODESOURCE_KEY_FPR=6F71F525282841EEDAF851B42F59B5F99B1BE0B4
RUN groupadd -r -g ${dify_uid} dify && \
useradd -r -u ${dify_uid} -g ${dify_uid} -s /bin/bash dify && \
chown -R dify:dify /app
RUN \
apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
gnupg \
&& mkdir -p /etc/apt/keyrings \
&& curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key -o /tmp/nodesource.gpg \
&& gpg --show-keys --with-colons /tmp/nodesource.gpg \
| awk -F: '/^fpr:/ {print $10}' \
| grep -Fx "${NODESOURCE_KEY_FPR}" \
&& gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg /tmp/nodesource.gpg \
&& rm -f /tmp/nodesource.gpg \
&& echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_${NODE_MAJOR}.x nodistro main" \
> /etc/apt/sources.list.d/nodesource.list \
&& apt-get update \
# Install dependencies
&& apt-get install -y --no-install-recommends \
# basic environment
curl nodejs \
nodejs=${NODE_PACKAGE_VERSION} \
# for gmpy2 \
libgmp-dev libmpfr-dev libmpc-dev \
# For Security
@ -79,7 +96,8 @@ COPY --from=packages --chown=dify:dify ${VIRTUAL_ENV} ${VIRTUAL_ENV}
ENV PATH="${VIRTUAL_ENV}/bin:${PATH}"
# Download nltk data
RUN mkdir -p /usr/local/share/nltk_data && NLTK_DATA=/usr/local/share/nltk_data python -c "import nltk; nltk.download('punkt'); nltk.download('averaged_perceptron_tagger'); nltk.download('stopwords')" \
RUN mkdir -p /usr/local/share/nltk_data \
&& NLTK_DATA=/usr/local/share/nltk_data python -c "import nltk; from unstructured.nlp.tokenize import download_nltk_packages; nltk.download('punkt'); nltk.download('averaged_perceptron_tagger'); nltk.download('stopwords'); download_nltk_packages()" \
&& chmod -R 755 /usr/local/share/nltk_data
ENV TIKTOKEN_CACHE_DIR=/app/api/.tiktoken_cache

View File

@ -0,0 +1,52 @@
## Purpose
`api/controllers/console/datasets/datasets_document.py` contains the console (authenticated) APIs for managing dataset documents (list/create/update/delete, processing controls, estimates, etc.).
## Storage model (uploaded files)
- For local file uploads into a knowledge base, the binary is stored via `extensions.ext_storage.storage` under the key:
- `upload_files/<tenant_id>/<uuid>.<ext>`
- File metadata is stored in the `upload_files` table (`UploadFile` model), keyed by `UploadFile.id`.
- Dataset `Document` records reference the uploaded file via:
- `Document.data_source_info.upload_file_id`
## Download endpoint
- `GET /datasets/<dataset_id>/documents/<document_id>/download`
- Only supported when `Document.data_source_type == "upload_file"`.
- Performs dataset permission + tenant checks via `DocumentResource.get_document(...)`.
- Delegates `Document -> UploadFile` validation and signed URL generation to `DocumentService.get_document_download_url(...)`.
- Applies `cloud_edition_billing_rate_limit_check("knowledge")` to match other KB operations.
- Response body is **only**: `{ "url": "<signed-url>" }`.
- `POST /datasets/<dataset_id>/documents/download-zip`
- Accepts `{ "document_ids": ["..."] }` (upload-file only).
- Returns `application/zip` as a single attachment download.
- Rationale: browsers often block multiple automatic downloads; a ZIP avoids that limitation.
- Applies `cloud_edition_billing_rate_limit_check("knowledge")`.
- Delegates dataset permission checks, document/upload-file validation, and download-name generation to
`DocumentService.prepare_document_batch_download_zip(...)` before streaming the ZIP. A client-side usage sketch of both endpoints follows below.
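A hedged client-side sketch of the two endpoints; the base URL, auth header, and IDs are placeholders, and only the paths, payloads, and response shapes come from this note:
```python
import requests

BASE = "https://dify.example.com/console/api"  # placeholder deployment URL
HEADERS = {"Authorization": "Bearer <console-token>"}  # placeholder auth
dataset_id = "<dataset-id>"
document_id = "<document-id>"

# Single upload-file document: response body is only {"url": "<signed-url>"}.
resp = requests.get(
    f"{BASE}/datasets/{dataset_id}/documents/{document_id}/download",
    headers=HEADERS,
)
signed_url = resp.json()["url"]
file_bytes = requests.get(signed_url).content  # Content-Disposition forces download

# Batch download: returns application/zip as a single attachment.
zip_resp = requests.post(
    f"{BASE}/datasets/{dataset_id}/documents/download-zip",
    headers=HEADERS,
    json={"document_ids": [document_id]},
)
with open("documents.zip", "wb") as f:
    f.write(zip_resp.content)
```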
## Verification plan
- Upload a document from a local file into a dataset.
- Call the download endpoint and confirm it returns a signed URL.
- Open the URL and confirm:
- Response headers force download (`Content-Disposition`), and
- Downloaded bytes match the uploaded file.
- Select multiple uploaded-file documents and download as ZIP; confirm all selected files exist in the archive.
## Shared helper
- `DocumentService.get_document_download_url(document)` resolves the `UploadFile` and signs a download URL.
- `DocumentService.prepare_document_batch_download_zip(...)` performs dataset permission checks, batches
document + upload file lookups, preserves request order, and generates the client-visible ZIP filename.
- Internal helpers now live in `DocumentService` (`_get_upload_file_id_for_upload_file_document(...)`,
`_get_upload_file_for_upload_file_document(...)`, `_get_upload_files_by_document_id_for_zip_download(...)`).
- ZIP packing is handled by `FileService.build_upload_files_zip_tempfile(...)`, which also:
- sanitizes entry names to avoid path traversal, and
- deduplicates names while preserving extensions (e.g., `doc.txt` → `doc (1).txt`).
Streaming the response and deferring cleanup is handled by the route via `send_file(path, ...)` + `ExitStack` +
`response.call_on_close(...)` (the file is deleted when the response is closed).
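A minimal sketch of that route-layer pattern, assuming a prepared ZIP path and download name (simplified from the actual route):
```python
import os
from contextlib import ExitStack

from flask import send_file


def stream_zip(zip_path: str, download_name: str):
    # Defer tempfile deletion until the streamed response is fully closed.
    stack = ExitStack()
    stack.callback(os.remove, zip_path)
    response = send_file(
        zip_path,
        mimetype="application/zip",
        as_attachment=True,
        download_name=download_name,
    )
    response.call_on_close(stack.close)
    return response
```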

View File

@ -0,0 +1,18 @@
## Purpose
`api/services/dataset_service.py` hosts dataset/document service logic used by console and API controllers.
## Batch document operations
- Batch document workflows should avoid N+1 database queries by using set-based lookups.
- Tenant checks must be enforced consistently across dataset/document operations.
- `DocumentService.get_documents_by_ids(...)` fetches documents for a dataset using `id.in_(...)`.
- `FileService.get_upload_files_by_ids(...)` performs tenant-scoped batch lookup for `UploadFile` (dedupes ids with `set(...)`). A set-based lookup sketch follows this list.
- `DocumentService.get_document_download_url(...)` and `prepare_document_batch_download_zip(...)` handle
dataset/document permission checks plus `Document -> UploadFile` validation for download endpoints.
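An illustrative set-based lookup (not the actual service code; attribute names are assumed from the models referenced above):
```python
from sqlalchemy import select
from sqlalchemy.orm import Session

from extensions.ext_database import db
from models.dataset import Document


def get_documents_by_ids_sketch(dataset_id: str, document_ids: list[str]) -> dict[str, Document]:
    unique_ids = set(document_ids)  # dedupe before querying
    if not unique_ids:
        return {}
    with Session(db.engine, expire_on_commit=False) as session:
        stmt = select(Document).where(
            Document.dataset_id == dataset_id,  # keep dataset/tenant scoping
            Document.id.in_(unique_ids),        # one query instead of N
        )
        documents = session.execute(stmt).scalars().all()
    return {document.id: document for document in documents}
```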
## Verification plan
- Exercise document list and download endpoints that use the service helpers.
- Confirm batch download uses constant query count for documents + upload files.
- Request a ZIP with a missing document id and confirm a 404 is returned.

View File

@ -0,0 +1,35 @@
## Purpose
`api/services/file_service.py` owns business logic around `UploadFile` objects: upload validation, storage persistence,
previews/generators, and deletion.
## Key invariants
- All storage I/O goes through `extensions.ext_storage.storage`.
- Uploaded file keys follow: `upload_files/<tenant_id>/<uuid>.<ext>`.
- Upload validation is enforced in `FileService.upload_file(...)` (blocked extensions, size limits, dataset-only types).
## Batch lookup helpers
- `FileService.get_upload_files_by_ids(tenant_id, upload_file_ids)` is the canonical tenant-scoped batch loader for
`UploadFile`.
## Dataset document download helpers
The dataset document download/ZIP endpoints now delegate “Document → UploadFile” validation and permission checks to
`DocumentService` (`api/services/dataset_service.py`). `FileService` stays focused on generic `UploadFile` operations
(uploading, previews, deletion), plus generic ZIP serving.
### ZIP serving
- `FileService.build_upload_files_zip_tempfile(...)` builds a ZIP from `UploadFile` objects and yields a seeked
tempfile **path** so callers can stream it (e.g., `send_file(path, ...)`) without hitting "read of closed file"
issues from file-handle lifecycle during streamed responses (an illustrative builder sketch follows below).
- Flask `send_file(...)` and the `ExitStack`/`call_on_close(...)` cleanup pattern are handled in the route layer.
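An illustrative builder sketch; the real helper yields the path from a context manager and reads bytes via `storage.load(...)`, while this standalone version takes the bytes as input and returns the path:
```python
import os
import tempfile
import zipfile


def build_zip_tempfile(entries: dict[str, bytes]) -> str:
    tmp = tempfile.NamedTemporaryFile(suffix=".zip", delete=False)
    used_names: set[str] = set()
    with zipfile.ZipFile(tmp, mode="w", compression=zipfile.ZIP_DEFLATED) as archive:
        for raw_name, data in entries.items():
            name = os.path.basename(raw_name) or "file"  # drop directory components / traversal
            stem, ext = os.path.splitext(name)
            counter = 1
            while name in used_names:  # dedupe while preserving the extension
                name = f"{stem} ({counter}){ext}"
                counter += 1
            used_names.add(name)
            archive.writestr(name, data)
    tmp.close()
    return tmp.name  # the route streams this path and deletes it when the response closes
```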
## Verification plan
- Unit: `api/tests/unit_tests/controllers/console/datasets/test_datasets_document_download.py`
- Verify signed URL generation for upload-file documents and ZIP download behavior for multiple documents.
- Unit: `api/tests/unit_tests/services/test_file_service_zip_and_lookup.py`
- Verify ZIP packing produces a valid, openable archive and preserves file content.

View File

@ -0,0 +1,28 @@
## Purpose
Unit tests for the console dataset document download endpoint:
- `GET /datasets/<dataset_id>/documents/<document_id>/download`
## Testing approach
- Uses `Flask.test_request_context()` and calls the `Resource.get(...)` method directly.
- Monkeypatches console decorators (`login_required`, `setup_required`, rate limit) to no-ops to keep the test focused.
- Mocks:
- `DatasetService.get_dataset` / `check_dataset_permission`
- `DocumentService.get_document` for single-file download tests
- `DocumentService.get_documents_by_ids` + `FileService.get_upload_files_by_ids` for ZIP download tests
- `FileService.get_upload_files_by_ids` for `UploadFile` lookups in single-file tests
- `services.dataset_service.file_helpers.get_signed_file_url` to return a deterministic URL
- Document mocks include `id` fields so batch lookups can map documents by id.
## Covered cases
- Success returns `{ "url": "<signed>" }` for upload-file documents.
- 404 when document is not `upload_file`.
- 404 when `upload_file_id` is missing.
- 404 when referenced `UploadFile` row does not exist.
- 403 when document tenant does not match current tenant.
- Batch ZIP download returns `application/zip` for upload-file documents.
- Batch ZIP download rejects non-upload-file documents.
- Batch ZIP download uses a random `.zip` attachment name (`download_name`), so tests only assert the suffix.

View File

@ -0,0 +1,18 @@
## Purpose
Unit tests for `api/services/file_service.py` helper methods that are not covered by higher-level controller tests.
## What's covered
- `FileService.build_upload_files_zip_tempfile(...)`
- ZIP entry name sanitization (no directory components / traversal)
- name deduplication while preserving extensions
- writing streamed bytes from `storage.load(...)` into ZIP entries
- yields a tempfile path so callers can open/stream the ZIP without holding a live file handle
- `FileService.get_upload_files_by_ids(...)`
- returns `{}` for empty id lists
- returns an id-keyed mapping for non-empty lists
## Notes
- These tests intentionally stub `storage.load` and `db.session.scalars(...).all()` to avoid needing a real DB/storage.

View File

@ -1,115 +0,0 @@
## Linter
- Always follow `.ruff.toml`.
- Run `uv run ruff check --fix --unsafe-fixes`.
- Keep each line under 100 characters (including spaces).
## Code Style
- `snake_case` for variables and functions.
- `PascalCase` for classes.
- `UPPER_CASE` for constants.
## Rules
- Use Pydantic v2 standard.
- Use `uv` for package management.
- Do not override dunder methods like `__init__`, `__iadd__`, etc.
- Never launch services (`uv run app.py`, `flask run`, etc.); running tests under `tests/` is allowed.
- Prefer simple functions over classes for lightweight helpers.
- Keep files below 800 lines; split when necessary.
- Keep code readable—no clever hacks.
- Never use `print`; log with `logger = logging.getLogger(__name__)`.
## Guiding Principles
- Mirror the project's layered architecture: controller → service → core/domain.
- Reuse existing helpers in `core/`, `services/`, and `libs/` before creating new abstractions.
- Optimise for observability: deterministic control flow, clear logging, actionable errors.
## SQLAlchemy Patterns
- Models inherit from `models.base.Base`; never create ad-hoc metadata or engines.
- Open sessions with context managers:
```python
from sqlalchemy.orm import Session
with Session(db.engine, expire_on_commit=False) as session:
stmt = select(Workflow).where(
Workflow.id == workflow_id,
Workflow.tenant_id == tenant_id,
)
workflow = session.execute(stmt).scalar_one_or_none()
```
- Use SQLAlchemy expressions; avoid raw SQL unless necessary.
- Introduce repository abstractions only for very large tables (e.g., workflow executions) to support alternative storage strategies.
- Always scope queries by `tenant_id` and protect write paths with safeguards (`FOR UPDATE`, row counts, etc.).
## Storage & External IO
- Access storage via `extensions.ext_storage.storage`.
- Use `core.helper.ssrf_proxy` for outbound HTTP fetches.
- Background tasks that touch storage must be idempotent and log the relevant object identifiers.
## Pydantic Usage
- Define DTOs with Pydantic v2 models and forbid extras by default.
- Use `@field_validator` / `@model_validator` for domain rules.
- Example:
```python
from pydantic import BaseModel, ConfigDict, HttpUrl, field_validator
class TriggerConfig(BaseModel):
endpoint: HttpUrl
secret: str
model_config = ConfigDict(extra="forbid")
@field_validator("secret")
def ensure_secret_prefix(cls, value: str) -> str:
if not value.startswith("dify_"):
raise ValueError("secret must start with dify_")
return value
```
## Generics & Protocols
- Use `typing.Protocol` to define behavioural contracts (e.g., cache interfaces).
- Apply generics (`TypeVar`, `Generic`) for reusable utilities like caches or providers.
- Validate dynamic inputs at runtime when generics cannot enforce safety alone.
## Error Handling & Logging
- Raise domain-specific exceptions (`services/errors`, `core/errors`) and translate to HTTP responses in controllers.
- Declare `logger = logging.getLogger(__name__)` at module top.
- Include tenant/app/workflow identifiers in log context.
- Log retryable events at `warning`, terminal failures at `error`.
## Tooling & Checks
- Format/lint: `uv run --project api --dev ruff format ./api` and `uv run --project api --dev ruff check --fix --unsafe-fixes ./api`.
- Type checks: `uv run --directory api --dev basedpyright`.
- Tests: `uv run --project api --dev dev/pytest/pytest_unit_tests.sh`.
- Run all of the above before submitting your work.
## Controllers & Services
- Controllers: parse input via Pydantic, invoke services, return serialised responses; no business logic.
- Services: coordinate repositories, providers, background tasks; keep side effects explicit.
- Avoid repositories unless necessary; direct SQLAlchemy usage is preferred for typical tables.
- Document non-obvious behaviour with concise comments.
## Miscellaneous
- Use `configs.dify_config` for configuration—never read environment variables directly.
- Maintain tenant awareness end-to-end; `tenant_id` must flow through every layer touching shared resources.
- Queue async work through `services/async_workflow_service`; implement tasks under `tasks/` with explicit queue selection.
- Keep experimental scripts under `dev/`; do not ship them in production builds.

View File

@ -71,6 +71,8 @@ def create_app() -> DifyApp:
def initialize_extensions(app: DifyApp):
# Initialize Flask context capture for workflow execution
from context.flask_app_context import init_flask_context
from extensions import (
ext_app_metrics,
ext_blueprints,
@ -100,6 +102,8 @@ def initialize_extensions(app: DifyApp):
ext_warnings,
)
init_flask_context()
extensions = [
ext_timezone,
ext_logging,

BIN
api/bin/dify-cli-darwin-amd64 Executable file

Binary file not shown.

BIN
api/bin/dify-cli-darwin-arm64 Executable file

Binary file not shown.

BIN
api/bin/dify-cli-linux-amd64 Executable file

Binary file not shown.

BIN
api/bin/dify-cli-linux-arm64 Executable file

Binary file not shown.

View File

@ -1,7 +1,9 @@
import base64
import datetime
import json
import logging
import secrets
import time
from typing import Any
import click
@ -21,7 +23,8 @@ from core.rag.datasource.vdb.vector_factory import Vector
from core.rag.datasource.vdb.vector_type import VectorType
from core.rag.index_processor.constant.built_in_field import BuiltInField
from core.rag.models.document import Document
from core.tools.utils.system_oauth_encryption import encrypt_system_oauth_params
from core.sandbox.vm import SandboxBuilder, SandboxType
from core.tools.utils.system_encryption import encrypt_system_params
from events.app_event import app_was_created
from extensions.ext_database import db
from extensions.ext_redis import redis_client
@ -34,7 +37,7 @@ from libs.rsa import generate_key_pair
from models import Tenant
from models.dataset import Dataset, DatasetCollectionBinding, DatasetMetadata, DatasetMetadataBinding, DocumentSegment
from models.dataset import Document as DatasetDocument
from models.model import Account, App, AppAnnotationSetting, AppMode, Conversation, MessageAnnotation, UploadFile
from models.model import App, AppAnnotationSetting, AppMode, Conversation, MessageAnnotation, UploadFile
from models.oauth import DatasourceOauthParamConfig, DatasourceProvider
from models.provider import Provider, ProviderModel
from models.provider_ids import DatasourceProviderID, ToolProviderID
@ -45,6 +48,9 @@ from services.clear_free_plan_tenant_expired_logs import ClearFreePlanTenantExpi
from services.plugin.data_migration import PluginDataMigration
from services.plugin.plugin_migration import PluginMigration
from services.plugin.plugin_service import PluginService
from services.retention.conversation.messages_clean_policy import create_message_clean_policy
from services.retention.conversation.messages_clean_service import MessagesCleanService
from services.retention.workflow_run.clear_free_plan_expired_workflow_run_logs import WorkflowRunCleanup
from tasks.remove_app_and_related_data_task import delete_draft_variables_batch
logger = logging.getLogger(__name__)
@ -62,8 +68,10 @@ def reset_password(email, new_password, password_confirm):
if str(new_password).strip() != str(password_confirm).strip():
click.echo(click.style("Passwords do not match.", fg="red"))
return
normalized_email = email.strip().lower()
with sessionmaker(db.engine, expire_on_commit=False).begin() as session:
account = session.query(Account).where(Account.email == email).one_or_none()
account = AccountService.get_account_by_email_with_case_fallback(email.strip(), session=session)
if not account:
click.echo(click.style(f"Account not found for email: {email}", fg="red"))
@ -84,7 +92,7 @@ def reset_password(email, new_password, password_confirm):
base64_password_hashed = base64.b64encode(password_hashed).decode()
account.password = base64_password_hashed
account.password_salt = base64_salt
AccountService.reset_login_error_rate_limit(email)
AccountService.reset_login_error_rate_limit(normalized_email)
click.echo(click.style("Password reset successfully.", fg="green"))
@ -100,20 +108,22 @@ def reset_email(email, new_email, email_confirm):
if str(new_email).strip() != str(email_confirm).strip():
click.echo(click.style("New emails do not match.", fg="red"))
return
normalized_new_email = new_email.strip().lower()
with sessionmaker(db.engine, expire_on_commit=False).begin() as session:
account = session.query(Account).where(Account.email == email).one_or_none()
account = AccountService.get_account_by_email_with_case_fallback(email.strip(), session=session)
if not account:
click.echo(click.style(f"Account not found for email: {email}", fg="red"))
return
try:
email_validate(new_email)
email_validate(normalized_new_email)
except:
click.echo(click.style(f"Invalid email: {new_email}", fg="red"))
return
account.email = new_email
account.email = normalized_new_email
click.echo(click.style("Email updated successfully.", fg="green"))
@@ -235,7 +245,7 @@ def migrate_annotation_vector_database():
if annotations:
for annotation in annotations:
document = Document(
page_content=annotation.question,
page_content=annotation.question_text,
metadata={"annotation_id": annotation.id, "app_id": app.id, "doc_id": annotation.id},
)
documents.append(document)
@@ -658,7 +668,7 @@ def create_tenant(email: str, language: str | None = None, name: str | None = No
return
# Create account
email = email.strip()
email = email.strip().lower()
if "@" not in email:
click.echo(click.style("Invalid email address.", fg="red"))
@@ -852,6 +862,95 @@ def clear_free_plan_tenant_expired_logs(days: int, batch: int, tenant_ids: list[
click.echo(click.style("Clear free plan tenant expired logs completed.", fg="green"))
@click.command("clean-workflow-runs", help="Clean expired workflow runs and related data for free tenants.")
@click.option(
"--before-days",
"--days",
default=30,
show_default=True,
type=click.IntRange(min=0),
help="Delete workflow runs created before N days ago.",
)
@click.option("--batch-size", default=200, show_default=True, help="Batch size for selecting workflow runs.")
@click.option(
"--from-days-ago",
default=None,
type=click.IntRange(min=0),
help="Lower bound in days ago (older). Must be paired with --to-days-ago.",
)
@click.option(
"--to-days-ago",
default=None,
type=click.IntRange(min=0),
help="Upper bound in days ago (newer). Must be paired with --from-days-ago.",
)
@click.option(
"--start-from",
type=click.DateTime(formats=["%Y-%m-%d", "%Y-%m-%dT%H:%M:%S"]),
default=None,
help="Optional lower bound (inclusive) for created_at; must be paired with --end-before.",
)
@click.option(
"--end-before",
type=click.DateTime(formats=["%Y-%m-%d", "%Y-%m-%dT%H:%M:%S"]),
default=None,
help="Optional upper bound (exclusive) for created_at; must be paired with --start-from.",
)
@click.option(
"--dry-run",
is_flag=True,
help="Preview cleanup results without deleting any workflow run data.",
)
def clean_workflow_runs(
before_days: int,
batch_size: int,
from_days_ago: int | None,
to_days_ago: int | None,
start_from: datetime.datetime | None,
end_before: datetime.datetime | None,
dry_run: bool,
):
"""
Clean workflow runs and related workflow data for free tenants.
"""
if (start_from is None) ^ (end_before is None):
raise click.UsageError("--start-from and --end-before must be provided together.")
if (from_days_ago is None) ^ (to_days_ago is None):
raise click.UsageError("--from-days-ago and --to-days-ago must be provided together.")
if from_days_ago is not None and to_days_ago is not None:
if start_from or end_before:
raise click.UsageError("Choose either day offsets or explicit dates, not both.")
if from_days_ago <= to_days_ago:
raise click.UsageError("--from-days-ago must be greater than --to-days-ago.")
now = datetime.datetime.now()
start_from = now - datetime.timedelta(days=from_days_ago)
end_before = now - datetime.timedelta(days=to_days_ago)
before_days = 0
start_time = datetime.datetime.now(datetime.UTC)
click.echo(click.style(f"Starting workflow run cleanup at {start_time.isoformat()}.", fg="white"))
WorkflowRunCleanup(
days=before_days,
batch_size=batch_size,
start_from=start_from,
end_before=end_before,
dry_run=dry_run,
).run()
end_time = datetime.datetime.now(datetime.UTC)
elapsed = end_time - start_time
click.echo(
click.style(
f"Workflow run cleanup completed. start={start_time.isoformat()} "
f"end={end_time.isoformat()} duration={elapsed}",
fg="green",
)
)
@click.option("-f", "--force", is_flag=True, help="Skip user confirmation and force the command to execute.")
@click.command("clear-orphaned-file-records", help="Clear orphaned file records.")
def clear_orphaned_file_records(force: bool):
@@ -1147,7 +1246,7 @@ def remove_orphaned_files_on_storage(force: bool):
click.echo(click.style(f"- Scanning files on storage path {storage_path}", fg="white"))
files = storage.scan(path=storage_path, files=True, directories=False)
all_files_on_storage.extend(files)
except FileNotFoundError as e:
except FileNotFoundError:
click.echo(click.style(f" -> Skipping path {storage_path} as it does not exist.", fg="yellow"))
continue
except Exception as e:
@@ -1184,6 +1283,268 @@ def remove_orphaned_files_on_storage(force: bool):
click.echo(click.style(f"Removed {removed_files} orphaned files, with {error_files} errors.", fg="yellow"))
@click.command("file-usage", help="Query file usages and show where files are referenced.")
@click.option("--file-id", type=str, default=None, help="Filter by file UUID.")
@click.option("--key", type=str, default=None, help="Filter by storage key.")
@click.option("--src", type=str, default=None, help="Filter by table.column pattern (e.g., 'documents.%' or '%.icon').")
@click.option("--limit", type=int, default=100, help="Limit number of results (default: 100).")
@click.option("--offset", type=int, default=0, help="Offset for pagination (default: 0).")
@click.option("--json", "output_json", is_flag=True, help="Output results in JSON format.")
def file_usage(
file_id: str | None,
key: str | None,
src: str | None,
limit: int,
offset: int,
output_json: bool,
):
"""
Query file usages and show where files are referenced in the database.
This command reuses the same reference checking logic as clear-orphaned-file-records
and displays detailed information about where each file is referenced.
"""
# define tables and columns to process
files_tables = [
{"table": "upload_files", "id_column": "id", "key_column": "key"},
{"table": "tool_files", "id_column": "id", "key_column": "file_key"},
]
ids_tables = [
{"type": "uuid", "table": "message_files", "column": "upload_file_id", "pk_column": "id"},
{"type": "text", "table": "documents", "column": "data_source_info", "pk_column": "id"},
{"type": "text", "table": "document_segments", "column": "content", "pk_column": "id"},
{"type": "text", "table": "messages", "column": "answer", "pk_column": "id"},
{"type": "text", "table": "workflow_node_executions", "column": "inputs", "pk_column": "id"},
{"type": "text", "table": "workflow_node_executions", "column": "process_data", "pk_column": "id"},
{"type": "text", "table": "workflow_node_executions", "column": "outputs", "pk_column": "id"},
{"type": "text", "table": "conversations", "column": "introduction", "pk_column": "id"},
{"type": "text", "table": "conversations", "column": "system_instruction", "pk_column": "id"},
{"type": "text", "table": "accounts", "column": "avatar", "pk_column": "id"},
{"type": "text", "table": "apps", "column": "icon", "pk_column": "id"},
{"type": "text", "table": "sites", "column": "icon", "pk_column": "id"},
{"type": "json", "table": "messages", "column": "inputs", "pk_column": "id"},
{"type": "json", "table": "messages", "column": "message", "pk_column": "id"},
]
# Stream file usages with pagination to avoid holding all results in memory
paginated_usages = []
total_count = 0
# First, build a mapping of file_id -> storage_key from the base tables
file_key_map = {}
for files_table in files_tables:
query = f"SELECT {files_table['id_column']}, {files_table['key_column']} FROM {files_table['table']}"
with db.engine.begin() as conn:
rs = conn.execute(sa.text(query))
for row in rs:
file_key_map[str(row[0])] = f"{files_table['table']}:{row[1]}"
# If filtering by key or file_id, verify it exists
if file_id and file_id not in file_key_map:
if output_json:
click.echo(json.dumps({"error": f"File ID {file_id} not found in base tables"}))
else:
click.echo(click.style(f"File ID {file_id} not found in base tables.", fg="red"))
return
if key:
valid_prefixes = {f"upload_files:{key}", f"tool_files:{key}"}
matching_file_ids = [fid for fid, fkey in file_key_map.items() if fkey in valid_prefixes]
if not matching_file_ids:
if output_json:
click.echo(json.dumps({"error": f"Key {key} not found in base tables"}))
else:
click.echo(click.style(f"Key {key} not found in base tables.", fg="red"))
return
guid_regexp = "[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
# For each reference table/column, find matching file IDs and record the references
for ids_table in ids_tables:
src_filter = f"{ids_table['table']}.{ids_table['column']}"
# Skip if src filter doesn't match (use fnmatch for wildcard patterns)
if src:
if "%" in src or "_" in src:
import fnmatch
# Convert SQL LIKE wildcards to fnmatch wildcards (% -> *, _ -> ?)
pattern = src.replace("%", "*").replace("_", "?")
if not fnmatch.fnmatch(src_filter, pattern):
continue
else:
if src_filter != src:
continue
if ids_table["type"] == "uuid":
# Direct UUID match
query = (
f"SELECT {ids_table['pk_column']}, {ids_table['column']} "
f"FROM {ids_table['table']} WHERE {ids_table['column']} IS NOT NULL"
)
with db.engine.begin() as conn:
rs = conn.execute(sa.text(query))
for row in rs:
record_id = str(row[0])
ref_file_id = str(row[1])
if ref_file_id not in file_key_map:
continue
storage_key = file_key_map[ref_file_id]
# Apply filters
if file_id and ref_file_id != file_id:
continue
if key and not storage_key.endswith(key):
continue
# Only collect items within the requested page range
if offset <= total_count < offset + limit:
paginated_usages.append(
{
"src": f"{ids_table['table']}.{ids_table['column']}",
"record_id": record_id,
"file_id": ref_file_id,
"key": storage_key,
}
)
total_count += 1
elif ids_table["type"] in ("text", "json"):
# Extract UUIDs from text/json content
column_cast = f"{ids_table['column']}::text" if ids_table["type"] == "json" else ids_table["column"]
query = (
f"SELECT {ids_table['pk_column']}, {column_cast} "
f"FROM {ids_table['table']} WHERE {ids_table['column']} IS NOT NULL"
)
with db.engine.begin() as conn:
rs = conn.execute(sa.text(query))
for row in rs:
record_id = str(row[0])
content = str(row[1])
# Find all UUIDs in the content using the pattern compiled above
matches = uuid_pattern.findall(content)
for ref_file_id in matches:
if ref_file_id not in file_key_map:
continue
storage_key = file_key_map[ref_file_id]
# Apply filters
if file_id and ref_file_id != file_id:
continue
if key and not storage_key.endswith(key):
continue
# Only collect items within the requested page range
if offset <= total_count < offset + limit:
paginated_usages.append(
{
"src": f"{ids_table['table']}.{ids_table['column']}",
"record_id": record_id,
"file_id": ref_file_id,
"key": storage_key,
}
)
total_count += 1
# Output results
if output_json:
result = {
"total": total_count,
"offset": offset,
"limit": limit,
"usages": paginated_usages,
}
click.echo(json.dumps(result, indent=2))
else:
click.echo(
click.style(f"Found {total_count} file usages (showing {len(paginated_usages)} results)", fg="white")
)
click.echo("")
if not paginated_usages:
click.echo(click.style("No file usages found matching the specified criteria.", fg="yellow"))
return
# Print table header
click.echo(
click.style(
f"{'Src (Table.Column)':<50} {'Record ID':<40} {'File ID':<40} {'Storage Key':<60}",
fg="cyan",
)
)
click.echo(click.style("-" * 190, fg="white"))
# Print each usage
for usage in paginated_usages:
click.echo(f"{usage['src']:<50} {usage['record_id']:<40} {usage['file_id']:<40} {usage['key']:<60}")
# Show pagination info
if offset + limit < total_count:
click.echo("")
click.echo(
click.style(
f"Showing {offset + 1}-{offset + len(paginated_usages)} of {total_count} results", fg="white"
)
)
click.echo(click.style(f"Use --offset {offset + limit} to see next page", fg="white"))
@click.command("setup-sandbox-system-config", help="Setup system-level sandbox provider configuration.")
@click.option(
"--provider-type", prompt=True, type=click.Choice(["e2b", "docker", "local"]), help="Sandbox provider type"
)
@click.option("--config", prompt=True, help='Configuration JSON (e.g., {"api_key": "xxx"} for e2b)')
def setup_sandbox_system_config(provider_type: str, config: str):
"""
Setup system-level sandbox provider configuration.
Examples:
flask setup-sandbox-system-config --provider-type e2b --config '{"api_key": "e2b_xxx"}'
flask setup-sandbox-system-config --provider-type docker --config '{"docker_sock": "unix:///var/run/docker.sock"}'
flask setup-sandbox-system-config --provider-type local --config '{}'
"""
from models.sandbox import SandboxProviderSystemConfig
try:
click.echo(click.style(f"Validating config: {config}", fg="yellow"))
config_dict = TypeAdapter(dict[str, Any]).validate_json(config)
click.echo(click.style("Config validated successfully.", fg="green"))
click.echo(click.style(f"Validating config schema for provider type: {provider_type}", fg="yellow"))
SandboxBuilder.validate(SandboxType(provider_type), config_dict)
click.echo(click.style("Config schema validated successfully.", fg="green"))
click.echo(click.style("Encrypting config...", fg="yellow"))
click.echo(click.style(f"Using SECRET_KEY: `{dify_config.SECRET_KEY}`", fg="yellow"))
encrypted_config = encrypt_system_params(config_dict)
click.echo(click.style("Config encrypted successfully.", fg="green"))
except Exception as e:
click.echo(click.style(f"Error validating/encrypting config: {str(e)}", fg="red"))
return
deleted_count = db.session.query(SandboxProviderSystemConfig).filter_by(provider_type=provider_type).delete()
if deleted_count > 0:
click.echo(
click.style(
f"Deleted {deleted_count} existing system config for provider type: {provider_type}", fg="yellow"
)
)
system_config = SandboxProviderSystemConfig(
provider_type=provider_type,
encrypted_config=encrypted_config,
)
db.session.add(system_config)
db.session.commit()
click.echo(click.style(f"Sandbox system config setup successfully. id: {system_config.id}", fg="green"))
click.echo(click.style(f"Provider type: {provider_type}", fg="green"))
@click.command("setup-system-tool-oauth-client", help="Setup system tool oauth client.")
@click.option("--provider", prompt=True, help="Provider name")
@click.option("--client-params", prompt=True, help="Client Params")
@@ -1203,7 +1564,7 @@ def setup_system_tool_oauth_client(provider, client_params):
click.echo(click.style(f"Encrypting client params: {client_params}", fg="yellow"))
click.echo(click.style(f"Using SECRET_KEY: `{dify_config.SECRET_KEY}`", fg="yellow"))
oauth_client_params = encrypt_system_oauth_params(client_params_dict)
oauth_client_params = encrypt_system_params(client_params_dict)
click.echo(click.style("Client params encrypted successfully.", fg="green"))
except Exception as e:
click.echo(click.style(f"Error parsing client params: {str(e)}", fg="red"))
@@ -1252,7 +1613,7 @@ def setup_system_trigger_oauth_client(provider, client_params):
click.echo(click.style(f"Encrypting client params: {client_params}", fg="yellow"))
click.echo(click.style(f"Using SECRET_KEY: `{dify_config.SECRET_KEY}`", fg="yellow"))
oauth_client_params = encrypt_system_oauth_params(client_params_dict)
oauth_client_params = encrypt_system_params(client_params_dict)
click.echo(click.style("Client params encrypted successfully.", fg="green"))
except Exception as e:
click.echo(click.style(f"Error parsing client params: {str(e)}", fg="red"))
@@ -1900,3 +2261,79 @@ def migrate_oss(
except Exception as e:
db.session.rollback()
click.echo(click.style(f"Failed to update DB storage_type: {str(e)}", fg="red"))
@click.command("clean-expired-messages", help="Clean expired messages.")
@click.option(
"--start-from",
type=click.DateTime(formats=["%Y-%m-%d", "%Y-%m-%dT%H:%M:%S"]),
required=True,
help="Lower bound (inclusive) for created_at.",
)
@click.option(
"--end-before",
type=click.DateTime(formats=["%Y-%m-%d", "%Y-%m-%dT%H:%M:%S"]),
required=True,
help="Upper bound (exclusive) for created_at.",
)
@click.option("--batch-size", default=1000, show_default=True, help="Batch size for selecting messages.")
@click.option(
"--graceful-period",
default=21,
show_default=True,
help="Graceful period in days after subscription expiration, will be ignored when billing is disabled.",
)
@click.option("--dry-run", is_flag=True, default=False, help="Show messages logs would be cleaned without deleting")
def clean_expired_messages(
batch_size: int,
graceful_period: int,
start_from: datetime.datetime,
end_before: datetime.datetime,
dry_run: bool,
):
"""
Clean expired messages and related data for tenants based on clean policy.
"""
click.echo(click.style("clean_messages: start clean messages.", fg="green"))
start_at = time.perf_counter()
try:
# Create policy based on billing configuration
# NOTE: graceful_period will be ignored when billing is disabled.
policy = create_message_clean_policy(graceful_period_days=graceful_period)
# Create and run the cleanup service
service = MessagesCleanService.from_time_range(
policy=policy,
start_from=start_from,
end_before=end_before,
batch_size=batch_size,
dry_run=dry_run,
)
stats = service.run()
end_at = time.perf_counter()
click.echo(
click.style(
f"clean_messages: completed successfully\n"
f" - Latency: {end_at - start_at:.2f}s\n"
f" - Batches processed: {stats['batches']}\n"
f" - Total messages scanned: {stats['total_messages']}\n"
f" - Messages filtered: {stats['filtered_messages']}\n"
f" - Messages deleted: {stats['total_deleted']}",
fg="green",
)
)
except Exception as e:
end_at = time.perf_counter()
logger.exception("clean_messages failed")
click.echo(
click.style(
f"clean_messages: failed after {end_at - start_at:.2f}s - {str(e)}",
fg="red",
)
)
raise
click.echo(click.style("messages cleanup completed.", fg="green"))


@@ -2,6 +2,7 @@ import logging
from pathlib import Path
from typing import Any
from pydantic import Field
from pydantic.fields import FieldInfo
from pydantic_settings import BaseSettings, PydanticBaseSettingsSource, SettingsConfigDict, TomlConfigSettingsSource
@@ -82,6 +83,14 @@ class DifyConfig(
extra="ignore",
)
SANDBOX_DIFY_CLI_ROOT: str | None = Field(
default=None,
description=(
"Filesystem directory containing dify CLI binaries named dify-cli-<os>-<arch>. "
"Defaults to api/bin when unset."
),
)
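A hypothetical sketch of the naming convention described above; the actual resolver in the codebase is not shown in this diff, and the exact os/arch tokens (for example amd64 versus x86_64) are assumptions.
import platform
from pathlib import Path
from configs import dify_config
# Fall back to the documented default when the setting is unset
root = Path(dify_config.SANDBOX_DIFY_CLI_ROOT or "api/bin")
cli_binary = root / f"dify-cli-{platform.system().lower()}-{platform.machine().lower()}"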
# Before adding any config,
# please consider to arrange it in the proper config group of existed or added
# for better readability and maintainability.


@@ -244,6 +244,17 @@ class PluginConfig(BaseSettings):
)
class CliApiConfig(BaseSettings):
"""
Configuration for CLI API (for dify-cli to call back from external sandbox environments)
"""
CLI_API_URL: str = Field(
description="CLI API URL for external sandbox (e.g., e2b) to call back.",
default="http://localhost:5001",
)
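A small sketch of how this setting is read on the API side, per the class description above; the callback endpoint path that dify-cli hits from an external sandbox is not shown in this diff and is omitted here.
from configs import dify_config
# Base URL that external sandboxes use to call back into the API (per the field description)
callback_base = dify_config.CLI_API_URL.rstrip("/")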
class MarketplaceConfig(BaseSettings):
"""
Configuration for marketplace
@@ -949,6 +960,12 @@ class MailConfig(BaseSettings):
default=False,
)
SMTP_LOCAL_HOSTNAME: str | None = Field(
description="Override the local hostname used in SMTP HELO/EHLO. "
"Useful behind NAT or when the default hostname causes rejections.",
default=None,
)
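For context on the new setting above, a hedged sketch of what it corresponds to in the standard library: smtplib accepts a local_hostname override for the EHLO/HELO greeting. Whether Dify's mail extension wires it exactly this way is an assumption; host and port are placeholders.
import smtplib
from configs import dify_config
# Use the configured hostname in EHLO/HELO instead of the machine's default
client = smtplib.SMTP("smtp.example.com", 587, local_hostname=dify_config.SMTP_LOCAL_HOSTNAME)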
EMAIL_SEND_IP_LIMIT_PER_MINUTE: PositiveInt = Field(
description="Maximum number of emails allowed to be sent from the same IP address in a minute",
default=50,
@@ -1101,6 +1118,10 @@ class CeleryScheduleTasksConfig(BaseSettings):
description="Enable clean messages task",
default=False,
)
ENABLE_WORKFLOW_RUN_CLEANUP_TASK: bool = Field(
description="Enable scheduled workflow run cleanup task",
default=False,
)
ENABLE_MAIL_CLEAN_DOCUMENT_NOTIFY_TASK: bool = Field(
description="Enable mail clean document notify task",
default=False,
@@ -1299,6 +1320,7 @@ class FeatureConfig(
TriggerConfig,
AsyncWorkflowConfig,
PluginConfig,
CliApiConfig,
MarketplaceConfig,
DataSetConfig,
EndpointConfig,


@@ -8,6 +8,11 @@ class HostedCreditConfig(BaseSettings):
default="",
)
HOSTED_POOL_CREDITS: int = Field(
description="Pool credits for hosted service",
default=200,
)
def get_model_credits(self, model_name: str) -> int:
"""
Get credit value for a specific model name.
@@ -60,19 +65,46 @@ class HostedOpenAiConfig(BaseSettings):
HOSTED_OPENAI_TRIAL_MODELS: str = Field(
description="Comma-separated list of available models for trial access",
default="gpt-3.5-turbo,"
"gpt-3.5-turbo-1106,"
"gpt-3.5-turbo-instruct,"
default="gpt-4,"
"gpt-4-turbo-preview,"
"gpt-4-turbo-2024-04-09,"
"gpt-4-1106-preview,"
"gpt-4-0125-preview,"
"gpt-4-turbo,"
"gpt-4.1,"
"gpt-4.1-2025-04-14,"
"gpt-4.1-mini,"
"gpt-4.1-mini-2025-04-14,"
"gpt-4.1-nano,"
"gpt-4.1-nano-2025-04-14,"
"gpt-3.5-turbo,"
"gpt-3.5-turbo-16k,"
"gpt-3.5-turbo-16k-0613,"
"gpt-3.5-turbo-1106,"
"gpt-3.5-turbo-0613,"
"gpt-3.5-turbo-0125,"
"text-davinci-003",
)
HOSTED_OPENAI_QUOTA_LIMIT: NonNegativeInt = Field(
description="Quota limit for hosted OpenAI service usage",
default=200,
"gpt-3.5-turbo-instruct,"
"text-davinci-003,"
"chatgpt-4o-latest,"
"gpt-4o,"
"gpt-4o-2024-05-13,"
"gpt-4o-2024-08-06,"
"gpt-4o-2024-11-20,"
"gpt-4o-audio-preview,"
"gpt-4o-audio-preview-2025-06-03,"
"gpt-4o-mini,"
"gpt-4o-mini-2024-07-18,"
"o3-mini,"
"o3-mini-2025-01-31,"
"gpt-5-mini-2025-08-07,"
"gpt-5-mini,"
"o4-mini,"
"o4-mini-2025-04-16,"
"gpt-5-chat-latest,"
"gpt-5,"
"gpt-5-2025-08-07,"
"gpt-5-nano,"
"gpt-5-nano-2025-08-07",
)
HOSTED_OPENAI_PAID_ENABLED: bool = Field(
@@ -87,6 +119,13 @@ class HostedOpenAiConfig(BaseSettings):
"gpt-4-turbo-2024-04-09,"
"gpt-4-1106-preview,"
"gpt-4-0125-preview,"
"gpt-4-turbo,"
"gpt-4.1,"
"gpt-4.1-2025-04-14,"
"gpt-4.1-mini,"
"gpt-4.1-mini-2025-04-14,"
"gpt-4.1-nano,"
"gpt-4.1-nano-2025-04-14,"
"gpt-3.5-turbo,"
"gpt-3.5-turbo-16k,"
"gpt-3.5-turbo-16k-0613,"
@@ -94,7 +133,150 @@ class HostedOpenAiConfig(BaseSettings):
"gpt-3.5-turbo-0613,"
"gpt-3.5-turbo-0125,"
"gpt-3.5-turbo-instruct,"
"text-davinci-003",
"text-davinci-003,"
"chatgpt-4o-latest,"
"gpt-4o,"
"gpt-4o-2024-05-13,"
"gpt-4o-2024-08-06,"
"gpt-4o-2024-11-20,"
"gpt-4o-audio-preview,"
"gpt-4o-audio-preview-2025-06-03,"
"gpt-4o-mini,"
"gpt-4o-mini-2024-07-18,"
"o3-mini,"
"o3-mini-2025-01-31,"
"gpt-5-mini-2025-08-07,"
"gpt-5-mini,"
"o4-mini,"
"o4-mini-2025-04-16,"
"gpt-5-chat-latest,"
"gpt-5,"
"gpt-5-2025-08-07,"
"gpt-5-nano,"
"gpt-5-nano-2025-08-07",
)
class HostedGeminiConfig(BaseSettings):
"""
Configuration for fetching Gemini service
"""
HOSTED_GEMINI_API_KEY: str | None = Field(
description="API key for hosted Gemini service",
default=None,
)
HOSTED_GEMINI_API_BASE: str | None = Field(
description="Base URL for hosted Gemini API",
default=None,
)
HOSTED_GEMINI_API_ORGANIZATION: str | None = Field(
description="Organization ID for hosted Gemini service",
default=None,
)
HOSTED_GEMINI_TRIAL_ENABLED: bool = Field(
description="Enable trial access to hosted Gemini service",
default=False,
)
HOSTED_GEMINI_TRIAL_MODELS: str = Field(
description="Comma-separated list of available models for trial access",
default="gemini-2.5-flash,gemini-2.0-flash,gemini-2.0-flash-lite,",
)
HOSTED_GEMINI_PAID_ENABLED: bool = Field(
description="Enable paid access to hosted gemini service",
default=False,
)
HOSTED_GEMINI_PAID_MODELS: str = Field(
description="Comma-separated list of available models for paid access",
default="gemini-2.5-flash,gemini-2.0-flash,gemini-2.0-flash-lite,",
)
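The defaults above (and in the other hosted-provider classes below) are plain comma-separated strings; a hedged sketch of how such a list is typically normalized into model names, dropping empty entries left by stray trailing commas. The exact parsing used by the hosted-provider code is not shown in this diff.
from configs import dify_config
trial_models = [m.strip() for m in dify_config.HOSTED_GEMINI_TRIAL_MODELS.split(",") if m.strip()]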
class HostedXAIConfig(BaseSettings):
"""
Configuration for fetching XAI service
"""
HOSTED_XAI_API_KEY: str | None = Field(
description="API key for hosted XAI service",
default=None,
)
HOSTED_XAI_API_BASE: str | None = Field(
description="Base URL for hosted XAI API",
default=None,
)
HOSTED_XAI_API_ORGANIZATION: str | None = Field(
description="Organization ID for hosted XAI service",
default=None,
)
HOSTED_XAI_TRIAL_ENABLED: bool = Field(
description="Enable trial access to hosted XAI service",
default=False,
)
HOSTED_XAI_TRIAL_MODELS: str = Field(
description="Comma-separated list of available models for trial access",
default="grok-3,grok-3-mini,grok-3-mini-fast",
)
HOSTED_XAI_PAID_ENABLED: bool = Field(
description="Enable paid access to hosted XAI service",
default=False,
)
HOSTED_XAI_PAID_MODELS: str = Field(
description="Comma-separated list of available models for paid access",
default="grok-3,grok-3-mini,grok-3-mini-fast",
)
class HostedDeepseekConfig(BaseSettings):
"""
Configuration for fetching Deepseek service
"""
HOSTED_DEEPSEEK_API_KEY: str | None = Field(
description="API key for hosted Deepseek service",
default=None,
)
HOSTED_DEEPSEEK_API_BASE: str | None = Field(
description="Base URL for hosted Deepseek API",
default=None,
)
HOSTED_DEEPSEEK_API_ORGANIZATION: str | None = Field(
description="Organization ID for hosted Deepseek service",
default=None,
)
HOSTED_DEEPSEEK_TRIAL_ENABLED: bool = Field(
description="Enable trial access to hosted Deepseek service",
default=False,
)
HOSTED_DEEPSEEK_TRIAL_MODELS: str = Field(
description="Comma-separated list of available models for trial access",
default="deepseek-chat,deepseek-reasoner",
)
HOSTED_DEEPSEEK_PAID_ENABLED: bool = Field(
description="Enable paid access to hosted Deepseek service",
default=False,
)
HOSTED_DEEPSEEK_PAID_MODELS: str = Field(
description="Comma-separated list of available models for paid access",
default="deepseek-chat,deepseek-reasoner",
)
@@ -144,16 +326,66 @@ class HostedAnthropicConfig(BaseSettings):
default=False,
)
HOSTED_ANTHROPIC_QUOTA_LIMIT: NonNegativeInt = Field(
description="Quota limit for hosted Anthropic service usage",
default=600000,
)
HOSTED_ANTHROPIC_PAID_ENABLED: bool = Field(
description="Enable paid access to hosted Anthropic service",
default=False,
)
HOSTED_ANTHROPIC_TRIAL_MODELS: str = Field(
description="Comma-separated list of available models for paid access",
default="claude-opus-4-20250514,"
"claude-sonnet-4-20250514,"
"claude-3-5-haiku-20241022,"
"claude-3-opus-20240229,"
"claude-3-7-sonnet-20250219,"
"claude-3-haiku-20240307",
)
HOSTED_ANTHROPIC_PAID_MODELS: str = Field(
description="Comma-separated list of available models for paid access",
default="claude-opus-4-20250514,"
"claude-sonnet-4-20250514,"
"claude-3-5-haiku-20241022,"
"claude-3-opus-20240229,"
"claude-3-7-sonnet-20250219,"
"claude-3-haiku-20240307",
)
class HostedTongyiConfig(BaseSettings):
"""
Configuration for hosted Tongyi service
"""
HOSTED_TONGYI_API_KEY: str | None = Field(
description="API key for hosted Tongyi service",
default=None,
)
HOSTED_TONGYI_USE_INTERNATIONAL_ENDPOINT: bool = Field(
description="Use international endpoint for hosted Tongyi service",
default=False,
)
HOSTED_TONGYI_TRIAL_ENABLED: bool = Field(
description="Enable trial access to hosted Tongyi service",
default=False,
)
HOSTED_TONGYI_PAID_ENABLED: bool = Field(
description="Enable paid access to hosted Anthropic service",
default=False,
)
HOSTED_TONGYI_TRIAL_MODELS: str = Field(
description="Comma-separated list of available models for trial access",
default="",
)
HOSTED_TONGYI_PAID_MODELS: str = Field(
description="Comma-separated list of available models for paid access",
default="",
)
class HostedMinmaxConfig(BaseSettings):
"""
@@ -246,9 +478,13 @@ class HostedServiceConfig(
HostedOpenAiConfig,
HostedSparkConfig,
HostedZhipuAIConfig,
HostedTongyiConfig,
# moderation
HostedModerationConfig,
# credit config
HostedCreditConfig,
HostedGeminiConfig,
HostedXAIConfig,
HostedDeepseekConfig,
):
pass

Some files were not shown because too many files have changed in this diff.