Compare commits

..

53 Commits

Author SHA1 Message Date
4c0f33d0d7 fix(openapi/apps): move uuid fast-path before tag guard in list endpoint 2026-05-08 19:06:05 -07:00
668e9c7864 feat(openapi/apps): list accepts uuid in name param; dispatches to pk lookup 2026-05-08 19:03:22 -07:00
a7c481ce87 fix(openapi/apps): normalise uuid in session.get; validate workspace_id format in query 2026-05-08 18:43:23 -07:00
507eb1f52f feat(openapi/apps): describe accepts name arg; uuid-parse dispatches pk vs name lookup 2026-05-08 18:33:13 -07:00
8e2ab1367b fix(openapi): /run swallowed HTTP errors as 500
Explicit re-raise list at L230-238 only covered UnprocessableEntity +
NotChatAppError/NotWorkflowAppError. Other HTTPException subclasses
raised inside handlers (NotFound, BadRequest, ConversationCompletedError,
ProviderNotInitializeError, ProviderQuotaExceededError, ...) hit
`except Exception` and got squashed to 500. Replace with
`except HTTPException: raise`.

Refactor bundled: collapse 3x try/except ladder into
_translate_service_errors() ctxmgr, inline single-call constraint
enforcers, drop wasted dict() copy in _unpack_blocking, trim module
docstring and stale spec doc reference. -60 net lines.
2026-05-07 13:53:19 -07:00
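A minimal sketch of the shape this commit describes, assuming werkzeug's HTTPException hierarchy as used by Flask; the real _translate_service_errors() maps a wider service-error fan than shown here:

```python
import logging
from contextlib import contextmanager

from werkzeug.exceptions import HTTPException, InternalServerError

logger = logging.getLogger(__name__)


@contextmanager
def _translate_service_errors():
    try:
        yield
    except HTTPException:
        # Any HTTPException subclass already carries its status and body;
        # re-raising before the catch-all is the fix. The old explicit
        # re-raise list let unlisted subclasses fall through to a 500.
        raise
    except Exception:
        # Unknown failure: keep the logged trace, return a structured 500.
        logger.exception("unhandled error in /run handler")
        raise InternalServerError()
```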
0c568623d7 test(openapi): pin invoke_from + user-strip invariants on /run
Restores two assertions lost when the legacy per-mode unit tests
were deleted in api-3 Task 4:
- invoke_from == InvokeFrom.OPENAPI on the unified runner
- body-side user field is stripped before reaching the generator
  (Model 2: bearer is identity, body cannot spoof user)

Both run as part of test_run_chat_dispatches_to_chat_handler;
no new tests added.
2026-05-07 01:35:53 -07:00
fb7b8dc151 refactor(openapi): drop legacy per-mode bearer routes
Removes /openapi/v1/apps/<id>/{chat-messages,completion-messages,
workflows/run}. Bearer surface for runs is now the unified /run route
(api-3). Service-API /v1/* per-mode routes (app-key auth) untouched.

Also deletes the corresponding unit test files
(test_chat_messages.py, test_completion_messages.py, test_workflow_run.py)
which targeted the removed handlers; coverage of the unified route lives
in tests/unit_tests/controllers/openapi/test_app_run_dispatch.py and
tests/integration_tests/controllers/openapi/test_app_run.py.
2026-05-07 01:28:12 -07:00
4bc1046f14 fix(openapi): tighten /run handler error path + drop cargo-cult call
- Drop except ValueError: raise. Inherited from per-mode controllers
  without examining purpose; today it converts helper-internal
  ValueErrors into uncaptured 500s with no body or log. Falling
  through to except Exception: gives them a logged trace and a
  structured InternalServerError.
- Drop redundant AppMode.value_of(app_model.mode). App.mode is
  Mapped[AppMode] with an EnumText adapter that returns the enum
  directly; value_of was a no-op iteration.
- Comment the explicit re-raise block to spell out why ordering
  matters before the catch-all.
2026-05-07 01:08:59 -07:00
f2ec17be9b feat(openapi): unified POST /apps/<id>/run with per-mode dispatch
Single bearer-accepting run route on the openapi namespace. Server
reads apps.mode after AppResolver and dispatches via the _DISPATCH
table to the per-mode helper. Per-mode constraints enforced inside
helpers (422). Service-API /v1/* per-mode routes untouched.

Also fixes a pre-existing latent bug in the openapi integration
fixtures: App() rows were constructed without enable_site, which
DB INSERT rejected (column is NOT NULL with no default). Now set
enable_site=True alongside enable_api=True in the three fixtures
that construct App() rows.
2026-05-07 00:35:47 -07:00
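A hedged sketch of the dispatch shape described above; the _run_* bodies here are stand-ins, and the real helpers enforce the per-mode constraints (422) internally:

```python
from collections.abc import Callable
from typing import Any

from werkzeug.exceptions import UnprocessableEntity


def _run_chat(body: dict[str, Any]) -> tuple[Any, dict[str, Any] | None]:
    return iter(()), None  # streaming helpers return (stream, None)


def _run_completion(body: dict[str, Any]) -> tuple[Any, dict[str, Any] | None]:
    return None, {"answer": "..."}  # blocking helpers return (None, dict)


def _run_workflow(body: dict[str, Any]) -> tuple[Any, dict[str, Any] | None]:
    return None, {"data": {}}


_DISPATCH: dict[str, Callable[[dict[str, Any]], tuple[Any, dict[str, Any] | None]]] = {
    "chat": _run_chat,
    "completion": _run_completion,
    "workflow": _run_workflow,
}


def run(mode: str, body: dict[str, Any]) -> Any:
    handler = _DISPATCH.get(mode)
    if handler is None:
        raise UnprocessableEntity(f"unsupported app mode: {mode}")
    stream, blocking = handler(body)
    return blocking if stream is None else stream
```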
6532b4d161 fix(openapi): tighten _DISPATCH return type + cover helper edge cases
- _DISPATCH value type: tuple[Any, dict[str, Any] | None] (was Any, Any).
  Helpers always return one of (stream_obj, None) or (None, dict);
  tightened sig lets mypy flag wrong shapes when Task 3's route handler
  unpacks the result.
- Comment _run_completion's auto_generate_name + query mutations
  (legacy parity; non-obvious without grep-archaeology).
- Unit cover _unpack_blocking (mapping / tuple / non-mapping branches)
  + AppRunRequest.conversation_id field validator (strip-to-None,
  invalid-uuid raise, valid-uuid passthrough).
2026-05-07 00:05:20 -07:00
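The _unpack_blocking branches read like this minimal sketch (names from the commit message; the Mapping-vs-tuple split exists because production returns a Mapping while tests mock (body, status)):

```python
from collections.abc import Mapping
from typing import Any


def _unpack_blocking(result: Any) -> tuple[Mapping[str, Any], int]:
    if isinstance(result, Mapping):
        return result, 200  # production shape
    if isinstance(result, tuple):
        body, status = result  # test-double shape
        return body, int(status)
    raise TypeError(f"unexpected blocking result: {type(result)!r}")
```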
4bfc4af590 feat(openapi): AppRunRequest schema + per-mode dispatch helpers
Lays the foundation for the unified /openapi/v1/apps/<id>/run route
without yet registering it. Helpers preserve the per-mode exception
fans + response shapes byte-for-byte from the existing chat-messages /
completion-messages / workflows-run controllers.
2026-05-06 23:57:13 -07:00
1fb7329327 fix(openapi): tighten WorkflowRunResponse.mode + outputs default
- WorkflowRunResponse.mode: Literal["workflow"] (was str) — only one
  valid value, so Literal makes the contract explicit.
- WorkflowRunData.outputs: Field(default_factory=dict) — matches the
  sibling metadata field's idiom; avoids the mutable-literal-default
  smell flagged in code review.
- Extends test_response_models_dump_per_mode with an explicit assertion
  on the WorkflowRunResponse.mode echo + exercises CompletionMessageResponse
  (was imported-but-unused).
2026-05-06 23:51:54 -07:00
40ae39a3a3 refactor(openapi): co-locate per-mode run response models in _models.py
Prepares for the api-3 unified /run route which imports all three
response shapes from one location. The per-mode files (chat_messages,
completion_messages, workflow_run) still define their own copies inline
until Task 4 deletes them — no collision since neither location is
imported anywhere else.
2026-05-06 23:46:53 -07:00
35c08f7c3d feat(enterprise): wire /apps/permitted via EE wrapper (app_ids only)
services/enterprise/app_permitted_service.list_permitted_apps calls
EnterpriseService.WebAppAuth.list_externally_accessible_apps and decodes
the camelCase wrapper response into a PermittedAppsPage carrying just
app_ids. The controller hydrates name/mode/tenant/etc. from local
App/Tenant rows.

The EE response shape is {data: [appId], total, hasMore} per ee-2. EE owns
access control only; dify/api owns app data, so the older inner-API
metadata fanout (/inner/api/enterprise/apps/batch-metadata) is removed.

- delete controllers/inner_api/app/metadata.py + its test
- service: ServiceUnavailable on EnterpriseAPIError; 5s timeout via wrapper
- controller: drop fail-fast subject check + unused g/InternalServerError
  imports
2026-05-06 22:50:10 -07:00
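A sketch of the wrapper decode, assuming Pydantic; the classmethod name is illustrative, only the {data, total, hasMore} wire shape comes from the commit:

```python
from pydantic import BaseModel


class PermittedAppsPage(BaseModel):
    # Carries only app_ids; the controller hydrates name/mode/tenant from
    # local App/Tenant rows, since EE owns access control, not app data.
    app_ids: list[str]
    total: int
    has_more: bool

    @classmethod
    def from_ee_payload(cls, payload: dict) -> "PermittedAppsPage":
        # Decode the camelCase EE wrapper {data: [appId], total, hasMore}.
        return cls(
            app_ids=list(payload.get("data", [])),
            total=payload["total"],
            has_more=payload["hasMore"],
        )
```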
7b6ceaebea feat(inner_api): batch-metadata endpoint for EE permitted-apps flow 2026-05-06 03:11:58 -07:00
35d9b6a0f8 feat(openapi): merge /apps/<id>/{info,parameters} into /describe + ?fields
Collapse the openapi-namespace per-app reads into one canonical endpoint
GET /openapi/v1/apps/<id>/describe[?fields=info,parameters,input_schema]
returning a single AppDescribeResponse with all blocks Optional and a new
JSON-Schema input_schema block derived server-side from user_input_form +
app mode.

- AppDescribeQuery (Pydantic, extra=forbid) parses the ?fields allow-list;
  unknown member -> 422.
- _input_schema.build_input_schema(app) derives Draft 2020-12 JSON Schema:
  chat-family modes carry top-level query (string, minLength=1, required);
  workflow / completion only carry inputs. AppUnavailableError -> empty
  sentinel (EMPTY_INPUT_SCHEMA).
- Drop AppByIdApi (/apps/<id>) and AppParametersApi (/apps/<id>/parameters)
  route classes; delete app_info.py module + app_info_payload helper.
- AppDescribeResponse.{info,parameters,input_schema} now Optional, defaulting to None.

Lock-step deploy with difyctl Phase B (/describe consumer migration).
2026-05-06 00:53:41 -07:00
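A sketch of the ?fields allow-list parser under the stated extra=forbid posture (Pydantic v2 names; the validator name is illustrative):

```python
from pydantic import BaseModel, ConfigDict, field_validator

_ALLOWED = {"info", "parameters", "input_schema"}


class AppDescribeQuery(BaseModel):
    # extra="forbid": unknown query params are rejected, not ignored.
    model_config = ConfigDict(extra="forbid")

    fields: str | None = None  # e.g. "info,parameters"

    @field_validator("fields")
    @classmethod
    def _members_known(cls, v: str | None) -> str | None:
        if v is None:
            return v
        unknown = {m.strip() for m in v.split(",")} - _ALLOWED
        if unknown:
            # The route handler converts the ValidationError to a 422.
            raise ValueError(f"unknown fields: {sorted(unknown)}")
        return v
```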
d1c1c04615 fix(openapi): /apps/permitted hardening + naming
- fail-fast on missing subject_email/subject_issuer for dfoe_
  bearers (was silently coercing None -> empty string and sending
  a malformed query upstream).
- document the has_more contract: total comes from EE inner-API
  unfiltered count; locally-dropped archived rows leave
  len(items) < limit even when has_more=True.
- gate tenant-name lookup in /apps on non-empty rows so empty
  filter results skip the wasted scalar query.
- rename AppListPermittedApi -> AppPermittedListApi for word-order
  consistency with AppPermittedListQuery.
- tests: positive mode acceptance and explicit dfoa_ non-carrier
  assertion.
2026-05-05 21:12:33 -07:00
04ebf8a92f feat(openapi): /apps/permitted — external-subject app discovery (EE)
Split route for dfoe_ external-SSO discovery, separate from /apps
(dfoa_-only workspace catalog). Cross-tenant allow-list query: server
calls Enterprise inner-API POST /inner/api/webapp/permitted-apps and
hydrates app/tenant rows locally. New scope apps:read:permitted (no
dual-meaning with apps:read). Route gated by @enterprise_only — 404
on CE — and validate_bearer(accept=ACCEPT_USER_EXT_SSO) — 403 on dfoa_.
Query validator rejects workspace_id and tag (cross-tenant
unresolvable); mode/name supported.

EE inner-API wire-up depends on ee-2; the service-layer stub raises
ServiceUnavailable until that endpoint ships. CLI dispatches between
/apps and /apps/permitted client-side based on the bearer prefix in
hosts.yml — see docs/specs/v1.0/apps.md §Subject dispatch.

Verified via unit tests on AppPermittedListQuery and Scope wiring;
HTTP integration tests deferred to ee-2 once the inner-API ships.
2026-05-05 20:20:22 -07:00
6f3c2fe97b test(openapi): unit coverage for app payload helpers
app_info_payload, parameters_payload, _EMPTY_PARAMETERS are CLI
contracts. Direct unit tests pin their shape independent of DB +
HTTP plumbing — drift in the helpers fails fast at unit-test time
instead of leaking through into integration runs.
2026-05-05 20:13:03 -07:00
03cd16fc44 refactor(openapi): /account/sessions uses MAX_PAGE_LIMIT
Drops the magic 200 in favor of the shared constant introduced in
Task 1's _models.py rewrite. Behavior unchanged (still caps at 200).
Sibling endpoint /apps already wired the constant through AppListQuery
in Task 3; this closes the loop on the single-source-of-truth goal.
2026-05-05 20:10:51 -07:00
3a6901e718 fix(openapi): /apps 422 body emits JSON
ValidationError -> UnprocessableEntity(exc.json()) so CLI consumers
can parse the error body. The previous str(errors()) produced a
Python repr (single-quoted dicts), not JSON. Also align with
sibling openapi controllers: request.args.to_dict(flat=True)
and 'as exc' naming.

Test cleanup: hoist module-scope imports; add a happy-path
positive case covering every field.
2026-05-05 20:08:43 -07:00
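The conversion itself is small; a sketch with a stand-in query model (only exc.json() vs str(exc.errors()) is the point from the commit):

```python
from flask import request
from pydantic import BaseModel, ValidationError
from werkzeug.exceptions import UnprocessableEntity


class _Query(BaseModel):  # stand-in for the real query model
    workspace_id: str
    page: int = 1


def parse_args() -> _Query:
    try:
        return _Query.model_validate(request.args.to_dict(flat=True))
    except ValidationError as exc:
        # exc.json() is machine-parseable JSON; str(exc.errors()) was a
        # Python repr with single-quoted dicts.
        raise UnprocessableEntity(exc.json())
```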
25034612b8 feat(openapi): AppListQuery — Pydantic validation for /apps
Replaces ad-hoc int(request.args.get(...)) parsing in AppListApi.get
with a typed Pydantic query model. Bad inputs (page=abc, limit=-1,
limit=500, mode=invalid, missing workspace_id) raise ValidationError
which the handler converts to 422 with field-level error detail
instead of 500 / silent empty page. Closes the mode whitelist via
AppMode enum.

Verified via direct unit tests on AppListQuery (no HTTP integration
tests required since the model carries the validation contract).
2026-05-05 20:02:47 -07:00
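A minimal sketch of such a query model; the AppMode members shown are an illustrative subset of dify's real enum:

```python
from enum import StrEnum

from pydantic import BaseModel, Field

MAX_PAGE_LIMIT = 200  # shared cap from _models.py


class AppMode(StrEnum):  # illustrative subset
    CHAT = "chat"
    COMPLETION = "completion"
    WORKFLOW = "workflow"


class AppListQuery(BaseModel):
    workspace_id: str  # required; missing -> 422
    page: int = Field(default=1, ge=1)  # page=abc -> 422
    limit: int = Field(default=20, ge=1, le=MAX_PAGE_LIMIT)  # limit=500 -> 422
    mode: AppMode | None = None  # closes the whitelist
    name: str | None = None
```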
87620050d7 refactor(openapi): tighten _AppReadResource refactor
- Correct docstring: Flask-RESTX iterates method_decorators forward;
  the last entry becomes outermost via composition, not via framework
  reversal.
- Extract shared _APPS_READ_DECORATORS constant; was duplicated
  verbatim between AppReadResource and AppListApi.
- Rename _AppReadResource -> AppReadResource (no longer module-private
  since app_info.py imports it). Drops the pyright ignore.
2026-05-05 19:59:04 -07:00
e006eb7a4b refactor(openapi): _AppReadResource base for per-app reads
Four per-app GETs (/apps/<id>, /info, /parameters, /describe) repeated
the same SSO-guard / app-load / membership-check pattern. Hoist into
_AppReadResource with method_decorators=[require_scope, validate_bearer]
plus _load(app_id) -> (App, AuthContext). Subclasses now 3-line bodies.
Eliminates the per-method # type: ignore[reportUntypedFunctionDecorator]
suppression by relocating the decorator chain to the class attribute.
Endpoints now build typed AppInfoResponse / AppDescribeResponse and
.model_dump() at the boundary.
2026-05-05 19:51:42 -07:00
305de57eff test(openapi): tighten PEP 695 generic assertion
The previous test asserted only that model_fields exposed the
expected names — the legacy Generic[T] form would have passed
identically. Switch to __type_params__, which is non-empty only
under PEP 695 native syntax.
2026-05-05 19:39:49 -07:00
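The distinction is easy to see in a minimal sketch (Python 3.12+):

```python
from typing import Generic, TypeVar

T = TypeVar("T")


class LegacyEnvelope(Generic[T]):  # legacy form
    pass


class PaginationEnvelope[T]:  # PEP 695 native form
    pass


# A model_fields-style check can't tell these apart; __type_params__ can:
assert LegacyEnvelope.__type_params__ == ()
assert PaginationEnvelope.__type_params__  # (T,) only under PEP 695 syntax
```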
069fdd4894 refactor(openapi): typed app response models + MAX_PAGE_LIMIT
Adds AppListRow, AppInfoResponse, AppDescribeInfo, AppDescribeResponse
per spec docs/specs/v1.0/server/endpoints.md (every response a typed
Pydantic model). Adopts PEP 695 generic syntax for PaginationEnvelope
(drops legacy TypeVar + UP046 noqa). Centralizes the per-endpoint
limit cap as MAX_PAGE_LIMIT = 200.
2026-05-05 19:34:15 -07:00
783dfe38a0 chore(test): consolidate /openapi/v1 integration fixtures
- Shared conftest at tests/integration_tests/controllers/openapi/:
  workspace_account, app_in_workspace, mint_token (factory + tracked
  cleanup of OAuthAccessToken rows), account_token convenience fixture,
  autouse disable_enterprise (default ENTERPRISE_ENABLED=False; tests
  needing the EE branch override in-test), autouse _flush_auth_redis.

- test_auth.py covers Layer 0 + per-token rate limit + scope on /info.
  other_workspace_app fixture is a generator that cleans up the second
  tenant + app on teardown.

- test_apps.py covers the read surface: /apps list with pagination
  envelope, /apps/<id>, /info, /parameters, /describe, /account/sessions
  envelope migration, plus dfoe_ scope rejection on apps:read routes.
2026-05-05 18:08:31 -07:00
86ba361ff1 feat(openapi): app reads + canonical pagination envelope
Read-side surface for difyctl describe / get / list:

- GET /openapi/v1/apps              paginated list (workspace_id required)
- GET /openapi/v1/apps/<id>         single app summary
- GET /openapi/v1/apps/<id>/parameters  port of service_api parameters
- GET /openapi/v1/apps/<id>/describe    merged { info, parameters }

All gated by validate_bearer(ACCEPT_USER_ANY) + require_scope(APPS_READ) +
require_workspace_member(ctx, tenant_id). SSO subjects 404 (account-only
helper account_or_404 deduplicates the guard across the four endpoints).

PaginationEnvelope[T] (page, limit, total, has_more, data) is the canonical
shape for every /openapi/v1/* list endpoint. has_more is computed by the
server from page * limit < total. /account/sessions migrates from the
legacy { sessions: [...] } shape to the envelope; integration tests assert
the legacy key is gone.
2026-05-05 18:08:12 -07:00
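A sketch of the envelope and the server-side has_more rule, assuming a Pydantic version that accepts PEP 695 generics (which commit 069fdd4894 implies):

```python
from pydantic import BaseModel


class PaginationEnvelope[T](BaseModel):
    page: int
    limit: int
    total: int
    has_more: bool
    data: list[T]


def paginate(page: int, limit: int, total: int, data: list) -> PaginationEnvelope:
    # has_more is server-computed: another page exists iff the rows
    # covered so far (page * limit) still fall short of total.
    return PaginationEnvelope(
        page=page, limit=limit, total=total,
        has_more=page * limit < total, data=data,
    )
```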
591048d7c2 feat(openapi): bearer auth pipeline + Layer 0 + per-token rate limit (CE)
Bearer auth surface for /openapi/v1/* run-routes:

- OAUTH_BEARER_PIPELINE (renamed from APP_PIPELINE for clarity outside this
  module) composes BearerCheck → ScopeCheck → AppResolver →
  WorkspaceMembershipCheck → AppAuthzCheck → CallerMount.
- BearerAuthenticator.authenticate() is the single source of identity +
  rate-limit. Both pipeline (BearerCheck) and decorator (validate_bearer)
  delegate to it, so per-token rate limit fires exactly once per request.
- Layer 0 (workspace membership) is CE-only; on EE the gateway owns
  tenant isolation. Verdicts are cached on the AuthContext entry as
  verified_tenants: dict[str, bool] (legacy "ok"/"denied" strings tolerated
  by from_cache for one TTL cycle, then removed).
- check_workspace_membership(...) is the shared core; the pipeline step
  and the inline require_workspace_member helper both delegate to it.
- Per-token rate limit: 60/min sliding window, 429 (RFC 6585) with an
  RFC-7231 Retry-After header + JSON body { error, retry_after_ms }. Bucket
  key is sha256(token) so all replicas share state via Redis.

API hygiene:
- Scope StrEnum (FULL, APPS_READ, APPS_RUN) replaces bare string literals.
- /openapi/v1/apps/<id>/info: scope flipped from apps:run to apps:read.
- /info migrates off the pipeline to validate_bearer + require_scope +
  require_workspace_member (no AppAuthzCheck/CallerMount needed for reads).
- ResolvedRow gains to_cache() / from_cache() classmethods.
- AuthContext gains token_hash + verified_tenants, dropping the per-route
  re-hash and per-request Redis read on the cache hit path.

OPENAPI_RATE_LIMIT_PER_TOKEN config (default 60).
2026-05-05 18:07:47 -07:00
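A sketch of the hashed-bucket sliding window with redis-py; key prefix and client wiring are illustrative, the sha256(token) bucketing is the commit's point:

```python
import hashlib
import time

import redis

WINDOW = 60  # seconds
LIMIT = 60   # OPENAPI_RATE_LIMIT_PER_TOKEN default

r = redis.Redis()  # illustrative client; dify wires its own


def over_limit(token: str) -> bool:
    # Bucket key is sha256(token), never the raw token, so all replicas
    # share state via Redis without persisting bearer material.
    key = "openapi:ratelimit:" + hashlib.sha256(token.encode()).hexdigest()
    now = time.time()
    pipe = r.pipeline()
    pipe.zremrangebyscore(key, 0, now - WINDOW)  # evict hits outside window
    pipe.zadd(key, {f"{now}": now})
    pipe.zcard(key)
    pipe.expire(key, WINDOW)
    _, _, count, _ = pipe.execute()
    return count > LIMIT
```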
8a62c1d915 chore(api): pyright + ruff cleanup for openapi/cli surface
Type and lint pass over the openapi controllers, auth pipeline, and
oauth bearer/device-flow plumbing. Down from 36 pyright errors and 16
ruff errors to 0/0; 93 openapi unit tests pass.

Logic fixes:
- libs/oauth_bearer.py: drop private-naming on the friend-API methods
  consumed by _VariantResolver (cache_get / cache_set_positive /
  cache_set_negative / hard_expire / session_factory). They were always
  cross-class accessors — leading underscore was misleading. Add public
  registry property on BearerAuthenticator. _hard_expire row_id widened
  to UUID | str (matches the StringUUID column type).
- libs/oauth_bearer.py: type validate_bearer / bearer_feature_required
  with ParamSpec / PEP-695 so wrapped routes preserve their signature.
- libs/rate_limit.py: same — typed rate_limit decorator.
- services/oauth_device_flow.py: mint_oauth_token / _upsert accept
  Session | scoped_session (Flask-SQLAlchemy proxy). Guard row-is-None
  after upsert.
- controllers/openapi/{chat,completion,workflow}_messages.py: tuple-vs-
  Mapping shape narrowing on AppGenerateService.generate return —
  production returns Mapping, tests mock as (body, status). Validate
  through Pydantic Response model in both shapes.
- controllers/openapi/oauth_device.py: replace flask_restx.reqparse (banned)
  with Pydantic Request/Query models — DeviceCodeRequest, DevicePollRequest,
  DeviceLookupQuery, DeviceMutateRequest. Two PEP-695 generic helpers
  (_validate_json / _validate_query) translate ValidationError to BadRequest.
- controllers/openapi/auth/strategies.py: Protocol param-name match
  (subject_type), Optional narrowing on app/tenant/account_id/subject_email.
- controllers/openapi/auth/steps.py: subject_type-is-None guard before
  mounter dispatch.
- core/app/apps/workflow/generate_task_pipeline.py + models/workflow.py:
  add WorkflowAppLogCreatedFrom.OPENAPI + matching match-case branch.
  Fixes match-exhaustiveness and possibly-unbound created_from.
- libs/device_flow_security.py: pyright ignore on flask after_request
  hook (registered by the framework, pyright sees as unused).
- services/oauth_device_flow.py: rename Exceptions to *Error suffix
  (StateNotFoundError / InvalidTransitionError / UserCodeExhaustedError);
  same for libs/oauth_bearer.py (InvalidBearerError / TokenExpiredError).
  Update all callers across openapi controllers.
- controllers/openapi/{oauth_device,oauth_device_sso}.py +
  services/oauth_device_flow.py: switch logger.error in except blocks
  to logger.exception (TRY400) — keeps the traceback for ops.
- configs/feature/__init__.py: OPENAPI_KNOWN_CLIENT_IDS computed_field
  needs an @property alongside for pyright to see it as a value, not a
  method. Matches the existing line-451 pattern.

Plus ruff format + import-sort across the openapi tree (pure formatting).
2026-04-28 21:44:54 -07:00
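The computed_field fix from this list is worth a sketch, since the stacking order is easy to get backwards (the raw CSV field name here is hypothetical):

```python
from pydantic import computed_field
from pydantic_settings import BaseSettings


class FeatureConfig(BaseSettings):
    OPENAPI_KNOWN_CLIENT_IDS_CSV: str = "difyctl"  # hypothetical raw field

    @computed_field
    @property  # the extra @property lets pyright see a value, not a method
    def OPENAPI_KNOWN_CLIENT_IDS(self) -> list[str]:
        return [s.strip() for s in self.OPENAPI_KNOWN_CLIENT_IDS_CSV.split(",") if s.strip()]
```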
b083c910b3 fix(web/device): bounce to authorize_account after post-login return
When an unauthenticated user submits a user_code, the chooser view
holds the typed code and redirects to /signin. After login, the page
re-mounts on /device with no URL params (already scrubbed on the
first render) and account loaded — but the existing useEffect path
only advanced when ssoVerified or urlUserCode was present.

Add an early branch: if view is chooser and account just loaded,
advance to authorize_account using the userCode stashed in view
state. Also widen the effect deps to view (not view.kind) so the
nested userCode reads stay current.
2026-04-28 20:42:06 -07:00
9b2a37ceff feat(openapi): cookie auth for device-flow approval routes
Adds the openapi blueprint branch in load_user_from_request so that
account-branch device-flow approval routes (approve / deny /
approval-context) can authenticate via the console session cookie
under @login_required.

Splits extract_access_token into two helpers:
- extract_console_cookie_token (cookie-only) — used by openapi
  approval routes that must not fall through to the Authorization
  header, where dfoa_/dfoe_ bearers live (those aren't JWTs and
  PassportService.verify would crash on them).
- extract_access_token retains both code paths for legacy callers.
2026-04-28 20:41:38 -07:00
cf5ebe9430 feat(openapi): app-run endpoints with auth pipeline
Ports service_api/app/{completion,workflow}.py to bearer-authed
/openapi/v1/apps/<app_id>/{info,chat-messages,completion-messages,workflows/run}.

Architecture:
- New controllers/openapi/auth/ package: Pipeline + Step protocol over
  one mutable Context. Endpoints attach via @APP_PIPELINE.guard(scope=...)
  — single attachment point; forgetting auth is structurally impossible.
- Pipeline order: BearerCheck -> ScopeCheck -> AppResolver -> AppAuthzCheck
  -> CallerMount.
- Strategies vary along independent axes: AclStrategy (EE webapp-auth inner
  API) vs MembershipStrategy (CE TenantAccountJoin); AccountMounter vs
  EndUserMounter dispatched by SubjectType.
- App is in URL path (not header). Each non-GET has typed Pydantic Request;
  each non-SSE response has typed Pydantic Response. Bearer-as-identity:
  body 'user' field stripped, ignored if present.

Adds InvokeFrom.OPENAPI enum variant. Emits app.run.openapi audit log
on successful invocation via standard logger extra={"audit": True, ...}
convention.
2026-04-27 17:25:17 -07:00
85c3f9cbf8 fix(device-flow): scope approval-grant cookie to /openapi/v1/oauth/device
Phase F retired the legacy /v1/oauth/device/* mounts but the cookie path
still pointed at the dead prefix. Browsers therefore dropped the cookie
on the canonical /openapi/v1/oauth/device/* requests, so SSO-branch
approval-context and approve-external returned 401 no_session even
right after sso-complete had set the cookie.
2026-04-27 01:15:44 -07:00
d98fe7916a fix(nginx): route /openapi to api backend
Phase F removed legacy /v1/oauth/device/* and /console/api/oauth/device/*
mounts in favour of /openapi/v1. Without this rule /openapi falls through
to location / and proxies to web:3000, returning 404 for every API call.
2026-04-27 01:06:19 -07:00
0b3b0b5ce8 feat(api): retire legacy /v1/* and /console/api device-flow mounts (Phase F)
Web and CLI consumers now hit /openapi/v1/* directly, so the dual-mount
shims can go:
  - controllers/oauth_device_sso.py (legacy /v1/oauth/device/sso-* + /v1/device/sso-complete)
  - controllers/service_api/oauth.py (legacy /v1/oauth/device/*, /v1/me, /v1/oauth/authorizations/self)
  - controllers/console/auth/oauth_device.py (placeholder for legacy /console/api/oauth/device/{approve,deny})
  - the deferred _register_legacy_console_mount() inside openapi/oauth_device.py

Imports in controllers/console/__init__.py, controllers/service_api/__init__.py,
and extensions/ext_blueprints.py pruned. Tests rewritten to openapi-only.
2026-04-27 00:45:10 -07:00
eb5ef3dba5 feat(web): switch /device page to /openapi/v1 paths (Phase G.21)
Approve/deny + lookup + SSO endpoints now live under /openapi/v1/oauth/device/*.
Approve/deny use direct fetch with console session cookie + CSRF instead of
the /console/api-prefixed post() helper.
2026-04-27 00:32:31 -07:00
a07b32274a feat(api): add /openapi/v1/workspaces reads (Phase E.17)
GET /openapi/v1/workspaces lists tenants the bearer's account is a
member of. GET /openapi/v1/workspaces/<id> returns one workspace
detail, member-gated (404 on non-member, never 403, so workspace IDs
don't leak across tenants).

Bearer-authed via @validate_bearer(accept=ACCEPT_USER_ANY). External
SSO bearers (no account_id) get an empty list / 404 — same posture as
GET /openapi/v1/account.

Cookie-authed /console/api/workspaces stays in console for the
dashboard SPA — different consumer, different auth model. No legacy
/v1/ remount this phase.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-27 00:10:16 -07:00
2a38df2b7f refactor(api): consolidate openapi/oauth_device into per-domain modules
Match the existing api-group convention: one module per resource family
with multiple Resource classes per file (cf service_api/dataset/dataset.py
with 7 routes, console/auth/oauth_device.py with 2 before this branch).

The Phase B-D fragmentation (one file per route under
controllers/openapi/oauth_device/) was inconsistent with the codebase.
Collapse into:

  controllers/openapi/oauth_device.py        (5 routes: code, token,
                                              lookup, approve, deny —
                                              account branch)
  controllers/openapi/oauth_device_sso.py    (4 routes: sso-initiate,
                                              sso-complete,
                                              approval-context,
                                              approve-external —
                                              EE-only SSO branch)

The split mirrors the original pre-migration layout: account branch in
console/auth/oauth_device.py, SSO branch in controllers/oauth_device_sso.py
(root). Both legacy mount files updated to import from the new modules.

No behavior change; 59 tests still green. Test files updated to import
from the consolidated module paths.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-27 00:07:15 -07:00
71e9e8dda6 feat(api): lift SSO branch device-flow handlers to /openapi/v1 (Phase D.15-16)
The four EE-only SSO handlers (sso_initiate, sso_complete,
approval_context, approve_external) move from controllers/oauth_device_sso.py
to controllers/openapi/oauth_device/. Each is registered on openapi_bp
via @bp.route at the canonical path:

  /openapi/v1/oauth/device/sso-initiate
  /openapi/v1/oauth/device/sso-complete
  /openapi/v1/oauth/device/approval-context
  /openapi/v1/oauth/device/approve-external

sso-complete moves under /oauth/device/ from its previous orphan path
/v1/device/sso-complete; the IdP-side ACS callback URL hardcoded in
sso_initiate now points to the canonical path. Operators must
re-register the ACS callback with each IdP before Phase F deletes the
legacy alias.

oauth_device_sso.py shrinks to a thin re-mount file: same legacy bp
with attach_anti_framing applied, four bp.add_url_rule() calls binding
the legacy paths to the imported view functions. Same handler runs
for both mounts — no duplicated logic.

attach_anti_framing(openapi_bp) added in controllers/openapi/__init__.py
so X-Frame-Options + frame-ancestors CSP cover the canonical paths too.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-27 00:00:24 -07:00
772f450b29 feat(api): lift device-flow approve/deny to /openapi/v1 (Phase D.13-14)
DeviceApproveApi + DeviceDenyApi (cookie-authed) move to
controllers/openapi/oauth_device/{approve,deny}.py. Decorator stack
preserved verbatim: setup_required → login_required →
account_initialization_required → bearer_feature_required →
rate_limit. Audit event names ('oauth.device_flow_approved' /
'oauth.device_flow_denied') unchanged so the OTel exporter
registration keeps routing them.

The legacy /console/api/oauth/device/{approve,deny} mounts are
re-registered on console_ns from the bottom of the new files via a
local-import _register_legacy_console_mount() helper. The local
import breaks an import cycle that would otherwise form: openapi
imports console.wraps for setup_required, console.__init__.py imports
console.auth.oauth_device, which would re-import the openapi class
mid-load. Deferring console_ns past the class definition resolves it.

console/auth/oauth_device.py becomes a 9-line placeholder (the
existing console.__init__.py `from .auth import (..., oauth_device,
...)` keeps loading until Phase F prunes the import).

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-26 23:57:28 -07:00
390f1f74db feat(api): add /openapi/v1/account/sessions endpoints (Phase C.11-12)
GET /openapi/v1/account/sessions lists the bearer's active OAuth
tokens (filtered to revoked_at IS NULL, expires_at > NOW(), token_hash
IS NOT NULL — no phantom devices). DELETE
/openapi/v1/account/sessions/<id> revokes a specific session with a
subject-match guard that returns 404 (not 403) on cross-subject so
token IDs don't leak across subjects.

Subject scoping abstracted into _subject_match(ctx): account subjects
filter by account_id; external_sso subjects filter by (email, issuer)
AND account_id IS NULL — preventing an SSO bearer from touching a
same-email account row from a federated tenant.

_revoke_token_by_id helper extracted so /sessions/self and
/sessions/<id> share the same UPDATE-where-revoked_at-IS-NULL
idempotent revoke + Redis cache invalidation.

No /v1/ equivalents — these are new endpoints (spec §Sessions list shape).

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-26 23:51:55 -07:00
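A sketch of that subject guard with a minimal stand-in model; the column and attribute names are taken from the commit messages, not verified source:

```python
from dataclasses import dataclass

from sqlalchemy import ColumnElement, String, and_
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class OAuthAccessToken(Base):  # minimal stand-in for the real model
    __tablename__ = "oauth_access_tokens"
    id: Mapped[str] = mapped_column(String, primary_key=True)
    account_id: Mapped[str | None] = mapped_column(String, nullable=True)
    subject_email: Mapped[str | None] = mapped_column(String)
    subject_issuer: Mapped[str | None] = mapped_column(String)


@dataclass
class AuthContext:  # illustrative slice of the real context
    subject_type: str
    account_id: str | None = None
    subject_email: str | None = None
    subject_issuer: str | None = None


def _subject_match(ctx: AuthContext) -> ColumnElement[bool]:
    if ctx.subject_type == "account":
        return OAuthAccessToken.account_id == ctx.account_id
    # external_sso: (email, issuer) AND account_id IS NULL prevents an SSO
    # bearer from touching a same-email account row.
    return and_(
        OAuthAccessToken.subject_email == ctx.subject_email,
        OAuthAccessToken.subject_issuer == ctx.subject_issuer,
        OAuthAccessToken.account_id.is_(None),
    )
```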
b7bd9c19ed feat(api): lift identity + self-revoke to /openapi/v1/account (Phase C.9-10)
GET /v1/me moves to GET /openapi/v1/account. DELETE
/v1/oauth/authorizations/self moves to DELETE
/openapi/v1/account/sessions/self. Both classes (AccountApi,
AccountSessionsSelfApi) are now in controllers/openapi/account.py and
re-registered on service_api_ns at the legacy paths.

service_api/oauth.py is now nothing but legacy re-mount declarations
(20 lines). All in-place handler logic has moved to openapi/. Phase F
will delete the file and the legacy mounts together.

Helper functions (_load_memberships, _pick_default_workspace,
_workspace_payload, _account_payload) move with the AccountApi class.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-26 23:50:15 -07:00
e93821af46 feat(api): lift GET /oauth/device/lookup to /openapi/v1 (Phase B.8)
Same pattern as B.6 / B.7: OAuthDeviceLookupApi moves to
controllers/openapi/oauth_device/lookup.py and is re-registered on
service_api_ns to keep /v1/oauth/device/lookup serving until Phase F.

service_api/oauth.py is now down to /me + /oauth/authorizations/self
plus three legacy mounts; remaining handlers move in Phase C.
Now-unused imports (LIMIT_LOOKUP_PUBLIC, rate_limit, reqparse, request,
DEVICE_FLOW_TTL_SECONDS, DeviceFlowRedis, DeviceFlowStatus) removed.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-26 23:44:05 -07:00
9408759954 feat(api): lift POST /oauth/device/token to /openapi/v1 (Phase B.7)
Same pattern as B.6: OAuthDeviceTokenApi moves to
controllers/openapi/oauth_device/token.py and is re-registered on
service_api_ns to keep /v1/oauth/device/token serving until Phase F.

_audit_cross_ip_if_needed helper moves with the handler. Now-unused
imports removed from service_api/oauth.py.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-26 23:42:27 -07:00
fe9412af5d feat(api): lift POST /oauth/device/code to /openapi/v1 (Phase B.6)
Canonical class OAuthDeviceCodeApi now lives in
controllers/openapi/oauth_device/code.py and is registered on
openapi_ns at /openapi/v1/oauth/device/code. service_api/oauth.py
re-registers the same class object on service_api_ns at
/v1/oauth/device/code so existing callers keep working until Phase F.

KNOWN_CLIENT_IDS literal moves to dify_config.OPENAPI_KNOWN_CLIENT_IDS
(CSV-parsed, default "difyctl") so new CLIs / SDKs can be admitted
without code changes (CLAUDE.md rule 8 — no magic strings).

_verification_uri helper moves with the handler. Single source of
truth — no duplicated logic between the two mounts.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-26 23:40:58 -07:00
218ef6a447 feat(api): CORS posture for /openapi/v1 (Phase A.5)
OPENAPI_CORS_ALLOW_ORIGINS env var defaults to empty (same-origin only).
Operators expand for third-party integrations via comma-separated list.
Allowed headers: Authorization, Content-Type, X-CSRF-Token. Methods:
GET POST PATCH DELETE OPTIONS. Max-Age 600s. supports_credentials=True
so cookie-authed approve/deny work once Phase D moves them in.

Disallowed origins receive a normal 200 OPTIONS response without the
Access-Control-Allow-Origin header — flask-cors's standard behavior;
browser blocks the cross-origin request from the disallowed origin.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-26 23:30:27 -07:00
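flask-cors supports this posture directly; a sketch with the env parsing elided (the resources pattern is an assumption about how the blueprint is scoped):

```python
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
allow_origins: list[str] = []  # parsed from OPENAPI_CORS_ALLOW_ORIGINS (CSV)

CORS(
    app,
    resources={r"/openapi/v1/*": {"origins": allow_origins}},  # empty = same-origin only
    allow_headers=["Authorization", "Content-Type", "X-CSRF-Token"],
    methods=["GET", "POST", "PATCH", "DELETE", "OPTIONS"],
    max_age=600,
    supports_credentials=True,  # cookie-authed approve/deny (Phase D)
)
```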
501c0b8746 feat(api): add require_scope decorator (Phase A.4)
Route-level scope gate; pairs with validate_bearer. Bearer holding the
catch-all SCOPE_FULL ('full', carried by dfoa_) passes any check;
narrower bearers (dfoe_, future PATs) need the exact scope listed in
the route decorator.

No v1.0 route applies it yet — apps/datasets controllers will be the
first consumers when those plans land. Programming-error guard: if
@require_scope runs without @validate_bearer above it, raises
RuntimeError instead of silently allowing.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-26 23:27:48 -07:00
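A sketch of the gate; the flask.g attribute name and single-scope check are assumptions about the real AuthContext:

```python
from functools import wraps

from flask import g
from werkzeug.exceptions import Forbidden

SCOPE_FULL = "full"  # catch-all carried by dfoa_


def require_scope(scope: str):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            ctx = getattr(g, "auth_context", None)
            if ctx is None:
                # Programming error: @validate_bearer must run above
                # @require_scope. Fail loudly, never silently allow.
                raise RuntimeError("require_scope without validate_bearer")
            if ctx.scope != SCOPE_FULL and ctx.scope != scope:
                raise Forbidden("insufficient_scope")
            return fn(*args, **kwargs)
        return wrapper
    return decorator
```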
4214583ae5 refactor(api): hoist bearer_feature_required to libs/oauth_bearer (Phase A.3)
The decorator was defined inline in console/auth/oauth_device.py. Phase
D will move approve/deny to controllers/openapi/oauth_device/ and the
new SSO branch under the same group needs the same gate. Hoist it to
libs/oauth_bearer.py now so the move stays a pure file rename later.

Behavior unchanged: 503 'bearer_auth_disabled' when ENABLE_OAUTH_BEARER
is off. console/auth/oauth_device.py imports it from libs and drops
the now-unused dify_config / wraps / ServiceUnavailable imports.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-26 23:26:13 -07:00
73771cb58c refactor(api): drop vestigial Accepts.APP from validate_bearer (Phase A.2)
Accepts.APP and the matching app- short-circuit existed to let routes
declare "I accept either OAuth or app- tokens", but no production
caller ever did, and the short-circuit returned without doing the
tenant/app/end-user setup that app- tokens actually need (that lives
in service_api/wraps.py:validate_app_token).

After this change, validate_bearer is OAuth-only. app- bearers fall
through the prefix dispatch and surface as InvalidBearer -> 401, which
is what we already promised on /openapi/* (no app- accepted) and what
the docstring claimed all along.

Pre-check rg "Accepts\\.APP" returned zero hits outside the function
being edited; no callers to update.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-26 23:24:56 -07:00
f5f224f49d feat(api): scaffold /openapi/v1 blueprint (Phase A.1)
New Flask blueprint at /openapi/v1/ that will host user-scoped
programmatic endpoints (device flow, identity, sessions, workspaces).
Ships only a smoke route GET /openapi/v1/_health for now; subsequent
phases lift handlers in from service_api, console, and the orphan
oauth_device_sso.py.

CORS is intentionally omitted here and configured in step A.5 once
the allowlist envvar lands.

Plan: docs/superpowers/plans/2026-04-26-openapi-migration.md (in difyctl repo).
2026-04-26 23:08:15 -07:00
813da349ec fix(api,web): post-review hardening for OAuth device flow
- api: account-flow stores subject_issuer="dify:account" sentinel
  instead of NULL so the rotate-in-place unique index collides as
  intended (Postgres treats NULLs as distinct in unique indices).
  mint_oauth_token validates prefix-specific issuer rules.
- api: enterprise_only inverts to an allowlist (ACTIVE / EXPIRING) so
  any future LicenseStatus value defaults to denial.
- api: consume_on_poll moved to a single Lua script (GET + status-check
  + DEL) so concurrent pollers can't both observe APPROVED.
- web: typed DeviceFlowError + central error-copy mapping; page
  surfaces rate_limited / lookup_failed view states; URL params
  scrubbed after consumption (RFC 8628 §5.4).
2026-04-26 23:05:07 -07:00
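The consume-on-poll primitive fits in a few lines; a sketch where the key layout and status strings are illustrative, the single-script atomicity is the commit's point:

```python
import redis

r = redis.Redis()

# GET + status check + DEL in one Lua script, so two concurrent pollers
# can't both observe APPROVED: only one caller consumes the state.
CONSUME_ON_POLL = r.register_script("""
local state = redis.call('GET', KEYS[1])
if state == 'APPROVED' then
    redis.call('DEL', KEYS[1])
end
return state
""")


def poll(device_code: str) -> str | None:
    raw = CONSUME_ON_POLL(keys=[f"device_flow:{device_code}"])
    return raw.decode() if raw is not None else None
```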
fe8510ad1a feat(api,web): OAuth 2.0 device flow + bearer auth (RFC 8628)
Adds a CLI-friendly authorization flow so difyctl (and future
non-browser clients) can obtain user-scoped tokens without copy-
pasting cookies or raw API keys. Two grant paths share one device
flow surface:

  1. Account branch — user signs in via the existing /signin
     methods, /device page calls console-authed approve, mints a
     dfoa_ token tied to (account_id, tenant).
  2. External-SSO branch (EE) — /v1/oauth/device/sso-initiate signs
     an SSOState envelope, hands off to Enterprise's external ACS,
     receives a signed external-subject assertion, mints a dfoe_
     token tied to (subject_email, subject_issuer).

API surface (all under /v1, EE-only endpoints 404 on CE):

  POST   /v1/oauth/device/code              — RFC 8628 start
  POST   /v1/oauth/device/token             — RFC 8628 poll
  GET    /v1/oauth/device/lookup            — pre-validate user_code
  GET    /v1/oauth/device/sso-initiate      — SSO branch entry
  GET    /v1/device/sso-complete            — SSO callback sink
  GET    /v1/oauth/device/approval-context  — /device cookie probe
  POST   /v1/oauth/device/approve-external  — SSO approve
  GET    /v1/me                             — bearer subject lookup
  DELETE /v1/oauth/authorizations/self      — self-revoke
  POST   /console/api/oauth/device/approve  — account approve
  POST   /console/api/oauth/device/deny     — account deny

Core primitives:
- libs/oauth_bearer.py: prefix-keyed TokenKindRegistry +
  BearerAuthenticator + validate_bearer decorator. Two-tier scope
  (full vs apps:run) stamped from the registry, never from the DB.
- libs/jws.py: HS256 compact JWS keyed on the shared Dify
  SECRET_KEY — same key-set verifies the SSOState envelope, the
  external-subject assertion (minted by Enterprise), and the
  approval-grant cookie.
- libs/device_flow_security.py: enterprise_only gate, approval-
  grant cookie mint/verify/consume (Path=/v1/oauth/device,
  HttpOnly, SameSite=Lax, Secure follows is_secure()), anti-
  framing headers.
- libs/rate_limit.py: typed RateLimit / RateLimitScope dispatch
  with composite-key buckets; both decorator + imperative form.
- services/oauth_device_flow.py: Redis state machine (PENDING ->
  APPROVED|DENIED with atomic consume-on-poll), token mint via
  partial unique index uq_oauth_active_per_device (rotates in
  place), env-driven TTL policy.

Storage: oauth_access_tokens table with partial unique index on
(subject_email, subject_issuer, client_id, device_label) WHERE
revoked_at IS NULL. account_id NULL distinguishes external-SSO
rows. Hard-expire is CAS UPDATE (revoked_at + nullify token_hash)
so audit events keep their token_id. Retention pruner DELETEs
revoked + zombie-expired rows past OAUTH_ACCESS_TOKEN_RETENTION_DAYS.

Frontend: /device page with code-entry, chooser (account vs SSO),
authorize-account, authorize-sso views. SSO branch detaches from
the URL user_code and reads everything from the cookie via
/approval-context. Anti-framing headers on all responses.

Wiring: ENABLE_OAUTH_BEARER feature flag; ext_oauth_bearer binds
the authenticator at startup; clean_oauth_access_tokens_task
scheduled in ext_celery.

Spec: docs/specs/v1.0/server/{device-flow,tokens,middleware,security}.md
2026-04-26 20:06:43 -07:00
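Of the core primitives, the HS256 compact JWS is the most self-contained; a stdlib-only sketch of that primitive (key value and claim handling here are illustrative, and the real libs/jws.py presumably validates more than the signature):

```python
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"dev-only-secret"  # stands in for dify's shared SECRET_KEY


def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign(payload: dict) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = hmac.new(SECRET_KEY, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64(sig)}"


def verify(token: str) -> dict:
    header, body, sig = token.split(".")
    expected = hmac.new(SECRET_KEY, f"{header}.{body}".encode(), hashlib.sha256).digest()
    pad = "=" * (-len(sig) % 4)
    if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig + pad)):
        raise ValueError("bad signature")
    bpad = "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(body + bpad))
```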
2846 changed files with 75131 additions and 160295 deletions

View File

@@ -63,7 +63,7 @@ pnpm analyze-component <path> --json
```typescript
// ❌ Before: Complex state logic in component
-function Configuration() {
+const Configuration: FC = () => {
const [modelConfig, setModelConfig] = useState<ModelConfig>(...)
const [datasetConfigs, setDatasetConfigs] = useState<DatasetConfigs>(...)
const [completionParams, setCompletionParams] = useState<FormValue>({})
@@ -85,7 +85,7 @@ export const useModelConfig = (appId: string) => {
}
// Component becomes cleaner
-function Configuration() {
+const Configuration: FC = () => {
const { modelConfig, setModelConfig } = useModelConfig(appId)
return <div>...</div>
}
@@ -189,6 +189,8 @@ const Template = useMemo(() => {
**Dify Convention**:
- This skill is for component decomposition, not query/mutation design.
- When refactoring data fetching, follow `web/AGENTS.md`.
- Use `frontend-query-mutation` for contracts, query shape, data-fetching wrappers, query/mutation call-site patterns, conditional queries, invalidation, and mutation error handling.
- Do not introduce deprecated `useInvalid` / `useReset`.
- Do not add thin passthrough `useQuery` wrappers during refactoring; only extract a custom hook when it truly orchestrates multiple queries/mutations or shared derived state.
@@ -365,7 +367,7 @@ For each extraction:
┌────────────────────────────────────────┐
│ 1. Extract code │
│ 2. Run: pnpm lint:fix │
-│ 3. Run: pnpm type-check
+│ 3. Run: pnpm type-check:tsgo
│ 4. Run: pnpm test │
│ 5. Test functionality manually │
│ 6. PASS? → Next extraction │

View File

@@ -60,10 +60,8 @@ const Template = useMemo(() => {
**After** (complexity: ~3):
```typescript
-import type { ComponentType } from 'react'
// Define lookup table outside component
-const TEMPLATE_MAP: Record<AppModeEnum, Record<string, ComponentType<TemplateProps>>> = {
+const TEMPLATE_MAP: Record<AppModeEnum, Record<string, FC<TemplateProps>>> = {
[AppModeEnum.CHAT]: {
[LanguagesSupported[1]]: TemplateChatZh,
[LanguagesSupported[7]]: TemplateChatJa,

View File

@@ -65,10 +65,10 @@ interface ConfigurationHeaderProps {
onPublish: () => void
}
-function ConfigurationHeader({
+const ConfigurationHeader: FC<ConfigurationHeaderProps> = ({
isAdvancedMode,
onPublish,
-}: ConfigurationHeaderProps) {
+}) => {
const { t } = useTranslation()
return (
@@ -136,7 +136,7 @@ const AppInfo = () => {
}
// ✅ After: Separate view components
-function AppInfoExpanded({ appDetail, onAction }: AppInfoViewProps) {
+const AppInfoExpanded: FC<AppInfoViewProps> = ({ appDetail, onAction }) => {
return (
<div className="expanded">
{/* Clean, focused expanded view */}
@@ -144,7 +144,7 @@ function AppInfoExpanded({ appDetail, onAction }: AppInfoViewProps) {
)
}
-function AppInfoCollapsed({ appDetail, onAction }: AppInfoViewProps) {
+const AppInfoCollapsed: FC<AppInfoViewProps> = ({ appDetail, onAction }) => {
return (
<div className="collapsed">
{/* Clean, focused collapsed view */}
@@ -203,12 +203,12 @@ interface AppInfoModalsProps {
onSuccess: () => void
}
-function AppInfoModals({
+const AppInfoModals: FC<AppInfoModalsProps> = ({
appDetail,
activeModal,
onClose,
onSuccess,
-}: AppInfoModalsProps) {
+}) => {
const handleEdit = async (data) => { /* logic */ }
const handleDuplicate = async (data) => { /* logic */ }
const handleDelete = async () => { /* logic */ }
@@ -296,7 +296,7 @@ interface OperationItemProps {
onAction: (id: string) => void
}
-function OperationItem({ operation, onAction }: OperationItemProps) {
+const OperationItem: FC<OperationItemProps> = ({ operation, onAction }) => {
return (
<div className="operation-item">
<span className="icon">{operation.icon}</span>
@@ -435,7 +435,7 @@ interface ChildProps {
onSubmit: () => void
}
-function Child({ value, onChange, onSubmit }: ChildProps) {
+const Child: FC<ChildProps> = ({ value, onChange, onSubmit }) => {
return (
<div>
<input value={value} onChange={e => onChange(e.target.value)} />

View File

@@ -112,13 +112,13 @@ export const useModelConfig = ({
```typescript
// Before: 50+ lines of state management
-function Configuration() {
+const Configuration: FC = () => {
const [modelConfig, setModelConfig] = useState<ModelConfig>(...)
// ... lots of related state and effects
}
// After: Clean component
-function Configuration() {
+const Configuration: FC = () => {
const {
modelConfig,
setModelConfig,
@@ -159,6 +159,8 @@ function Configuration() {
When hook extraction touches query or mutation code, do not use this reference as the source of truth for data-layer patterns.
- Follow `web/AGENTS.md` first.
- Use `frontend-query-mutation` for contracts, query shape, data-fetching wrappers, query/mutation call-site patterns, conditional queries, invalidation, and mutation error handling.
- Do not introduce deprecated `useInvalid` / `useReset`.
- Do not extract thin passthrough `useQuery` hooks; only extract orchestration hooks.

View File

@@ -23,7 +23,7 @@ Use this skill for Dify's repository-level E2E suite in `e2e/`. Use [`e2e/AGENTS
- `e2e/scripts/run-cucumber.ts` and `e2e/cucumber.config.ts` when tags or execution flow matter
3. Read [`references/playwright-best-practices.md`](references/playwright-best-practices.md) only when locator, assertion, isolation, or waiting choices are involved.
4. Read [`references/cucumber-best-practices.md`](references/cucumber-best-practices.md) only when scenario wording, step granularity, tags, or expression design are involved.
-5. Re-check official Playwright or Cucumber docs with the available documentation tools before introducing a new framework pattern.
+5. Re-check official docs with Context7 before introducing a new Playwright or Cucumber pattern.
## Local Rules

View File

@@ -9,18 +9,18 @@ Category: Performance
When rendering React Flow, prefer `useNodes`/`useEdges` for UI consumption and rely on `useStoreApi` inside callbacks that mutate or read node/edge state. Avoid manually pulling Flow data outside of these hooks.
-## Complex prop stability
+## Complex prop memoization
-IsUrgent: False
+IsUrgent: True
Category: Performance
### Description
-Only require stable object, array, or map props when there is a clear reason: the child is memoized, the value participates in effect/query dependencies, the value is part of a stable-reference API contract, or profiling/local behavior shows avoidable re-renders. Do not request `useMemo` for every inline object by default; `how-to-write-component` treats memoization as a targeted optimization.
+Wrap complex prop values (objects, arrays, maps) in `useMemo` prior to passing them into child components to guarantee stable references and prevent unnecessary renders.
Update this file when adding, editing, or removing Performance rules so the catalog remains accurate.
-Risky:
+Wrong:
```tsx
<HeavyComp
@@ -31,7 +31,7 @@ Risky:
/>
```
-Better when stable identity matters:
+Right:
```tsx
const config = useMemo(() => ({

View File

@@ -0,0 +1,44 @@
---
name: frontend-query-mutation
description: Guide for implementing Dify frontend query and mutation patterns with TanStack Query and oRPC. Trigger when creating or updating contracts in web/contract, wiring router composition, consuming consoleQuery or marketplaceQuery in components or services, deciding whether to call queryOptions() directly or extract a helper or use-* hook, handling conditional queries, cache invalidation, mutation error handling, or migrating legacy service calls to contract-first query and mutation helpers.
---
# Frontend Query & Mutation
## Intent
- Keep contract as the single source of truth in `web/contract/*`.
- Prefer contract-shaped `queryOptions()` and `mutationOptions()`.
- Keep invalidation and mutation flow knowledge in the service layer.
- Keep abstractions minimal to preserve TypeScript inference.
## Workflow
1. Identify the change surface.
- Read `references/contract-patterns.md` for contract files, router composition, client helpers, and query or mutation call-site shape.
- Read `references/runtime-rules.md` for conditional queries, invalidation, error handling, and legacy migrations.
- Read both references when a task spans contract shape and runtime behavior.
2. Implement the smallest abstraction that fits the task.
- Default to direct `useQuery(...)` or `useMutation(...)` calls with oRPC helpers at the call site.
- Extract a small shared query helper only when multiple call sites share the same extra options.
- Create `web/service/use-{domain}.ts` only for orchestration or shared domain behavior.
3. Preserve Dify conventions.
- Keep contract inputs in `{ params, query?, body? }` shape.
- Bind invalidation in the service-layer mutation definition.
- Prefer `mutate(...)`; use `mutateAsync(...)` only when Promise semantics are required.
## Files Commonly Touched
- `web/contract/console/*.ts`
- `web/contract/marketplace.ts`
- `web/contract/router.ts`
- `web/service/client.ts`
- `web/service/use-*.ts`
- component and hook call sites using `consoleQuery` or `marketplaceQuery`
## References
- Use `references/contract-patterns.md` for contract shape, router registration, query and mutation helpers, and anti-patterns that degrade inference.
- Use `references/runtime-rules.md` for conditional queries, invalidation, `mutate` versus `mutateAsync`, and legacy migration rules.
Treat this skill as the single query and mutation entry point for Dify frontend work. Keep detailed rules in the reference files instead of duplicating them in project docs.

View File

@@ -0,0 +1,4 @@
interface:
display_name: "Frontend Query & Mutation"
short_description: "Dify TanStack Query and oRPC patterns"
default_prompt: "Use this skill when implementing or reviewing Dify frontend contracts, query and mutation call sites, conditional queries, invalidation, or legacy query/mutation migrations."

View File

@@ -0,0 +1,98 @@
# Contract Patterns
## Table of Contents
- Intent
- Minimal structure
- Core workflow
- Query usage decision rule
- Mutation usage decision rule
- Anti-patterns
- Contract rules
- Type export
## Intent
- Keep contract as the single source of truth in `web/contract/*`.
- Default query usage to call-site `useQuery(consoleQuery|marketplaceQuery.xxx.queryOptions(...))` when endpoint behavior maps 1:1 to the contract.
- Keep abstractions minimal and preserve TypeScript inference.
## Minimal Structure
```text
web/contract/
├── base.ts
├── router.ts
├── marketplace.ts
└── console/
├── billing.ts
└── ...other domains
web/service/client.ts
```
## Core Workflow
1. Define contract in `web/contract/console/{domain}.ts` or `web/contract/marketplace.ts`.
- Use `base.route({...}).output(type<...>())` as the baseline.
- Add `.input(type<...>())` only when the request has `params`, `query`, or `body`.
- For `GET` without input, omit `.input(...)`; do not use `.input(type<unknown>())`.
2. Register contract in `web/contract/router.ts`.
- Import directly from domain files and nest by API prefix.
3. Consume from UI call sites via oRPC query utilities.
```typescript
import { useQuery } from '@tanstack/react-query'
import { consoleQuery } from '@/service/client'
const invoiceQuery = useQuery(consoleQuery.billing.invoices.queryOptions({
staleTime: 5 * 60 * 1000,
throwOnError: true,
select: invoice => invoice.url,
}))
```
## Query Usage Decision Rule
1. Default to direct `*.queryOptions(...)` usage at the call site.
2. If 3 or more call sites share the same extra options, extract a small query helper, not a `use-*` passthrough hook.
3. Create `web/service/use-{domain}.ts` only for orchestration.
- Combine multiple queries or mutations.
- Share domain-level derived state or invalidation helpers.
```typescript
const invoicesBaseQueryOptions = () =>
consoleQuery.billing.invoices.queryOptions({ retry: false })
const invoiceQuery = useQuery({
...invoicesBaseQueryOptions(),
throwOnError: true,
})
```
## Mutation Usage Decision Rule
1. Default to mutation helpers from `consoleQuery` or `marketplaceQuery`, for example `useMutation(consoleQuery.billing.bindPartnerStack.mutationOptions(...))`.
2. If the mutation flow is heavily custom, use oRPC clients as `mutationFn`, for example `consoleClient.xxx` or `marketplaceClient.xxx`, instead of handwritten non-oRPC mutation logic.
## Anti-Patterns
- Do not wrap `useQuery` with `options?: Partial<UseQueryOptions>`.
- Do not split local `queryKey` and `queryFn` when oRPC `queryOptions` already exists and fits the use case.
- Do not create thin `use-*` passthrough hooks for a single endpoint.
- These patterns can degrade inference, especially around `throwOnError` and `select`, and add unnecessary indirection.
## Contract Rules
- Input structure: always use `{ params, query?, body? }`.
- No-input `GET`: omit `.input(...)`; do not use `.input(type<unknown>())`.
- Path params: use `{paramName}` in the path and match it in the `params` object.
- Router nesting: group by API prefix, for example `/billing/*` becomes `billing: {}`.
- No barrel files: import directly from specific files.
- Types: import from `@/types/` and use the `type<T>()` helper.
- Mutations: prefer `mutationOptions`; use explicit `mutationKey` mainly for defaults, filtering, and devtools.
## Type Export
```typescript
export type ConsoleInputs = InferContractRouterInputs<typeof consoleRouterContract>
```

View File

@@ -0,0 +1,130 @@
# Runtime Rules
## Table of Contents
- Conditional queries
- Cache invalidation
- Key API guide
- `mutate` vs `mutateAsync`
- Legacy migration
## Conditional Queries
Prefer contract-shaped `queryOptions(...)`.
When required input is missing, prefer `input: skipToken` instead of placeholder params or non-null assertions.
Use `enabled` only for extra business gating after the input itself is already valid.
```typescript
import { skipToken, useQuery } from '@tanstack/react-query'
// Disable the query by skipping input construction.
function useAccessMode(appId: string | undefined) {
return useQuery(consoleQuery.accessControl.appAccessMode.queryOptions({
input: appId
? { params: { appId } }
: skipToken,
}))
}
// Avoid runtime-only guards that bypass type checking.
function useBadAccessMode(appId: string | undefined) {
return useQuery(consoleQuery.accessControl.appAccessMode.queryOptions({
input: { params: { appId: appId! } },
enabled: !!appId,
}))
}
```
## Cache Invalidation
Bind invalidation in the service-layer mutation definition.
Components may add UI feedback in call-site callbacks, but they should not decide which queries to invalidate.
Use:
- `.key()` for namespace or prefix invalidation
- `.queryKey(...)` only for exact cache reads or writes such as `getQueryData` and `setQueryData`
- `queryClient.invalidateQueries(...)` in mutation `onSuccess`
Do not use deprecated `useInvalid` from `use-base.ts`.
```typescript
// Service layer owns cache invalidation.
export const useUpdateAccessMode = () => {
const queryClient = useQueryClient()
return useMutation(consoleQuery.accessControl.updateAccessMode.mutationOptions({
onSuccess: () => {
queryClient.invalidateQueries({
queryKey: consoleQuery.accessControl.appWhitelistSubjects.key(),
})
},
}))
}
// Component only adds UI behavior.
updateAccessMode({ appId, mode }, {
onSuccess: () => toast.success('...'),
})
// Avoid putting invalidation knowledge in the component.
mutate({ appId, mode }, {
onSuccess: () => {
queryClient.invalidateQueries({
queryKey: consoleQuery.accessControl.appWhitelistSubjects.key(),
})
},
})
```
## Key API Guide
- `.key(...)`
- Use for partial matching operations.
- Prefer it for invalidation, refetch, and cancel patterns.
- Example: `queryClient.invalidateQueries({ queryKey: consoleQuery.billing.key() })`
- `.queryKey(...)`
- Use for a specific query's full key.
- Prefer it for exact cache addressing and direct reads or writes.
- `.mutationKey(...)`
- Use for a specific mutation's full key.
- Prefer it for mutation defaults registration, mutation-status filtering, and devtools grouping.
## `mutate` vs `mutateAsync`
Prefer `mutate` by default.
Use `mutateAsync` only when Promise semantics are truly required, such as parallel mutations or sequential steps with result dependencies.
Rules:
- Event handlers should usually call `mutate(...)` with `onSuccess` or `onError`.
- Every `await mutateAsync(...)` must be wrapped in `try/catch`.
- Do not use `mutateAsync` when callbacks already express the flow clearly.
```typescript
// Default case.
mutation.mutate(data, {
onSuccess: result => router.push(result.url),
})
// Promise semantics are required.
try {
const order = await createOrder.mutateAsync(orderData)
await confirmPayment.mutateAsync({ orderId: order.id, token })
router.push(`/orders/${order.id}`)
}
catch (error) {
toast.error(error instanceof Error ? error.message : 'Unknown error')
}
```
## Legacy Migration
When touching old code, migrate it toward these rules:
| Old pattern | New pattern |
|---|---|
| `useInvalid(key)` in service layer | `queryClient.invalidateQueries(...)` inside mutation `onSuccess` |
| component-triggered invalidation after mutation | move invalidation into the service-layer mutation definition |
| imperative fetch plus manual invalidation | wrap it in `useMutation(...mutationOptions(...))` |
| `await mutateAsync()` without `try/catch` | switch to `mutate(...)` or add `try/catch` |

View File

@@ -5,7 +5,7 @@ description: Generate Vitest + React Testing Library tests for Dify frontend com
# Dify Frontend Testing Skill
This skill enables Codex to generate high-quality, comprehensive frontend tests for the Dify project following established conventions and best practices.
This skill enables Claude to generate high-quality, comprehensive frontend tests for the Dify project following established conventions and best practices.
> **⚠️ Authoritative Source**: This skill is derived from `web/docs/test.md`. Use Vitest mock/timer APIs (`vi.*`).
@ -24,27 +24,35 @@ Apply this skill when the user:
**Do NOT apply** when:
- User is asking about backend/API tests (Python/pytest)
- User is asking about E2E tests (Cucumber + Playwright under `e2e/`)
- User is asking about E2E tests (Playwright/Cypress)
- User is only asking conceptual questions without code context
## Quick Reference
### Key Commands
### Tech Stack
Run these commands from `web/`. From the repository root, prefix them with `pnpm -C web`.
| Tool | Version | Purpose |
|------|---------|---------|
| Vitest | 4.0.16 | Test runner |
| React Testing Library | 16.0 | Component testing |
| jsdom | - | Test environment |
| nock | 14.0 | HTTP mocking |
| TypeScript | 5.x | Type safety |
### Key Commands
```bash
# Run all tests
pnpm test
# Watch mode
pnpm test --watch
pnpm test:watch
# Run specific file
pnpm test path/to/file.spec.tsx
# Generate coverage report
pnpm test --coverage
pnpm test:coverage
# Analyze component complexity
pnpm analyze-component <path>
@ -192,7 +200,7 @@ When assigned to test a directory/path, test **ALL content** within that path:
-**Import real project components** directly (including base components and siblings)
-**Only mock**: API services (`@/service/*`), `next/navigation`, complex context providers
-**DO NOT mock** base components (`@/app/components/base/*`) or dify-ui primitives (`@langgenius/dify-ui/*`)
-**DO NOT mock** base components (`@/app/components/base/*`)
-**DO NOT mock** sibling/child components in the same directory
> See [Test Structure Template](#test-structure-template) for correct import/mock patterns.
@ -220,10 +228,7 @@ Every test should clearly separate:
### 2. Black-Box Testing
- Test observable behavior, not implementation details
- Use semantic queries (`getByRole` with accessible `name`, `getByLabelText`, `getByPlaceholderText`, `getByText`, and scoped `within(...)`)
- Treat `getByTestId` as a last resort. If a control cannot be found by role/name, label, landmark, or dialog scope, fix the component accessibility first instead of adding or relying on `data-testid`.
- Remove production `data-testid` attributes when semantic selectors can cover the behavior. Keep them only for non-visual mocked boundaries, editor/browser shims such as Monaco, canvas/chart output, or third-party widgets with no accessible DOM in the test environment.
- Do not assert decorative icons by test id. Assert the named control that contains them, or mark decorative icons `aria-hidden`.
- Use semantic queries (getByRole, getByLabelText)
- Avoid testing internal state directly
- **Prefer pattern matching over hardcoded strings** in assertions:
@ -320,12 +325,12 @@ For more detailed information, refer to:
### Reference Examples in Codebase
- `web/utils/classnames.spec.ts` - Utility function tests
- `web/app/components/base/radio/__tests__/index.spec.tsx` - Component tests
- `web/app/components/base/button/index.spec.tsx` - Component tests
- `web/__mocks__/provider-context.ts` - Mock factory example
### Project Configuration
- `web/vite.config.ts` - Vite/Vitest configuration
- `web/vitest.config.ts` - Vitest configuration
- `web/vitest.setup.ts` - Test environment setup
- `web/scripts/analyze-component.js` - Component analysis tool
- Modules are not mocked automatically. Global mocks live in `web/vitest.setup.ts` (for example `react-i18next`, `next/image`); mock other modules like `ky` or `mime` locally in test files.

View File

@ -36,7 +36,7 @@ Use this checklist when generating or reviewing tests for Dify frontend componen
### Integration vs Mocking
- [ ] **DO NOT mock base components or dify-ui primitives** (base `Loading`, `Input`, `Badge`; dify-ui `Button`, `Tooltip`, `Dialog`, etc.)
- [ ] **DO NOT mock base components** (`Loading`, `Button`, `Tooltip`, etc.)
- [ ] Import real project components instead of mocking
- [ ] Only mock: API calls, complex context providers, third-party libs with side effects
- [ ] Prefer integration testing when using single spec file
@ -73,7 +73,7 @@ Use this checklist when generating or reviewing tests for Dify frontend componen
### Mocks
- [ ] **DO NOT mock base components or dify-ui primitives** (`@/app/components/base/*` or `@langgenius/dify-ui/*`)
- [ ] **DO NOT mock base components** (`@/app/components/base/*`)
- [ ] `vi.clearAllMocks()` in `beforeEach` (not `afterEach`)
- [ ] Shared mock state reset in `beforeEach`
- [ ] i18n uses global mock (auto-loaded in `web/vitest.setup.ts`); only override locally for custom translations
@ -127,7 +127,7 @@ For the current file being tested:
- [ ] Run full directory test: `pnpm test path/to/directory/`
- [ ] Check coverage report: `pnpm test:coverage`
- [ ] Run `pnpm lint:fix` on all test files
- [ ] Run `pnpm type-check`
- [ ] Run `pnpm type-check:tsgo`
## Common Issues to Watch

View File

@ -2,27 +2,29 @@
## ⚠️ Important: What NOT to Mock
### DO NOT Mock Base Components or dify-ui Primitives
### DO NOT Mock Base Components
**Never mock components from `@/app/components/base/` or from `@langgenius/dify-ui/*`** such as:
**Never mock components from `@/app/components/base/`** such as:
- Legacy base (`@/app/components/base/*`): `Loading`, `Spinner`, `Input`, `Badge`, `Tag`
- dify-ui primitives (`@langgenius/dify-ui/*`): `Button`, `Tooltip`, `Dialog`, `Popover`, `DropdownMenu`, `ContextMenu`, `Select`, `AlertDialog`, `Toast`
- `Loading`, `Spinner`
- `Button`, `Input`, `Select`
- `Tooltip`, `Modal`, `Dropdown`
- `Icon`, `Badge`, `Tag`
**Why?**
- These components have their own dedicated tests
- Base components will have their own dedicated tests
- Mocking them creates false positives (tests pass but real integration fails)
- Using real components tests actual integration behavior
```typescript
// ❌ WRONG: Don't mock base components or dify-ui primitives
// ❌ WRONG: Don't mock base components
vi.mock('@/app/components/base/loading', () => () => <div>Loading</div>)
vi.mock('@langgenius/dify-ui/button', () => ({ Button: ({ children }: any) => <button>{children}</button> }))
vi.mock('@/app/components/base/button', () => ({ children }: any) => <button>{children}</button>)
// ✅ CORRECT: Import and use the real components
// ✅ CORRECT: Import and use real base components
import Loading from '@/app/components/base/loading'
import { Button } from '@langgenius/dify-ui/button'
import Button from '@/app/components/base/button'
// They will render normally in tests
```
@ -56,7 +58,7 @@ See [Zustand Store Testing](#zustand-store-testing) section for full details.
| Location | Purpose |
|----------|---------|
| `web/vitest.setup.ts` | Global mocks shared by all tests (`react-i18next`, `zustand`, clipboard, FloatingPortal, Monaco, `localStorage`) |
| `web/vitest.setup.ts` | Global mocks shared by all tests (`react-i18next`, `next/image`, `zustand`) |
| `web/__mocks__/zustand.ts` | Zustand mock implementation (auto-resets stores after each test) |
| `web/__mocks__/` | Reusable mock factories shared across multiple test files |
| Test file | Test-specific mocks, inline with `vi.mock()` |
@ -216,21 +218,28 @@ describe('Component', () => {
})
```
### 5. HTTP and `fetch` Mocking
### 5. HTTP Mocking with Nock
```typescript
import nock from 'nock'
const GITHUB_HOST = 'https://api.github.com'
const GITHUB_PATH = '/repos/owner/repo'
const mockGithubApi = (status: number, body: Record<string, unknown>, delayMs = 0) => {
return nock(GITHUB_HOST)
.get(GITHUB_PATH)
.delay(delayMs)
.reply(status, body)
}
describe('GithubComponent', () => {
beforeEach(() => {
vi.clearAllMocks()
afterEach(() => {
nock.cleanAll()
})
it('should display repo info', async () => {
vi.mocked(globalThis.fetch).mockResolvedValueOnce(
new Response(JSON.stringify({ name: 'dify', stars: 1000 }), {
status: 200,
headers: { 'Content-Type': 'application/json' },
}),
)
mockGithubApi(200, { name: 'dify', stars: 1000 })
render(<GithubComponent />)
@ -240,12 +249,7 @@ describe('GithubComponent', () => {
})
it('should handle API error', async () => {
vi.mocked(globalThis.fetch).mockResolvedValueOnce(
new Response(JSON.stringify({ message: 'Server error' }), {
status: 500,
headers: { 'Content-Type': 'application/json' },
}),
)
mockGithubApi(500, { message: 'Server error' })
render(<GithubComponent />)
@ -256,8 +260,6 @@ describe('GithubComponent', () => {
})
```
Prefer mocking `@/service/*` modules or spying on `global.fetch` / `ky` clients with deterministic responses. Do not introduce an HTTP interception dependency such as `nock` or MSW unless it is already declared in the workspace or adding it is part of the task.
### 6. Context Providers
```typescript
@ -317,7 +319,7 @@ const renderWithQueryClient = (ui: React.ReactElement) => {
### ✅ DO
1. **Use real base components and dify-ui primitives** - Import from `@/app/components/base/` or `@langgenius/dify-ui/*` directly
1. **Use real base components** - Import from `@/app/components/base/` directly
1. **Use real project components** - Prefer importing over mocking
1. **Use real Zustand stores** - Set test state via `store.setState()`
1. **Reset mocks in `beforeEach`**, not `afterEach`
@ -328,11 +330,11 @@ const renderWithQueryClient = (ui: React.ReactElement) => {
### ❌ DON'T
1. **Don't mock base components or dify-ui primitives** (`Loading`, `Input`, `Button`, `Tooltip`, `Dialog`, etc.)
1. **Don't mock base components** (`Loading`, `Button`, `Tooltip`, etc.)
1. **Don't mock Zustand store modules** - Use real stores with `setState()`
1. Don't mock components you can import directly
1. Don't create overly simplified mocks that miss conditional logic
1. Don't leave HTTP mocks or service mock state leaking between tests
1. Don't forget to clean up nock after each test
1. Don't use `any` types in mocks without necessity
### Mock Decision Tree
@ -340,7 +342,7 @@ const renderWithQueryClient = (ui: React.ReactElement) => {
```
Need to use a component in test?
├─ Is it from @/app/components/base/* or @langgenius/dify-ui/*?
├─ Is it from @/app/components/base/*?
│ └─ YES → Import real component, DO NOT mock
├─ Is it a project component?

View File

@ -227,12 +227,12 @@ Failing tests compound:
**Fix failures immediately before proceeding.**
## Integration with Codex's Todo Feature
## Integration with Claude's Todo Feature
When using Codex for multi-file testing:
When using Claude for multi-file testing:
1. **Create a todo list** before starting
1. **Process one file at a time**
1. **Ask Claude to create a todo list** before starting
1. **Request one file at a time** or ensure Claude processes incrementally
1. **Verify each test passes** before asking for the next
1. **Mark todos complete** as you progress

View File

@ -1,71 +0,0 @@
---
name: how-to-write-component
description: React/TypeScript component style guide. Use when writing, refactoring, or reviewing React components, especially around props typing, state boundaries, shared local state with Jotai atoms, API types, query/mutation contracts, navigation, memoization, wrappers, and empty-state handling.
---
# How To Write A Component
Use this as the decision guide for React/TypeScript component structure. Existing code is reference material, not automatic precedent; when it conflicts with these rules, adapt the approach instead of reproducing the violation.
## Core Defaults
- Search before adding UI, hooks, helpers, or styling patterns. Reuse existing base components, feature components, hooks, utilities, and design styles when they fit.
- Group code by feature workflow, route, or ownership area: components, hooks, local types, query helpers, atoms, constants, and small utilities should live near the code that changes with them.
- Promote code to shared only when multiple verticals need the same stable primitive. Otherwise keep it local and compose shared primitives inside the owning feature.
- Use Tailwind CSS v4.1+ rules via the `tailwind-css-rules` skill. Prefer v4 utilities, `gap`, `text-size/line-height`, `min-h-dvh`, and avoid deprecated utilities and `@apply`.
## Ownership
- Put local state, queries, mutations, handlers, and derived UI data in the lowest component that uses them. Extract a purpose-built owner component only when the logic has no natural home.
- Repeated TanStack query calls in sibling components are acceptable when each component independently consumes the data. Do not hoist a query only because it is duplicated; TanStack Query handles deduplication and cache sharing.
- Hoist state, queries, or callbacks to a parent only when the parent consumes the data, coordinates shared loading/error/empty UI, needs one consistent snapshot, or owns a workflow spanning children.
- Avoid prop drilling. One pass-through layer is acceptable; repeated forwarding means ownership should move down or into feature-scoped Jotai UI state. Keep server/cache state in query and API data flow.
- Keep callbacks in a parent only for workflow coordination such as form submission, shared selection, batch behavior, or navigation. Otherwise let the child or row own its action.
- Prefer uncontrolled DOM state and CSS variables before adding controlled props.
## Components, Props, And Types
- Type component signatures directly; do not use `FC` or `React.FC`.
- Prefer `function` for top-level components and module helpers. Use arrow functions for local callbacks, handlers, and lambda-style APIs.
- Prefer named exports. Use default exports only where the framework requires them, such as Next.js route files.
- Type simple one-off props inline. Use a named `Props` type only when reused, exported, complex, or clearer.
- Use API-generated or API-returned types at component boundaries. Keep small UI conversion helpers beside the component that needs them.
- Name values by their domain role and backend API contract, and keep that name stable across the call chain, especially IDs like `appInstanceId`. Normalize framework or route params at the boundary.
- Keep fallback and invariant checks at the lowest component that already handles that state; callers should pass raw values through instead of duplicating checks.
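A minimal sketch of these signature conventions (the component and its props are hypothetical):
```typescript
// Named export, direct signature typing (no React.FC), inline one-off props.
export function AppStatusBadge({ appInstanceId, compact }: {
  appInstanceId: string
  compact?: boolean
}) {
  // Keep the domain name (appInstanceId) stable across the call chain.
  const label = compact ? appInstanceId.slice(0, 8) : appInstanceId
  return <span>{label}</span>
}
```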
## Queries And Mutations
- Keep `web/contract/*` as the single source of truth for API shape; follow existing domain/router patterns and the `{ params, query?, body? }` input shape.
- Consume queries directly with `useQuery(consoleQuery.xxx.queryOptions(...))` or `useQuery(marketplaceQuery.xxx.queryOptions(...))`.
- Avoid pass-through hooks and thin `web/service/use-*` wrappers that only rename `queryOptions()` or `mutationOptions()`. Extract a small `queryOptions` helper only when repeated call-site options justify it.
- Keep feature hooks for real orchestration, workflow state, or shared domain behavior.
- For missing required query input, use `input: skipToken`; use `enabled` only for extra business gating after the input is valid.
- Consume mutations directly with `useMutation(consoleQuery.xxx.mutationOptions(...))` or `useMutation(marketplaceQuery.xxx.mutationOptions(...))`; use oRPC clients as `mutationFn` only for custom flows.
- Put shared cache behavior in `createTanstackQueryUtils(...experimental_defaults...)`; components may add UI feedback callbacks, but should not own shared invalidation rules.
- Do not use deprecated `useInvalid` or `useReset`.
- Prefer `mutate(...)`; use `mutateAsync(...)` only when Promise semantics are required, and wrap awaited calls in `try/catch`.
## Component Boundaries
- Use the first level below a page or tab to organize independent page sections when it adds real structure. This layer is layout/semantic first, not automatically the data owner.
- Split deeper components by the data and state each layer actually needs. Each component should access only necessary data, and ownership should stay at the lowest consumer.
- Keep cohesive forms, menu bodies, and one-off helpers local unless they need their own state, reuse, or semantic boundary.
- Separate hidden secondary surfaces from the trigger's main flow. For dialogs, dropdowns, popovers, and similar branches, extract a small local component that owns the trigger, open state, and hidden content when it would obscure the parent flow.
- Preserve composability by separating behavior ownership from layout ownership. A dropdown action may own its trigger, open state, and menu content; the caller owns placement such as slots, offsets, and alignment.
- Avoid unnecessary DOM hierarchy. Do not add wrapper elements unless they provide layout, semantics, accessibility, state ownership, or integration with a library API; prefer fragments or styling an existing element when possible.
- Avoid shallow wrappers and prop renaming unless the wrapper adds validation, orchestration, error handling, state ownership, or a real semantic boundary.
## You Might Not Need An Effect
- Use Effects only to synchronize with external systems such as browser APIs, non-React widgets, subscriptions, timers, analytics that must run because the component was shown, or imperative DOM integration.
- Do not use Effects to transform props or state for rendering. Calculate derived values during render, and use `useMemo` only when the calculation is actually expensive.
- Do not use Effects to handle user actions. Put action-specific logic in the event handler where the cause is known.
- Do not use Effects to copy one state value into another state value representing the same concept. Pick one source of truth and derive the rest during render.
- Do not reset or adjust state from props with an Effect. Prefer a `key` reset, storing a stable ID and deriving the selected object, or guarded same-component render-time adjustment when truly necessary.
- Prefer framework data APIs or TanStack Query for data fetching instead of writing request Effects in components.
- If an Effect still seems necessary, first name the external system it synchronizes with. If there is no external system, remove the Effect and restructure the state or event flow.
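A minimal sketch of the derive-during-render rule (component and props are hypothetical):
```typescript
// No external system here, so no Effect: derive the visible rows during render
// instead of mirroring them into a second piece of state.
function FilteredList({ items, query }: { items: string[]; query: string }) {
  const visible = items.filter(item => item.includes(query))
  return (
    <ul>
      {visible.map(item => <li key={item}>{item}</li>)}
    </ul>
  )
}
```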
## Navigation And Performance
- Prefer `Link` for normal navigation. Use router APIs only for command-flow side effects such as mutation success, guarded redirects, or form submission.
- Avoid `memo`, `useMemo`, and `useCallback` unless there is a clear performance reason.

View File

@ -1,367 +0,0 @@
---
name: tailwind-css-rules
description: Tailwind CSS v4.1+ rules and best practices. Use when writing, reviewing, refactoring, or upgrading Tailwind CSS classes and styles, especially v4 utility migrations, layout spacing, typography, responsive variants, dark mode, gradients, CSS variables, and component styling.
---
# Tailwind CSS Rules and Best Practices
## Core Principles
- **Always use Tailwind CSS v4.1+** - Ensure the codebase is using the latest version
- **Do not use deprecated or removed utilities** - ALWAYS use the replacement
- **Never use `@apply`** - Use CSS variables, the `--spacing()` function, or framework components instead
- **Check for redundant classes** - Remove any classes that aren't necessary
- **Group elements logically** to simplify responsive tweaks later
## Upgrading to Tailwind CSS v4
### Before Upgrading
- **Always read the upgrade documentation first** - Read https://tailwindcss.com/docs/upgrade-guide and https://tailwindcss.com/blog/tailwindcss-v4 before starting an upgrade.
- Ensure the git repository is in a clean state before starting
### Upgrade Process
1. Run the upgrade command: `npx @tailwindcss/upgrade@latest` for both major and minor updates
2. The tool will convert JavaScript config files to the new CSS format
3. Review all changes extensively to clean up any false positives
4. Test thoroughly across your application
## Breaking Changes Reference
### Removed Utilities (NEVER use these in v4)
| ❌ Deprecated | ✅ Replacement |
| ----------------------- | ------------------------------------------------- |
| `bg-opacity-*` | Use opacity modifiers like `bg-black/50` |
| `text-opacity-*` | Use opacity modifiers like `text-black/50` |
| `border-opacity-*` | Use opacity modifiers like `border-black/50` |
| `divide-opacity-*` | Use opacity modifiers like `divide-black/50` |
| `ring-opacity-*` | Use opacity modifiers like `ring-black/50` |
| `placeholder-opacity-*` | Use opacity modifiers like `placeholder-black/50` |
| `flex-shrink-*` | `shrink-*` |
| `flex-grow-*` | `grow-*` |
| `overflow-ellipsis` | `text-ellipsis` |
| `decoration-slice` | `box-decoration-slice` |
| `decoration-clone` | `box-decoration-clone` |
### Renamed Utilities
Use the v4 name when migrating code that still carries Tailwind v3 semantics. Do not blanket-replace existing v4 classes: classes such as `rounded-sm`, `shadow-sm`, `ring-1`, and `ring-2` are valid in this codebase when they intentionally represent the current design scale.
| ❌ v3 pattern | ✅ v4 pattern |
| ------------------- | -------------------------------------------------- |
| `bg-gradient-*` | `bg-linear-*` |
| old shadow scale | verify against the current Tailwind/design scale |
| old blur scale | verify against the current Tailwind/design scale |
| old radius scale | use the Dify radius token mapping when applicable |
| `outline-none` | `outline-hidden` |
| bare `ring` utility | use an explicit ring width such as `ring-1`/`ring-2`/`ring-3` |
For Figma radius tokens, follow `packages/dify-ui/AGENTS.md`. For example, `--radius/xs` maps to `rounded-sm`; do not rewrite it to `rounded-xs`.
## Layout and Spacing Rules
### Flexbox and Grid Spacing
#### Always use gap utilities for internal spacing
Gap provides consistent spacing without edge cases (no extra space on last items). It's cleaner and more maintainable than margins on children.
```html
<!-- ❌ Don't do this -->
<div class="flex">
<div class="mr-4">Item 1</div>
<div class="mr-4">Item 2</div>
<div>Item 3</div>
<!-- No margin on last -->
</div>
<!-- ✅ Do this instead -->
<div class="flex gap-4">
<div>Item 1</div>
<div>Item 2</div>
<div>Item 3</div>
</div>
```
#### Gap vs Space utilities
- **Never use `space-x-*` or `space-y-*` in flex/grid layouts** - always use gap
- Space utilities add margins to children and have issues with wrapped items
- Gap works correctly with flex-wrap and all flex directions
```html
<!-- ❌ Avoid space utilities in flex containers -->
<div class="flex flex-wrap space-x-4">
<!-- Space utilities break with wrapped items -->
</div>
<!-- ✅ Use gap for consistent spacing -->
<div class="flex flex-wrap gap-4">
<!-- Gap works perfectly with wrapping -->
</div>
```
### General Spacing Guidelines
- **Prefer top and left margins** over bottom and right margins (unless conditionally rendered)
- **Use padding on parent containers** instead of bottom margins on the last child
- **Always use `min-h-dvh` instead of `min-h-screen`** - `min-h-screen` is buggy on mobile Safari
- **Prefer `size-*` utilities** over separate `w-*` and `h-*` when setting equal dimensions
- For max-widths, prefer the container scale (e.g., `max-w-2xs` over `max-w-72`)
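A small JSX sketch of these guidelines (the markup and class choices are illustrative):
```typescript
import type { ReactNode } from 'react'

// min-h-dvh for full height, size-* for equal dimensions,
// padding on the parent instead of a bottom margin on the last child.
export function PageShell({ children }: { children: ReactNode }) {
  return (
    <main className="min-h-dvh p-6">
      <img className="size-12" src="/logo.svg" alt="Logo" />
      {children}
    </main>
  )
}
```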
## Typography Rules
### Line Heights
- **Never use `leading-*` classes** - Always use line height modifiers with text size
- **Always use fixed line heights from the spacing scale** - Don't use named values
```html
<!-- ❌ Don't do this -->
<p class="text-base leading-7">Text with separate line height</p>
<p class="text-lg leading-relaxed">Text with named line height</p>
<!-- ✅ Do this instead -->
<p class="text-base/7">Text with line height modifier</p>
<p class="text-lg/8">Text with specific line height</p>
```
### Font Size Reference
Be precise with font sizes - know the actual pixel values:
- `text-xs` = 12px
- `text-sm` = 14px
- `text-base` = 16px
- `text-lg` = 18px
- `text-xl` = 20px
## Color and Opacity
### Opacity Modifiers
**Never use `bg-opacity-*`, `text-opacity-*`, etc.** - use the opacity modifier syntax:
```html
<!-- ❌ Don't do this -->
<div class="bg-red-500 bg-opacity-60">Old opacity syntax</div>
<!-- ✅ Do this instead -->
<div class="bg-red-500/60">Modern opacity syntax</div>
```
## Responsive Design
### Breakpoint Optimization
- **Check for redundant classes across breakpoints**
- **Only add breakpoint variants when values change**
```html
<!-- ❌ Redundant breakpoint classes -->
<div class="px-4 md:px-4 lg:px-4">
<!-- md:px-4 and lg:px-4 are redundant -->
</div>
<!-- ✅ Efficient breakpoint usage -->
<div class="px-4 lg:px-8">
<!-- Only specify when value changes -->
</div>
```
## Dark Mode
### Dark Mode Best Practices
- Use the plain `dark:` variant pattern
- Put light mode styles first, then dark mode styles
- Ensure `dark:` variant comes before other variants
```html
<!-- ✅ Correct dark mode pattern -->
<div class="bg-white text-black dark:bg-black dark:text-white">
<button class="hover:bg-gray-100 dark:hover:bg-gray-800">Click me</button>
</div>
```
## Gradient Utilities
- **ALWAYS Use `bg-linear-*` instead of `bg-gradient-*` utilities** - The gradient utilities were renamed in v4
- Use the new `bg-radial` or `bg-radial-[<position>]` to create radial gradients
- Use the new `bg-conic` or `bg-conic-*` to create conic gradients
```html
<!-- ✅ Use the new gradient utilities -->
<div class="h-14 bg-linear-to-br from-violet-500 to-fuchsia-500"></div>
<div
class="size-18 bg-radial-[at_50%_75%] from-sky-200 via-blue-400 to-indigo-900 to-90%"
></div>
<div
class="size-24 bg-conic-180 from-indigo-600 via-indigo-50 to-indigo-600"
></div>
<!-- ❌ Do not use bg-gradient-* utilities -->
<div class="h-14 bg-gradient-to-br from-violet-500 to-fuchsia-500"></div>
```
## Working with CSS Variables
### Accessing Theme Values
Tailwind CSS v4 exposes all theme values as CSS variables:
```css
/* Access colors, and other theme values */
.custom-element {
background: var(--color-red-500);
border-radius: var(--radius-lg);
}
```
### The `--spacing()` Function
Use the dedicated `--spacing()` function for spacing calculations:
```css
.custom-class {
margin-top: calc(100vh - --spacing(16));
}
```
### Extending theme values
Use CSS to extend theme values:
```css
@import "tailwindcss";
@theme {
--color-mint-500: oklch(0.72 0.11 178);
}
```
```html
<div class="bg-mint-500">
<!-- ... -->
</div>
```
## New v4 Features
### Container Queries
Use the `@container` class and size variants:
```html
<article class="@container">
<div class="flex flex-col @md:flex-row @lg:gap-8">
<img class="w-full @md:w-48" />
<div class="mt-4 @md:mt-0">
<!-- Content adapts to container size -->
</div>
</div>
</article>
```
### Container Query Units
Use container-based units like `cqw` for responsive sizing:
```html
<div class="@container">
<h1 class="text-[50cqw]">Responsive to container width</h1>
</div>
```
### Text Shadows (v4.1)
Use `text-shadow-*` utilities from `text-shadow-2xs` to `text-shadow-lg`:
```html
<!-- ✅ Text shadow examples -->
<h1 class="text-shadow-lg">Large shadow</h1>
<p class="text-shadow-sm/50">Small shadow with opacity</p>
```
### Masking (v4.1)
Use the new composable mask utilities for image and gradient masks:
```html
<!-- ✅ Linear gradient masks on specific sides -->
<div class="mask-t-from-50%">Top fade</div>
<div class="mask-b-from-20% mask-b-to-80%">Bottom gradient</div>
<div class="mask-linear-from-white mask-linear-to-black/60">
Fade from white to black
</div>
<!-- ✅ Radial gradient masks -->
<div class="mask-radial-[100%_100%] mask-radial-from-75% mask-radial-at-left">
Radial mask
</div>
```
## Component Patterns
### Avoiding Utility Inheritance
Don't add utilities to parents that you override in children:
```html
<!-- ❌ Avoid this pattern -->
<div class="text-center">
<h1>Centered Heading</h1>
<div class="text-left">Left-aligned content</div>
</div>
<!-- ✅ Better approach -->
<div>
<h1 class="text-center">Centered Heading</h1>
<div>Left-aligned content</div>
</div>
```
### Component Extraction
- Extract repeated patterns into framework components, not CSS classes
- Keep utility classes in templates/JSX
- Use data attributes for complex state-based styling
## CSS Best Practices
### Nesting Guidelines
- Use nesting when styling both parent and children
- Avoid empty parent selectors
```css
/* ✅ Good nesting - parent has styles */
.card {
padding: --spacing(4);
> .card-title {
font-weight: bold;
}
}
/* ❌ Avoid empty parents */
ul {
> li {
/* Parent has no styles */
}
}
```
## Common Pitfalls to Avoid
1. **Using old opacity utilities** - Always use `/opacity` syntax like `bg-red-500/60`
2. **Redundant breakpoint classes** - Only specify changes
3. **Space utilities in flex/grid** - Always use gap
4. **Leading utilities** - Use line-height modifiers like `text-sm/6`
5. **Arbitrary values** - Use Tailwind's predefined scale whenever possible (e.g., `ml-4` over `ml-[16px]`)
6. **@apply directive** - Use components or CSS variables
7. **min-h-screen on mobile** - Use min-h-dvh
8. **Separate width/height** - Use size utilities when equal

.github/CODEOWNERS vendored
View File

@ -6,9 +6,6 @@
* @crazywoola @laipz8200 @Yeuoly
# ESLint suppression file is maintained by autofix.ci pruning.
/eslint-suppressions.json
# CODEOWNERS file
/.github/CODEOWNERS @laipz8200 @crazywoola

View File

@ -4,7 +4,7 @@ runs:
using: composite
steps:
- name: Setup Vite+
uses: voidzero-dev/setup-vp@4f5aa3e38c781f1b01e78fb9255527cee8a6efa6 # v1.8.0
uses: voidzero-dev/setup-vp@20553a7a7429c429a74894104a2835d7fed28a72 # v1.3.0
with:
node-version-file: .nvmrc
cache: true

.github/labeler.yml vendored
View File

@ -6,4 +6,5 @@ web:
- 'package.json'
- 'pnpm-lock.yaml'
- 'pnpm-workspace.yaml'
- '.npmrc'
- '.nvmrc'

.github/workflows/anti-slop.yml vendored Normal file
View File

@ -0,0 +1,19 @@
name: Anti-Slop PR Check
on:
pull_request_target:
types: [opened, edited, synchronize]
permissions:
pull-requests: write
contents: read
jobs:
anti-slop:
runs-on: ubuntu-latest
steps:
- uses: peakoss/anti-slop@85daca1880e9e1af197fc06ea03349daf08f4202 # v0.2.1
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
close-pr: false
failure-add-pr-labels: "needs-revision"

View File

@ -16,7 +16,7 @@ concurrency:
jobs:
api-unit:
name: API Unit Tests
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
env:
COVERAGE_FILE: coverage-unit
defaults:
@ -35,7 +35,7 @@ jobs:
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
@ -62,7 +62,7 @@ jobs:
api-integration:
name: API Integration Tests
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
env:
COVERAGE_FILE: coverage-integration
STORAGE_TYPE: opendal
@ -84,7 +84,7 @@ jobs:
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
@ -99,13 +99,13 @@ jobs:
- name: Set up dotenvs
run: |
cp docker/.env.example docker/.env
cp docker/envs/middleware.env.example docker/middleware.env
cp docker/middleware.env.example docker/middleware.env
- name: Expose Service Ports
run: sh .github/workflows/expose_service_ports.sh
- name: Set up Sandbox
uses: hoverkraft-tech/compose-action@d2bee4f07e8ca410d6b196d00f90c12e7d48c33a # v2.6.0
uses: hoverkraft-tech/compose-action@4894d2492015c1774ee5a13a95b1072093087ec3 # v2.5.0
with:
compose-file: |
docker/docker-compose.middleware.yaml
@ -137,7 +137,7 @@ jobs:
api-coverage:
name: API Coverage
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
needs:
- api-unit
- api-integration
@ -156,7 +156,7 @@ jobs:
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: "3.12"

View File

@ -13,7 +13,7 @@ permissions:
jobs:
autofix:
if: github.repository == 'langgenius/dify'
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Complete merge group check
if: github.event_name == 'merge_group'
@ -25,7 +25,7 @@ jobs:
- name: Check Docker Compose inputs
if: github.event_name != 'merge_group'
id: docker-compose-changes
uses: tj-actions/changed-files@9426d40962ed5378910ee2e21d5f8c6fcbf2dd96 # v47.0.6
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
with:
files: |
docker/generate_docker_compose
@ -35,7 +35,7 @@ jobs:
- name: Check web inputs
if: github.event_name != 'merge_group'
id: web-changes
uses: tj-actions/changed-files@9426d40962ed5378910ee2e21d5f8c6fcbf2dd96 # v47.0.6
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
with:
files: |
web/**
@ -43,11 +43,12 @@ jobs:
package.json
pnpm-lock.yaml
pnpm-workspace.yaml
.npmrc
.nvmrc
- name: Check api inputs
if: github.event_name != 'merge_group'
id: api-changes
uses: tj-actions/changed-files@9426d40962ed5378910ee2e21d5f8c6fcbf2dd96 # v47.0.6
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
with:
files: |
api/**
@ -57,7 +58,7 @@ jobs:
python-version: "3.11"
- if: github.event_name != 'merge_group'
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
- name: Generate Docker Compose
if: github.event_name != 'merge_group' && steps.docker-compose-changes.outputs.any_changed == 'true'
@ -113,19 +114,13 @@ jobs:
find . -name "*.py.bak" -type f -delete
- name: Setup web environment
if: github.event_name != 'merge_group'
if: github.event_name != 'merge_group' && steps.web-changes.outputs.any_changed == 'true'
uses: ./.github/actions/setup-web
- name: Generate API docs
if: github.event_name != 'merge_group' && steps.api-changes.outputs.any_changed == 'true'
run: |
cd api
uv run dev/generate_swagger_markdown_docs.py --swagger-dir openapi --markdown-dir openapi/markdown
- name: ESLint autofix
if: github.event_name != 'merge_group' && steps.web-changes.outputs.any_changed == 'true'
run: |
vp exec eslint --concurrency=2 --prune-suppressions --quiet || true
- if: github.event_name != 'merge_group'
uses: autofix-ci/action@c5b2d67aa2274e7b5a18224e8171550871fc7e4a # v1.3.4
uses: autofix-ci/action@7a166d7532b277f34e16238930461bf77f9d7ed8 # v1.3.3

View File

@ -26,9 +26,6 @@ jobs:
build:
runs-on: ${{ matrix.runs_on }}
if: github.repository == 'langgenius/dify'
permissions:
contents: read
id-token: write
strategy:
matrix:
include:
@ -38,28 +35,28 @@ jobs:
build_context: "{{defaultContext}}:api"
file: "Dockerfile"
platform: linux/amd64
runs_on: depot-ubuntu-24.04-4
runs_on: ubuntu-latest
- service_name: "build-api-arm64"
image_name_env: "DIFY_API_IMAGE_NAME"
artifact_context: "api"
build_context: "{{defaultContext}}:api"
file: "Dockerfile"
platform: linux/arm64
runs_on: depot-ubuntu-24.04-4
runs_on: ubuntu-24.04-arm
- service_name: "build-web-amd64"
image_name_env: "DIFY_WEB_IMAGE_NAME"
artifact_context: "web"
build_context: "{{defaultContext}}"
file: "web/Dockerfile"
platform: linux/amd64
runs_on: depot-ubuntu-24.04-4
runs_on: ubuntu-latest
- service_name: "build-web-arm64"
image_name_env: "DIFY_WEB_IMAGE_NAME"
artifact_context: "web"
build_context: "{{defaultContext}}"
file: "web/Dockerfile"
platform: linux/arm64
runs_on: depot-ubuntu-24.04-4
runs_on: ubuntu-24.04-arm
steps:
- name: Prepare
@ -73,8 +70,8 @@ jobs:
username: ${{ env.DOCKERHUB_USER }}
password: ${{ env.DOCKERHUB_TOKEN }}
- name: Set up Depot CLI
uses: depot/setup-action@15c09a5f77a0840ad4bce955686522a257853461 # v1.7.1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Extract metadata for Docker
id: meta
@ -84,15 +81,16 @@ jobs:
- name: Build Docker image
id: build
uses: depot/build-push-action@5f3b3c2e5a00f0093de47f657aeaefcedff27d18 # v1.17.0
uses: docker/build-push-action@bcafcacb16a39f128d818304e6c9c0c18556b85f # v7.1.0
with:
project: ${{ vars.DEPOT_PROJECT_ID }}
context: ${{ matrix.build_context }}
file: ${{ matrix.file }}
platforms: ${{ matrix.platform }}
build-args: COMMIT_SHA=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}
labels: ${{ steps.meta.outputs.labels }}
outputs: type=image,name=${{ env[matrix.image_name_env] }},push-by-digest=true,name-canonical=true,push=true
cache-from: type=gha,scope=${{ matrix.service_name }}
cache-to: type=gha,mode=max,scope=${{ matrix.service_name }}
- name: Export digest
env:
@ -110,33 +108,9 @@ jobs:
if-no-files-found: error
retention-days: 1
fork-build-validate:
if: github.repository != 'langgenius/dify'
runs-on: ubuntu-24.04
strategy:
matrix:
include:
- service_name: "validate-api-amd64"
build_context: "{{defaultContext}}:api"
file: "Dockerfile"
- service_name: "validate-web-amd64"
build_context: "{{defaultContext}}"
file: "web/Dockerfile"
steps:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@4d04d5d9486b7bd6fa91e7baf45bbb4f8b9deedd # v4.0.0
- name: Validate Docker image
uses: docker/build-push-action@bcafcacb16a39f128d818304e6c9c0c18556b85f # v7.1.0
with:
push: false
context: ${{ matrix.build_context }}
file: ${{ matrix.file }}
platforms: linux/amd64
create-manifest:
needs: build
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
if: github.repository == 'langgenius/dify'
strategy:
matrix:

View File

@ -9,7 +9,7 @@ concurrency:
jobs:
db-migration-test-postgres:
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Checkout code
@ -19,7 +19,7 @@ jobs:
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: "3.12"
@ -37,10 +37,10 @@ jobs:
- name: Prepare middleware env
run: |
cd docker
cp envs/middleware.env.example middleware.env
cp middleware.env.example middleware.env
- name: Set up Middlewares
uses: hoverkraft-tech/compose-action@d2bee4f07e8ca410d6b196d00f90c12e7d48c33a # v2.6.0
uses: hoverkraft-tech/compose-action@4894d2492015c1774ee5a13a95b1072093087ec3 # v2.5.0
with:
compose-file: |
docker/docker-compose.middleware.yaml
@ -59,7 +59,7 @@ jobs:
run: uv run --directory api flask upgrade-db
db-migration-test-mysql:
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Checkout code
@ -69,7 +69,7 @@ jobs:
persist-credentials: false
- name: Setup UV and Python
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: "3.12"
@ -87,14 +87,14 @@ jobs:
- name: Prepare middleware env for MySQL
run: |
cd docker
cp envs/middleware.env.example middleware.env
cp middleware.env.example middleware.env
sed -i 's/DB_TYPE=postgresql/DB_TYPE=mysql/' middleware.env
sed -i 's/DB_HOST=db_postgres/DB_HOST=db_mysql/' middleware.env
sed -i 's/DB_PORT=5432/DB_PORT=3306/' middleware.env
sed -i 's/DB_USERNAME=postgres/DB_USERNAME=mysql/' middleware.env
- name: Set up Middlewares
uses: hoverkraft-tech/compose-action@d2bee4f07e8ca410d6b196d00f90c12e7d48c33a # v2.6.0
uses: hoverkraft-tech/compose-action@4894d2492015c1774ee5a13a95b1072093087ec3 # v2.5.0
with:
compose-file: |
docker/docker-compose.middleware.yaml
@ -110,28 +110,6 @@ jobs:
sed -i 's/DB_PORT=5432/DB_PORT=3306/' .env
sed -i 's/DB_USERNAME=postgres/DB_USERNAME=root/' .env
# hoverkraft-tech/compose-action@v2.6.0 only waits for `docker compose up -d`
# to return (container processes started); it does not wait on healthcheck
# status. mysql:8.0's first-time init takes 15-30s, so without an explicit
# wait the migration runs while InnoDB is still initialising and gets
# killed with "Lost connection during query". Poll a real SELECT until it
# succeeds.
- name: Wait for MySQL to accept queries
run: |
set +e
for i in $(seq 1 60); do
if docker run --rm --network host mysql:8.0 \
mysql -h 127.0.0.1 -P 3306 -uroot -pdifyai123456 \
-e 'SELECT 1' >/dev/null 2>&1; then
echo "MySQL ready after ${i}s"
exit 0
fi
sleep 1
done
echo "MySQL not ready after 60s; dumping container logs:"
docker compose -f docker/docker-compose.middleware.yaml --profile mysql logs --tail=200 db_mysql
exit 1
- name: Run DB Migration
env:
DEBUG: true

View File

@ -13,7 +13,7 @@ on:
jobs:
deploy:
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'deploy/agent-dev'

View File

@ -10,7 +10,7 @@ on:
jobs:
deploy:
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'deploy/dev'

View File

@ -13,7 +13,7 @@ on:
jobs:
deploy:
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'deploy/enterprise'

View File

@ -10,7 +10,7 @@ on:
jobs:
deploy:
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
if: |
github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'build/feat/hitl'

View File

@ -14,59 +14,28 @@ concurrency:
jobs:
build-docker:
if: github.event.pull_request.head.repo.full_name == github.repository
runs-on: ${{ matrix.runs_on }}
permissions:
contents: read
id-token: write
strategy:
matrix:
include:
- service_name: "api-amd64"
platform: linux/amd64
runs_on: depot-ubuntu-24.04-4
runs_on: ubuntu-latest
context: "{{defaultContext}}:api"
file: "Dockerfile"
- service_name: "api-arm64"
platform: linux/arm64
runs_on: depot-ubuntu-24.04-4
runs_on: ubuntu-24.04-arm
context: "{{defaultContext}}:api"
file: "Dockerfile"
- service_name: "web-amd64"
platform: linux/amd64
runs_on: depot-ubuntu-24.04-4
runs_on: ubuntu-latest
context: "{{defaultContext}}"
file: "web/Dockerfile"
- service_name: "web-arm64"
platform: linux/arm64
runs_on: depot-ubuntu-24.04-4
context: "{{defaultContext}}"
file: "web/Dockerfile"
steps:
- name: Set up Depot CLI
uses: depot/setup-action@15c09a5f77a0840ad4bce955686522a257853461 # v1.7.1
- name: Build Docker Image
uses: depot/build-push-action@5f3b3c2e5a00f0093de47f657aeaefcedff27d18 # v1.17.0
with:
project: ${{ vars.DEPOT_PROJECT_ID }}
push: false
context: ${{ matrix.context }}
file: ${{ matrix.file }}
platforms: ${{ matrix.platform }}
build-docker-fork:
if: github.event.pull_request.head.repo.full_name != github.repository
runs-on: ubuntu-24.04
permissions:
contents: read
strategy:
matrix:
include:
- service_name: "api-amd64"
context: "{{defaultContext}}:api"
file: "Dockerfile"
- service_name: "web-amd64"
runs_on: ubuntu-24.04-arm
context: "{{defaultContext}}"
file: "web/Dockerfile"
steps:
@ -79,4 +48,6 @@ jobs:
push: false
context: ${{ matrix.context }}
file: ${{ matrix.file }}
platforms: linux/amd64
platforms: ${{ matrix.platform }}
cache-from: type=gha
cache-to: type=gha,mode=max

View File

@ -7,8 +7,8 @@ jobs:
permissions:
contents: read
pull-requests: write
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- uses: actions/labeler@f27b608878404679385c85cfa523b85ccb86e213 # v6.1.0
- uses: actions/labeler@634933edcd8ababfe52f92936142cc22ac488b1b # v6.0.1
with:
sync-labels: true

View File

@ -23,7 +23,7 @@ concurrency:
jobs:
pre_job:
name: Skip Duplicate Checks
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip || 'false' }}
steps:
@ -39,7 +39,7 @@ jobs:
name: Check Changed Files
needs: pre_job
if: needs.pre_job.outputs.should_skip != 'true'
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
outputs:
api-changed: ${{ steps.changes.outputs.api }}
e2e-changed: ${{ steps.changes.outputs.e2e }}
@ -57,7 +57,7 @@ jobs:
- '.github/workflows/api-tests.yml'
- '.github/workflows/expose_service_ports.sh'
- 'docker/.env.example'
- 'docker/envs/middleware.env.example'
- 'docker/middleware.env.example'
- 'docker/docker-compose.middleware.yaml'
- 'docker/docker-compose-template.yaml'
- 'docker/generate_docker_compose'
@ -69,6 +69,7 @@ jobs:
- 'package.json'
- 'pnpm-lock.yaml'
- 'pnpm-workspace.yaml'
- '.npmrc'
- '.nvmrc'
- '.github/workflows/web-tests.yml'
- '.github/actions/setup-web/**'
@ -82,9 +83,10 @@ jobs:
- 'package.json'
- 'pnpm-lock.yaml'
- 'pnpm-workspace.yaml'
- '.npmrc'
- '.nvmrc'
- 'docker/docker-compose.middleware.yaml'
- 'docker/envs/middleware.env.example'
- 'docker/middleware.env.example'
- '.github/workflows/web-e2e.yml'
- '.github/actions/setup-web/**'
vdb:
@ -94,7 +96,7 @@ jobs:
- '.github/workflows/vdb-tests.yml'
- '.github/workflows/expose_service_ports.sh'
- 'docker/.env.example'
- 'docker/envs/middleware.env.example'
- 'docker/middleware.env.example'
- 'docker/docker-compose.yaml'
- 'docker/docker-compose-template.yaml'
- 'docker/generate_docker_compose'
@ -116,7 +118,7 @@ jobs:
- '.github/workflows/db-migration-test.yml'
- '.github/workflows/expose_service_ports.sh'
- 'docker/.env.example'
- 'docker/envs/middleware.env.example'
- 'docker/middleware.env.example'
- 'docker/docker-compose.middleware.yaml'
- 'docker/docker-compose-template.yaml'
- 'docker/generate_docker_compose'
@ -139,7 +141,7 @@ jobs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.api-changed != 'true'
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Report skipped API tests
run: echo "No API-related changes detected; skipping API tests."
@ -152,7 +154,7 @@ jobs:
- check-changes
- api-tests-run
- api-tests-skip
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Finalize API Tests status
env:
@ -199,7 +201,7 @@ jobs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.web-changed != 'true'
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Report skipped web tests
run: echo "No web-related changes detected; skipping web tests."
@ -212,7 +214,7 @@ jobs:
- check-changes
- web-tests-run
- web-tests-skip
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Finalize Web Tests status
env:
@ -258,7 +260,7 @@ jobs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.e2e-changed != 'true'
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Report skipped web full-stack e2e
run: echo "No E2E-related changes detected; skipping web full-stack E2E."
@ -271,7 +273,7 @@ jobs:
- check-changes
- web-e2e-run
- web-e2e-skip
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Finalize Web Full-Stack E2E status
env:
@ -323,7 +325,7 @@ jobs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.vdb-changed != 'true'
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Report skipped VDB tests
run: echo "No VDB-related changes detected; skipping VDB tests."
@ -336,7 +338,7 @@ jobs:
- check-changes
- vdb-tests-run
- vdb-tests-skip
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Finalize VDB Tests status
env:
@ -382,7 +384,7 @@ jobs:
- pre_job
- check-changes
if: needs.pre_job.outputs.should_skip != 'true' && needs.check-changes.outputs.migration-changed != 'true'
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Report skipped DB migration tests
run: echo "No migration-related changes detected; skipping DB migration tests."
@ -395,7 +397,7 @@ jobs:
- check-changes
- db-migration-test-run
- db-migration-test-skip
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Finalize DB Migration Test status
env:

View File

@ -12,7 +12,7 @@ permissions: {}
jobs:
comment:
name: Comment PR with pyrefly diff
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
@ -77,28 +77,10 @@ jobs:
}
if (diff.trim()) {
const body = '### Pyrefly Diff\n<details>\n<summary>base → PR</summary>\n\n```diff\n' + diff + '\n```\n</details>';
const marker = '### Pyrefly Diff';
const { data: comments } = await github.rest.issues.listComments({
await github.rest.issues.createComment({
issue_number: prNumber,
owner: context.repo.owner,
repo: context.repo.repo,
body: '### Pyrefly Diff\n<details>\n<summary>base → PR</summary>\n\n```diff\n' + diff + '\n```\n</details>',
});
const existing = comments.find((comment) => comment.body.startsWith(marker));
if (existing) {
await github.rest.issues.updateComment({
comment_id: existing.id,
owner: context.repo.owner,
repo: context.repo.repo,
body,
});
} else {
await github.rest.issues.createComment({
issue_number: prNumber,
owner: context.repo.owner,
repo: context.repo.repo,
body,
});
}
}

View File

@ -10,7 +10,7 @@ permissions:
jobs:
pyrefly-diff:
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
permissions:
contents: read
issues: write
@ -22,7 +22,7 @@ jobs:
fetch-depth: 0
- name: Setup Python & UV
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
@ -103,26 +103,9 @@ jobs:
].join('\n')
: '### Pyrefly Diff\nNo changes detected.';
const marker = '### Pyrefly Diff';
const { data: comments } = await github.rest.issues.listComments({
await github.rest.issues.createComment({
issue_number: prNumber,
owner: context.repo.owner,
repo: context.repo.repo,
body,
});
const existing = comments.find((comment) => comment.body.startsWith(marker));
if (existing) {
await github.rest.issues.updateComment({
comment_id: existing.id,
owner: context.repo.owner,
repo: context.repo.repo,
body,
});
} else {
await github.rest.issues.createComment({
issue_number: prNumber,
owner: context.repo.owner,
repo: context.repo.repo,
body,
});
}

View File

@ -12,7 +12,7 @@ permissions: {}
jobs:
comment:
name: Comment PR with type coverage
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
@ -24,7 +24,7 @@ jobs:
uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
- name: Setup Python & UV
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true

View File

@ -10,7 +10,7 @@ permissions:
jobs:
pyrefly-type-coverage:
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
permissions:
contents: read
issues: write
@ -22,7 +22,7 @@ jobs:
fetch-depth: 0
- name: Setup Python & UV
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true

View File

@ -16,7 +16,7 @@ jobs:
name: Validate PR title
permissions:
pull-requests: read
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Complete merge group check
if: github.event_name == 'merge_group'

View File

@ -12,7 +12,7 @@ on:
jobs:
stale:
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write

View File

@ -15,7 +15,7 @@ permissions:
jobs:
python-style:
name: Python Style
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Checkout code
@ -25,7 +25,7 @@ jobs:
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@9426d40962ed5378910ee2e21d5f8c6fcbf2dd96 # v47.0.6
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
with:
files: |
api/**
@ -33,7 +33,7 @@ jobs:
- name: Setup UV and Python
if: steps.changed-files.outputs.any_changed == 'true'
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: false
python-version: "3.12"
@ -57,7 +57,7 @@ jobs:
web-style:
name: Web Style
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
defaults:
run:
working-directory: ./web
@ -73,7 +73,7 @@ jobs:
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@9426d40962ed5378910ee2e21d5f8c6fcbf2dd96 # v47.0.6
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
with:
files: |
web/**
@ -83,6 +83,7 @@ jobs:
package.json
pnpm-lock.yaml
pnpm-workspace.yaml
.npmrc
.nvmrc
.github/workflows/style.yml
.github/actions/setup-web/**
@ -94,7 +95,7 @@ jobs:
- name: Restore ESLint cache
if: steps.changed-files.outputs.any_changed == 'true'
id: eslint-cache-restore
uses: actions/cache/restore@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: .eslintcache
key: ${{ runner.os }}-eslint-${{ hashFiles('pnpm-lock.yaml', 'eslint.config.mjs', 'web/eslint.config.mjs', 'web/eslint.constants.mjs', 'web/plugins/eslint/**') }}-${{ github.sha }}
@ -109,8 +110,6 @@ jobs:
- name: Web tsslint
if: steps.changed-files.outputs.any_changed == 'true'
working-directory: ./web
env:
NODE_OPTIONS: --max-old-space-size=4096
run: vp run lint:tss
- name: Web type check
@ -125,14 +124,14 @@ jobs:
- name: Save ESLint cache
if: steps.changed-files.outputs.any_changed == 'true' && success() && steps.eslint-cache-restore.outputs.cache-hit != 'true'
uses: actions/cache/save@27d5ce7f107fe9357f9df03efb73ab90386fccae # v5.0.5
uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4
with:
path: .eslintcache
key: ${{ steps.eslint-cache-restore.outputs.cache-primary-key }}
superlinter:
name: SuperLinter
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
steps:
- name: Checkout code
@ -143,7 +142,7 @@ jobs:
- name: Check changed files
id: changed-files
uses: tj-actions/changed-files@9426d40962ed5378910ee2e21d5f8c6fcbf2dd96 # v47.0.6
uses: tj-actions/changed-files@22103cc46bda19c2b464ffe86db46df6922fd323 # v47.0.5
with:
files: |
**.sh

View File

@ -9,6 +9,7 @@ on:
- package.json
- pnpm-lock.yaml
- pnpm-workspace.yaml
- .npmrc
concurrency:
group: sdk-tests-${{ github.head_ref || github.run_id }}
@ -17,7 +18,7 @@ concurrency:
jobs:
build:
name: unit test for Node.js SDK
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
defaults:
run:
@ -29,7 +30,7 @@ jobs:
persist-credentials: false
- name: Use Node.js
uses: actions/setup-node@48b55a011bda9f5d6aeb4c2d9c7362e8dae4041e # v6.4.0
uses: actions/setup-node@53b83947a5a98c8d113130e565377fae1a50d02f # v6.3.0
with:
node-version: 22
cache: ''

View File

@ -35,7 +35,7 @@ concurrency:
jobs:
translate:
if: github.repository == 'langgenius/dify'
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
timeout-minutes: 120
steps:
@ -158,7 +158,7 @@ jobs:
- name: Run Claude Code for Translation Sync
if: steps.context.outputs.CHANGED_FILES != ''
uses: anthropics/claude-code-action@476e359e6203e73dad705c8b322e333fabbd7416 # v1.0.119
uses: anthropics/claude-code-action@b47fd721da662d48c5680e154ad16a73ed74d2e0 # v1.0.93
with:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
github_token: ${{ secrets.GITHUB_TOKEN }}

View File

@ -16,7 +16,7 @@ concurrency:
jobs:
trigger:
if: github.repository == 'langgenius/dify'
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
timeout-minutes: 5
steps:

View File

@ -16,7 +16,7 @@ jobs:
test:
name: Full VDB Tests
if: github.repository == 'langgenius/dify'
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
@ -36,7 +36,7 @@ jobs:
remove_tool_cache: true
- name: Setup UV and Python
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
@ -51,7 +51,7 @@ jobs:
- name: Set up dotenvs
run: |
cp docker/.env.example docker/.env
cp docker/envs/middleware.env.example docker/middleware.env
cp docker/middleware.env.example docker/middleware.env
- name: Expose Service Ports
run: sh .github/workflows/expose_service_ports.sh
@ -65,7 +65,7 @@ jobs:
# tiflash
- name: Set up Full Vector Store Matrix
uses: hoverkraft-tech/compose-action@d2bee4f07e8ca410d6b196d00f90c12e7d48c33a # v2.6.0
uses: hoverkraft-tech/compose-action@4894d2492015c1774ee5a13a95b1072093087ec3 # v2.5.0
with:
compose-file: |
docker/docker-compose.yaml

View File

@ -13,7 +13,7 @@ concurrency:
jobs:
test:
name: VDB Smoke Tests
runs-on: depot-ubuntu-24.04
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
@ -33,7 +33,7 @@ jobs:
remove_tool_cache: true
- name: Setup UV and Python
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: ${{ matrix.python-version }}
@ -48,7 +48,7 @@ jobs:
- name: Set up dotenvs
run: |
cp docker/.env.example docker/.env
cp docker/envs/middleware.env.example docker/middleware.env
cp docker/middleware.env.example docker/middleware.env
- name: Expose Service Ports
run: sh .github/workflows/expose_service_ports.sh
@ -62,7 +62,7 @@ jobs:
# tiflash
- name: Set up Vector Stores for Smoke Coverage
uses: hoverkraft-tech/compose-action@d2bee4f07e8ca410d6b196d00f90c12e7d48c33a # v2.6.0
uses: hoverkraft-tech/compose-action@4894d2492015c1774ee5a13a95b1072093087ec3 # v2.5.0
with:
compose-file: |
docker/docker-compose.yaml

View File

@ -13,7 +13,7 @@ concurrency:
jobs:
test:
name: Web Full-Stack E2E
runs-on: depot-ubuntu-24.04-4
runs-on: ubuntu-latest
defaults:
run:
shell: bash
@ -28,7 +28,7 @@ jobs:
uses: ./.github/actions/setup-web
- name: Setup UV and Python
uses: astral-sh/setup-uv@08807647e7069bb48b6ef5acd8ec9567f424441b # v8.1.0
uses: astral-sh/setup-uv@cec208311dfd045dd5311c1add060b2062131d57 # v8.0.0
with:
enable-cache: true
python-version: "3.12"

View File

@ -16,7 +16,7 @@ concurrency:
jobs:
test:
name: Web Tests (${{ matrix.shardIndex }}/${{ matrix.shardTotal }})
runs-on: depot-ubuntu-24.04-4
runs-on: ubuntu-latest
env:
VITEST_COVERAGE_SCOPE: app-components
strategy:
@ -54,7 +54,7 @@ jobs:
name: Merge Test Reports
if: ${{ !cancelled() }}
needs: [test]
runs-on: depot-ubuntu-24.04-4
runs-on: ubuntu-latest
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
defaults:
@ -92,7 +92,7 @@ jobs:
dify-ui-test:
name: dify-ui Tests
runs-on: depot-ubuntu-24.04-4
runs-on: ubuntu-latest
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
defaults:

.gitignore (vendored, 7 changes)
View File

@ -219,9 +219,6 @@ node_modules
# plugin migrate
plugins.jsonl
# generated API OpenAPI specs
packages/contracts/openapi/
# mise
mise.toml
@ -240,10 +237,6 @@ scripts/stress-test/reports/
.playwright-mcp/
.serena/
# vitest browser mode attachments (failure screenshots, traces, etc.)
.vitest-attachments/
**/__screenshots__/
# settings
*.local.json
*.local.md

.npmrc (new file, 1 change)
View File

@ -0,0 +1 @@
save-exact=true

View File

@ -30,7 +30,7 @@ The codebase is split into:
## Language Style
- **Python**: Keep type hints on functions and attributes, and implement relevant special methods (e.g., `__repr__`, `__str__`). Prefer `TypedDict` over `dict` or `Mapping` for type safety and better code documentation (see the sketch after this list).
- **TypeScript**: Use the strict config, rely on ESLint (`pnpm lint:fix` preferred) plus `pnpm type-check`, and avoid `any` types.
- **TypeScript**: Use the strict config, rely on ESLint (`pnpm lint:fix` preferred) plus `pnpm type-check:tsgo`, and avoid `any` types.
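A minimal sketch of that `TypedDict` preference, with a hypothetical payload shape (not from the codebase):

```python
from typing import NotRequired, TypedDict


class AppSummary(TypedDict):
    """Keys and value types are checked statically, unlike a bare dict."""

    id: str
    name: str
    description: NotRequired[str]  # optional key, still typed when present


def render(app: AppSummary) -> str:
    # A dict[str, Any] would let typos like app["nmae"] through; this won't.
    return f"{app['id']}: {app['name']}"


print(render({"id": "a1", "name": "Website Generator"}))
```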
## General Practices

View File

@ -3,10 +3,6 @@ DOCKER_REGISTRY=langgenius
WEB_IMAGE=$(DOCKER_REGISTRY)/dify-web
API_IMAGE=$(DOCKER_REGISTRY)/dify-api
VERSION=latest
DOCKER_DIR=docker
DOCKER_MIDDLEWARE_ENV=$(DOCKER_DIR)/middleware.env
DOCKER_MIDDLEWARE_ENV_EXAMPLE=$(DOCKER_DIR)/envs/middleware.env.example
DOCKER_MIDDLEWARE_PROJECT=dify-middlewares-dev
# Default target - show help
.DEFAULT_GOAL := help
@ -21,13 +17,8 @@ dev-setup: prepare-docker prepare-web prepare-api
# Step 1: Prepare Docker middleware
prepare-docker:
@echo "🐳 Setting up Docker middleware..."
@if [ ! -f "$(DOCKER_MIDDLEWARE_ENV)" ]; then \
cp "$(DOCKER_MIDDLEWARE_ENV_EXAMPLE)" "$(DOCKER_MIDDLEWARE_ENV)"; \
echo "Docker middleware.env created"; \
else \
echo "Docker middleware.env already exists"; \
fi
@cd $(DOCKER_DIR) && docker compose -f docker-compose.middleware.yaml --env-file middleware.env -p $(DOCKER_MIDDLEWARE_PROJECT) up -d
@cp -n docker/middleware.env.example docker/middleware.env 2>/dev/null || echo "Docker middleware.env already exists"
@cd docker && docker compose -f docker-compose.middleware.yaml --env-file middleware.env -p dify-middlewares-dev up -d
@echo "✅ Docker middleware started"
# Step 2: Prepare web environment
@ -48,18 +39,12 @@ prepare-api:
# Clean dev environment
dev-clean:
@echo "⚠️ Stopping Docker containers..."
@if [ -f "$(DOCKER_MIDDLEWARE_ENV)" ]; then \
cd $(DOCKER_DIR) && docker compose -f docker-compose.middleware.yaml --env-file middleware.env -p $(DOCKER_MIDDLEWARE_PROJECT) down; \
else \
echo "Docker middleware.env does not exist, skipping compose down"; \
fi
@cd docker && docker compose -f docker-compose.middleware.yaml --env-file middleware.env -p dify-middlewares-dev down
@echo "🗑️ Removing volumes..."
@rm -rf docker/volumes/db
@rm -rf docker/volumes/mysql
@rm -rf docker/volumes/redis
@rm -rf docker/volumes/plugin_daemon
@rm -rf docker/volumes/weaviate
@rm -rf docker/volumes/sandbox/dependencies
@rm -rf api/storage
@echo "✅ Cleanup complete"
@ -86,13 +71,13 @@ type-check:
@echo "📝 Running type checks (basedpyright + pyrefly + mypy)..."
@./dev/basedpyright-check $(PATH_TO_CHECK)
@./dev/pyrefly-check-local
@uv --directory api run mypy --exclude-gitignore --exclude 'tests/' --exclude 'migrations/' --exclude 'dev/generate_swagger_specs.py' --check-untyped-defs --disable-error-code=import-untyped .
@uv --directory api run mypy --exclude-gitignore --exclude 'tests/' --exclude 'migrations/' --check-untyped-defs --disable-error-code=import-untyped .
@echo "✅ Type checks complete"
type-check-core:
@echo "📝 Running core type checks (basedpyright + mypy)..."
@./dev/basedpyright-check $(PATH_TO_CHECK)
@uv --directory api run mypy --exclude-gitignore --exclude 'tests/' --exclude 'migrations/' --exclude 'dev/generate_swagger_specs.py' --exclude 'dev/generate_fastopenapi_specs.py' --check-untyped-defs --disable-error-code=import-untyped .
@uv --directory api run mypy --exclude-gitignore --exclude 'tests/' --exclude 'migrations/' --check-untyped-defs --disable-error-code=import-untyped .
@echo "✅ Core type checks complete"
test:
@ -147,7 +132,7 @@ help:
@echo " make prepare-docker - Set up Docker middleware"
@echo " make prepare-web - Set up web environment"
@echo " make prepare-api - Set up API environment"
@echo " make dev-clean - Stop Docker middleware containers and remove dev data"
@echo " make dev-clean - Stop Docker middleware containers"
@echo ""
@echo "Backend Code Quality:"
@echo " make format - Format code with ruff"

View File

@ -137,7 +137,20 @@ Star Dify on GitHub and be instantly notified of new releases.
### Custom configurations
If you need to customize the configuration, edit `docker/.env`. The essential startup defaults live in [`docker/.env.example`](docker/.env.example), and optional advanced variables are split under `docker/envs/` by theme. After making any changes, re-run `docker compose up -d` from the `docker` directory. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
If you need to customize the configuration, please refer to the comments in our [.env.example](docker/.env.example) file and update the corresponding values in your `.env` file. Additionally, you might need to make adjustments to the `docker-compose.yaml` file itself, such as changing image versions, port mappings, or volume mounts, based on your specific deployment environment and requirements. After making any changes, please re-run `docker compose up -d`. You can find the full list of available environment variables [here](https://docs.dify.ai/getting-started/install-self-hosted/environments).
#### Customizing Suggested Questions
You can now customize the "Suggested Questions After Answer" feature to better fit your use case. For example, to generate longer, more technical questions:
```bash
# In your .env file
SUGGESTED_QUESTIONS_PROMPT='Please help me predict the five most likely technical follow-up questions a developer would ask. Focus on implementation details, best practices, and architecture considerations. Keep each question between 40-60 characters. Output must be JSON array: ["question1","question2","question3","question4","question5"]'
SUGGESTED_QUESTIONS_MAX_TOKENS=512
SUGGESTED_QUESTIONS_TEMPERATURE=0.3
```
See the [Suggested Questions Configuration Guide](docs/suggested-questions-configuration.md) for detailed examples and usage instructions.
### Metrics Monitoring with Grafana
@ -147,7 +160,7 @@ Import the dashboard to Grafana, using Dify's PostgreSQL database as data source
### Deployment with Kubernetes
If you'd like to configure a highly available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files which allow Dify to be deployed on Kubernetes.
If you'd like to configure a highly-available setup, there are community-contributed [Helm Charts](https://helm.sh/) and YAML files which allow Dify to be deployed on Kubernetes.
- [Helm Chart by @LeoQuote](https://github.com/douban/charts/tree/master/charts/dify)
- [Helm Chart by @BorisPolonsky](https://github.com/BorisPolonsky/dify-helm)

View File

@ -34,7 +34,7 @@ TRIGGER_URL=http://localhost:5001
FILES_ACCESS_TIMEOUT=300
# Collaboration mode toggle
ENABLE_COLLABORATION_MODE=true
ENABLE_COLLABORATION_MODE=false
# Access token expiration time in minutes
ACCESS_TOKEN_EXPIRE_MINUTES=60
@ -88,10 +88,6 @@ REDIS_HEALTH_CHECK_INTERVAL=30
CELERY_BROKER_URL=redis://:difyai123456@localhost:${REDIS_PORT}/1
CELERY_BACKEND=redis
# Ops trace retry configuration
OPS_TRACE_RETRYABLE_DISPATCH_MAX_RETRIES=60
OPS_TRACE_RETRYABLE_DISPATCH_DELAY_SECONDS=5
# Database configuration
DB_TYPE=postgresql
DB_USERNAME=postgres
@ -102,8 +98,6 @@ DB_DATABASE=dify
SQLALCHEMY_POOL_PRE_PING=true
SQLALCHEMY_POOL_TIMEOUT=30
# Connection pool reset behavior on return
SQLALCHEMY_POOL_RESET_ON_RETURN=rollback
# Storage configuration
# use for store upload files, private keys...
@ -387,7 +381,7 @@ VIKINGDB_ACCESS_KEY=your-ak
VIKINGDB_SECRET_KEY=your-sk
VIKINGDB_REGION=cn-shanghai
VIKINGDB_HOST=api-vikingdb.xxx.volces.com
VIKINGDB_SCHEME=http
VIKINGDB_SCHEMA=http
VIKINGDB_CONNECTION_TIMEOUT=30
VIKINGDB_SOCKET_TIMEOUT=30
@ -438,6 +432,8 @@ UPLOAD_FILE_EXTENSION_BLACKLIST=
# Model configuration
MULTIMODAL_SEND_FORMAT=base64
PROMPT_GENERATION_MAX_TOKENS=512
CODE_GENERATION_MAX_TOKENS=1024
PLUGIN_BASED_TOKEN_COUNTING_ENABLED=false
# Mail configuration, support: resend, smtp, sendgrid
@ -663,11 +659,6 @@ INNER_API_KEY_FOR_PLUGIN=QaHbTe77CtuXmsfyhR7+vRjI/+XbV1AaFy691iy+kGDv2Jvy0/eAh8Y
MARKETPLACE_ENABLED=true
MARKETPLACE_API_URL=https://marketplace.dify.ai
# Creators Platform configuration
CREATORS_PLATFORM_FEATURES_ENABLED=true
CREATORS_PLATFORM_API_URL=https://creators.dify.ai
CREATORS_PLATFORM_OAUTH_CLIENT_ID=
# Endpoint configuration
ENDPOINT_URL_TEMPLATE=http://localhost:5002/e/{hook_id}
@ -718,6 +709,22 @@ SWAGGER_UI_PATH=/swagger-ui.html
# Set to false to export dataset IDs as plain text for easier cross-environment import
DSL_EXPORT_ENCRYPT_DATASET_ID=true
# Suggested Questions After Answer Configuration
# These environment variables allow customization of the suggested questions feature
#
# Custom prompt for generating suggested questions (optional)
# If not set, uses the default prompt that generates 3 questions under 20 characters each
# Example: "Please help me predict the five most likely technical follow-up questions a developer would ask. Focus on implementation details, best practices, and architecture considerations. Keep each question between 40-60 characters. Output must be JSON array: [\"question1\",\"question2\",\"question3\",\"question4\",\"question5\"]"
# SUGGESTED_QUESTIONS_PROMPT=
# Maximum number of tokens for suggested questions generation (default: 256)
# Adjust this value for longer questions or more questions
# SUGGESTED_QUESTIONS_MAX_TOKENS=256
# Temperature for suggested questions generation (default: 0.0)
# Higher values (0.5-1.0) produce more creative questions, lower values (0.0-0.3) produce more focused questions
# SUGGESTED_QUESTIONS_TEMPERATURE=0
# Tenant isolated task queue configuration
TENANT_ISOLATED_TASK_CONCURRENCY=1

View File

@ -193,10 +193,6 @@ Before opening a PR / submitting:
- Controllers: parse input via Pydantic, invoke services, return serialised responses; no business logic.
- Services: coordinate repositories, providers, background tasks; keep side effects explicit.
- Document non-obvious behaviour with concise docstrings and comments.
- For Flask-RESTX controller request, query, and response schemas, follow `controllers/API_SCHEMA_GUIDE.md`.
In short: use Pydantic models, document GET query params with `query_params_from_model(...)`, register response
DTOs with `register_response_schema_models(...)`, serialize with `ResponseModel.model_validate(...).model_dump(...)`,
and avoid adding new legacy `ns.model(...)`, `@marshal_with(...)`, or GET `@ns.expect(...)` patterns.
### Miscellaneous

View File

@ -101,11 +101,3 @@ The scripts resolve paths relative to their location, so you can run them from a
uv run ruff format ./ # Format code
uv run basedpyright . # Type checking
```
## Generate TS stub
```
uv run dev/generate_swagger_specs.py --output-dir openapi
```
use https://jsontotable.org/openapi-to-typescript to convert to typescript

View File

@ -159,6 +159,7 @@ def initialize_extensions(app: DifyApp):
ext_logstore,
ext_mail,
ext_migrate,
ext_oauth_bearer,
ext_orjson,
ext_otel,
ext_proxy_fix,
@ -203,6 +204,7 @@ def initialize_extensions(app: DifyApp):
ext_enterprise_telemetry,
ext_request_logging,
ext_session_factory,
ext_oauth_bearer,
]
for ext in extensions:
short_name = ext.__name__.split(".")[-1]

View File

@ -3,8 +3,6 @@ CLI command modules extracted from `commands.py`.
"""
from .account import create_tenant, reset_email, reset_password
from .app_maintenance import convert_to_agent_apps, fix_app_site_missing
from .database import upgrade_db
from .plugin import (
extract_plugins,
extract_unique_plugins,
@ -27,6 +25,7 @@ from .retention import (
restore_workflow_runs,
)
from .storage import clear_orphaned_file_records, file_usage, migrate_oss, remove_orphaned_files_on_storage
from .system import convert_to_agent_apps, fix_app_site_missing, reset_encrypt_key_pair, upgrade_db
from .vector import (
add_qdrant_index,
migrate_annotation_vector_database,
@ -34,8 +33,6 @@ from .vector import (
old_metadata_migration,
vdb_migrate,
)
from .workflow_migration import migrate_legacy_sys_files_workflows
from .workspace import reset_encrypt_key_pair
__all__ = [
"add_qdrant_index",
@ -58,7 +55,6 @@ __all__ = [
"migrate_annotation_vector_database",
"migrate_data_for_plugin",
"migrate_knowledge_vector_database",
"migrate_legacy_sys_files_workflows",
"migrate_oss",
"old_metadata_migration",
"remove_orphaned_files_on_storage",

View File

@ -113,18 +113,8 @@ def create_tenant(email: str, language: str | None = None, name: str | None = No
# Validates name encoding for non-Latin characters.
name = name.strip().encode("utf-8").decode("utf-8") if name else None
# Generate a random password that satisfies the password policy.
# The iteration limit guards against infinite loops caused by unexpected bugs in valid_password.
for _ in range(100):
new_password = secrets.token_urlsafe(16)
try:
valid_password(new_password)
break
except Exception:
continue
else:
click.echo(click.style("Failed to generate a valid password. Please try again.", fg="red"))
return
# generate random password
new_password = secrets.token_urlsafe(16)
# register account
account = RegisterService.register(

View File

@ -1,45 +0,0 @@
"""Database schema migration CLI commands."""
import logging
import click
from extensions.ext_redis import redis_client
from libs.db_migration_lock import DbMigrationAutoRenewLock
logger = logging.getLogger(__name__)
DB_UPGRADE_LOCK_TTL_SECONDS = 60
@click.command("upgrade-db", help="Upgrade the database")
def upgrade_db() -> None:
click.echo("Preparing database migration...")
lock = DbMigrationAutoRenewLock(
redis_client=redis_client,
name="db_upgrade_lock",
ttl_seconds=DB_UPGRADE_LOCK_TTL_SECONDS,
logger=logger,
log_context="db_migration",
)
if lock.acquire(blocking=False):
migration_succeeded = False
try:
click.echo(click.style("Starting database migration.", fg="green"))
import flask_migrate
flask_migrate.upgrade()
migration_succeeded = True
click.echo(click.style("Database migration successful!", fg="green"))
except Exception as e:
logger.exception("Failed to execute database migration")
click.echo(click.style(f"Database migration failed: {e}", fg="red"))
raise SystemExit(1)
finally:
status = "successful" if migration_succeeded else "failed"
lock.release_safely(status=status)
else:
click.echo("Database migration skipped")

View File

@ -11,7 +11,7 @@ from configs import dify_config
from core.helper import encrypter
from core.plugin.entities.plugin_daemon import CredentialType
from core.plugin.impl.plugin import PluginInstaller
from core.tools.utils.system_encryption import encrypt_system_params
from core.tools.utils.system_oauth_encryption import encrypt_system_oauth_params
from extensions.ext_database import db
from models import Tenant
from models.oauth import DatasourceOauthParamConfig, DatasourceProvider
@ -44,7 +44,7 @@ def setup_system_tool_oauth_client(provider, client_params):
click.echo(click.style(f"Encrypting client params: {client_params}", fg="yellow"))
click.echo(click.style(f"Using SECRET_KEY: `{dify_config.SECRET_KEY}`", fg="yellow"))
oauth_client_params = encrypt_system_params(client_params_dict)
oauth_client_params = encrypt_system_oauth_params(client_params_dict)
click.echo(click.style("Client params encrypted successfully.", fg="green"))
except Exception as e:
click.echo(click.style(f"Error parsing client params: {str(e)}", fg="red"))
@ -94,7 +94,7 @@ def setup_system_trigger_oauth_client(provider, client_params):
click.echo(click.style(f"Encrypting client params: {client_params}", fg="yellow"))
click.echo(click.style(f"Using SECRET_KEY: `{dify_config.SECRET_KEY}`", fg="yellow"))
oauth_client_params = encrypt_system_params(client_params_dict)
oauth_client_params = encrypt_system_oauth_params(client_params_dict)
click.echo(click.style("Client params encrypted successfully.", fg="green"))
except Exception as e:
click.echo(click.style(f"Error parsing client params: {str(e)}", fg="red"))

View File

@ -1,26 +1,74 @@
"""App data maintenance CLI commands."""
import logging
import click
import sqlalchemy as sa
from sqlalchemy import select, update
from sqlalchemy import delete, select, update
from sqlalchemy.orm import sessionmaker
from configs import dify_config
from events.app_event import app_was_created
from extensions.ext_database import db
from extensions.ext_redis import redis_client
from libs.db_migration_lock import DbMigrationAutoRenewLock
from libs.rsa import generate_key_pair
from models import Tenant
from models.model import App, AppMode, Conversation
from models.provider import Provider, ProviderModel
logger = logging.getLogger(__name__)
DB_UPGRADE_LOCK_TTL_SECONDS = 60
@click.command(
"reset-encrypt-key-pair",
help="Reset the asymmetric key pair of workspace for encrypt LLM credentials. "
"After the reset, all LLM credentials will become invalid, "
"requiring re-entry."
"Only support SELF_HOSTED mode.",
)
@click.confirmation_option(
prompt=click.style(
"Are you sure you want to reset encrypt key pair? This operation cannot be rolled back!", fg="red"
)
)
def reset_encrypt_key_pair():
"""
Reset the encrypted key pair of workspace for encrypt LLM credentials.
After the reset, all LLM credentials will become invalid, requiring re-entry.
Only support SELF_HOSTED mode.
"""
if dify_config.EDITION != "SELF_HOSTED":
click.echo(click.style("This command is only for SELF_HOSTED installations.", fg="red"))
return
with sessionmaker(db.engine, expire_on_commit=False).begin() as session:
tenants = session.scalars(select(Tenant)).all()
for tenant in tenants:
if not tenant:
click.echo(click.style("No workspaces found. Run /install first.", fg="red"))
return
tenant.encrypt_public_key = generate_key_pair(tenant.id)
session.execute(delete(Provider).where(Provider.provider_type == "custom", Provider.tenant_id == tenant.id))
session.execute(delete(ProviderModel).where(ProviderModel.tenant_id == tenant.id))
click.echo(
click.style(
f"Congratulations! The asymmetric key pair of workspace {tenant.id} has been reset.",
fg="green",
)
)
@click.command("convert-to-agent-apps", help="Convert Agent Assistant to Agent App.")
def convert_to_agent_apps() -> None:
def convert_to_agent_apps():
"""
Convert Agent Assistant to Agent App.
"""
click.echo(click.style("Starting convert to agent apps.", fg="green"))
proceeded_app_ids: list[str] = []
proceeded_app_ids = []
while True:
# fetch first 1000 apps
@ -73,14 +121,48 @@ def convert_to_agent_apps() -> None:
click.echo(click.style(f"Conversion complete. Converted {len(proceeded_app_ids)} agent apps.", fg="green"))
@click.command("upgrade-db", help="Upgrade the database")
def upgrade_db():
click.echo("Preparing database migration...")
lock = DbMigrationAutoRenewLock(
redis_client=redis_client,
name="db_upgrade_lock",
ttl_seconds=DB_UPGRADE_LOCK_TTL_SECONDS,
logger=logger,
log_context="db_migration",
)
if lock.acquire(blocking=False):
migration_succeeded = False
try:
click.echo(click.style("Starting database migration.", fg="green"))
# run db migration
import flask_migrate
flask_migrate.upgrade()
migration_succeeded = True
click.echo(click.style("Database migration successful!", fg="green"))
except Exception as e:
logger.exception("Failed to execute database migration")
click.echo(click.style(f"Database migration failed: {e}", fg="red"))
raise SystemExit(1)
finally:
status = "successful" if migration_succeeded else "failed"
lock.release_safely(status=status)
else:
click.echo("Database migration skipped")
@click.command("fix-app-site-missing", help="Fix app related site missing issue.")
def fix_app_site_missing() -> None:
def fix_app_site_missing():
"""
Fix app related site missing issue.
"""
click.echo(click.style("Starting fix for missing app-related sites.", fg="green"))
failed_app_ids: list[str] = []
failed_app_ids = []
while True:
sql = """select apps.id as id from apps left join sites on sites.app_id=apps.id
where sites.id is null limit 1000"""

View File

@ -1,172 +0,0 @@
"""Workflow data migration CLI commands.
TODO: Remove the legacy system file workflow migration command after the production migration is complete.
"""
import logging
from dataclasses import dataclass
import click
from sqlalchemy import select
from sqlalchemy.orm import Session, sessionmaker
from extensions.ext_database import db
from models.workflow import Workflow, WorkflowType
logger = logging.getLogger(__name__)
@dataclass
class LegacySysFilesWorkflowMigrationStats:
scanned: int = 0
migrated: int = 0
failed: int = 0
batches: int = 0
last_id: str | None = None
def _build_legacy_sys_files_workflow_query(
*,
start_after_id: str | None,
batch_size: int,
tenant_id: str | None,
app_id: str | None,
):
# Workflow IDs are UUID4, so this is not chronological pagination. The migration only needs a stable total
# order that matches the resume cursor; ordering by the same primary-key column used in the `id > cursor`
# predicate lets each batch continue deterministically without offset scans.
stmt = (
select(Workflow)
.where(Workflow.type.in_((WorkflowType.WORKFLOW, WorkflowType.CHAT)))
.order_by(Workflow.id)
.limit(batch_size)
)
if start_after_id:
stmt = stmt.where(Workflow.id > start_after_id)
if tenant_id:
stmt = stmt.where(Workflow.tenant_id == tenant_id)
if app_id:
stmt = stmt.where(Workflow.app_id == app_id)
return stmt
def _migrate_legacy_sys_files_workflow_batch(
*,
session: Session,
start_after_id: str | None,
batch_size: int,
tenant_id: str | None,
app_id: str | None,
dry_run: bool,
) -> LegacySysFilesWorkflowMigrationStats:
stats = LegacySysFilesWorkflowMigrationStats()
workflows = session.scalars(
_build_legacy_sys_files_workflow_query(
start_after_id=start_after_id,
batch_size=batch_size,
tenant_id=tenant_id,
app_id=app_id,
)
).all()
for workflow in workflows:
stats.scanned += 1
stats.last_id = workflow.id
try:
if workflow.migrate_legacy_sys_files_graph_in_place():
stats.migrated += 1
except Exception:
stats.failed += 1
logger.exception("Failed to migrate legacy sys.files workflow, workflow_id=%s", workflow.id)
if dry_run:
session.rollback()
else:
session.commit()
return stats
def run_legacy_sys_files_workflow_migration(
*,
batch_size: int,
limit: int | None,
start_after_id: str | None,
tenant_id: str | None,
app_id: str | None,
dry_run: bool,
) -> LegacySysFilesWorkflowMigrationStats:
"""Scan Workflow and Advanced Chat graphs in keyset-paginated batches."""
if batch_size <= 0:
raise click.UsageError("--batch-size must be greater than 0")
if limit is not None and limit <= 0:
raise click.UsageError("--limit must be greater than 0 when provided")
session_maker = sessionmaker(db.engine, expire_on_commit=False)
total = LegacySysFilesWorkflowMigrationStats(last_id=start_after_id)
next_start_after_id = start_after_id
while limit is None or total.scanned < limit:
remaining = None if limit is None else limit - total.scanned
current_batch_size = batch_size if remaining is None else min(batch_size, remaining)
if current_batch_size <= 0:
break
with session_maker() as session:
batch_stats = _migrate_legacy_sys_files_workflow_batch(
session=session,
start_after_id=next_start_after_id,
batch_size=current_batch_size,
tenant_id=tenant_id,
app_id=app_id,
dry_run=dry_run,
)
if batch_stats.scanned == 0:
break
total.scanned += batch_stats.scanned
total.migrated += batch_stats.migrated
total.failed += batch_stats.failed
total.batches += 1
total.last_id = batch_stats.last_id
next_start_after_id = batch_stats.last_id
if batch_stats.scanned < current_batch_size:
break
return total
@click.command(
"migrate-legacy-sys-files-workflows",
help="Migrate Workflow and Advanced Chat graphs that still reference deprecated sys.files.",
)
@click.option("--batch-size", default=1000, show_default=True, type=int, help="Number of workflows to scan per batch.")
@click.option("--limit", default=None, type=int, help="Maximum number of workflows to scan in this run.")
@click.option("--start-after-id", default=None, help="Resume scanning after this workflow ID.")
@click.option("--tenant-id", default=None, help="Limit migration to one tenant.")
@click.option("--app-id", default=None, help="Limit migration to one app.")
@click.option("--dry-run", is_flag=True, default=False, help="Scan and report without saving changes.")
def migrate_legacy_sys_files_workflows(
batch_size: int,
limit: int | None,
start_after_id: str | None,
tenant_id: str | None,
app_id: str | None,
dry_run: bool,
) -> None:
stats = run_legacy_sys_files_workflow_migration(
batch_size=batch_size,
limit=limit,
start_after_id=start_after_id,
tenant_id=tenant_id,
app_id=app_id,
dry_run=dry_run,
)
click.echo(
"Legacy sys.files workflow migration finished: "
f"scanned={stats.scanned} migrated={stats.migrated} failed={stats.failed} "
f"batches={stats.batches} last_id={stats.last_id or ''}"
)
if dry_run:
click.echo("Dry run only: no workflow graph changes were saved.")

View File

@ -1,52 +0,0 @@
"""Workspace maintenance CLI commands."""
import click
from sqlalchemy import delete, select
from sqlalchemy.orm import sessionmaker
from configs import dify_config
from extensions.ext_database import db
from libs.rsa import generate_key_pair
from models import Tenant
from models.provider import Provider, ProviderModel
@click.command(
"reset-encrypt-key-pair",
help="Reset the asymmetric key pair of workspace for encrypt LLM credentials. "
"After the reset, all LLM credentials will become invalid, "
"requiring re-entry."
"Only support SELF_HOSTED mode.",
)
@click.confirmation_option(
prompt=click.style(
"Are you sure you want to reset encrypt key pair? This operation cannot be rolled back!", fg="red"
)
)
def reset_encrypt_key_pair() -> None:
"""
Reset the encrypted key pair of workspace for encrypt LLM credentials.
After the reset, all LLM credentials will become invalid, requiring re-entry.
Only support SELF_HOSTED mode.
"""
if dify_config.EDITION != "SELF_HOSTED":
click.echo(click.style("This command is only for SELF_HOSTED installations.", fg="red"))
return
with sessionmaker(db.engine, expire_on_commit=False).begin() as session:
tenants = session.scalars(select(Tenant)).all()
for tenant in tenants:
if not tenant:
click.echo(click.style("No workspaces found. Run /install first.", fg="red"))
return
tenant.encrypt_public_key = generate_key_pair(tenant.id)
session.execute(delete(Provider).where(Provider.provider_type == "custom", Provider.tenant_id == tenant.id))
session.execute(delete(ProviderModel).where(ProviderModel.tenant_id == tenant.id))
click.echo(
click.style(
f"Congratulations! The asymmetric key pair of workspace {tenant.id} has been reset.",
fg="green",
)
)

View File

@ -287,27 +287,6 @@ class MarketplaceConfig(BaseSettings):
)
class CreatorsPlatformConfig(BaseSettings):
"""
Configuration for Creators Platform integration
"""
CREATORS_PLATFORM_FEATURES_ENABLED: bool = Field(
description="Enable or disable Creators Platform features",
default=True,
)
CREATORS_PLATFORM_API_URL: HttpUrl = Field(
description="Creators Platform API URL",
default=HttpUrl("https://creators.dify.ai"),
)
CREATORS_PLATFORM_OAUTH_CLIENT_ID: str = Field(
description="OAuth client ID for Creators Platform integration",
default="",
)
class EndpointConfig(BaseSettings):
"""
Configuration for various application endpoints and URLs
@ -520,6 +499,35 @@ class HttpConfig(BaseSettings):
def WEB_API_CORS_ALLOW_ORIGINS(self) -> list[str]:
return self.inner_WEB_API_CORS_ALLOW_ORIGINS.split(",")
inner_OPENAPI_CORS_ALLOW_ORIGINS: str = Field(
description=(
"Comma-separated allowlist for /openapi/v1/* CORS. "
"Default empty = same-origin only. Browser-cookie routes within "
"the group reject cross-origin OPTIONS regardless of this list."
),
validation_alias=AliasChoices("OPENAPI_CORS_ALLOW_ORIGINS"),
default="",
)
@computed_field
def OPENAPI_CORS_ALLOW_ORIGINS(self) -> list[str]:
return [o for o in self.inner_OPENAPI_CORS_ALLOW_ORIGINS.split(",") if o]
inner_OPENAPI_KNOWN_CLIENT_IDS: str = Field(
description=(
"Comma-separated client_id values accepted at "
"POST /openapi/v1/oauth/device/code. New CLIs / SDKs added here "
"without code changes. Unknown client_id returns 400 unsupported_client."
),
validation_alias=AliasChoices("OPENAPI_KNOWN_CLIENT_IDS"),
default="difyctl",
)
@computed_field # type: ignore[misc]
@property
def OPENAPI_KNOWN_CLIENT_IDS(self) -> frozenset[str]:
return frozenset(c for c in self.inner_OPENAPI_KNOWN_CLIENT_IDS.split(",") if c)
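A runnable sketch of the comma-separated parsing pattern these settings use; the class and field names below are illustrative, not Dify's:

```python
import os

from pydantic import AliasChoices, Field, computed_field
from pydantic_settings import BaseSettings


class OpenAPIAuthSettings(BaseSettings):
    # Raw comma-separated string, resolved from the documented env var name.
    inner_known_client_ids: str = Field(
        default="difyctl",
        validation_alias=AliasChoices("OPENAPI_KNOWN_CLIENT_IDS"),
    )

    @computed_field  # type: ignore[misc]
    @property
    def KNOWN_CLIENT_IDS(self) -> frozenset[str]:
        # Empty segments (e.g. from a trailing comma) are dropped.
        return frozenset(c for c in self.inner_known_client_ids.split(",") if c)


os.environ["OPENAPI_KNOWN_CLIENT_IDS"] = "difyctl,my-sdk,"
print(OpenAPIAuthSettings().KNOWN_CLIENT_IDS)  # frozenset({'difyctl', 'my-sdk'})
```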
HTTP_REQUEST_MAX_CONNECT_TIMEOUT: int = Field(
ge=1, description="Maximum connection timeout in seconds for HTTP requests", default=10
)
@ -895,6 +903,17 @@ class AuthConfig(BaseSettings):
default=86400,
)
ENABLE_OAUTH_BEARER: bool = Field(
description="Enable OAuth bearer authentication (device-flow + Service API /v1/* bearer middleware).",
default=True,
)
OPENAPI_RATE_LIMIT_PER_TOKEN: PositiveInt = Field(
description="Per-token rate limit on /openapi/v1/* (requests per minute). "
"Bucket keyed on sha256(token), shared across api replicas via Redis.",
default=60,
)
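A minimal sketch, not the actual middleware, of the bucket that description implies: a fixed one-minute window in Redis, keyed on `sha256(token)` so raw bearer tokens never become Redis keys. The key prefix is an assumption:

```python
import hashlib
import time

import redis

r = redis.Redis()  # requires a reachable Redis, shared across api replicas


def allow_request(token: str, limit_per_minute: int = 60) -> bool:
    """Return True if this token may proceed in the current one-minute window."""
    digest = hashlib.sha256(token.encode()).hexdigest()
    window = int(time.time() // 60)
    key = f"openapi:ratelimit:{digest}:{window}"
    count = r.incr(key)
    if count == 1:
        # Keep the counter a little past the window so it self-cleans.
        r.expire(key, 120)
    return count <= limit_per_minute
```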
class ModerationConfig(BaseSettings):
"""
@ -1137,18 +1156,6 @@ class MultiModalTransferConfig(BaseSettings):
)
class OpsTraceConfig(BaseSettings):
OPS_TRACE_RETRYABLE_DISPATCH_MAX_RETRIES: PositiveInt = Field(
description="Maximum retry attempts for transient ops trace provider dispatch failures.",
default=60,
)
OPS_TRACE_RETRYABLE_DISPATCH_DELAY_SECONDS: PositiveInt = Field(
description="Delay in seconds between transient ops trace provider dispatch retry attempts.",
default=5,
)
class CeleryBeatConfig(BaseSettings):
CELERY_BEAT_SCHEDULER_TIME: int = Field(
description="Interval in days for Celery Beat scheduler execution, default to 1 day",
@ -1181,6 +1188,14 @@ class CeleryScheduleTasksConfig(BaseSettings):
description="Enable scheduled workflow run cleanup task",
default=False,
)
ENABLE_CLEAN_OAUTH_ACCESS_TOKENS_TASK: bool = Field(
description="Enable scheduled cleanup of revoked/expired OAuth access-token rows past retention.",
default=True,
)
OAUTH_ACCESS_TOKEN_RETENTION_DAYS: PositiveInt = Field(
description="Days to retain revoked OAuth access-token rows before deletion.",
default=30,
)
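A hypothetical sketch of the cleanup these two settings gate; the model, column names, and SQLite wiring are illustrative assumptions, not Dify's schema:

```python
from datetime import UTC, datetime, timedelta

from sqlalchemy import DateTime, String, create_engine, delete
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column


class Base(DeclarativeBase):
    pass


class OAuthAccessToken(Base):
    __tablename__ = "oauth_access_tokens"
    id: Mapped[str] = mapped_column(String, primary_key=True)
    revoked_at: Mapped[datetime | None] = mapped_column(DateTime(timezone=True))


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

RETENTION_DAYS = 30  # mirrors OAUTH_ACCESS_TOKEN_RETENTION_DAYS

with Session(engine) as session:
    cutoff = datetime.now(UTC) - timedelta(days=RETENTION_DAYS)
    # Only rows that were actually revoked and have aged past retention go away.
    session.execute(
        delete(OAuthAccessToken).where(
            OAuthAccessToken.revoked_at.is_not(None),
            OAuthAccessToken.revoked_at < cutoff,
        )
    )
    session.commit()
```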
ENABLE_MAIL_CLEAN_DOCUMENT_NOTIFY_TASK: bool = Field(
description="Enable mail clean document notify task",
default=False,
@ -1310,7 +1325,7 @@ class PositionConfig(BaseSettings):
class CollaborationConfig(BaseSettings):
ENABLE_COLLABORATION_MODE: bool = Field(
description="Whether to enable collaboration mode features across the workspace",
default=True,
default=False,
)
@ -1412,7 +1427,6 @@ class FeatureConfig(
AuthConfig, # Changed from OAuthConfig to AuthConfig
BillingConfig,
CodeExecutionSandboxConfig,
CreatorsPlatformConfig,
TriggerConfig,
AsyncWorkflowConfig,
PluginConfig,
@ -1429,7 +1443,6 @@ class FeatureConfig(
ModelLoadBalanceConfig,
ModerationConfig,
MultiModalTransferConfig,
OpsTraceConfig,
PositionConfig,
RagEtlConfig,
RepositoryConfig,

View File

@ -114,7 +114,7 @@ class SQLAlchemyEngineOptionsDict(TypedDict):
pool_pre_ping: bool
connect_args: dict[str, str]
pool_use_lifo: bool
pool_reset_on_return: Literal["commit", "rollback", None]
pool_reset_on_return: None
pool_timeout: int
@ -223,11 +223,6 @@ class DatabaseConfig(BaseSettings):
default=30,
)
SQLALCHEMY_POOL_RESET_ON_RETURN: Literal["commit", "rollback", None] = Field(
description="Connection pool reset behavior on return. Options: 'commit', 'rollback', or None",
default="rollback",
)
RETRIEVAL_SERVICE_EXECUTORS: NonNegativeInt = Field(
description="Number of processes for the retrieval service, default to CPU cores.",
default=os.cpu_count() or 1,
@ -257,7 +252,7 @@ class DatabaseConfig(BaseSettings):
"pool_pre_ping": self.SQLALCHEMY_POOL_PRE_PING,
"connect_args": connect_args,
"pool_use_lifo": self.SQLALCHEMY_POOL_USE_LIFO,
"pool_reset_on_return": self.SQLALCHEMY_POOL_RESET_ON_RETURN,
"pool_reset_on_return": None,
"pool_timeout": self.SQLALCHEMY_POOL_TIMEOUT,
}
return result

View File

@ -19,7 +19,7 @@
"name": "Website Generator"
},
"app_id": "b53545b1-79ea-4da3-b31a-c39391c6f041",
"categories": ["Programming"],
"category": "Programming",
"copyright": null,
"description": null,
"is_listed": true,
@ -35,7 +35,7 @@
"name": "Investment Analysis Report Copilot"
},
"app_id": "a23b57fa-85da-49c0-a571-3aff375976c1",
"categories": ["Agent"],
"category": "Agent",
"copyright": "Dify.AI",
"description": "Welcome to your personalized Investment Analysis Copilot service, where we delve into the depths of stock analysis to provide you with comprehensive insights. \n",
"is_listed": true,
@ -51,7 +51,7 @@
"name": "Workflow Planning Assistant "
},
"app_id": "f3303a7d-a81c-404e-b401-1f8711c998c1",
"categories": ["Workflow"],
"category": "Workflow",
"copyright": null,
"description": "An assistant that helps you plan and select the right node for a workflow (V0.6.0). ",
"is_listed": true,
@ -67,7 +67,7 @@
"name": "Automated Email Reply "
},
"app_id": "e9d92058-7d20-4904-892f-75d90bef7587",
"categories": ["Workflow"],
"category": "Workflow",
"copyright": null,
"description": "Reply emails using Gmail API. It will automatically retrieve email in your inbox and create a response in Gmail. \nConfigure your Gmail API in Google Cloud Console. ",
"is_listed": true,
@ -83,7 +83,7 @@
"name": "Book Translation "
},
"app_id": "98b87f88-bd22-4d86-8b74-86beba5e0ed4",
"categories": ["Workflow"],
"category": "Workflow",
"copyright": null,
"description": "A workflow designed to translate a full book up to 15000 tokens per run. Uses Code node to separate text into chunks and Iteration to translate each chunk. ",
"is_listed": true,
@ -99,7 +99,7 @@
"name": "Python bug fixer"
},
"app_id": "cae337e6-aec5-4c7b-beca-d6f1a808bd5e",
"categories": ["Programming"],
"category": "Programming",
"copyright": null,
"description": null,
"is_listed": true,
@ -115,7 +115,7 @@
"name": "Code Interpreter"
},
"app_id": "d077d587-b072-4f2c-b631-69ed1e7cdc0f",
"categories": ["Programming"],
"category": "Programming",
"copyright": "Copyright 2023 Dify",
"description": "Code interpreter, clarifying the syntax and semantics of the code.",
"is_listed": true,
@ -131,7 +131,7 @@
"name": "SVG Logo Design "
},
"app_id": "73fbb5f1-c15d-4d74-9cc8-46d9db9b2cca",
"categories": ["Agent"],
"category": "Agent",
"copyright": "Dify.AI",
"description": "Hello, I am your creative partner in bringing ideas to vivid life! I can assist you in creating stunning designs by leveraging abilities of DALL·E 3. ",
"is_listed": true,
@ -147,7 +147,7 @@
"name": "Long Story Generator (Iteration) "
},
"app_id": "5efb98d7-176b-419c-b6ef-50767391ab62",
"categories": ["Workflow"],
"category": "Workflow",
"copyright": null,
"description": "A workflow demonstrating how to use Iteration node to generate long article that is longer than the context length of LLMs. ",
"is_listed": true,
@ -163,7 +163,7 @@
"name": "Text Summarization Workflow"
},
"app_id": "f00c4531-6551-45ee-808f-1d7903099515",
"categories": ["Workflow"],
"category": "Workflow",
"copyright": null,
"description": "Based on users' choice, retrieve external knowledge to more accurately summarize articles.",
"is_listed": true,
@ -179,7 +179,7 @@
"name": "YouTube Channel Data Analysis"
},
"app_id": "be591209-2ca8-410f-8f3b-ca0e530dd638",
"categories": ["Agent"],
"category": "Agent",
"copyright": "Dify.AI",
"description": "I am a YouTube Channel Data Analysis Copilot, I am here to provide expert data analysis tailored to your needs. ",
"is_listed": true,
@ -195,7 +195,7 @@
"name": "Article Grading Bot"
},
"app_id": "a747f7b4-c48b-40d6-b313-5e628232c05f",
"categories": ["Writing"],
"category": "Writing",
"copyright": null,
"description": "Assess the quality of articles and text based on user defined criteria. ",
"is_listed": true,
@ -211,7 +211,7 @@
"name": "SEO Blog Generator"
},
"app_id": "18f3bd03-524d-4d7a-8374-b30dbe7c69d5",
"categories": ["Workflow"],
"category": "Workflow",
"copyright": null,
"description": "Workflow for retrieving information from the internet, followed by segmented generation of SEO blogs.",
"is_listed": true,
@ -227,7 +227,7 @@
"name": "SQL Creator"
},
"app_id": "050ef42e-3e0c-40c1-a6b6-a64f2c49d744",
"categories": ["Programming"],
"category": "Programming",
"copyright": "Copyright 2023 Dify",
"description": "Write SQL from natural language by pasting in your schema with the request.Please describe your query requirements in natural language and select the target database type.",
"is_listed": true,
@ -243,7 +243,7 @@
"name": "Sentiment Analysis "
},
"app_id": "f06bf86b-d50c-4895-a942-35112dbe4189",
"categories": ["Workflow"],
"category": "Workflow",
"copyright": null,
"description": "Batch sentiment analysis of text, followed by JSON output of sentiment classification along with scores.",
"is_listed": true,
@ -259,7 +259,7 @@
"name": "Strategic Consulting Expert"
},
"app_id": "7e8ca1ae-02f2-4b5f-979e-62d19133bee2",
"categories": ["Assistant"],
"category": "Assistant",
"copyright": "Copyright 2023 Dify",
"description": "I can answer your questions related to strategic marketing.",
"is_listed": true,
@ -275,7 +275,7 @@
"name": "Code Converter"
},
"app_id": "4006c4b2-0735-4f37-8dbb-fb1a8c5bd87a",
"categories": ["Programming"],
"category": "Programming",
"copyright": "Copyright 2023 Dify",
"description": "This is an application that provides the ability to convert code snippets in multiple programming languages. You can input the code you wish to convert, select the target programming language, and get the desired output.",
"is_listed": true,
@ -291,7 +291,7 @@
"name": "Question Classifier + Knowledge + Chatbot "
},
"app_id": "d9f6b733-e35d-4a40-9f38-ca7bbfa009f7",
"categories": ["Workflow"],
"category": "Workflow",
"copyright": null,
"description": "Basic Workflow Template, a chatbot capable of identifying intents alongside with a knowledge base.",
"is_listed": true,
@ -307,7 +307,7 @@
"name": "AI Front-end interviewer"
},
"app_id": "127efead-8944-4e20-ba9d-12402eb345e0",
"categories": ["HR"],
"category": "HR",
"copyright": "Copyright 2023 Dify",
"description": "A simulated front-end interviewer that tests the skill level of front-end development through questioning.",
"is_listed": true,
@ -323,7 +323,7 @@
"name": "Knowledge Retrieval + Chatbot "
},
"app_id": "e9870913-dd01-4710-9f06-15d4180ca1ce",
"categories": ["Workflow"],
"category": "Workflow",
"copyright": null,
"description": "Basic Workflow Template, A chatbot with a knowledge base. ",
"is_listed": true,
@ -339,7 +339,7 @@
"name": "Email Assistant Workflow "
},
"app_id": "dd5b6353-ae9b-4bce-be6a-a681a12cf709",
"categories": ["Workflow"],
"category": "Workflow",
"copyright": null,
"description": "A multifunctional email assistant capable of summarizing, replying, composing, proofreading, and checking grammar.",
"is_listed": true,
@ -355,7 +355,7 @@
"name": "Customer Review Analysis Workflow "
},
"app_id": "9c0cd31f-4b62-4005-adf5-e3888d08654a",
"categories": ["Workflow"],
"category": "Workflow",
"copyright": null,
"description": "Utilize LLM (Large Language Models) to classify customer reviews and forward them to the internal system.",
"is_listed": true,

View File

@ -1,193 +0,0 @@
# API Schema Guide
This guide describes the expected Flask-RESTX + Pydantic pattern for controller request payloads, query
parameters, response schemas, and Swagger documentation.
## Principles
- Use Pydantic `BaseModel` for request bodies and query parameters.
- Use `fields.base.ResponseModel` for response DTOs.
- Keep runtime validation and Swagger documentation wired to the same Pydantic model.
- Prefer explicit validation and serialization in controller methods over Flask-RESTX marshalling.
- Do not add new Flask-RESTX `fields.*` dictionaries, `Namespace.model(...)` exports, or `@marshal_with(...)` for migrated or new endpoints.
- Do not use `@ns.expect(...)` for GET query parameters. Flask-RESTX documents that as a request body.
## Naming
- Request body models: use a `Payload` suffix.
- Example: `WorkflowRunPayload`, `DatasourceVariablesPayload`.
- Query parameter models: use a `Query` suffix.
- Example: `WorkflowRunListQuery`, `MessageListQuery`.
- Response models: use a `Response` suffix and inherit from `ResponseModel`.
- Example: `WorkflowRunDetailResponse`, `WorkflowRunNodeExecutionListResponse`.
- Use `ListResponse` or `PaginationResponse` for wrapper responses.
- Example: `WorkflowRunNodeExecutionListResponse`, `WorkflowRunPaginationResponse`.
- Keep these models near the controller when they are endpoint-specific. Move them to `fields/*_fields.py` only when shared by multiple controllers.
## Registering Models For Swagger
Use helpers from `controllers.common.schema`.
```python
from controllers.common.schema import (
query_params_from_model,
register_response_schema_models,
register_schema_models,
)
```
Register request payload and query models with `register_schema_models(...)`:
```python
register_schema_models(
console_ns,
WorkflowRunPayload,
WorkflowRunListQuery,
)
```
Register response models with `register_response_schema_models(...)`:
```python
register_response_schema_models(
console_ns,
WorkflowRunDetailResponse,
WorkflowRunPaginationResponse,
)
```
Response models are registered in Pydantic serialization mode. This matters when a response model uses
`validation_alias` to read internal object attributes but emits public API field names. For example, a response model
can validate from `inputs_dict` while documenting and serializing `inputs`.
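A small standalone sketch of that distinction, using plain `BaseModel` in place of `ResponseModel` and an illustrative model name:

```python
from typing import Any

from pydantic import BaseModel, Field


class NodeExecutionResponse(BaseModel):
    id: str
    inputs: Any = Field(default=None, validation_alias="inputs_dict")


validation = NodeExecutionResponse.model_json_schema(mode="validation")
serialization = NodeExecutionResponse.model_json_schema(mode="serialization")

# Validation mode documents the internal attribute the model reads from...
print(sorted(validation["properties"]))     # ['id', 'inputs_dict']
# ...while serialization mode documents the public field name the API emits.
print(sorted(serialization["properties"]))  # ['id', 'inputs']
```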
## Request Bodies
For non-GET request bodies:
1. Define a Pydantic `Payload` model.
2. Register it with `register_schema_models(...)`.
3. Use `@ns.expect(ns.models[Payload.__name__])` for Swagger documentation.
4. Validate from `ns.payload or {}` inside the controller.
```python
class DraftWorkflowNodeRunPayload(BaseModel):
inputs: dict[str, Any]
query: str = ""
register_schema_models(console_ns, DraftWorkflowNodeRunPayload)
@console_ns.expect(console_ns.models[DraftWorkflowNodeRunPayload.__name__])
def post(self, app_model: App, node_id: str):
payload = DraftWorkflowNodeRunPayload.model_validate(console_ns.payload or {})
result = service.run(..., inputs=payload.inputs, query=payload.query)
return WorkflowRunNodeExecutionResponse.model_validate(result, from_attributes=True).model_dump(mode="json")
```
## Query Parameters
For GET query parameters:
1. Define a Pydantic `Query` model.
2. Register it with `register_schema_models(...)` if it is referenced elsewhere in docs, or only use
`query_params_from_model(...)` if a body schema is not needed.
3. Use `@ns.doc(params=query_params_from_model(QueryModel))`.
4. Validate from `request.args.to_dict(flat=True)` or an explicit dict when type coercion is needed.
```python
class WorkflowRunListQuery(BaseModel):
last_id: str | None = Field(default=None, description="Last run ID for pagination")
limit: int = Field(default=20, ge=1, le=100, description="Number of items per page (1-100)")
@console_ns.doc(params=query_params_from_model(WorkflowRunListQuery))
def get(self, app_model: App):
query = WorkflowRunListQuery.model_validate(request.args.to_dict(flat=True))
result = service.list(..., limit=query.limit, last_id=query.last_id)
return WorkflowRunPaginationResponse.model_validate(result, from_attributes=True).model_dump(mode="json")
```
Do not do this for GET query parameters:
```python
@console_ns.expect(console_ns.models[WorkflowRunListQuery.__name__])
def get(...):
...
```
That documents a GET request body and is not the expected contract.
## Responses
Response models should inherit from `ResponseModel`:
```python
class WorkflowRunNodeExecutionResponse(ResponseModel):
id: str
inputs: Any = Field(default=None, validation_alias="inputs_dict")
process_data: Any = Field(default=None, validation_alias="process_data_dict")
outputs: Any = Field(default=None, validation_alias="outputs_dict")
```
Document response models with `@ns.response(...)`:
```python
@console_ns.response(
200,
"Node run started successfully",
console_ns.models[WorkflowRunNodeExecutionResponse.__name__],
)
def post(...):
...
```
Serialize explicitly:
```python
return WorkflowRunNodeExecutionResponse.model_validate(
workflow_node_execution,
from_attributes=True,
).model_dump(mode="json")
```
If the service can return `None`, translate that into the expected HTTP error before validation:
```python
workflow_run = service.get_workflow_run(...)
if workflow_run is None:
raise NotFound("Workflow run not found")
return WorkflowRunDetailResponse.model_validate(workflow_run, from_attributes=True).model_dump(mode="json")
```
## Legacy Flask-RESTX Patterns
Avoid adding these patterns to new or migrated endpoints:
- `ns.model(...)` for new request/response DTOs.
- Module-level exported RESTX model objects such as `workflow_run_detail_model`.
- `fields.Nested({...})` with raw inline dict field maps.
- `@marshal_with(...)` for response serialization.
- `@ns.expect(...)` for GET query params.
Existing legacy field dictionaries may remain where an endpoint has not yet been migrated. Keep that compatibility local
to the legacy area and avoid importing RESTX model objects from controllers.
## Verifying Swagger
For schema and documentation changes, run focused tests and generate Swagger JSON:
```bash
uv run --project . pytest tests/unit_tests/controllers/common/test_schema.py
uv run --project . pytest tests/unit_tests/commands/test_generate_swagger_specs.py tests/unit_tests/controllers/test_swagger.py
uv run --project . dev/generate_swagger_specs.py --output-dir /tmp/dify-openapi-check
```
Inspect affected endpoints with `jq` (or script the checks, as in the sketch after this list). Check that:
- GET parameters are `in: query`.
- Request bodies appear only where the endpoint has a body.
- Responses reference the expected `*Response` schema.
- Response schemas use public serialized names, not internal validation aliases like `inputs_dict`.
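A scripted variant of the body-parameter check, assuming the output directory from the command above and a `swagger.json` file name:

```python
import json
from pathlib import Path

spec = json.loads(Path("/tmp/dify-openapi-check/swagger.json").read_text())

for path, methods in spec.get("paths", {}).items():
    get_op = methods.get("get")
    if not isinstance(get_op, dict):
        continue
    for param in get_op.get("parameters", []):
        # A GET documenting an `in: body` parameter is the anti-pattern above.
        assert param.get("in") != "body", f"{path}: GET documents a request body"

print("no GET request bodies found")
```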

View File

@ -41,8 +41,7 @@ def guess_file_info_from_response(response: httpx.Response):
# Try to extract filename from URL
parsed_url = urllib.parse.urlparse(url)
url_path = parsed_url.path
# Decode percent-encoded characters in the path segment
filename = urllib.parse.unquote(os.path.basename(url_path))
filename = os.path.basename(url_path)
# If filename couldn't be extracted, use Content-Disposition header
if not filename:

View File

@ -1,6 +0,0 @@
from pydantic import BaseModel, JsonValue
class HumanInputFormSubmitPayload(BaseModel):
inputs: dict[str, JsonValue]
action: str

View File

@ -1,14 +1,6 @@
"""Helpers for registering Pydantic models with Flask-RESTX namespaces.
"""Helpers for registering Pydantic models with Flask-RESTX namespaces."""
Flask-RESTX treats `SchemaModel` bodies as opaque JSON schemas; it does not
promote Pydantic's nested `$defs` into top-level Swagger `definitions`.
These helpers keep that translation centralized so models registered through
`register_schema_models` emit resolvable Swagger 2.0 references.
"""
from collections.abc import Mapping
from enum import StrEnum
from typing import Any, Literal, NotRequired, TypedDict
from flask_restx import Namespace
from pydantic import BaseModel, TypeAdapter
@ -16,59 +8,10 @@ from pydantic import BaseModel, TypeAdapter
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
QueryParamDoc = TypedDict(
"QueryParamDoc",
{
"in": NotRequired[str],
"type": NotRequired[str],
"items": NotRequired[dict[str, object]],
"required": NotRequired[bool],
"description": NotRequired[str],
"enum": NotRequired[list[object]],
"default": NotRequired[object],
"minimum": NotRequired[int | float],
"maximum": NotRequired[int | float],
"minLength": NotRequired[int],
"maxLength": NotRequired[int],
"minItems": NotRequired[int],
"maxItems": NotRequired[int],
},
)
def _register_json_schema(namespace: Namespace, name: str, schema: dict) -> None:
"""Register a JSON schema and promote any nested Pydantic `$defs`."""
nested_definitions = schema.get("$defs")
schema_to_register = dict(schema)
if isinstance(nested_definitions, dict):
schema_to_register.pop("$defs")
namespace.schema_model(name, schema_to_register)
if not isinstance(nested_definitions, dict):
return
for nested_name, nested_schema in nested_definitions.items():
if isinstance(nested_schema, dict):
_register_json_schema(namespace, nested_name, nested_schema)
JsonSchemaMode = Literal["validation", "serialization"]
def _register_schema_model(namespace: Namespace, model: type[BaseModel], *, mode: JsonSchemaMode) -> None:
_register_json_schema(
namespace,
model.__name__,
model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0, mode=mode),
)
def register_schema_model(namespace: Namespace, model: type[BaseModel]) -> None:
"""Register a BaseModel and its nested schema definitions for Swagger documentation."""
"""Register a single BaseModel with a namespace for Swagger documentation."""
_register_schema_model(namespace, model, mode="validation")
namespace.schema_model(model.__name__, model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
def register_schema_models(namespace: Namespace, *models: type[BaseModel]) -> None:
@ -78,19 +21,6 @@ def register_schema_models(namespace: Namespace, *models: type[BaseModel]) -> No
register_schema_model(namespace, model)
def register_response_schema_model(namespace: Namespace, model: type[BaseModel]) -> None:
"""Register a BaseModel using its serialized response shape."""
_register_schema_model(namespace, model, mode="serialization")
def register_response_schema_models(namespace: Namespace, *models: type[BaseModel]) -> None:
"""Register multiple response BaseModels using their serialized response shape."""
for model in models:
register_response_schema_model(namespace, model)
def get_or_create_model(model_name: str, field_def):
# Import lazily to avoid circular imports between console controllers and schema helpers.
from controllers.console import console_ns
@ -104,114 +34,15 @@ def get_or_create_model(model_name: str, field_def):
def register_enum_models(namespace: Namespace, *models: type[StrEnum]) -> None:
"""Register multiple StrEnum with a namespace."""
for model in models:
_register_json_schema(
namespace,
model.__name__,
TypeAdapter(model).json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
namespace.schema_model(
model.__name__, TypeAdapter(model).json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
def query_params_from_model(model: type[BaseModel]) -> dict[str, QueryParamDoc]:
"""Build Flask-RESTX query parameter docs from a flat Pydantic model.
`Namespace.expect()` treats Pydantic schema models as request bodies, so GET
endpoints should keep runtime validation on the Pydantic model and feed this
derived mapping to `Namespace.doc(params=...)` for Swagger documentation.
"""
schema = model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
properties = schema.get("properties", {})
if not isinstance(properties, Mapping):
return {}
required = schema.get("required", [])
required_names = set(required) if isinstance(required, list) else set()
params: dict[str, QueryParamDoc] = {}
for name, property_schema in properties.items():
if not isinstance(name, str) or not isinstance(property_schema, Mapping):
continue
params[name] = _query_param_from_property(property_schema, required=name in required_names)
return params
def _query_param_from_property(property_schema: Mapping[str, Any], *, required: bool) -> QueryParamDoc:
param_schema = _nullable_property_schema(property_schema)
param_doc: QueryParamDoc = {"in": "query", "required": required}
description = param_schema.get("description")
if isinstance(description, str):
param_doc["description"] = description
schema_type = param_schema.get("type")
if isinstance(schema_type, str) and schema_type in {"array", "boolean", "integer", "number", "string"}:
param_doc["type"] = schema_type
if schema_type == "array":
items = param_schema.get("items")
if isinstance(items, Mapping):
item_type = items.get("type")
if isinstance(item_type, str):
param_doc["items"] = {"type": item_type}
enum = param_schema.get("enum")
if isinstance(enum, list):
param_doc["enum"] = enum
default = param_schema.get("default")
if default is not None:
param_doc["default"] = default
minimum = param_schema.get("minimum")
if isinstance(minimum, int | float):
param_doc["minimum"] = minimum
maximum = param_schema.get("maximum")
if isinstance(maximum, int | float):
param_doc["maximum"] = maximum
min_length = param_schema.get("minLength")
if isinstance(min_length, int):
param_doc["minLength"] = min_length
max_length = param_schema.get("maxLength")
if isinstance(max_length, int):
param_doc["maxLength"] = max_length
min_items = param_schema.get("minItems")
if isinstance(min_items, int):
param_doc["minItems"] = min_items
max_items = param_schema.get("maxItems")
if isinstance(max_items, int):
param_doc["maxItems"] = max_items
return param_doc
def _nullable_property_schema(property_schema: Mapping[str, Any]) -> Mapping[str, Any]:
any_of = property_schema.get("anyOf")
if not isinstance(any_of, list):
return property_schema
non_null_candidates = [
candidate for candidate in any_of if isinstance(candidate, Mapping) and candidate.get("type") != "null"
]
if len(non_null_candidates) == 1:
return {**property_schema, **non_null_candidates[0]}
return property_schema
__all__ = [
"DEFAULT_REF_TEMPLATE_SWAGGER_2_0",
"get_or_create_model",
"query_params_from_model",
"register_enum_models",
"register_response_schema_model",
"register_response_schema_models",
"register_schema_model",
"register_schema_models",
]

View File

@ -3,7 +3,6 @@ import io
from collections.abc import Callable
from functools import wraps
from typing import cast
from uuid import UUID
from flask import request
from flask_restx import Resource
@ -13,7 +12,6 @@ from werkzeug.exceptions import BadRequest, NotFound, Unauthorized
from configs import dify_config
from constants.languages import supported_language
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.wraps import only_edition_cloud
from core.db.session_factory import session_factory
@ -22,6 +20,8 @@ from libs.token import extract_access_token
from models.model import App, ExporleBanner, InstalledApp, RecommendedApp, TrialApp
from services.billing_service import BillingService, LangContentDict
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class InsertExploreAppPayload(BaseModel):
app_id: str = Field(...)
@ -58,7 +58,15 @@ class InsertExploreBannerPayload(BaseModel):
model_config = {"populate_by_name": True}
register_schema_models(console_ns, InsertExploreAppPayload, InsertExploreBannerPayload)
console_ns.schema_model(
InsertExploreAppPayload.__name__,
InsertExploreAppPayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
InsertExploreBannerPayload.__name__,
InsertExploreBannerPayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
def admin_required[**P, R](view: Callable[P, R]) -> Callable[P, R]:
@ -182,7 +190,7 @@ class InsertExploreAppApi(Resource):
@console_ns.response(204, "App removed successfully")
@only_edition_cloud
@admin_required
def delete(self, app_id: UUID):
def delete(self, app_id):
with session_factory.create_session() as session:
recommended_app = session.execute(
select(RecommendedApp).where(RecommendedApp.app_id == str(app_id))
@ -293,7 +301,15 @@ class BatchAddNotificationAccountsPayload(BaseModel):
user_email: list[str] = Field(..., description="List of account email addresses")
register_schema_models(console_ns, UpsertNotificationPayload, BatchAddNotificationAccountsPayload)
console_ns.schema_model(
UpsertNotificationPayload.__name__,
UpsertNotificationPayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
BatchAddNotificationAccountsPayload.__name__,
BatchAddNotificationAccountsPayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
@console_ns.route("/admin/upsert_notification")
@ -395,11 +411,11 @@ class BatchAddNotificationAccountsApi(Resource):
raise BadRequest("Invalid file type. Only CSV (.csv) and TXT (.txt) files are allowed.")
try:
content = file.stream.read().decode("utf-8")
content = file.read().decode("utf-8")
except UnicodeDecodeError:
try:
file.stream.seek(0)
content = file.stream.read().decode("gbk")
file.seek(0)
content = file.read().decode("gbk")
except UnicodeDecodeError:
raise BadRequest("Unable to decode the file. Please use UTF-8 or GBK encoding.")

View File

@ -34,7 +34,7 @@ class AdvancedPromptTemplateList(Resource):
@login_required
@account_initialization_required
def get(self):
args = AdvancedPromptTemplateQuery.model_validate(request.args.to_dict(flat=True))
args = AdvancedPromptTemplateQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
prompt_args: AdvancedPromptTemplateArgs = {
"app_mode": args.app_mode,
"model_mode": args.model_mode,

View File

@ -2,7 +2,6 @@ from flask import request
from flask_restx import Resource, fields
from pydantic import BaseModel, Field, field_validator
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
@ -11,6 +10,8 @@ from libs.login import login_required
from models.model import AppMode
from services.agent_service import AgentService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class AgentLogQuery(BaseModel):
message_id: str = Field(..., description="Message UUID")
@ -22,7 +23,9 @@ class AgentLogQuery(BaseModel):
return uuid_value(value)
register_schema_models(console_ns, AgentLogQuery)
console_ns.schema_model(
AgentLogQuery.__name__, AgentLogQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
@console_ns.route("/apps/<uuid:app_id>/agent/logs")
@ -41,6 +44,6 @@ class AgentLogApi(Resource):
@get_app_model(mode=[AppMode.AGENT_CHAT])
def get(self, app_model):
"""Get agent logs"""
args = AgentLogQuery.model_validate(request.args.to_dict(flat=True))
args = AgentLogQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
return AgentService.get_agent_logs(app_model, args.conversation_id, args.message_id)
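
The model_validate(request.args.to_dict(flat=True)) idiom recurs across this compare: to_dict(flat=True) collapses werkzeug's MultiDict to one string per key, and Pydantic coerces those strings to the declared field types. A minimal sketch outside Flask (PageQuery is a hypothetical stand-in for the real query models):

from pydantic import BaseModel, Field

class PageQuery(BaseModel):  # hypothetical stand-in for the real query models
    page: int = Field(default=1, ge=1)
    limit: int = Field(default=20, ge=1, le=100)
    keyword: str | None = None

# Query strings arrive as {"page": "2", "limit": "50"}; Pydantic coerces the types.
args = PageQuery.model_validate({"page": "2", "limit": "50"})
assert args.page == 2 and args.limit == 50 and args.keyword is None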

View File

@ -1,5 +1,4 @@
from typing import Any, Literal
from uuid import UUID
from flask import abort, make_response, request
from flask_restx import Resource
@ -34,6 +33,8 @@ from services.annotation_service import (
UpsertAnnotationArgs,
)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class AnnotationReplyPayload(BaseModel):
score_threshold: float = Field(..., description="Score threshold for annotation matching")
@ -86,6 +87,17 @@ class AnnotationFilePayload(BaseModel):
return uuid_value(value)
def reg(model: type[BaseModel]) -> None:
console_ns.schema_model(model.__name__, model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(AnnotationReplyPayload)
reg(AnnotationSettingUpdatePayload)
reg(AnnotationListQuery)
reg(CreateAnnotationPayload)
reg(UpdateAnnotationPayload)
reg(AnnotationReplyStatusQuery)
reg(AnnotationFilePayload)
register_schema_models(
console_ns,
Annotation,
@ -93,13 +105,6 @@ register_schema_models(
AnnotationExportList,
AnnotationHitHistory,
AnnotationHitHistoryList,
AnnotationReplyPayload,
AnnotationSettingUpdatePayload,
AnnotationListQuery,
CreateAnnotationPayload,
UpdateAnnotationPayload,
AnnotationReplyStatusQuery,
AnnotationFilePayload,
)
@ -116,7 +121,8 @@ class AnnotationReplyActionApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check("annotation")
@edit_permission_required
def post(self, app_id: UUID, action: Literal["enable", "disable"]):
def post(self, app_id, action: Literal["enable", "disable"]):
app_id = str(app_id)
args = AnnotationReplyPayload.model_validate(console_ns.payload)
match action:
case "enable":
@ -125,9 +131,9 @@ class AnnotationReplyActionApi(Resource):
"embedding_provider_name": args.embedding_provider_name,
"embedding_model_name": args.embedding_model_name,
}
result = AppAnnotationService.enable_app_annotation(enable_args, str(app_id))
result = AppAnnotationService.enable_app_annotation(enable_args, app_id)
case "disable":
result = AppAnnotationService.disable_app_annotation(str(app_id))
result = AppAnnotationService.disable_app_annotation(app_id)
return result, 200
@ -142,8 +148,9 @@ class AppAnnotationSettingDetailApi(Resource):
@login_required
@account_initialization_required
@edit_permission_required
def get(self, app_id: UUID):
result = AppAnnotationService.get_app_annotation_setting_by_app_id(str(app_id))
def get(self, app_id):
app_id = str(app_id)
result = AppAnnotationService.get_app_annotation_setting_by_app_id(app_id)
return result, 200
@ -159,13 +166,14 @@ class AppAnnotationSettingUpdateApi(Resource):
@login_required
@account_initialization_required
@edit_permission_required
def post(self, app_id: UUID, annotation_setting_id):
def post(self, app_id, annotation_setting_id):
app_id = str(app_id)
annotation_setting_id = str(annotation_setting_id)
args = AnnotationSettingUpdatePayload.model_validate(console_ns.payload)
setting_args: UpdateAnnotationSettingArgs = {"score_threshold": args.score_threshold}
result = AppAnnotationService.update_app_annotation_setting(str(app_id), annotation_setting_id, setting_args)
result = AppAnnotationService.update_app_annotation_setting(app_id, annotation_setting_id, setting_args)
return result, 200
@ -181,7 +189,7 @@ class AnnotationReplyActionStatusApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check("annotation")
@edit_permission_required
def get(self, app_id: UUID, job_id, action):
def get(self, app_id, job_id, action):
job_id = str(job_id)
app_annotation_job_key = f"{action}_app_annotation_job_{str(job_id)}"
cache_result = redis_client.get(app_annotation_job_key)
@ -209,13 +217,14 @@ class AnnotationApi(Resource):
@login_required
@account_initialization_required
@edit_permission_required
def get(self, app_id: UUID):
args = AnnotationListQuery.model_validate(request.args.to_dict(flat=True))
def get(self, app_id):
args = AnnotationListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
page = args.page
limit = args.limit
keyword = args.keyword
annotation_list, total = AppAnnotationService.get_annotation_list_by_app_id(str(app_id), page, limit, keyword)
app_id = str(app_id)
annotation_list, total = AppAnnotationService.get_annotation_list_by_app_id(app_id, page, limit, keyword)
annotation_models = TypeAdapter(list[Annotation]).validate_python(annotation_list, from_attributes=True)
response = AnnotationList(
data=annotation_models,
@ -237,7 +246,8 @@ class AnnotationApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check("annotation")
@edit_permission_required
def post(self, app_id: UUID):
def post(self, app_id):
app_id = str(app_id)
args = CreateAnnotationPayload.model_validate(console_ns.payload)
upsert_args: UpsertAnnotationArgs = {}
if args.answer is not None:
@ -248,14 +258,15 @@ class AnnotationApi(Resource):
upsert_args["message_id"] = args.message_id
if args.question is not None:
upsert_args["question"] = args.question
annotation = AppAnnotationService.up_insert_app_annotation_from_message(upsert_args, str(app_id))
annotation = AppAnnotationService.up_insert_app_annotation_from_message(upsert_args, app_id)
return Annotation.model_validate(annotation, from_attributes=True).model_dump(mode="json")
@setup_required
@login_required
@account_initialization_required
@edit_permission_required
def delete(self, app_id: UUID):
def delete(self, app_id):
app_id = str(app_id)
# Use request.args.getlist to get annotation_ids array directly
annotation_ids = request.args.getlist("annotation_id")
@ -269,11 +280,11 @@ class AnnotationApi(Resource):
"message": "annotation_ids are required if the parameter is provided.",
}, 400
result = AppAnnotationService.delete_app_annotations_in_batch(str(app_id), annotation_ids)
result = AppAnnotationService.delete_app_annotations_in_batch(app_id, annotation_ids)
return result, 204
# If no annotation_ids are provided, handle clearing all annotations
else:
AppAnnotationService.clear_all_annotations(str(app_id))
AppAnnotationService.clear_all_annotations(app_id)
return {"result": "success"}, 204
@ -292,8 +303,9 @@ class AnnotationExportApi(Resource):
@login_required
@account_initialization_required
@edit_permission_required
def get(self, app_id: UUID):
annotation_list = AppAnnotationService.export_annotation_list_by_app_id(str(app_id))
def get(self, app_id):
app_id = str(app_id)
annotation_list = AppAnnotationService.export_annotation_list_by_app_id(app_id)
annotation_models = TypeAdapter(list[Annotation]).validate_python(annotation_list, from_attributes=True)
response_data = AnnotationExportList(data=annotation_models).model_dump(mode="json")
@ -319,22 +331,26 @@ class AnnotationUpdateDeleteApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check("annotation")
@edit_permission_required
def post(self, app_id: UUID, annotation_id: UUID):
def post(self, app_id, annotation_id):
app_id = str(app_id)
annotation_id = str(annotation_id)
args = UpdateAnnotationPayload.model_validate(console_ns.payload)
update_args: UpdateAnnotationArgs = {}
if args.answer is not None:
update_args["answer"] = args.answer
if args.question is not None:
update_args["question"] = args.question
annotation = AppAnnotationService.update_app_annotation_directly(update_args, str(app_id), str(annotation_id))
annotation = AppAnnotationService.update_app_annotation_directly(update_args, app_id, annotation_id)
return Annotation.model_validate(annotation, from_attributes=True).model_dump(mode="json")
@setup_required
@login_required
@account_initialization_required
@edit_permission_required
def delete(self, app_id: UUID, annotation_id: UUID):
AppAnnotationService.delete_app_annotation(str(app_id), str(annotation_id))
def delete(self, app_id, annotation_id):
app_id = str(app_id)
annotation_id = str(annotation_id)
AppAnnotationService.delete_app_annotation(app_id, annotation_id)
return {"result": "success"}, 204
@ -355,9 +371,11 @@ class AnnotationBatchImportApi(Resource):
@annotation_import_rate_limit
@annotation_import_concurrency_limit
@edit_permission_required
def post(self, app_id: UUID):
def post(self, app_id):
from configs import dify_config
app_id = str(app_id)
# check file
if "file" not in request.files:
raise NoFileUploadedError()
@ -373,9 +391,9 @@ class AnnotationBatchImportApi(Resource):
raise ValueError("Invalid file type. Only CSV files are allowed")
# Check file size before processing
file.stream.seek(0, 2) # Seek to end of file
file_size = file.stream.tell()
file.stream.seek(0) # Reset to beginning
file.seek(0, 2) # Seek to end of file
file_size = file.tell()
file.seek(0) # Reset to beginning
max_size_bytes = dify_config.ANNOTATION_IMPORT_FILE_SIZE_LIMIT * 1024 * 1024
if file_size > max_size_bytes:
@ -388,7 +406,7 @@ class AnnotationBatchImportApi(Resource):
if file_size == 0:
raise ValueError("The uploaded file is empty")
return AppAnnotationService.batch_import_app_annotations(str(app_id), file)
return AppAnnotationService.batch_import_app_annotations(app_id, file)
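
The seek/tell pair above measures the upload without reading it into memory; seek(0, 2) is seek-relative-to-end (os.SEEK_END). A standalone sketch of the same check:

import io

def stream_size(stream) -> int:  # hypothetical helper illustrating the pattern
    stream.seek(0, 2)        # jump to end-of-stream (whence=2, i.e. os.SEEK_END)
    size = stream.tell()     # the offset at the end is the total size in bytes
    stream.seek(0)           # rewind so later reads still see the full file
    return size

assert stream_size(io.BytesIO(b"abc")) == 3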
@console_ns.route("/apps/<uuid:app_id>/annotations/batch-import-status/<uuid:job_id>")
@ -403,7 +421,8 @@ class AnnotationBatchImportStatusApi(Resource):
@account_initialization_required
@cloud_edition_billing_resource_check("annotation")
@edit_permission_required
def get(self, app_id: UUID, job_id: UUID):
def get(self, app_id, job_id):
job_id = str(job_id)
indexing_cache_key = f"app_annotation_batch_import_{str(job_id)}"
cache_result = redis_client.get(indexing_cache_key)
if cache_result is None:
@ -437,11 +456,13 @@ class AnnotationHitHistoryListApi(Resource):
@login_required
@account_initialization_required
@edit_permission_required
def get(self, app_id: UUID, annotation_id: UUID):
def get(self, app_id, annotation_id):
page = request.args.get("page", default=1, type=int)
limit = request.args.get("limit", default=20, type=int)
app_id = str(app_id)
annotation_id = str(annotation_id)
annotation_hit_history_list, total = AppAnnotationService.get_annotation_hit_histories(
str(app_id), str(annotation_id), page, limit
app_id, annotation_id, page, limit
)
history_models = TypeAdapter(list[AnnotationHitHistory]).validate_python(
annotation_hit_history_list, from_attributes=True

View File

@ -1,16 +1,13 @@
import logging
import re
import uuid
from datetime import datetime
from typing import Any, Literal
from uuid import UUID
from flask import request
from flask_restx import Resource
from pydantic import AliasChoices, BaseModel, Field, computed_field, field_validator
from sqlalchemy import select
from sqlalchemy.orm import Session
from werkzeug.datastructures import MultiDict
from werkzeug.exceptions import BadRequest
from controllers.common.helpers import FileInfo
@ -26,7 +23,6 @@ from controllers.console.wraps import (
is_admin_or_owner_required,
setup_required,
)
from core.db.session_factory import session_factory
from core.ops.ops_trace_manager import OpsTraceManager
from core.rag.entities import PreProcessingRule, Rule, Segmentation
from core.rag.retrieval.retrieval_methods import RetrievalMethod
@ -39,7 +35,7 @@ from libs.login import current_account_with_tenant, login_required
from models import App, DatasetPermissionEnum, Workflow
from models.model import IconType
from services.app_dsl_service import AppDslService
from services.app_service import AppListParams, AppService, CreateAppParams
from services.app_service import AppService
from services.enterprise.enterprise_service import EnterpriseService
from services.entities.dsl_entities import ImportMode, ImportStatus
from services.entities.knowledge_entities.knowledge_entities import (
@ -61,7 +57,6 @@ ALLOW_CREATE_APP_MODES = ["chat", "agent-chat", "advanced-chat", "workflow", "co
register_enum_models(console_ns, IconType)
_logger = logging.getLogger(__name__)
_TAG_IDS_BRACKET_PATTERN = re.compile(r"^tag_ids\[(\d+)\]$")
class AppListQuery(BaseModel):
@ -71,19 +66,22 @@ class AppListQuery(BaseModel):
default="all", description="App mode filter"
)
name: str | None = Field(default=None, description="Filter by app name")
tag_ids: list[str] | None = Field(default=None, description="Filter by tag IDs")
tag_ids: list[str] | None = Field(default=None, description="Comma-separated tag IDs")
is_created_by_me: bool | None = Field(default=None, description="Filter by creator")
@field_validator("tag_ids", mode="before")
@classmethod
def validate_tag_ids(cls, value: list[str] | None) -> list[str] | None:
def validate_tag_ids(cls, value: str | list[str] | None) -> list[str] | None:
if not value:
return None
if not isinstance(value, list):
raise ValueError("Unsupported tag_ids type.")
if isinstance(value, str):
items = [item.strip() for item in value.split(",") if item.strip()]
elif isinstance(value, list):
items = [str(item).strip() for item in value if item and str(item).strip()]
else:
raise TypeError("Unsupported tag_ids type.")
items = [str(item).strip() for item in value if item and str(item).strip()]
if not items:
return None
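
The reworked validator above accepts both a comma-separated query string and an already-split list. A condensed sketch of the same mode="before" validator (UUID validation elided; TagQuery is a hypothetical stand-in):

from pydantic import BaseModel, field_validator

class TagQuery(BaseModel):  # hypothetical stand-in for AppListQuery
    tag_ids: list[str] | None = None

    @field_validator("tag_ids", mode="before")
    @classmethod
    def split_tag_ids(cls, value):
        if not value:
            return None
        if isinstance(value, str):
            return [item.strip() for item in value.split(",") if item.strip()] or None
        return value

assert TagQuery.model_validate({"tag_ids": "a, b,"}).tag_ids == ["a", "b"]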
@ -93,26 +91,6 @@ class AppListQuery(BaseModel):
raise ValueError("Invalid UUID format in tag_ids.") from exc
def _normalize_app_list_query_args(query_args: MultiDict[str, str]) -> dict[str, str | list[str]]:
normalized: dict[str, str | list[str]] = {}
indexed_tag_ids: list[tuple[int, str]] = []
for key in query_args:
match = _TAG_IDS_BRACKET_PATTERN.fullmatch(key)
if match:
indexed_tag_ids.extend((int(match.group(1)), value) for value in query_args.getlist(key))
continue
value = query_args.get(key)
if value is not None:
normalized[key] = value
if indexed_tag_ids:
normalized["tag_ids"] = [value for _, value in sorted(indexed_tag_ids)]
return normalized
class CreateAppPayload(BaseModel):
name: str = Field(..., min_length=1, description="App name")
description: str | None = Field(default=None, description="App description (max 400 chars)", max_length=400)
@ -477,19 +455,12 @@ class AppListApi(Resource):
"""Get app list"""
current_user, current_tenant_id = current_account_with_tenant()
args = AppListQuery.model_validate(_normalize_app_list_query_args(request.args))
params = AppListParams(
page=args.page,
limit=args.limit,
mode=args.mode,
name=args.name,
tag_ids=args.tag_ids,
is_created_by_me=args.is_created_by_me,
)
args = AppListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
args_dict = args.model_dump()
# get app list
app_service = AppService()
app_pagination = app_service.get_paginate_apps(current_user.id, current_tenant_id, params)
app_pagination = app_service.get_paginate_apps(current_user.id, current_tenant_id, args_dict)
if not app_pagination:
empty = AppPagination(page=args.page, limit=args.limit, total=0, has_more=False, data=[])
return empty.model_dump(mode="json"), 200
@ -553,17 +524,9 @@ class AppListApi(Resource):
"""Create app"""
current_user, current_tenant_id = current_account_with_tenant()
args = CreateAppPayload.model_validate(console_ns.payload)
params = CreateAppParams(
name=args.name,
description=args.description,
mode=args.mode,
icon_type=args.icon_type,
icon=args.icon,
icon_background=args.icon_background,
)
app_service = AppService()
app = app_service.create_app(current_tenant_id, params, current_user)
app = app_service.create_app(current_tenant_id, args.model_dump(), current_user)
app_detail = AppDetail.model_validate(app, from_attributes=True)
return app_detail.model_dump(mode="json"), 201
@ -717,7 +680,7 @@ class AppExportApi(Resource):
@edit_permission_required
def get(self, app_model):
"""Export app"""
args = AppExportQuery.model_validate(request.args.to_dict(flat=True))
args = AppExportQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
payload = AppExportResponse(
data=AppDslService.export_dsl(
@ -729,32 +692,6 @@ class AppExportApi(Resource):
return payload.model_dump(mode="json")
@console_ns.route("/apps/<uuid:app_id>/publish-to-creators-platform")
class AppPublishToCreatorsPlatformApi(Resource):
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=None)
@edit_permission_required
def post(self, app_model):
"""Publish app to Creators Platform"""
from configs import dify_config
from core.helper.creators import get_redirect_url, upload_dsl
if not dify_config.CREATORS_PLATFORM_FEATURES_ENABLED:
return {"error": "Creators Platform features are not enabled"}, 403
current_user, _ = current_account_with_tenant()
dsl_content = AppDslService.export_dsl(app_model=app_model, include_secret=False)
dsl_bytes = dsl_content.encode("utf-8")
claim_code = upload_dsl(dsl_bytes)
redirect_url = get_redirect_url(str(current_user.id), claim_code)
return {"redirect_url": redirect_url}
@console_ns.route("/apps/<uuid:app_id>/name")
class AppNameApi(Resource):
@console_ns.doc("check_app_name")
@ -856,10 +793,9 @@ class AppTraceApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self, app_id: UUID):
def get(self, app_id):
"""Get app trace"""
with session_factory.create_session() as session:
app_trace_config = OpsTraceManager.get_app_tracing_config(str(app_id), session)
app_trace_config = OpsTraceManager.get_app_tracing_config(app_id=app_id)
return app_trace_config
@ -873,12 +809,12 @@ class AppTraceApi(Resource):
@login_required
@account_initialization_required
@edit_permission_required
def post(self, app_id: UUID):
def post(self, app_id):
# add app trace
args = AppTracePayload.model_validate(console_ns.payload)
OpsTraceManager.update_app_tracing_config(
app_id=str(app_id),
app_id=app_id,
enabled=args.enabled,
tracing_provider=args.tracing_provider,
)

View File

@ -2,7 +2,7 @@ from flask_restx import Resource
from pydantic import BaseModel, Field
from sqlalchemy.orm import Session
from controllers.common.schema import register_enum_models, register_schema_models
from controllers.common.schema import register_schema_models
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import (
account_initialization_required,
@ -33,7 +33,6 @@ class AppImportPayload(BaseModel):
app_id: str | None = Field(None)
register_enum_models(console_ns, ImportStatus)
register_schema_models(console_ns, AppImportPayload, Import, CheckDependenciesResult)

View File

@ -173,7 +173,7 @@ class TextModesApi(Resource):
@account_initialization_required
def get(self, app_model):
try:
args = TextToSpeechVoiceQuery.model_validate(request.args.to_dict(flat=True))
args = TextToSpeechVoiceQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
response = AudioService.transcript_tts_voices(
tenant_id=app_model.tenant_id,

View File

@ -7,7 +7,6 @@ from pydantic import BaseModel, Field, field_validator
from werkzeug.exceptions import InternalServerError, NotFound
import services
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.app.error import (
AppUnavailableError,
@ -38,6 +37,7 @@ from services.app_task_service import AppTaskService
from services.errors.llm import InvokeRateLimitError
logger = logging.getLogger(__name__)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class BaseMessagePayload(BaseModel):
@ -65,7 +65,13 @@ class ChatMessagePayload(BaseMessagePayload):
return uuid_value(value)
register_schema_models(console_ns, CompletionMessagePayload, ChatMessagePayload)
console_ns.schema_model(
CompletionMessagePayload.__name__,
CompletionMessagePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
ChatMessagePayload.__name__, ChatMessagePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
# define completion message api for user

View File

@ -39,6 +39,8 @@ from models.model import AppMode
from services.conversation_service import ConversationService
from services.errors.conversation import ConversationNotExistsError
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class BaseConversationQuery(BaseModel):
keyword: str | None = Field(default=None, description="Search keyword")
@ -68,6 +70,15 @@ class ChatConversationQuery(BaseConversationQuery):
)
console_ns.schema_model(
CompletionConversationQuery.__name__,
CompletionConversationQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
ChatConversationQuery.__name__,
ChatConversationQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
register_schema_models(
console_ns,
CompletionConversationQuery,
@ -78,8 +89,6 @@ register_schema_models(
ConversationWithSummaryPaginationResponse,
ConversationDetailResponse,
ResultResponse,
CompletionConversationQuery,
ChatConversationQuery,
)
@ -98,7 +107,7 @@ class CompletionConversationApi(Resource):
@edit_permission_required
def get(self, app_model):
current_user, _ = current_account_with_tenant()
args = CompletionConversationQuery.model_validate(request.args.to_dict(flat=True))
args = CompletionConversationQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
query = sa.select(Conversation).where(
Conversation.app_id == app_model.id, Conversation.mode == "completion", Conversation.is_deleted.is_(False)
@ -212,7 +221,7 @@ class ChatConversationApi(Resource):
@edit_permission_required
def get(self, app_model):
current_user, _ = current_account_with_tenant()
args = ChatConversationQuery.model_validate(request.args.to_dict(flat=True))
args = ChatConversationQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
subquery = (
sa.select(Conversation.id.label("conversation_id"), EndUser.session_id.label("from_end_user_session_id"))

View File

@ -100,7 +100,7 @@ class ConversationVariablesApi(Resource):
@account_initialization_required
@get_app_model(mode=AppMode.ADVANCED_CHAT)
def get(self, app_model):
args = ConversationVariablesQuery.model_validate(request.args.to_dict(flat=True))
args = ConversationVariablesQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
stmt = (
select(ConversationVariable)

View File

@ -3,7 +3,6 @@ from collections.abc import Sequence
from flask_restx import Resource
from pydantic import BaseModel, Field
from controllers.common.schema import register_enum_models, register_schema_models
from controllers.console import console_ns
from controllers.console.app.error import (
CompletionRequestError,
@ -20,12 +19,13 @@ from core.helper.code_executor.python3.python3_code_provider import Python3CodeP
from core.llm_generator.entities import RuleCodeGeneratePayload, RuleGeneratePayload, RuleStructuredOutputPayload
from core.llm_generator.llm_generator import LLMGenerator
from extensions.ext_database import db
from graphon.model_runtime.entities.llm_entities import LLMMode
from graphon.model_runtime.errors.invoke import InvokeError
from libs.login import current_account_with_tenant, login_required
from models import App
from services.workflow_service import WorkflowService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class InstructionGeneratePayload(BaseModel):
flow_id: str = Field(..., description="Workflow/Flow ID")
@ -41,16 +41,16 @@ class InstructionTemplatePayload(BaseModel):
type: str = Field(..., description="Instruction template type")
register_enum_models(console_ns, LLMMode)
register_schema_models(
console_ns,
RuleGeneratePayload,
RuleCodeGeneratePayload,
RuleStructuredOutputPayload,
InstructionGeneratePayload,
InstructionTemplatePayload,
ModelConfig,
)
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(RuleGeneratePayload)
reg(RuleCodeGeneratePayload)
reg(RuleStructuredOutputPayload)
reg(InstructionGeneratePayload)
reg(InstructionTemplatePayload)
reg(ModelConfig)
@console_ns.route("/rule-generate")

View File

@ -1,18 +1,18 @@
from typing import Any
from uuid import UUID
from flask import request
from flask_restx import Resource, fields
from pydantic import BaseModel, Field
from werkzeug.exceptions import BadRequest
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.app.error import TracingConfigCheckError, TracingConfigIsExist, TracingConfigNotExist
from controllers.console.wraps import account_initialization_required, setup_required
from libs.login import login_required
from services.ops_service import OpsService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class TraceProviderQuery(BaseModel):
tracing_provider: str = Field(..., description="Tracing provider name")
@ -23,7 +23,13 @@ class TraceConfigPayload(BaseModel):
tracing_config: dict[str, Any] = Field(..., description="Tracing configuration data")
register_schema_models(console_ns, TraceProviderQuery, TraceConfigPayload)
console_ns.schema_model(
TraceProviderQuery.__name__,
TraceProviderQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
TraceConfigPayload.__name__, TraceConfigPayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
@console_ns.route("/apps/<uuid:app_id>/trace-config")
@ -43,11 +49,11 @@ class TraceAppConfigApi(Resource):
@setup_required
@login_required
@account_initialization_required
def get(self, app_id: UUID):
args = TraceProviderQuery.model_validate(request.args.to_dict(flat=True))
def get(self, app_id):
args = TraceProviderQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
try:
trace_config = OpsService.get_tracing_app_config(app_id=str(app_id), tracing_provider=args.tracing_provider)
trace_config = OpsService.get_tracing_app_config(app_id=app_id, tracing_provider=args.tracing_provider)
if not trace_config:
return {"has_not_configured": True}
return trace_config
@ -65,13 +71,13 @@ class TraceAppConfigApi(Resource):
@setup_required
@login_required
@account_initialization_required
def post(self, app_id: UUID):
def post(self, app_id):
"""Create a new trace app configuration"""
args = TraceConfigPayload.model_validate(console_ns.payload)
try:
result = OpsService.create_tracing_app_config(
app_id=str(app_id), tracing_provider=args.tracing_provider, tracing_config=args.tracing_config
app_id=app_id, tracing_provider=args.tracing_provider, tracing_config=args.tracing_config
)
if not result:
raise TracingConfigIsExist()
@ -90,13 +96,13 @@ class TraceAppConfigApi(Resource):
@setup_required
@login_required
@account_initialization_required
def patch(self, app_id: UUID):
def patch(self, app_id):
"""Update an existing trace app configuration"""
args = TraceConfigPayload.model_validate(console_ns.payload)
try:
result = OpsService.update_tracing_app_config(
app_id=str(app_id), tracing_provider=args.tracing_provider, tracing_config=args.tracing_config
app_id=app_id, tracing_provider=args.tracing_provider, tracing_config=args.tracing_config
)
if not result:
raise TracingConfigNotExist()
@ -113,12 +119,12 @@ class TraceAppConfigApi(Resource):
@setup_required
@login_required
@account_initialization_required
def delete(self, app_id: UUID):
def delete(self, app_id):
"""Delete an existing trace app configuration"""
args = TraceProviderQuery.model_validate(request.args.to_dict(flat=True))
args = TraceProviderQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
try:
result = OpsService.delete_tracing_app_config(app_id=str(app_id), tracing_provider=args.tracing_provider)
result = OpsService.delete_tracing_app_config(app_id=app_id, tracing_provider=args.tracing_provider)
if not result:
raise TracingConfigNotExist()
return {"result": "success"}, 204

View File

@ -5,7 +5,6 @@ from flask import abort, jsonify, request
from flask_restx import Resource, fields
from pydantic import BaseModel, Field, field_validator
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
@ -16,6 +15,8 @@ from libs.helper import convert_datetime_to_date
from libs.login import current_account_with_tenant, login_required
from models import AppMode
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class StatisticTimeRangeQuery(BaseModel):
start: str | None = Field(default=None, description="Start date (YYYY-MM-DD HH:MM)")
@ -29,7 +30,10 @@ class StatisticTimeRangeQuery(BaseModel):
return value
register_schema_models(console_ns, StatisticTimeRangeQuery)
console_ns.schema_model(
StatisticTimeRangeQuery.__name__,
StatisticTimeRangeQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
@console_ns.route("/apps/<uuid:app_id>/statistics/daily-messages")
@ -50,7 +54,7 @@ class DailyMessageStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True))
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT
@ -107,7 +111,7 @@ class DailyConversationStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True))
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT
@ -163,7 +167,7 @@ class DailyTerminalsStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True))
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT
@ -220,7 +224,7 @@ class DailyTokenCostStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True))
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT
@ -280,7 +284,7 @@ class AverageSessionInteractionStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True))
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("c.created_at")
sql_query = f"""SELECT
@ -356,7 +360,7 @@ class UserSatisfactionRateStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True))
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("m.created_at")
sql_query = f"""SELECT
@ -422,7 +426,7 @@ class AverageResponseTimeStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True))
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT
@ -478,7 +482,7 @@ class TokensPerSecondStatistic(Resource):
@account_initialization_required
def get(self, app_model):
account, _ = current_account_with_tenant()
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True))
args = StatisticTimeRangeQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
converted_created_at = convert_datetime_to_date("created_at")
sql_query = f"""SELECT

View File

@ -11,9 +11,9 @@ from werkzeug.exceptions import BadRequest, Forbidden, InternalServerError, NotF
import services
from controllers.common.controller_schemas import DefaultBlockConfigQuery, WorkflowListQuery, WorkflowUpdatePayload
from controllers.common.schema import register_response_schema_model, register_schema_models
from controllers.console import console_ns
from controllers.console.app.error import ConversationCompletedError, DraftWorkflowNotExist, DraftWorkflowNotSync
from controllers.console.app.workflow_run import workflow_run_node_execution_model
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required
from controllers.web.error import InvokeRateLimitError as InvokeRateLimitHttpError
@ -37,7 +37,6 @@ from factories import file_factory, variable_factory
from fields.member_fields import simple_account_fields
from fields.online_user_fields import online_user_list_fields
from fields.workflow_fields import workflow_fields, workflow_pagination_fields
from fields.workflow_run_fields import WorkflowRunNodeExecutionResponse
from graphon.enums import NodeType
from graphon.file import File
from graphon.file import helpers as file_helpers
@ -57,13 +56,11 @@ from services.errors.llm import InvokeRateLimitError
from services.workflow_service import DraftWorkflowDeletionError, WorkflowInUseError, WorkflowService
logger = logging.getLogger(__name__)
_file_access_controller = DatabaseFileAccessController()
LISTENING_RETRY_IN = 2000
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
RESTORE_SOURCE_WORKFLOW_MUST_BE_PUBLISHED_MESSAGE = "source workflow must be published"
MAX_WORKFLOW_ONLINE_USERS_REQUEST_IDS = 1000
WORKFLOW_ONLINE_USERS_REDIS_BATCH_SIZE = 50
MAX_WORKFLOW_ONLINE_USERS_QUERY_IDS = 50
# Register models for flask_restx to avoid dict type issues in Swagger
# Register in dependency order: base models first, then dependent models
@ -161,13 +158,8 @@ class WorkflowFeaturesPayload(BaseModel):
features: dict[str, Any] = Field(..., description="Workflow feature configuration")
class WorkflowOnlineUsersPayload(BaseModel):
app_ids: list[str] = Field(default_factory=list, description="App IDs")
@field_validator("app_ids")
@classmethod
def normalize_app_ids(cls, app_ids: list[str]) -> list[str]:
return list(dict.fromkeys(app_id.strip() for app_id in app_ids if app_id.strip()))
class WorkflowOnlineUsersQuery(BaseModel):
app_ids: str = Field(..., description="Comma-separated app IDs")
class DraftWorkflowTriggerRunPayload(BaseModel):
@ -178,25 +170,25 @@ class DraftWorkflowTriggerRunAllPayload(BaseModel):
node_ids: list[str]
register_schema_models(
console_ns,
SyncDraftWorkflowPayload,
AdvancedChatWorkflowRunPayload,
IterationNodeRunPayload,
LoopNodeRunPayload,
DraftWorkflowRunPayload,
DraftWorkflowNodeRunPayload,
PublishWorkflowPayload,
DefaultBlockConfigQuery,
ConvertToWorkflowPayload,
WorkflowListQuery,
WorkflowUpdatePayload,
WorkflowFeaturesPayload,
WorkflowOnlineUsersPayload,
DraftWorkflowTriggerRunPayload,
DraftWorkflowTriggerRunAllPayload,
)
register_response_schema_model(console_ns, WorkflowRunNodeExecutionResponse)
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(SyncDraftWorkflowPayload)
reg(AdvancedChatWorkflowRunPayload)
reg(IterationNodeRunPayload)
reg(LoopNodeRunPayload)
reg(DraftWorkflowRunPayload)
reg(DraftWorkflowNodeRunPayload)
reg(PublishWorkflowPayload)
reg(DefaultBlockConfigQuery)
reg(ConvertToWorkflowPayload)
reg(WorkflowListQuery)
reg(WorkflowUpdatePayload)
reg(WorkflowFeaturesPayload)
reg(WorkflowOnlineUsersQuery)
reg(DraftWorkflowTriggerRunPayload)
reg(DraftWorkflowTriggerRunAllPayload)
# TODO(QuantumGhost): Refactor existing node run API to handle file parameter parsing
@ -542,12 +534,9 @@ class HumanInputDeliveryTestPayload(BaseModel):
)
register_schema_models(
console_ns,
HumanInputFormPreviewPayload,
HumanInputFormSubmitPayload,
HumanInputDeliveryTestPayload,
)
reg(HumanInputFormPreviewPayload)
reg(HumanInputFormSubmitPayload)
reg(HumanInputDeliveryTestPayload)
@console_ns.route("/apps/<uuid:app_id>/advanced-chat/workflows/draft/human-input/nodes/<string:node_id>/form/preview")
@ -765,17 +754,14 @@ class DraftWorkflowNodeRunApi(Resource):
@console_ns.doc(description="Run draft workflow node")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.expect(console_ns.models[DraftWorkflowNodeRunPayload.__name__])
@console_ns.response(
200,
"Node run started successfully",
console_ns.models[WorkflowRunNodeExecutionResponse.__name__],
)
@console_ns.response(200, "Node run started successfully", workflow_run_node_execution_model)
@console_ns.response(403, "Permission denied")
@console_ns.response(404, "Node not found")
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
@marshal_with(workflow_run_node_execution_model)
@edit_permission_required
def post(self, app_model: App, node_id: str):
"""
@ -807,9 +793,7 @@ class DraftWorkflowNodeRunApi(Resource):
files=files,
)
return WorkflowRunNodeExecutionResponse.model_validate(
workflow_node_execution, from_attributes=True
).model_dump(mode="json")
return workflow_node_execution
@console_ns.route("/apps/<uuid:app_id>/workflows/publish")
@ -912,7 +896,7 @@ class DefaultBlockConfigApi(Resource):
"""
Get default block config
"""
args = DefaultBlockConfigQuery.model_validate(request.args.to_dict(flat=True))
args = DefaultBlockConfigQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
filters = None
if args.q:
@ -1005,7 +989,7 @@ class PublishedAllWorkflowApi(Resource):
"""
current_user, _ = current_account_with_tenant()
args = WorkflowListQuery.model_validate(request.args.to_dict(flat=True))
args = WorkflowListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
page = args.page
limit = args.limit
user_id = args.user_id
@ -1153,17 +1137,14 @@ class DraftWorkflowNodeLastRunApi(Resource):
@console_ns.doc("get_draft_workflow_node_last_run")
@console_ns.doc(description="Get last run result for draft workflow node")
@console_ns.doc(params={"app_id": "Application ID", "node_id": "Node ID"})
@console_ns.response(
200,
"Node last run retrieved successfully",
console_ns.models[WorkflowRunNodeExecutionResponse.__name__],
)
@console_ns.response(200, "Node last run retrieved successfully", workflow_run_node_execution_model)
@console_ns.response(404, "Node last run not found")
@console_ns.response(403, "Permission denied")
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
@marshal_with(workflow_run_node_execution_model)
def get(self, app_model: App, node_id: str):
srv = WorkflowService()
workflow = srv.get_draft_workflow(app_model)
@ -1176,7 +1157,7 @@ class DraftWorkflowNodeLastRunApi(Resource):
)
if node_exec is None:
raise NotFound("last run not found")
return WorkflowRunNodeExecutionResponse.model_validate(node_exec, from_attributes=True).model_dump(mode="json")
return node_exec
@console_ns.route("/apps/<uuid:app_id>/workflows/draft/trigger/run")
@ -1403,19 +1384,19 @@ class DraftWorkflowTriggerRunAllApi(Resource):
@console_ns.route("/apps/workflows/online-users")
class WorkflowOnlineUsersApi(Resource):
@console_ns.expect(console_ns.models[WorkflowOnlineUsersPayload.__name__])
@console_ns.expect(console_ns.models[WorkflowOnlineUsersQuery.__name__])
@console_ns.doc("get_workflow_online_users")
@console_ns.doc(description="Get workflow online users")
@setup_required
@login_required
@account_initialization_required
@marshal_with(online_user_list_fields)
def post(self):
args = WorkflowOnlineUsersPayload.model_validate(console_ns.payload or {})
def get(self):
args = WorkflowOnlineUsersQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
app_ids = args.app_ids
if len(app_ids) > MAX_WORKFLOW_ONLINE_USERS_REQUEST_IDS:
raise BadRequest(f"Maximum {MAX_WORKFLOW_ONLINE_USERS_REQUEST_IDS} app_ids are allowed per request.")
app_ids = list(dict.fromkeys(app_id.strip() for app_id in args.app_ids.split(",") if app_id.strip()))
if len(app_ids) > MAX_WORKFLOW_ONLINE_USERS_QUERY_IDS:
raise BadRequest(f"Maximum {MAX_WORKFLOW_ONLINE_USERS_QUERY_IDS} app_ids are allowed per request.")
if not app_ids:
return {"data": []}
@ -1423,24 +1404,13 @@ class WorkflowOnlineUsersApi(Resource):
_, current_tenant_id = current_account_with_tenant()
workflow_service = WorkflowService()
accessible_app_ids = workflow_service.get_accessible_app_ids(app_ids, current_tenant_id)
ordered_accessible_app_ids = [app_id for app_id in app_ids if app_id in accessible_app_ids]
users_json_by_app_id: dict[str, Any] = {}
for start_index in range(0, len(ordered_accessible_app_ids), WORKFLOW_ONLINE_USERS_REDIS_BATCH_SIZE):
app_id_batch = ordered_accessible_app_ids[
start_index : start_index + WORKFLOW_ONLINE_USERS_REDIS_BATCH_SIZE
]
pipe = redis_client.pipeline(transaction=False)
for app_id in app_id_batch:
pipe.hgetall(f"{WORKFLOW_ONLINE_USERS_PREFIX}{app_id}")
users_json_batch = pipe.execute()
for app_id, users_json in zip(app_id_batch, users_json_batch):
users_json_by_app_id[app_id] = users_json
results = []
for app_id in ordered_accessible_app_ids:
users_json = users_json_by_app_id.get(app_id, {})
for app_id in app_ids:
if app_id not in accessible_app_ids:
continue
users_json = redis_client.hgetall(f"{WORKFLOW_ONLINE_USERS_PREFIX}{app_id}")
users = []
for _, user_info_json in users_json.items():

View File

@ -185,7 +185,7 @@ class WorkflowAppLogApi(Resource):
"""
Get workflow app logs
"""
args = WorkflowAppLogQuery.model_validate(request.args.to_dict(flat=True))
args = WorkflowAppLogQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
# get paginate workflow app logs
workflow_app_service = WorkflowAppService()
@ -228,7 +228,7 @@ class WorkflowArchivedLogApi(Resource):
"""
Get workflow archived logs
"""
args = WorkflowAppLogQuery.model_validate(request.args.to_dict(flat=True))
args = WorkflowAppLogQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
workflow_app_service = WorkflowAppService()
with sessionmaker(db.engine, expire_on_commit=False).begin() as session:

View File

@ -23,6 +23,7 @@ from services.account_service import TenantService
from services.workflow_comment_service import WorkflowCommentService
logger = logging.getLogger(__name__)
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class WorkflowCommentCreatePayload(BaseModel):
@ -51,14 +52,13 @@ class WorkflowCommentMentionUsersPayload(BaseModel):
users: list[AccountWithRole]
register_schema_models(
console_ns,
AccountWithRole,
WorkflowCommentMentionUsersPayload,
for model in (
WorkflowCommentCreatePayload,
WorkflowCommentUpdatePayload,
WorkflowCommentReplyPayload,
)
):
console_ns.schema_model(model.__name__, model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
register_schema_models(console_ns, AccountWithRole, WorkflowCommentMentionUsersPayload)
workflow_comment_basic_model = console_ns.model("WorkflowCommentBasic", workflow_comment_basic_fields)
workflow_comment_detail_model = console_ns.model("WorkflowCommentDetail", workflow_comment_detail_fields)

View File

@ -8,7 +8,6 @@ from flask_restx import Resource, fields, marshal, marshal_with
from pydantic import BaseModel, Field
from sqlalchemy.orm import sessionmaker
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.app.error import (
DraftWorkflowNotExist,
@ -34,6 +33,7 @@ from services.workflow_service import WorkflowService
logger = logging.getLogger(__name__)
_file_access_controller = DatabaseFileAccessController()
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class WorkflowDraftVariableListQuery(BaseModel):
@ -56,25 +56,33 @@ class EnvironmentVariableUpdatePayload(BaseModel):
environment_variables: list[dict[str, Any]] = Field(..., description="Environment variables for the draft workflow")
register_schema_models(
console_ns,
WorkflowDraftVariableListQuery,
WorkflowDraftVariableUpdatePayload,
ConversationVariableUpdatePayload,
EnvironmentVariableUpdatePayload,
console_ns.schema_model(
WorkflowDraftVariableListQuery.__name__,
WorkflowDraftVariableListQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
WorkflowDraftVariableUpdatePayload.__name__,
WorkflowDraftVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
ConversationVariableUpdatePayload.__name__,
ConversationVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
console_ns.schema_model(
EnvironmentVariableUpdatePayload.__name__,
EnvironmentVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
def _convert_values_to_json_serializable_object(value: Segment):
match value:
case FileSegment():
return value.value.model_dump()
case ArrayFileSegment():
return [i.model_dump() for i in value.value]
case SegmentGroup():
return [_convert_values_to_json_serializable_object(i) for i in value.value]
case _:
return value.value
if isinstance(value, FileSegment):
return value.value.model_dump()
elif isinstance(value, ArrayFileSegment):
return [i.model_dump() for i in value.value]
elif isinstance(value, SegmentGroup):
return [_convert_values_to_json_serializable_object(i) for i in value.value]
else:
return value.value
def _serialize_var_value(variable: WorkflowDraftVariable):
@ -251,7 +259,7 @@ class WorkflowVariableCollectionApi(Resource):
"""
Get draft workflow
"""
args = WorkflowDraftVariableListQuery.model_validate(request.args.to_dict(flat=True))
args = WorkflowDraftVariableListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
# fetch draft workflow by app_model
workflow_service = WorkflowService()

View File

@ -1,28 +1,30 @@
from datetime import UTC, datetime, timedelta
from typing import Literal, cast
from typing import Literal, TypedDict, cast
from flask import request
from flask_restx import Resource
from flask_restx import Resource, fields, marshal_with
from pydantic import BaseModel, Field, field_validator
from sqlalchemy import select
from sqlalchemy.orm import sessionmaker
from configs import dify_config
from controllers.common.schema import query_params_from_model, register_response_schema_models, register_schema_models
from controllers.console import console_ns
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
from controllers.web.error import NotFoundError
from core.workflow.human_input_forms import load_form_tokens_by_form_id as _load_form_tokens_by_form_id
from extensions.ext_database import db
from fields.base import ResponseModel
from fields.end_user_fields import simple_end_user_fields
from fields.member_fields import simple_account_fields
from fields.workflow_run_fields import (
AdvancedChatWorkflowRunPaginationResponse,
WorkflowRunCountResponse,
WorkflowRunDetailResponse,
WorkflowRunNodeExecutionListResponse,
WorkflowRunNodeExecutionResponse,
WorkflowRunPaginationResponse,
advanced_chat_workflow_run_for_list_fields,
advanced_chat_workflow_run_pagination_fields,
workflow_run_count_fields,
workflow_run_detail_fields,
workflow_run_for_list_fields,
workflow_run_node_execution_fields,
workflow_run_node_execution_list_fields,
workflow_run_pagination_fields,
)
from graphon.entities.pause_reason import HumanInputRequired
from graphon.enums import WorkflowExecutionStatus
@ -50,6 +52,82 @@ def _build_backstage_input_url(form_token: str | None) -> str | None:
WORKFLOW_RUN_STATUS_CHOICES = ["running", "succeeded", "failed", "stopped", "partial-succeeded"]
EXPORT_SIGNED_URL_EXPIRE_SECONDS = 3600
# Register models for flask_restx to avoid dict type issues in Swagger
# Register in dependency order: base models first, then dependent models
# Base models
simple_account_model = console_ns.model("SimpleAccount", simple_account_fields)
simple_end_user_model = console_ns.model("SimpleEndUser", simple_end_user_fields)
# Models that depend on simple_account_fields
workflow_run_for_list_fields_copy = workflow_run_for_list_fields.copy()
workflow_run_for_list_fields_copy["created_by_account"] = fields.Nested(
simple_account_model, attribute="created_by_account", allow_null=True
)
workflow_run_for_list_model = console_ns.model("WorkflowRunForList", workflow_run_for_list_fields_copy)
advanced_chat_workflow_run_for_list_fields_copy = advanced_chat_workflow_run_for_list_fields.copy()
advanced_chat_workflow_run_for_list_fields_copy["created_by_account"] = fields.Nested(
simple_account_model, attribute="created_by_account", allow_null=True
)
advanced_chat_workflow_run_for_list_model = console_ns.model(
"AdvancedChatWorkflowRunForList", advanced_chat_workflow_run_for_list_fields_copy
)
workflow_run_detail_fields_copy = workflow_run_detail_fields.copy()
workflow_run_detail_fields_copy["created_by_account"] = fields.Nested(
simple_account_model, attribute="created_by_account", allow_null=True
)
workflow_run_detail_fields_copy["created_by_end_user"] = fields.Nested(
simple_end_user_model, attribute="created_by_end_user", allow_null=True
)
workflow_run_detail_model = console_ns.model("WorkflowRunDetail", workflow_run_detail_fields_copy)
workflow_run_node_execution_fields_copy = workflow_run_node_execution_fields.copy()
workflow_run_node_execution_fields_copy["created_by_account"] = fields.Nested(
simple_account_model, attribute="created_by_account", allow_null=True
)
workflow_run_node_execution_fields_copy["created_by_end_user"] = fields.Nested(
simple_end_user_model, attribute="created_by_end_user", allow_null=True
)
workflow_run_node_execution_model = console_ns.model(
"WorkflowRunNodeExecution", workflow_run_node_execution_fields_copy
)
# Simple models without nested dependencies
workflow_run_count_model = console_ns.model("WorkflowRunCount", workflow_run_count_fields)
# Pagination models that depend on list models
advanced_chat_workflow_run_pagination_fields_copy = advanced_chat_workflow_run_pagination_fields.copy()
advanced_chat_workflow_run_pagination_fields_copy["data"] = fields.List(
fields.Nested(advanced_chat_workflow_run_for_list_model), attribute="data"
)
advanced_chat_workflow_run_pagination_model = console_ns.model(
"AdvancedChatWorkflowRunPagination", advanced_chat_workflow_run_pagination_fields_copy
)
workflow_run_pagination_fields_copy = workflow_run_pagination_fields.copy()
workflow_run_pagination_fields_copy["data"] = fields.List(fields.Nested(workflow_run_for_list_model), attribute="data")
workflow_run_pagination_model = console_ns.model("WorkflowRunPagination", workflow_run_pagination_fields_copy)
workflow_run_node_execution_list_fields_copy = workflow_run_node_execution_list_fields.copy()
workflow_run_node_execution_list_fields_copy["data"] = fields.List(fields.Nested(workflow_run_node_execution_model))
workflow_run_node_execution_list_model = console_ns.model(
"WorkflowRunNodeExecutionList", workflow_run_node_execution_list_fields_copy
)
workflow_run_export_fields = console_ns.model(
"WorkflowRunExport",
{
"status": fields.String(description="Export status: success/failed"),
"presigned_url": fields.String(description="Pre-signed URL for download", required=False),
"presigned_url_expires_at": fields.String(description="Pre-signed URL expiration time", required=False),
},
)
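
The copy-then-override blocks above exist because flask_restx cannot render a nested field declared as a plain dict; the shared field dicts are copied (so other consumers keep the originals) and the nested keys are replaced with fields.Nested over an already-registered model. A minimal sketch of the pattern (hypothetical field dicts):

from flask_restx import Namespace, fields

ns = Namespace("demo")

account_fields = {"id": fields.String, "name": fields.String}
account_model = ns.model("Account", account_fields)  # register the leaf model first

run_fields = {"id": fields.String}           # shared base dict (hypothetical)
run_fields_copy = run_fields.copy()          # copy so the shared dict is untouched
run_fields_copy["created_by_account"] = fields.Nested(account_model, allow_null=True)
run_model = ns.model("Run", run_fields_copy) # dependent model registered second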
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class WorkflowRunListQuery(BaseModel):
last_id: str | None = Field(default=None, description="Last run ID for pagination")
@ -58,7 +136,7 @@ class WorkflowRunListQuery(BaseModel):
default=None, description="Workflow run status filter"
)
triggered_from: Literal["debugging", "app-run"] | None = Field(
default=None, description="Filter by trigger source: debugging or app-run. Default: debugging"
default=None, description="Filter by trigger source: debugging or app-run"
)
@field_validator("last_id")
@ -73,15 +151,9 @@ class WorkflowRunCountQuery(BaseModel):
status: Literal["running", "succeeded", "failed", "stopped", "partial-succeeded"] | None = Field(
default=None, description="Workflow run status filter"
)
time_range: str | None = Field(
default=None,
description=(
"Filter by time range (optional): e.g., 7d (7 days), 4h (4 hours), "
"30m (30 minutes), 30s (30 seconds). Filters by created_at field."
),
)
time_range: str | None = Field(default=None, description="Time range filter (e.g., 7d, 4h, 30m, 30s)")
triggered_from: Literal["debugging", "app-run"] | None = Field(
default=None, description="Filter by trigger source: debugging or app-run. Default: debugging"
default=None, description="Filter by trigger source: debugging or app-run"
)
@field_validator("time_range")
@ -92,69 +164,56 @@ class WorkflowRunCountQuery(BaseModel):
return time_duration(value)
class WorkflowRunExportResponse(ResponseModel):
status: str = Field(description="Export status: success/failed")
presigned_url: str | None = Field(default=None, description="Pre-signed URL for download")
presigned_url_expires_at: str | None = Field(default=None, description="Pre-signed URL expiration time")
console_ns.schema_model(
WorkflowRunListQuery.__name__, WorkflowRunListQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
console_ns.schema_model(
WorkflowRunCountQuery.__name__,
WorkflowRunCountQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
class HumanInputPauseTypeResponse(ResponseModel):
class HumanInputPauseTypeResponse(TypedDict):
type: Literal["human_input"]
form_id: str
backstage_input_url: str | None = None
backstage_input_url: str | None
class PausedNodeResponse(ResponseModel):
class PausedNodeResponse(TypedDict):
node_id: str
node_title: str
pause_type: HumanInputPauseTypeResponse
class WorkflowPauseDetailsResponse(ResponseModel):
paused_at: str | None = None
class WorkflowPauseDetailsResponse(TypedDict):
paused_at: str | None
paused_nodes: list[PausedNodeResponse]
register_schema_models(
console_ns,
WorkflowRunListQuery,
WorkflowRunCountQuery,
)
register_response_schema_models(
console_ns,
AdvancedChatWorkflowRunPaginationResponse,
WorkflowRunPaginationResponse,
WorkflowRunCountResponse,
WorkflowRunDetailResponse,
WorkflowRunNodeExecutionResponse,
WorkflowRunNodeExecutionListResponse,
WorkflowRunExportResponse,
HumanInputPauseTypeResponse,
PausedNodeResponse,
WorkflowPauseDetailsResponse,
)
@console_ns.route("/apps/<uuid:app_id>/advanced-chat/workflow-runs")
class AdvancedChatAppWorkflowRunListApi(Resource):
@console_ns.doc("get_advanced_chat_workflow_runs")
@console_ns.doc(description="Get advanced chat workflow run list")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.doc(params=query_params_from_model(WorkflowRunListQuery))
@console_ns.response(
200,
"Workflow runs retrieved successfully",
console_ns.models[AdvancedChatWorkflowRunPaginationResponse.__name__],
@console_ns.doc(params={"last_id": "Last run ID for pagination", "limit": "Number of items per page (1-100)"})
@console_ns.doc(
params={"status": "Filter by status (optional): running, succeeded, failed, stopped, partial-succeeded"}
)
@console_ns.doc(
params={"triggered_from": "Filter by trigger source (optional): debugging or app-run. Default: debugging"}
)
@console_ns.expect(console_ns.models[WorkflowRunListQuery.__name__])
@console_ns.response(200, "Workflow runs retrieved successfully", advanced_chat_workflow_run_pagination_model)
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT])
@marshal_with(advanced_chat_workflow_run_pagination_model)
def get(self, app_model: App):
"""
Get advanced chat app workflow run list
"""
args_model = WorkflowRunListQuery.model_validate(request.args.to_dict(flat=True))
args_model = WorkflowRunListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
args: WorkflowRunListArgs = {"limit": args_model.limit}
if args_model.last_id is not None:
args["last_id"] = args_model.last_id
@ -173,9 +232,7 @@ class AdvancedChatAppWorkflowRunListApi(Resource):
app_model=app_model, args=args, triggered_from=triggered_from
)
return AdvancedChatWorkflowRunPaginationResponse.model_validate(result, from_attributes=True).model_dump(
mode="json"
)
return result
@console_ns.route("/apps/<uuid:app_id>/workflow-runs/<uuid:run_id>/export")
@ -183,7 +240,7 @@ class WorkflowRunExportApi(Resource):
@console_ns.doc("get_workflow_run_export_url")
@console_ns.doc(description="Generate a download URL for an archived workflow run.")
@console_ns.doc(params={"app_id": "Application ID", "run_id": "Workflow run ID"})
@console_ns.response(200, "Export URL generated", console_ns.models[WorkflowRunExportResponse.__name__])
@console_ns.response(200, "Export URL generated", workflow_run_export_fields)
@setup_required
@login_required
@account_initialization_required
@ -221,14 +278,11 @@ class WorkflowRunExportApi(Resource):
expires_in=EXPORT_SIGNED_URL_EXPIRE_SECONDS,
)
expires_at = datetime.now(UTC) + timedelta(seconds=EXPORT_SIGNED_URL_EXPIRE_SECONDS)
response = WorkflowRunExportResponse.model_validate(
{
"status": "success",
"presigned_url": presigned_url,
"presigned_url_expires_at": expires_at.isoformat(),
}
)
return response.model_dump(mode="json"), 200
return {
"status": "success",
"presigned_url": presigned_url,
"presigned_url_expires_at": expires_at.isoformat(),
}, 200
@console_ns.route("/apps/<uuid:app_id>/advanced-chat/workflow-runs/count")
@ -236,21 +290,32 @@ class AdvancedChatAppWorkflowRunCountApi(Resource):
@console_ns.doc("get_advanced_chat_workflow_runs_count")
@console_ns.doc(description="Get advanced chat workflow runs count statistics")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.doc(params=query_params_from_model(WorkflowRunCountQuery))
@console_ns.response(
200,
"Workflow runs count retrieved successfully",
console_ns.models[WorkflowRunCountResponse.__name__],
@console_ns.doc(
params={"status": "Filter by status (optional): running, succeeded, failed, stopped, partial-succeeded"}
)
@console_ns.doc(
params={
"time_range": (
"Filter by time range (optional): e.g., 7d (7 days), 4h (4 hours), "
"30m (30 minutes), 30s (30 seconds). Filters by created_at field."
)
}
)
@console_ns.doc(
params={"triggered_from": "Filter by trigger source (optional): debugging or app-run. Default: debugging"}
)
@console_ns.response(200, "Workflow runs count retrieved successfully", workflow_run_count_model)
@console_ns.expect(console_ns.models[WorkflowRunCountQuery.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT])
@marshal_with(workflow_run_count_model)
def get(self, app_model: App):
"""
Get advanced chat workflow runs count statistics
"""
args_model = WorkflowRunCountQuery.model_validate(request.args.to_dict(flat=True))
args_model = WorkflowRunCountQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
args = args_model.model_dump(exclude_none=True)
# Default to DEBUGGING if not specified
@ -268,7 +333,7 @@ class AdvancedChatAppWorkflowRunCountApi(Resource):
triggered_from=triggered_from,
)
return WorkflowRunCountResponse.model_validate(result).model_dump(mode="json")
return result
@console_ns.route("/apps/<uuid:app_id>/workflow-runs")
@ -276,21 +341,25 @@ class WorkflowRunListApi(Resource):
@console_ns.doc("get_workflow_runs")
@console_ns.doc(description="Get workflow run list")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.doc(params=query_params_from_model(WorkflowRunListQuery))
@console_ns.response(
200,
"Workflow runs retrieved successfully",
console_ns.models[WorkflowRunPaginationResponse.__name__],
@console_ns.doc(params={"last_id": "Last run ID for pagination", "limit": "Number of items per page (1-100)"})
@console_ns.doc(
params={"status": "Filter by status (optional): running, succeeded, failed, stopped, partial-succeeded"}
)
@console_ns.doc(
params={"triggered_from": "Filter by trigger source (optional): debugging or app-run. Default: debugging"}
)
@console_ns.response(200, "Workflow runs retrieved successfully", workflow_run_pagination_model)
@console_ns.expect(console_ns.models[WorkflowRunListQuery.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
@marshal_with(workflow_run_pagination_model)
def get(self, app_model: App):
"""
Get workflow run list
"""
args_model = WorkflowRunListQuery.model_validate(request.args.to_dict(flat=True))
args_model = WorkflowRunListQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
args: WorkflowRunListArgs = {"limit": args_model.limit}
if args_model.last_id is not None:
args["last_id"] = args_model.last_id
@ -309,7 +378,7 @@ class WorkflowRunListApi(Resource):
app_model=app_model, args=args, triggered_from=triggered_from
)
return WorkflowRunPaginationResponse.model_validate(result, from_attributes=True).model_dump(mode="json")
return result
@console_ns.route("/apps/<uuid:app_id>/workflow-runs/count")
@ -317,21 +386,32 @@ class WorkflowRunCountApi(Resource):
@console_ns.doc("get_workflow_runs_count")
@console_ns.doc(description="Get workflow runs count statistics")
@console_ns.doc(params={"app_id": "Application ID"})
@console_ns.doc(params=query_params_from_model(WorkflowRunCountQuery))
@console_ns.response(
200,
"Workflow runs count retrieved successfully",
console_ns.models[WorkflowRunCountResponse.__name__],
@console_ns.doc(
params={"status": "Filter by status (optional): running, succeeded, failed, stopped, partial-succeeded"}
)
@console_ns.doc(
params={
"time_range": (
"Filter by time range (optional): e.g., 7d (7 days), 4h (4 hours), "
"30m (30 minutes), 30s (30 seconds). Filters by created_at field."
)
}
)
@console_ns.doc(
params={"triggered_from": "Filter by trigger source (optional): debugging or app-run. Default: debugging"}
)
@console_ns.response(200, "Workflow runs count retrieved successfully", workflow_run_count_model)
@console_ns.expect(console_ns.models[WorkflowRunCountQuery.__name__])
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
@marshal_with(workflow_run_count_model)
def get(self, app_model: App):
"""
Get workflow runs count statistics
"""
args_model = WorkflowRunCountQuery.model_validate(request.args.to_dict(flat=True))
args_model = WorkflowRunCountQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
args = args_model.model_dump(exclude_none=True)
# Default to DEBUGGING for workflow if not specified (backward compatibility)
@ -349,7 +429,7 @@ class WorkflowRunCountApi(Resource):
triggered_from=triggered_from,
)
return WorkflowRunCountResponse.model_validate(result).model_dump(mode="json")
return result
@console_ns.route("/apps/<uuid:app_id>/workflow-runs/<uuid:run_id>")
@ -357,16 +437,13 @@ class WorkflowRunDetailApi(Resource):
@console_ns.doc("get_workflow_run_detail")
@console_ns.doc(description="Get workflow run detail")
@console_ns.doc(params={"app_id": "Application ID", "run_id": "Workflow run ID"})
@console_ns.response(
200,
"Workflow run detail retrieved successfully",
console_ns.models[WorkflowRunDetailResponse.__name__],
)
@console_ns.response(200, "Workflow run detail retrieved successfully", workflow_run_detail_model)
@console_ns.response(404, "Workflow run not found")
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
@marshal_with(workflow_run_detail_model)
def get(self, app_model: App, run_id):
"""
Get workflow run detail
@ -375,10 +452,8 @@ class WorkflowRunDetailApi(Resource):
workflow_run_service = WorkflowRunService()
workflow_run = workflow_run_service.get_workflow_run(app_model=app_model, run_id=run_id)
if workflow_run is None:
raise NotFoundError("Workflow run not found")
return WorkflowRunDetailResponse.model_validate(workflow_run, from_attributes=True).model_dump(mode="json")
return workflow_run
@console_ns.route("/apps/<uuid:app_id>/workflow-runs/<uuid:run_id>/node-executions")
@ -386,16 +461,13 @@ class WorkflowRunNodeExecutionListApi(Resource):
@console_ns.doc("get_workflow_run_node_executions")
@console_ns.doc(description="Get workflow run node execution list")
@console_ns.doc(params={"app_id": "Application ID", "run_id": "Workflow run ID"})
@console_ns.response(
200,
"Node executions retrieved successfully",
console_ns.models[WorkflowRunNodeExecutionListResponse.__name__],
)
@console_ns.response(200, "Node executions retrieved successfully", workflow_run_node_execution_list_model)
@console_ns.response(404, "Workflow run not found")
@setup_required
@login_required
@account_initialization_required
@get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW])
@marshal_with(workflow_run_node_execution_list_model)
def get(self, app_model: App, run_id):
"""
Get workflow run node execution list
@ -410,24 +482,13 @@ class WorkflowRunNodeExecutionListApi(Resource):
user=user,
)
return WorkflowRunNodeExecutionListResponse.model_validate(
{"data": node_executions}, from_attributes=True
).model_dump(mode="json")
return {"data": node_executions}
@console_ns.route("/workflow/<string:workflow_run_id>/pause-details")
class ConsoleWorkflowPauseDetailsApi(Resource):
"""Console API for getting workflow pause details."""
@console_ns.doc("get_workflow_pause_details")
@console_ns.doc(description="Get workflow pause details")
@console_ns.doc(params={"workflow_run_id": "Workflow run ID"})
@console_ns.response(
200,
"Workflow pause details retrieved successfully",
console_ns.models[WorkflowPauseDetailsResponse.__name__],
)
@console_ns.response(404, "Workflow run not found")
@setup_required
@login_required
@account_initialization_required
@ -454,8 +515,11 @@ class ConsoleWorkflowPauseDetailsApi(Resource):
# Check if workflow is suspended
is_paused = workflow_run.status == WorkflowExecutionStatus.PAUSED
if not is_paused:
empty_response = WorkflowPauseDetailsResponse(paused_at=None, paused_nodes=[])
return empty_response.model_dump(mode="json"), 200
empty_response: WorkflowPauseDetailsResponse = {
"paused_at": None,
"paused_nodes": [],
}
return empty_response, 200
pause_entity = workflow_run_repo.get_workflow_pause(workflow_run_id)
pause_reasons = pause_entity.get_pause_reasons() if pause_entity else []
@ -466,25 +530,27 @@ class ConsoleWorkflowPauseDetailsApi(Resource):
# Build response
paused_at = pause_entity.paused_at if pause_entity else None
paused_nodes: list[PausedNodeResponse] = []
response: WorkflowPauseDetailsResponse = {
"paused_at": paused_at.isoformat() + "Z" if paused_at else None,
"paused_nodes": paused_nodes,
}
for reason in pause_reasons:
if isinstance(reason, HumanInputRequired):
paused_nodes.append(
PausedNodeResponse(
node_id=reason.node_id,
node_title=reason.node_title,
pause_type=HumanInputPauseTypeResponse(
type="human_input",
form_id=reason.form_id,
backstage_input_url=_build_backstage_input_url(form_tokens_by_form_id.get(reason.form_id)),
),
)
{
"node_id": reason.node_id,
"node_title": reason.node_title,
"pause_type": {
"type": "human_input",
"form_id": reason.form_id,
"backstage_input_url": _build_backstage_input_url(
form_tokens_by_form_id.get(reason.form_id)
),
},
}
)
else:
raise AssertionError("unimplemented.")
response = WorkflowPauseDetailsResponse(
paused_at=paused_at.isoformat() + "Z" if paused_at else None,
paused_nodes=paused_nodes,
)
return response.model_dump(mode="json"), 200
return response, 200
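Moving the pause-details payloads from Pydantic response models to `TypedDict`s swaps runtime validation for purely static checking: the handler builds plain dicts that Flask serialises directly, and the type checker verifies key names and value types at zero runtime cost. A compressed sketch of the trade-off (simplified shapes, not the project's exact types):

from typing import Literal, TypedDict

class PauseType(TypedDict):
    type: Literal["human_input"]
    form_id: str

class PauseDetails(TypedDict):
    paused_at: str | None
    paused_nodes: list[PauseType]

def empty_details() -> PauseDetails:
    # A misspelled or missing key here fails mypy/pyright, not the request;
    # nothing validates the dict at runtime.
    return {"paused_at": None, "paused_nodes": []}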

View File

@ -3,7 +3,6 @@ from flask_restx import Resource
from pydantic import BaseModel, Field, field_validator
from sqlalchemy.orm import sessionmaker
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.app.wraps import get_app_model
from controllers.console.wraps import account_initialization_required, setup_required
@ -14,6 +13,8 @@ from models.enums import WorkflowRunTriggeredFrom
from models.model import AppMode
from repositories.factory import DifyAPIRepositoryFactory
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class WorkflowStatisticQuery(BaseModel):
start: str | None = Field(default=None, description="Start date and time (YYYY-MM-DD HH:MM)")
@ -27,7 +28,10 @@ class WorkflowStatisticQuery(BaseModel):
return value
register_schema_models(console_ns, WorkflowStatisticQuery)
console_ns.schema_model(
WorkflowStatisticQuery.__name__,
WorkflowStatisticQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
@console_ns.route("/apps/<uuid:app_id>/workflow/statistics/daily-conversations")
@ -49,7 +53,7 @@ class WorkflowDailyRunsStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True))
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
assert account.timezone is not None
@ -89,7 +93,7 @@ class WorkflowDailyTerminalsStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True))
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
assert account.timezone is not None
@ -129,7 +133,7 @@ class WorkflowDailyTokenCostStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True))
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
assert account.timezone is not None
@ -169,7 +173,7 @@ class WorkflowAverageAppInteractionStatistic(Resource):
def get(self, app_model):
account, _ = current_account_with_tenant()
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True))
args = WorkflowStatisticQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
assert account.timezone is not None

View File

@ -94,7 +94,7 @@ class WebhookTriggerApi(Resource):
@console_ns.response(200, "Success", console_ns.models[WebhookTriggerResponse.__name__])
def get(self, app_model: App):
"""Get webhook trigger for a node"""
args = Parser.model_validate(request.args.to_dict(flat=True))
args = Parser.model_validate(request.args.to_dict(flat=True)) # type: ignore
node_id = args.node_id

View File

@ -63,7 +63,7 @@ class ActivateCheckApi(Resource):
console_ns.models[ActivationCheckResponse.__name__],
)
def get(self):
args = ActivateCheckQuery.model_validate(request.args.to_dict(flat=True))
args = ActivateCheckQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
workspaceId = args.workspace_id
token = args.token

View File

@ -1,7 +1,6 @@
from flask_restx import Resource
from pydantic import BaseModel, Field
from controllers.common.schema import register_schema_models
from libs.login import current_account_with_tenant, login_required
from services.auth.api_key_auth_service import ApiKeyAuthService
@ -9,6 +8,8 @@ from .. import console_ns
from ..auth.error import ApiKeyAuthFailedError
from ..wraps import account_initialization_required, is_admin_or_owner_required, setup_required
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class ApiKeyAuthBindingPayload(BaseModel):
category: str = Field(...)
@ -16,7 +17,10 @@ class ApiKeyAuthBindingPayload(BaseModel):
credentials: dict = Field(...)
register_schema_models(console_ns, ApiKeyAuthBindingPayload)
console_ns.schema_model(
ApiKeyAuthBindingPayload.__name__,
ApiKeyAuthBindingPayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0),
)
@console_ns.route("/api-key-auth/data-source")

View File

@ -4,7 +4,6 @@ from pydantic import BaseModel, Field, field_validator
from configs import dify_config
from constants.languages import languages
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.auth.error import (
EmailAlreadyInUseError,
@ -24,6 +23,8 @@ from services.errors.account import AccountNotFoundError, AccountRegisterError
from ..error import AccountInFreezeError, EmailSendIpLimitError
from ..wraps import email_password_login_enabled, email_register_enabled, setup_required
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class EmailRegisterSendPayload(BaseModel):
email: EmailStr = Field(..., description="Email address")
@ -47,7 +48,8 @@ class EmailRegisterResetPayload(BaseModel):
return valid_password(value)
register_schema_models(console_ns, EmailRegisterSendPayload, EmailRegisterValidityPayload, EmailRegisterResetPayload)
for model in (EmailRegisterSendPayload, EmailRegisterValidityPayload, EmailRegisterResetPayload):
console_ns.schema_model(model.__name__, model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
@console_ns.route("/email-register/send-email")

View File

@ -28,6 +28,8 @@ from services.entities.auth_entities import (
)
from services.feature_service import FeatureService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class ForgotPasswordEmailResponse(BaseModel):
result: str = Field(description="Operation result")

View File

@ -9,7 +9,6 @@ from werkzeug.exceptions import Unauthorized
import services
from configs import dify_config
from constants.languages import get_valid_language
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.auth.error import (
AuthenticationFailedError,
@ -51,6 +50,7 @@ from services.errors.account import AccountRegisterError
from services.errors.workspace import WorkSpaceNotAllowedCreateError, WorkspacesLimitExceededError
from services.feature_service import FeatureService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
logger = logging.getLogger(__name__)
@ -71,7 +71,13 @@ class EmailCodeLoginPayload(BaseModel):
language: str | None = Field(default=None)
register_schema_models(console_ns, LoginPayload, EmailPayload, EmailCodeLoginPayload)
def reg(cls: type[BaseModel]):
console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
reg(LoginPayload)
reg(EmailPayload)
reg(EmailCodeLoginPayload)
@console_ns.route("/login")

View File

@ -606,63 +606,63 @@ class DatasetIndexingEstimateApi(Resource):
# validate args
DocumentService.estimate_args_validate(args)
extract_settings = []
match args["info_list"]["data_source_type"]:
case "upload_file":
file_ids = args["info_list"]["file_info_list"]["file_ids"]
file_details = db.session.scalars(
select(UploadFile).where(UploadFile.tenant_id == current_tenant_id, UploadFile.id.in_(file_ids))
).all()
if file_details is None:
raise NotFound("File not found.")
if args["info_list"]["data_source_type"] == "upload_file":
file_ids = args["info_list"]["file_info_list"]["file_ids"]
file_details = db.session.scalars(
select(UploadFile).where(UploadFile.tenant_id == current_tenant_id, UploadFile.id.in_(file_ids))
).all()
if file_details:
for file_detail in file_details:
extract_setting = ExtractSetting(
datasource_type=DatasourceType.FILE,
upload_file=file_detail,
document_model=args["doc_form"],
)
extract_settings.append(extract_setting)
case "notion_import":
notion_info_list = args["info_list"]["notion_info_list"]
for notion_info in notion_info_list:
workspace_id = notion_info["workspace_id"]
credential_id = notion_info.get("credential_id")
for page in notion_info["pages"]:
extract_setting = ExtractSetting(
datasource_type=DatasourceType.NOTION,
notion_info=NotionInfo.model_validate(
{
"credential_id": credential_id,
"notion_workspace_id": workspace_id,
"notion_obj_id": page["page_id"],
"notion_page_type": page["type"],
"tenant_id": current_tenant_id,
}
),
document_model=args["doc_form"],
)
extract_settings.append(extract_setting)
case "website_crawl":
website_info_list = args["info_list"]["website_info_list"]
for url in website_info_list["urls"]:
if file_details is None:
raise NotFound("File not found.")
if file_details:
for file_detail in file_details:
extract_setting = ExtractSetting(
datasource_type=DatasourceType.WEBSITE,
website_info=WebsiteInfo.model_validate(
datasource_type=DatasourceType.FILE,
upload_file=file_detail,
document_model=args["doc_form"],
)
extract_settings.append(extract_setting)
elif args["info_list"]["data_source_type"] == "notion_import":
notion_info_list = args["info_list"]["notion_info_list"]
for notion_info in notion_info_list:
workspace_id = notion_info["workspace_id"]
credential_id = notion_info.get("credential_id")
for page in notion_info["pages"]:
extract_setting = ExtractSetting(
datasource_type=DatasourceType.NOTION,
notion_info=NotionInfo.model_validate(
{
"provider": website_info_list["provider"],
"job_id": website_info_list["job_id"],
"url": url,
"credential_id": credential_id,
"notion_workspace_id": workspace_id,
"notion_obj_id": page["page_id"],
"notion_page_type": page["type"],
"tenant_id": current_tenant_id,
"mode": "crawl",
"only_main_content": website_info_list["only_main_content"],
}
),
document_model=args["doc_form"],
)
extract_settings.append(extract_setting)
case _:
raise ValueError("Data source type not support")
elif args["info_list"]["data_source_type"] == "website_crawl":
website_info_list = args["info_list"]["website_info_list"]
for url in website_info_list["urls"]:
extract_setting = ExtractSetting(
datasource_type=DatasourceType.WEBSITE,
website_info=WebsiteInfo.model_validate(
{
"provider": website_info_list["provider"],
"job_id": website_info_list["job_id"],
"url": url,
"tenant_id": current_tenant_id,
"mode": "crawl",
"only_main_content": website_info_list["only_main_content"],
}
),
document_model=args["doc_form"],
)
extract_settings.append(extract_setting)
else:
raise ValueError("Data source type not support")
indexing_runner = IndexingRunner()
try:
response = indexing_runner.indexing_estimate(

View File

@ -369,31 +369,28 @@ class DatasetDocumentListApi(Resource):
else:
sort_logic = asc
match sort:
case "hit_count":
sub_query = (
sa.select(
DocumentSegment.document_id, sa.func.sum(DocumentSegment.hit_count).label("total_hit_count")
)
.where(DocumentSegment.dataset_id == str(dataset_id))
.group_by(DocumentSegment.document_id)
.subquery()
)
if sort == "hit_count":
sub_query = (
sa.select(DocumentSegment.document_id, sa.func.sum(DocumentSegment.hit_count).label("total_hit_count"))
.where(DocumentSegment.dataset_id == str(dataset_id))
.group_by(DocumentSegment.document_id)
.subquery()
)
query = query.outerjoin(sub_query, sub_query.c.document_id == Document.id).order_by(
sort_logic(sa.func.coalesce(sub_query.c.total_hit_count, 0)),
sort_logic(Document.position),
)
case "created_at":
query = query.order_by(
sort_logic(Document.created_at),
sort_logic(Document.position),
)
case _:
query = query.order_by(
desc(Document.created_at),
desc(Document.position),
)
query = query.outerjoin(sub_query, sub_query.c.document_id == Document.id).order_by(
sort_logic(sa.func.coalesce(sub_query.c.total_hit_count, 0)),
sort_logic(Document.position),
)
elif sort == "created_at":
query = query.order_by(
sort_logic(Document.created_at),
sort_logic(Document.position),
)
else:
query = query.order_by(
desc(Document.created_at),
desc(Document.position),
)
paginated_documents = db.paginate(select=query, page=page, per_page=limit, max_per_page=100, error_out=False)
documents = paginated_documents.items
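The `hit_count` branch above sorts documents by an aggregate computed in a subquery: per-document hit totals are summed, outer-joined back onto the main query, and ordered by `coalesce(total, 0)` so documents with no segments sort as zero instead of dropping out of the join or sorting as NULL. A self-contained sketch of that query shape (illustrative tables, not the project's):

import sqlalchemy as sa

metadata = sa.MetaData()
documents = sa.Table(
    "documents",
    metadata,
    sa.Column("id", sa.String, primary_key=True),
    sa.Column("position", sa.Integer),
)
segments = sa.Table(
    "segments",
    metadata,
    sa.Column("document_id", sa.String),
    sa.Column("hit_count", sa.Integer),
)

# Per-document totals, grouped in a subquery.
totals = (
    sa.select(segments.c.document_id, sa.func.sum(segments.c.hit_count).label("total"))
    .group_by(segments.c.document_id)
    .subquery()
)

# The outer join keeps documents without segments; coalesce makes them sort as 0.
query = (
    sa.select(documents)
    .outerjoin(totals, totals.c.document_id == documents.c.id)
    .order_by(sa.desc(sa.func.coalesce(totals.c.total, 0)), sa.desc(documents.c.position))
)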

View File

@ -38,48 +38,6 @@ class HitTestingPayload(BaseModel):
class DatasetsHitTestingBase:
@staticmethod
def _normalize_hit_testing_query(query: Any) -> str:
"""Return the user-visible query string from legacy and current response shapes."""
if isinstance(query, str):
return query
if isinstance(query, dict):
content = query.get("content")
if isinstance(content, str):
return content
raise ValueError("Invalid hit testing query response")
@staticmethod
def _normalize_hit_testing_records(records: Any) -> list[dict[str, Any]]:
"""Coerce nullable collection fields into lists before response validation."""
if not isinstance(records, list):
return []
normalized_records: list[dict[str, Any]] = []
for record in records:
if not isinstance(record, dict):
continue
normalized_record = dict(record)
segment = normalized_record.get("segment")
if isinstance(segment, dict):
normalized_segment = dict(segment)
if normalized_segment.get("keywords") is None:
normalized_segment["keywords"] = []
normalized_record["segment"] = normalized_segment
if normalized_record.get("child_chunks") is None:
normalized_record["child_chunks"] = []
if normalized_record.get("files") is None:
normalized_record["files"] = []
normalized_records.append(normalized_record)
return normalized_records
@staticmethod
def get_and_validate_dataset(dataset_id: str):
assert isinstance(current_user, Account)
@ -117,12 +75,7 @@ class DatasetsHitTestingBase:
attachment_ids=args.get("attachment_ids"),
limit=10,
)
return {
"query": DatasetsHitTestingBase._normalize_hit_testing_query(response.get("query")),
"records": DatasetsHitTestingBase._normalize_hit_testing_records(
marshal(response.get("records", []), hit_testing_record_fields)
),
}
return {"query": response["query"], "records": marshal(response["records"], hit_testing_record_fields)}
except services.errors.index.IndexNotInitializedError:
raise DatasetNotInitializedError()
except ProviderTokenNotInitError as ex:

View File

@ -4,7 +4,6 @@ from flask_restx import ( # type: ignore
from pydantic import BaseModel
from werkzeug.exceptions import Forbidden
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.datasets.wraps import get_rag_pipeline
from controllers.console.wraps import account_initialization_required, setup_required
@ -13,6 +12,8 @@ from models import Account
from models.dataset import Pipeline
from services.rag_pipeline.rag_pipeline import RagPipelineService
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class Parser(BaseModel):
inputs: dict
@ -20,7 +21,7 @@ class Parser(BaseModel):
credential_id: str | None = None
register_schema_models(console_ns, Parser)
console_ns.schema_model(Parser.__name__, Parser.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0))
@console_ns.route("/rag/pipelines/<uuid:pipeline_id>/workflows/published/datasource/nodes/<string:node_id>/preview")

View File

@ -10,7 +10,7 @@ from werkzeug.exceptions import BadRequest, Forbidden, InternalServerError, NotF
import services
from controllers.common.controller_schemas import DefaultBlockConfigQuery, WorkflowListQuery, WorkflowUpdatePayload
from controllers.common.schema import register_response_schema_models, register_schema_models
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.app.error import (
ConversationCompletedError,
@ -22,6 +22,12 @@ from controllers.console.app.workflow import (
workflow_model,
workflow_pagination_model,
)
from controllers.console.app.workflow_run import (
workflow_run_detail_model,
workflow_run_node_execution_list_model,
workflow_run_node_execution_model,
workflow_run_pagination_model,
)
from controllers.console.datasets.wraps import get_rag_pipeline
from controllers.console.wraps import (
account_initialization_required,
@ -34,12 +40,6 @@ from core.app.apps.pipeline.pipeline_generator import PipelineGenerator
from core.app.entities.app_invoke_entities import InvokeFrom
from extensions.ext_database import db
from factories import variable_factory
from fields.workflow_run_fields import (
WorkflowRunDetailResponse,
WorkflowRunNodeExecutionListResponse,
WorkflowRunNodeExecutionResponse,
WorkflowRunPaginationResponse,
)
from graphon.model_runtime.utils.encoders import jsonable_encoder
from libs import helper
from libs.helper import TimestampField, UUIDStrOrEmpty
@ -131,13 +131,6 @@ register_schema_models(
DatasourceVariablesPayload,
RagPipelineRecommendedPluginQuery,
)
register_response_schema_models(
console_ns,
WorkflowRunDetailResponse,
WorkflowRunNodeExecutionListResponse,
WorkflowRunNodeExecutionResponse,
WorkflowRunPaginationResponse,
)
@console_ns.route("/rag/pipelines/<uuid:pipeline_id>/workflows/draft")
@ -422,16 +415,12 @@ class RagPipelineDraftDatasourceNodeRunApi(Resource):
@console_ns.route("/rag/pipelines/<uuid:pipeline_id>/workflows/draft/nodes/<string:node_id>/run")
class RagPipelineDraftNodeRunApi(Resource):
@console_ns.expect(console_ns.models[NodeRunRequiredPayload.__name__])
@console_ns.response(
200,
"Node run started successfully",
console_ns.models[WorkflowRunNodeExecutionResponse.__name__],
)
@setup_required
@login_required
@edit_permission_required
@account_initialization_required
@get_rag_pipeline
@marshal_with(workflow_run_node_execution_model)
def post(self, pipeline: Pipeline, node_id: str):
"""
Run draft workflow node
@ -450,9 +439,7 @@ class RagPipelineDraftNodeRunApi(Resource):
if workflow_node_execution is None:
raise ValueError("Workflow node execution not found")
return WorkflowRunNodeExecutionResponse.model_validate(
workflow_node_execution, from_attributes=True
).model_dump(mode="json")
return workflow_node_execution
@console_ns.route("/rag/pipelines/<uuid:pipeline_id>/workflow-runs/tasks/<string:task_id>/stop")
@ -791,15 +778,11 @@ class DraftRagPipelineSecondStepApi(Resource):
@console_ns.route("/rag/pipelines/<uuid:pipeline_id>/workflow-runs")
class RagPipelineWorkflowRunListApi(Resource):
@console_ns.response(
200,
"Workflow runs retrieved successfully",
console_ns.models[WorkflowRunPaginationResponse.__name__],
)
@setup_required
@login_required
@account_initialization_required
@get_rag_pipeline
@marshal_with(workflow_run_pagination_model)
def get(self, pipeline: Pipeline):
"""
Get workflow run list
@ -818,20 +801,16 @@ class RagPipelineWorkflowRunListApi(Resource):
rag_pipeline_service = RagPipelineService()
result = rag_pipeline_service.get_rag_pipeline_paginate_workflow_runs(pipeline=pipeline, args=args)
return WorkflowRunPaginationResponse.model_validate(result, from_attributes=True).model_dump(mode="json")
return result
@console_ns.route("/rag/pipelines/<uuid:pipeline_id>/workflow-runs/<uuid:run_id>")
class RagPipelineWorkflowRunDetailApi(Resource):
@console_ns.response(
200,
"Workflow run detail retrieved successfully",
console_ns.models[WorkflowRunDetailResponse.__name__],
)
@setup_required
@login_required
@account_initialization_required
@get_rag_pipeline
@marshal_with(workflow_run_detail_model)
def get(self, pipeline: Pipeline, run_id):
"""
Get workflow run detail
@ -840,23 +819,17 @@ class RagPipelineWorkflowRunDetailApi(Resource):
rag_pipeline_service = RagPipelineService()
workflow_run = rag_pipeline_service.get_rag_pipeline_workflow_run(pipeline=pipeline, run_id=run_id)
if workflow_run is None:
raise NotFound("Workflow run not found")
return WorkflowRunDetailResponse.model_validate(workflow_run, from_attributes=True).model_dump(mode="json")
return workflow_run
@console_ns.route("/rag/pipelines/<uuid:pipeline_id>/workflow-runs/<uuid:run_id>/node-executions")
class RagPipelineWorkflowRunNodeExecutionListApi(Resource):
@console_ns.response(
200,
"Node executions retrieved successfully",
console_ns.models[WorkflowRunNodeExecutionListResponse.__name__],
)
@setup_required
@login_required
@account_initialization_required
@get_rag_pipeline
@marshal_with(workflow_run_node_execution_list_model)
def get(self, pipeline: Pipeline, run_id: str):
"""
Get workflow run node execution list
@ -871,9 +844,7 @@ class RagPipelineWorkflowRunNodeExecutionListApi(Resource):
user=user,
)
return WorkflowRunNodeExecutionListResponse.model_validate(
{"data": node_executions}, from_attributes=True
).model_dump(mode="json")
return {"data": node_executions}
@console_ns.route("/rag/pipelines/datasource-plugins")
@ -888,15 +859,11 @@ class DatasourceListApi(Resource):
@console_ns.route("/rag/pipelines/<uuid:pipeline_id>/workflows/draft/nodes/<string:node_id>/last-run")
class RagPipelineWorkflowLastRunApi(Resource):
@console_ns.response(
200,
"Node last run retrieved successfully",
console_ns.models[WorkflowRunNodeExecutionResponse.__name__],
)
@setup_required
@login_required
@account_initialization_required
@get_rag_pipeline
@marshal_with(workflow_run_node_execution_model)
def get(self, pipeline: Pipeline, node_id: str):
rag_pipeline_service = RagPipelineService()
workflow = rag_pipeline_service.get_draft_workflow(pipeline=pipeline)
@ -909,7 +876,7 @@ class RagPipelineWorkflowLastRunApi(Resource):
)
if node_exec is None:
raise NotFound("last run not found")
return WorkflowRunNodeExecutionResponse.model_validate(node_exec, from_attributes=True).model_dump(mode="json")
return node_exec
@console_ns.route("/rag/pipelines/transform/datasets/<uuid:dataset_id>")
@ -932,16 +899,12 @@ class RagPipelineTransformApi(Resource):
@console_ns.route("/rag/pipelines/<uuid:pipeline_id>/workflows/draft/datasource/variables-inspect")
class RagPipelineDatasourceVariableApi(Resource):
@console_ns.expect(console_ns.models[DatasourceVariablesPayload.__name__])
@console_ns.response(
200,
"Datasource variables set successfully",
console_ns.models[WorkflowRunNodeExecutionResponse.__name__],
)
@setup_required
@login_required
@account_initialization_required
@get_rag_pipeline
@edit_permission_required
@marshal_with(workflow_run_node_execution_model)
def post(self, pipeline: Pipeline):
"""
Set datasource variables
@ -955,9 +918,7 @@ class RagPipelineDatasourceVariableApi(Resource):
args=args,
current_user=current_user,
)
return WorkflowRunNodeExecutionResponse.model_validate(
workflow_node_execution, from_attributes=True
).model_dump(mode="json")
return workflow_node_execution
@console_ns.route("/rag/pipelines/recommended-plugins")

View File

@ -1,12 +1,11 @@
from typing import Any
from uuid import UUID
from flask import request
from flask_restx import Resource
from pydantic import BaseModel, Field, computed_field, field_validator
from constants.languages import languages
from controllers.common.schema import query_params_from_model, register_schema_models
from controllers.common.schema import register_schema_models
from controllers.console import console_ns
from controllers.console.wraps import account_initialization_required
from fields.base import ResponseModel
@ -16,7 +15,7 @@ from services.recommended_app_service import RecommendedAppService
class RecommendedAppsQuery(BaseModel):
language: str | None = Field(default=None, description="Language code for recommended app localization")
language: str | None = Field(default=None)
class RecommendedAppInfoResponse(ResponseModel):
@ -53,7 +52,7 @@ class RecommendedAppResponse(ResponseModel):
copyright: str | None = None
privacy_policy: str | None = None
custom_disclaimer: str | None = None
categories: list[str] = Field(default_factory=list)
category: str | None = None
position: int | None = None
is_listed: bool | None = None
can_trial: bool | None = None
@ -75,13 +74,13 @@ register_schema_models(
@console_ns.route("/explore/apps")
class RecommendedAppListApi(Resource):
@console_ns.doc(params=query_params_from_model(RecommendedAppsQuery))
@console_ns.expect(console_ns.models[RecommendedAppsQuery.__name__])
@console_ns.response(200, "Success", console_ns.models[RecommendedAppListResponse.__name__])
@login_required
@account_initialization_required
def get(self):
# language args
args = RecommendedAppsQuery.model_validate(request.args.to_dict(flat=True))
args = RecommendedAppsQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore
language = args.language
if language and language in languages:
language_prefix = language
@ -100,5 +99,6 @@ class RecommendedAppListApi(Resource):
class RecommendedAppApi(Resource):
@login_required
@account_initialization_required
def get(self, app_id: UUID):
return RecommendedAppService.get_recommend_app_detail(str(app_id))
def get(self, app_id):
app_id = str(app_id)
return RecommendedAppService.get_recommend_app_detail(app_id)

View File

@ -10,7 +10,7 @@ from werkzeug.exceptions import Forbidden, InternalServerError, NotFound
import services
from controllers.common.fields import Parameters as ParametersResponse
from controllers.common.fields import Site as SiteResponse
from controllers.common.schema import get_or_create_model, register_schema_models
from controllers.common.schema import get_or_create_model
from controllers.console import console_ns
from controllers.console.app.error import (
AppUnavailableError,
@ -120,6 +120,10 @@ workflow_fields_copy["rag_pipeline_variables"] = fields.List(fields.Nested(pipel
workflow_model = get_or_create_model("TrialWorkflow", workflow_fields_copy)
# Pydantic models for request validation
DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}"
class WorkflowRunRequest(BaseModel):
inputs: dict
files: list | None = None
@ -149,7 +153,19 @@ class CompletionRequest(BaseModel):
retriever_from: str = "explore_app"
register_schema_models(console_ns, WorkflowRunRequest, ChatRequest, TextToSpeechRequest, CompletionRequest)
# Register schemas for Swagger documentation
console_ns.schema_model(
WorkflowRunRequest.__name__, WorkflowRunRequest.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
console_ns.schema_model(
ChatRequest.__name__, ChatRequest.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
console_ns.schema_model(
TextToSpeechRequest.__name__, TextToSpeechRequest.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
console_ns.schema_model(
CompletionRequest.__name__, CompletionRequest.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)
)
class TrialAppWorkflowRunApi(TrialAppResource):

Some files were not shown because too many files have changed in this diff.