Dify Backend API

Setup and Run

Important

In the v1.3.0 release, Poetry was replaced with uv as the package manager for the Dify API backend service.

uv and pnpm are required to run the setup and development commands below.

The scripts resolve paths relative to their location, so you can run them from anywhere.
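The technique behind that location-relative behavior can be sketched as follows (an illustration only, not the actual contents of the dev/ scripts):

```shell
#!/usr/bin/env bash
# Minimal sketch of location-relative path resolution (illustrative; the
# real dev/ scripts may differ). BASH_SOURCE points at this file even when
# the script is invoked from another working directory.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(dirname "$SCRIPT_DIR")"   # e.g. dev/ -> repository root

echo "resolved script dir: $SCRIPT_DIR"
```

Because every path is derived from the script's own location rather than the caller's working directory, invoking `./dev/setup` from anywhere yields the same result.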

  1. Run setup (copies env files and installs dependencies).

    ./dev/setup
    
  2. Review api/.env, web/.env.local, and docker/middleware.env values (see the SECRET_KEY note below).

  3. Start middleware (PostgreSQL/Redis/Weaviate).

    ./dev/start-docker-compose
    
  4. Start backend (runs migrations first).

    ./dev/start-api
    
  5. Start Dify web service.

    ./dev/start-web
    
  6. Set up your application by visiting http://localhost:3000.

  7. Start the worker service (async and scheduler tasks, runs from api).

    ./dev/start-worker
    
  8. Optional: start Celery Beat (scheduled tasks).

    ./dev/start-beat
    

Manual commands

Show manual setup and run steps

These commands assume you start from the repository root.

  1. Start the docker-compose stack.

    The backend requires middleware, including PostgreSQL, Redis, and Weaviate, which can be started together using docker-compose.

    cp docker/middleware.env.example docker/middleware.env
    # Swap in the mysql profile or a different vector store profile if you are not using postgresql/weaviate.
    docker compose -f docker/docker-compose.middleware.yaml --profile postgresql --profile weaviate -p dify up -d
    
  2. Copy env files.

    cp api/.env.example api/.env
    cp web/.env.example web/.env.local
    
  3. Install uv if needed.

    pip install uv
    # Or on macOS
    brew install uv
    
  4. Install API dependencies.

    cd api
    uv sync --group dev
    
  5. Install web dependencies.

    cd web
    pnpm install
    cd ..
    
  6. Start backend (runs migrations first, in a new terminal).

    cd api
    uv run flask db upgrade
    uv run flask run --host 0.0.0.0 --port=5001 --debug
    
  7. Start Dify web service (in a new terminal).

    cd web
    pnpm dev:inspect
    
  8. Set up your application by visiting http://localhost:3000.

  9. Optional: start the worker service (async tasks, in a new terminal).

    cd api
    uv run celery -A app.celery worker -P threads -c 2 --loglevel INFO -Q api_token,dataset,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention,workflow_based_app_execution
    
  10. Optional: start Celery Beat (scheduled tasks, in a new terminal).

    cd api
    uv run celery -A app.celery beat
    

Environment notes

Important

When the frontend and backend run on different subdomains, set COOKIE_DOMAIN to the site's top-level domain (e.g., example.com). The frontend and backend must be under the same top-level domain to share authentication cookies.
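For example, with a hypothetical deployment where the console runs on console.example.com and the API on api.example.com (hosts chosen here purely for illustration), the shared top-level domain goes into api/.env:

```shell
# api/.env (hypothetical hosts, for illustration only)
#   frontend: https://console.example.com
#   backend:  https://api.example.com
# Cookies scoped to this domain are sent to both subdomains.
COOKIE_DOMAIN=example.com
```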

  • Generate a SECRET_KEY in the .env file.

    bash for Linux

    sed -i "/^SECRET_KEY=/c\\SECRET_KEY=$(openssl rand -base64 42)" .env
    

    bash for macOS

    secret_key=$(openssl rand -base64 42)
    sed -i '' "/^SECRET_KEY=/c\\
    SECRET_KEY=${secret_key}" .env
    

Testing

  1. Install dependencies for both the backend and the test environment.

    cd api
    uv sync --group dev
    
  2. Run the tests locally. System environment variables are mocked via the tool.pytest_env section in pyproject.toml; see Claude.md for more details.

    cd api
    uv run pytest                           # Run all tests
    uv run pytest tests/unit_tests/         # Unit tests only
    uv run pytest tests/integration_tests/  # Integration tests
    
    # Code quality
    ./dev/reformat               # Run all formatters and linters
    uv run ruff check --fix ./   # Fix linting issues
    uv run ruff format ./        # Format code
    uv run basedpyright .        # Type checking
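The mocked variables live in a pyproject.toml table of roughly this shape (the key and value below are hypothetical placeholders; check api/pyproject.toml for the real entries):

```toml
# Illustrative fragment only — api/pyproject.toml defines the actual
# set of variables injected into the test process.
[tool.pytest_env]
SOME_PROVIDER_API_KEY = "test-placeholder"
```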