* feat(assets): align local API with cloud spec

  Unify response models, add missing fields, and align input schemas with the cloud OpenAPI spec at cloud.comfy.org/openapi.

  - Replace AssetSummary/AssetDetail/AssetUpdated with a single Asset model
  - Add is_immutable, metadata (system_metadata), and prompt_id fields
  - Support mime_type and preview_id in the update endpoint
  - Make CreateFromHashBody.name optional, add mime_type, require >=1 tag
  - Add id/mime_type/preview_id to upload, relax tags to optional
  - Rename total_tags → tags in tag add/remove responses
  - Add GET /api/assets/tags/refine histogram endpoint
  - Add DB migration for system_metadata and prompt_id columns

* Fix review issues: tags validation, size nullability, type annotation, hash mismatch check, and add tag histogram tests

  - Remove contradictory min_length=1 from CreateFromHashBody.tags default
  - Restore size field to int|None=None for proper null semantics
  - Add Union type annotation to _build_asset_response result param
  - Add hash mismatch validation on the idempotent upload path (409 HASH_MISMATCH)
  - Add unit tests for the list_tag_histogram service function

* Add preview_url to /assets API response using /api/view endpoint

  For input and output assets, generate a preview_url pointing to the existing /api/view endpoint using the asset's filename and tag-derived type (input/output). Handles subdirectories via the subfolder param and URL-encodes filenames with spaces, unicode, and special characters. This aligns the OSS backend response with the frontend AssetCard expectation for thumbnail rendering.

* chore: remove unused imports from asset_reference queries

* feat: resolve blake3 hashes in /view endpoint via asset database

* Register uploaded images in asset database when --enable-assets is set

  Add a register_file_in_place() service function to the ingest module for registering already-saved files without moving them. Call it from the /upload/image endpoint to return asset metadata in the response.

* Exclude None fields from asset API JSON responses

  Add exclude_none=True to model_dump() calls across asset routes to keep response payloads clean by omitting unset optional fields.

* Add comment explaining why /view resolves blake3 hashes

* Move blake3 hash resolution to asset_management service

  Extract resolve_hash_to_path() into asset_management.py and remove _resolve_blake3_to_path from server.py. Also revert the loopback origin check to the original logic.

* Require at least one tag in UploadAssetSpec

  Enforce non-empty tags at the Pydantic validation layer so uploads with no tags are rejected with a 400 before reaching ingest. Adds test_upload_empty_tags_rejected to cover this case.

* Add owner_id check to resolve_hash_to_path

  Filter asset references by owner visibility so the /view endpoint only resolves hashes for assets the requesting user can access. Adds table-driven tests for owner-visibility cases.

* Make ReferenceData.created_at and updated_at required

  Remove None defaults and type: ignore comments. Move the fields before optional fields to satisfy dataclass ordering.

* Fix double commit in create_from_hash

  Move the mime_type update into _register_existing_asset so it shares a single transaction with reference creation. Log a warning when the hash is not found instead of silently returning None.

* Add exclude_none=True to create/upload responses

  Align with the get/update/list endpoints for consistent JSON output.

* Change preview_id to reference asset by reference ID, not content ID

  Clients receive preview_id in API responses but could not dereference it through public routes (which use reference IDs). Now preview_id is a self-referential FK to asset_references.id, so the value is directly usable in the public API.

* Filter soft-deleted and missing refs from visibility queries

  list_references_by_asset_id and list_tags_with_usage were not filtering out deleted_at/is_missing refs, allowing /view?filename=blake3:... to serve files through hidden references and inflating tag usage counts. Add list_all_file_paths_by_asset_id for orphan cleanup, which intentionally needs unfiltered access.

* Pass preview_id and mime_type through all asset creation fast paths

  The duplicate-content upload path and hash-based creation paths were silently dropping preview_id and mime_type. This wires both fields through _register_existing_asset, create_from_hash, and all route call sites so behavior is consistent regardless of whether the asset content already exists.

* Remove unimplemented client-provided ID from upload API

  The `id` field on UploadAssetSpec was advertised for idempotent creation but never actually honored when creating new references. Remove it rather than implementing the feature.

* Make asset mime_type immutable after first ingest

  Prevents cross-tenant metadata mutation when multiple references share the same content-addressed Asset row. mime_type can now only be set when NULL (first ingest); subsequent attempts to change it are silently ignored.

* Use resolved content_type from asset lookup in /view endpoint

  The /view endpoint was discarding the content_type computed by resolve_hash_to_path() and re-guessing from the filename, which produced wrong results for extensionless files or mismatched extensions.

* Merge system+user metadata into filter projection

  Extract rebuild_metadata_projection() to build AssetReferenceMeta rows from {**system_metadata, **user_metadata}, so system-generated metadata is queryable via metadata_filter and user keys override system keys.

* Standardize tag ordering to alphabetical across all endpoints

* Derive subfolder tags from path in register_file_in_place

* Reject client-provided id, fix preview URLs, rename tags → total_tags

  - Reject the 'id' field in multipart upload with 400 UNSUPPORTED_FIELD instead of silently ignoring it
  - Build the preview URL from the preview asset's own metadata rather than the parent asset's
  - Rename 'tags' to 'total_tags' in the TagsAdd/TagsRemove response schemas for clarity

* fix: SQLite migration 0003 FK drop fails on file-backed DBs (MB-2)

  Add a naming_convention to Base.metadata so Alembic batch-mode reflection can match unnamed FK constraints created by migration 0002. Pass naming_convention and render_as_batch=True through the env.py online config. Add migration roundtrip tests (upgrade/downgrade/cycle from baseline).

* Fix missing tag count for is_missing references and update test for total_tags field

  - Allow is_missing=True references to be counted in list_tags_with_usage when the tag is 'missing', so the missing-tag count reflects all references that have been tagged as missing
  - Add an update_is_missing_by_asset_id query helper for bulk updates by asset
  - Update test_add_and_remove_tags to use 'total_tags', matching the API schema

* Remove unused imports in scanner.py

* Rename prompt_id to job_id on asset_references

  Rename the column in the DB model, migration, and service schemas. The API response emits both job_id and prompt_id (deprecated alias) for backward compatibility with the cloud API.

* Add index on asset_references.preview_id for FK cascade performance

* Add clarifying comments for Asset/AssetReference naming and preview_id

* Disallow all-null meta rows: add CHECK constraint, skip null values on write

  - convert_metadata_to_rows returns [] for None values instead of an all-null row
  - Remove the dead None branch from _scalar_to_row
  - Simplify the null filter in common.py to just check for row absence
  - Add CHECK constraint ck_asset_reference_meta_has_value to the model and migration 0003

* Remove dead None guards on result.asset in upload handler

  register_file_in_place guarantees a non-None asset, so the 'if result.asset else None' checks were unreachable.

* Remove mime_type from asset update API

  Clients can no longer modify mime_type after asset creation via the PUT /api/assets/{id} endpoint. This reduces the risk of mime_type spoofing. The internal update_asset_hash_and_mime function remains available for server-side use (e.g., enrichment).

* Fix migration constraint naming double-prefix and NULL in mixed metadata lists

  - Use fully-rendered constraint names in migration 0003 to avoid the naming convention doubling the ck_ prefix on batch operations
  - Add table_args to downgrade so SQLite batch mode can find the CHECK constraint (not exposed by SQLite reflection)
  - Fix the model CheckConstraint name to use bare 'has_value' (the convention auto-prefixes)
  - Skip None items when converting metadata lists to rows, preventing all-NULL rows that violate the has_value check constraint

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Amp <amp@ampcode.com>
ComfyUI lets you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface. Available on Windows, Linux, and macOS.
Get Started
Local
Desktop Application
- The easiest way to get started.
- Available on Windows & macOS.
Windows Portable Package
- Get the latest commits and completely portable.
- Available on Windows.
Manual Install
Supports all operating systems and GPU types (NVIDIA, AMD, Intel, Apple Silicon, Ascend).
Cloud
Comfy Cloud
- Our official paid cloud version for those who can't afford local hardware.
Examples
See what ComfyUI can do with the newer template workflows or old example workflows.
Features
- Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.
- Image Models
- SD1.x, SD2.x (unCLIP)
- SDXL, SDXL Turbo
- Stable Cascade
- SD3 and SD3.5
- Pixart Alpha and Sigma
- AuraFlow
- HunyuanDiT
- Flux
- Lumina Image 2.0
- HiDream
- Qwen Image
- Hunyuan Image 2.1
- Flux 2
- Z Image
- Image Editing Models
- Video Models
- Audio Models
- 3D Models
- Asynchronous Queue system
- Many optimizations: only re-executes the parts of the workflow that change between executions.
- Smart memory management: can automatically run large models on GPUs with as little as 1GB of VRAM via smart offloading.
- Works even if you don't have a GPU with: --cpu (slow)
- Can load ckpt and safetensors files: all-in-one checkpoints or standalone diffusion models, VAEs and CLIP models.
- Safe loading of ckpt, pt, pth, etc.. files.
- Embeddings/Textual inversion
- Loras (regular, locon and loha)
- Hypernetworks
- Loading full workflows (with seeds) from generated PNG, WebP and FLAC files.
- Saving/Loading workflows as JSON files.
- Nodes interface can be used to create complex workflows like one for Hires fix or much more advanced ones.
- Area Composition
- Inpainting with both regular and inpainting models.
- ControlNet and T2I-Adapter
- Upscale Models (ESRGAN, ESRGAN variants, SwinIR, Swin2SR, etc...)
- GLIGEN
- Model Merging
- LCM models and Loras
- Latent previews with TAESD
- Works fully offline: core will never download anything unless you want to.
- Optional API nodes to use paid models from external providers through the online Comfy API (disable with: --disable-api-nodes).
- Config file to set the search paths for models.
Workflow examples can be found on the Examples page
Release Process
ComfyUI follows a weekly release cycle targeting Monday, but this regularly shifts because of model releases or large changes to the codebase. There are three interconnected repositories:
- ComfyUI Core
  - Releases a new stable version (e.g., v0.7.0) roughly every week.
  - Starting from v0.4.0, patch versions are used for fixes backported onto the current stable release.
  - Minor versions are used for releases off the master branch.
  - Patch versions may still be used for releases on the master branch in cases where a backport would not make sense.
  - Commits outside of the stable release tags may be very unstable and break many custom nodes.
  - Serves as the foundation for the desktop release.
- ComfyUI Desktop
  - Builds a new release using the latest stable core version.
- ComfyUI Frontend
  - Weekly frontend updates are merged into the core repository.
  - Features are frozen for the upcoming core release.
  - Development continues for the next release cycle.
Shortcuts
| Keybind | Explanation |
|---|---|
| Ctrl + Enter | Queue up current graph for generation |
| Ctrl + Shift + Enter | Queue up current graph as first for generation |
| Ctrl + Alt + Enter | Cancel current generation |
| Ctrl + Z / Ctrl + Y | Undo/Redo |
| Ctrl + S | Save workflow |
| Ctrl + O | Load workflow |
| Ctrl + A | Select all nodes |
| Alt + C | Collapse/uncollapse selected nodes |
| Ctrl + M | Mute/unmute selected nodes |
| Ctrl + B | Bypass selected nodes (acts like the node was removed from the graph and the wires reconnected through) |
| Delete / Backspace | Delete selected nodes |
| Ctrl + Backspace | Delete the current graph |
| Space | Move the canvas around when held and moving the cursor |
| Ctrl/Shift + Click | Add clicked node to selection |
| Ctrl + C / Ctrl + V | Copy and paste selected nodes (without maintaining connections to outputs of unselected nodes) |
| Ctrl + C / Ctrl + Shift + V | Copy and paste selected nodes (maintaining connections from outputs of unselected nodes to inputs of pasted nodes) |
| Shift + Drag | Move multiple selected nodes at the same time |
| Ctrl + D | Load default graph |
| Alt + + | Canvas Zoom in |
| Alt + - | Canvas Zoom out |
| Ctrl + Shift + LMB + Vertical drag | Canvas Zoom in/out |
| P | Pin/Unpin selected nodes |
| Ctrl + G | Group selected nodes |
| Q | Toggle visibility of the queue |
| H | Toggle visibility of history |
| R | Refresh graph |
| F | Show/Hide menu |
| . | Fit view to selection (whole graph when nothing is selected) |
| Double-Click LMB | Open node quick search palette |
| Shift + Drag | Move multiple wires at once |
| Ctrl + Alt + LMB | Disconnect all wires from clicked slot |
Ctrl can also be replaced with Cmd for macOS users.
Installing
Windows Portable
There is a portable standalone build for Windows on the releases page that should work for running on Nvidia GPUs, or on your CPU only.
Direct link to download
Simply download, extract with 7-Zip (or with Windows Explorer on recent Windows versions) and run. For smaller models you normally only need to put the checkpoints (the huge ckpt/safetensors files) in ComfyUI\models\checkpoints, but many of the larger models have multiple files; make sure to follow the instructions for which subfolder of ComfyUI\models\ to put them in.
If you have trouble extracting it, right click the file -> properties -> unblock
The portable build above currently comes with Python 3.13 and PyTorch CUDA 13.0. Update your Nvidia drivers if it doesn't start.
Alternative Downloads:
Experimental portable for AMD GPUs
Portable with pytorch cuda 12.6 and python 3.12 (Supports Nvidia 10 series and older GPUs).
How do I share models between another UI and ComfyUI?
See the Config file to set the search paths for models. In the standalone windows build you can find this file in the ComfyUI directory. Rename this file to extra_model_paths.yaml and edit it with your favorite text editor.
comfy-cli
You can install and start ComfyUI using comfy-cli:
pip install comfy-cli
comfy install
Manual Install (Windows, Linux)
Python 3.14 works, but some custom nodes may have issues. The free-threaded variant works, but some dependencies will enable the GIL, so it's not fully supported.
Python 3.13 is very well supported. If you have trouble with some custom node dependencies on 3.13, you can try 3.12.
torch 2.4 and above is supported but some features and optimizations might only work on newer versions. We generally recommend using the latest major version of pytorch with the latest cuda version unless it is less than 2 weeks old.
Instructions:
Git clone this repo.
Put your SD checkpoints (the huge ckpt/safetensors files) in: models/checkpoints
Put your VAE in: models/vae
AMD GPUs (Linux)
AMD users can install ROCm and PyTorch with pip. If you don't have them installed already, this is the command to install the stable version:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm7.1
This is the command to install the nightly with ROCm 7.2 which might have some performance improvements:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm7.2
AMD GPUs (Experimental: Windows and Linux), RDNA 3, 3.5 and 4 only.
These have less mature hardware support than the builds above, but they work on Windows. You also need to install the PyTorch version specific to your hardware.
RDNA 3 (RX 7000 series):
pip install --pre torch torchvision torchaudio --index-url https://rocm.nightlies.amd.com/v2/gfx110X-all/
RDNA 3.5 (Strix halo/Ryzen AI Max+ 365):
pip install --pre torch torchvision torchaudio --index-url https://rocm.nightlies.amd.com/v2/gfx1151/
RDNA 4 (RX 9000 series):
pip install --pre torch torchvision torchaudio --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/
Intel GPUs (Windows and Linux)
Intel Arc GPU users can install native PyTorch with torch.xpu support using pip. More information can be found here
- To install PyTorch xpu, use the following command:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu
This is the command to install the PyTorch xpu nightly, which might have some performance improvements:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/xpu
NVIDIA
Nvidia users should install stable pytorch using this command:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
This is the command to install pytorch nightly instead, which might have performance improvements:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu130
Troubleshooting
If you get the "Torch not compiled with CUDA enabled" error, uninstall torch with:
pip uninstall torch
And install it again with the command above.
Dependencies
Install the dependencies by opening your terminal inside the ComfyUI folder and:
pip install -r requirements.txt
After this you should have everything installed and can proceed to running ComfyUI.
Others:
Apple Mac silicon
You can install ComfyUI on Apple silicon Macs (M1 or M2) with any recent macOS version.
- Install pytorch nightly. For instructions, read the Accelerated PyTorch training on Mac Apple Developer guide (make sure to install the latest pytorch nightly).
- Follow the ComfyUI manual installation instructions for Windows and Linux.
- Install the ComfyUI dependencies. If you have another Stable Diffusion UI you might be able to reuse the dependencies.
- Launch ComfyUI by running
python main.py
Note
: Remember to add your models, VAE, LoRAs etc. to the corresponding Comfy folders, as discussed in ComfyUI manual installation.
Ascend NPUs
For models compatible with Ascend Extension for PyTorch (torch_npu). To get started, ensure your environment meets the prerequisites outlined on the installation page. Here's a step-by-step guide tailored to your platform and installation method:
- Begin by installing the recommended or newer kernel version for Linux as specified in the Installation page of torch-npu, if necessary.
- Proceed with the installation of Ascend Basekit, which includes the driver, firmware, and CANN, following the instructions provided for your specific platform.
- Next, install the necessary packages for torch-npu by adhering to the platform-specific instructions on the Installation page.
- Finally, adhere to the ComfyUI manual installation guide for Linux. Once all components are installed, you can run ComfyUI as described earlier.
Cambricon MLUs
For models compatible with Cambricon Extension for PyTorch (torch_mlu). Here's a step-by-step guide tailored to your platform and installation method:
- Install the Cambricon CNToolkit by adhering to the platform-specific instructions on the Installation
- Next, install the PyTorch(torch_mlu) following the instructions on the Installation
- Launch ComfyUI by running
python main.py
Iluvatar Corex
For models compatible with Iluvatar Extension for PyTorch. Here's a step-by-step guide tailored to your platform and installation method:
- Install the Iluvatar Corex Toolkit by adhering to the platform-specific instructions on the Installation
- Launch ComfyUI by running
python main.py
ComfyUI-Manager
ComfyUI-Manager is an extension that allows you to easily install, update, and manage custom nodes for ComfyUI.
Setup
- Install the manager dependencies:
  pip install -r manager_requirements.txt
- Enable the manager with the --enable-manager flag when running ComfyUI:
  python main.py --enable-manager
Command Line Options
| Flag | Description |
|---|---|
--enable-manager |
Enable ComfyUI-Manager |
--enable-manager-legacy-ui |
Use the legacy manager UI instead of the new UI (requires --enable-manager) |
--disable-manager-ui |
Disable the manager UI and endpoints while keeping background features like security checks and scheduled installation completion (requires --enable-manager) |
Running
python main.py
For AMD cards not officially supported by ROCm
Try running it with this command if you have issues:
For 6700, 6600 and maybe other RDNA2 or older: HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py
For AMD 7600 and maybe other RDNA3 cards: HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py
AMD ROCm Tips
You can enable experimental memory-efficient attention on recent pytorch in ComfyUI on some AMD GPUs using this command; it should already be enabled by default on RDNA3. If this improves speed for you on the latest pytorch on your GPU, please report it so that I can enable it by default.
TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention
You can also try setting this env variable PYTORCH_TUNABLEOP_ENABLED=1 which might speed things up at the cost of a very slow initial run.
Notes
Only parts of the graph that have an output with all the correct inputs will be executed.
Only parts of the graph that change from one execution to the next will be executed; if you submit the same graph twice, only the first submission runs. If you change the last part of the graph, only the part you changed and the parts that depend on it will be executed.
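The re-execution rule above amounts to a dirty-flag walk over the node graph. Here is a minimal sketch in Python (a hypothetical helper for illustration, not ComfyUI's actual executor code):

```python
def nodes_to_rerun(inputs, changed):
    """inputs: node -> list of upstream nodes it reads from.
    A node must re-run if it changed, or if any node it depends on
    (transitively) changed -- everything else can reuse cached output."""
    memo = {}

    def dirty(node):
        if node not in memo:
            memo[node] = node in changed or any(
                dirty(up) for up in inputs.get(node, [])
            )
        return memo[node]

    return {n for n in inputs if dirty(n)}
```

For example, with a load → sample → save chain, changing only the sampler re-runs the sampler and the save node, but not the loader.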
Dragging a generated png on the webpage or loading one will give you the full workflow including seeds that were used to create it.
You can use () to change the emphasis of a word or phrase, like (good code:1.2) or (bad code:0.8). The default emphasis for () is 1.1. To use () characters in your actual prompt, escape them like \( or \).
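To make the weighting rule concrete, a parenthesized segment could be parsed like this (a minimal sketch of the syntax above, not ComfyUI's actual prompt parser):

```python
import re

def emphasis(segment):
    """Return (text, weight) for a prompt segment, per the rule above:
    (text:1.2) -> explicit weight, (text) -> default 1.1, bare text -> 1.0."""
    m = re.fullmatch(r"\((.+):([0-9]*\.?[0-9]+)\)", segment)
    if m:
        return m.group(1), float(m.group(2))
    m = re.fullmatch(r"\((.+)\)", segment)
    if m:
        return m.group(1), 1.1
    return segment, 1.0  # no parentheses: no emphasis applied
```

So emphasis("(good code:1.2)") yields ("good code", 1.2), while emphasis("(good code)") yields ("good code", 1.1).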
You can use {day|night} for wildcard/dynamic prompts. With this syntax, "{wild|card|test}" will be randomly replaced by either "wild", "card" or "test" by the frontend every time you queue the prompt. To use {} characters in your actual prompt, escape them like \{ or \}.
Dynamic prompts also support C-style comments, like // comment or /* comment */.
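The substitution described above can be sketched as comment stripping plus one random choice per {a|b|c} group (illustration only — the real logic lives in the frontend, and escape handling is omitted for brevity):

```python
import random
import re

def expand_dynamic_prompt(prompt):
    """Sketch of dynamic-prompt expansion as described above."""
    # Strip C-style comments: /* block */ first, then // line comments.
    prompt = re.sub(r"/\*.*?\*/", "", prompt, flags=re.DOTALL)
    prompt = re.sub(r"//[^\n]*", "", prompt)
    # Replace each {a|b|c} group with one randomly chosen option.
    return re.sub(
        r"\{([^{}]*)\}",
        lambda m: random.choice(m.group(1).split("|")),
        prompt,
    )
```

Each call picks options independently, mirroring the per-queue randomness described above.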
To use textual inversion concepts/embeddings in a text prompt, put them in the models/embeddings directory and use them in the CLIPTextEncode node like this (you can omit the .pt extension):
embedding:embedding_filename.pt
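As an illustration of how such a token maps to a file under models/embeddings (a hypothetical helper, not ComfyUI's actual loader; the optional .pt extension rule comes from the note above):

```python
import os

def embedding_path(token, embeddings_dir="models/embeddings"):
    """Map 'embedding:name' (with or without .pt) to a file path
    under the embeddings directory, per the syntax described above."""
    name = token.removeprefix("embedding:")
    if not name.endswith(".pt"):
        name += ".pt"  # the .pt extension may be omitted in the prompt
    return os.path.join(embeddings_dir, name)
```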
How to show high-quality previews?
Use --preview-method auto to enable previews.
The default installation includes a fast latent preview method that's low-resolution. To enable higher-quality previews with TAESD, download the taesd_decoder.pth, taesdxl_decoder.pth, taesd3_decoder.pth and taef1_decoder.pth and place them in the models/vae_approx folder. Once they're installed, restart ComfyUI and launch it with --preview-method taesd to enable high-quality previews.
How to use TLS/SSL?
Generate a self-signed certificate (not appropriate for shared/production use) and key by running the command: openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 3650 -nodes -subj "/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname"
Use --tls-keyfile key.pem --tls-certfile cert.pem to enable TLS/SSL; the app will now be accessible with https://... instead of http://....
Note: Windows users can use alexisrolland/docker-openssl or one of the 3rd party binary distributions to run the command example above.
If you use a container, note that the volume mount -v can take a relative path, so -v ".\:/openssl-certs" would create the key & cert files in the current directory of your command prompt or PowerShell terminal.
Support and dev channel
Discord: Try the #help or #feedback channels.
Matrix space: #comfyui_space:matrix.org (it's like discord but open source).
See also: https://www.comfy.org/
Frontend Development
As of August 15, 2024, we have transitioned to a new frontend, which is now hosted in a separate repository: ComfyUI Frontend. This repository now hosts the compiled JS (from TS/Vue) under the web/ directory.
Reporting Issues and Requesting Features
For any bugs, issues, or feature requests related to the frontend, please use the ComfyUI Frontend repository. This will help us manage and address frontend-specific concerns more efficiently.
Using the Latest Frontend
The new frontend is now the default for ComfyUI. However, please note:
- The frontend in the main ComfyUI repository is updated fortnightly.
- Daily releases are available in the separate frontend repository.
To use the most up-to-date frontend version:
- For the latest daily release, launch ComfyUI with this command line argument:
  --front-end-version Comfy-Org/ComfyUI_frontend@latest
- For a specific version, replace latest with the desired version number:
  --front-end-version Comfy-Org/ComfyUI_frontend@1.2.2
This approach allows you to easily switch between the stable fortnightly release and the cutting-edge daily updates, or even specific versions for testing purposes.
Accessing the Legacy Frontend
If you need to use the legacy frontend for any reason, you can access it using the following command line argument:
--front-end-version Comfy-Org/ComfyUI_legacy_frontend@latest
This will use a snapshot of the legacy frontend preserved in the ComfyUI Legacy Frontend repository.