Compare commits


14 Commits

Author SHA1 Message Date
431abadd6a spec: add workflow_id / workflow_version_id to PromptRequest with x-runtime tag
Adds two optional, nullable UUID fields to PromptRequest for runtimes
that wrap workflow execution in a workflow-version entity (the
hosted-cloud runtime does this; local ComfyUI does not). Both fields
are tagged `x-runtime: [cloud]` to mark them as runtime-specific —
local ComfyUI returns `null` (or omits them entirely) and that's
correct behavior, not drift.

## Why these fields belong in the OSS spec

Hosted-cloud's frontend and backend share `openapi.yaml` as their
single source of truth via auto-generated client types. Without the
fields declared in the spec, the cloud runtime has to either:

  1. Hand-edit a vendored copy of openapi.yaml (drift between vendor
     and upstream — unsustainable).
  2. Maintain a separate cloud-only spec file (forks the contract,
     defeats the point of a shared OSS spec).

Both options have been tried and both produce maintenance pain. The
shape that scales is: cloud-only fields live in OSS spec under their
intended path, declared nullable, with an explicit `x-runtime` tag so
local-only readers can ignore them programmatically and human readers
can see what each runtime populates.

## About the `x-runtime` extension

This is the first use of `x-runtime` in this spec. Convention:

  - `x-runtime: [cloud]` — only the hosted-cloud runtime populates the
    field; local returns null or omits.
  - `x-runtime: [local]` — only local populates; cloud returns null.
  - Tag absent — both runtimes populate the field (the default).

This is a vendor extension (`x-` prefix) and is ignored by spec
validators that don't recognize it, including `kin-openapi`. Local
clients reading the spec see two extra optional nullable fields, which
is forward-compatible with all existing readers.
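A minimal sketch of how the two fields might be declared under `PromptRequest` (the property names, nullability, and `x-runtime` tag are from this commit; the surrounding schema layout is illustrative):

```yaml
PromptRequest:
  type: object
  properties:
    # ... existing properties ...
    workflow_id:
      type: string
      format: uuid
      nullable: true
      x-runtime: [cloud]   # populated only by the hosted-cloud runtime
    workflow_version_id:
      type: string
      format: uuid
      nullable: true
      x-runtime: [cloud]   # local ComfyUI returns null or omits the field
```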

## What this does not change

  - No Python code changes. `PromptRequest` already accepts arbitrary
    optional fields (`extra_data: additionalProperties: true` on the
    same schema is a stronger guarantee). The Python server already
    silently accepts and ignores both fields today.
  - No required-fields change. Both fields stay outside `required`,
    so older clients that don't know about them keep validating.
  - No nullability widening on existing fields.

## Verification

  - YAML parses (`yaml.safe_load`).
  - `kin-openapi` `loader.LoadFromFile` accepts the modified spec.
  - `openapi3filter.ValidateRequest` on a PromptRequest with both
    fields set to `null`, set to a valid UUID, or omitted — all pass.
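The first check above can be sketched in a few lines, assuming PyYAML is installed (the inline fragment stands in for the real openapi.yaml; field names are from this commit):

```python
import yaml

# Stand-in fragment for the two new PromptRequest properties.
SPEC_FRAGMENT = """
workflow_id:
  type: string
  format: uuid
  nullable: true
  x-runtime: [cloud]
workflow_version_id:
  type: string
  format: uuid
  nullable: true
  x-runtime: [cloud]
"""

schema = yaml.safe_load(SPEC_FRAGMENT)
assert schema["workflow_id"]["nullable"] is True
assert schema["workflow_version_id"]["x-runtime"] == ["cloud"]
```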
2026-05-04 18:52:48 -07:00
35819e35a8 fix(spec): mark DeviceStats.index and NodeInfo.essentials_category as nullable (#13706)
* fix(spec): mark DeviceStats.index and NodeInfo.essentials_category as nullable

Two fields in openapi.yaml are declared as required/non-nullable but
the Python implementation legitimately returns `null` for them, so any
client that response-validates against the spec will fail.

`DeviceStats.index` (used by GET /api/system_stats):
- server.py emits `"index": device.index` unconditionally
- For the CPU device (--cpu mode), `torch.device("cpu").index` is `None`
- → JSON response includes `"index": null` for CPU devices

`NodeInfo.essentials_category` (used by GET /api/object_info):
- The V3 schema-based path (comfy_api/latest/_io.py:1654) unconditionally
  passes `essentials_category=self.essentials_category` into NodeInfoV1
  and serializes via dataclasses.asdict(), so the key is always present
- Schema's `essentials_category` defaults to `None` for nodes that
  don't set it in `define_schema` (e.g. the APG node)
- → JSON response includes `"essentials_category": null` for those nodes
- (The V1 path in server.py uses `hasattr` and so omits the key
  entirely when not set, but the V3 path is the one that produces nulls)

Both fields keep their existing `required` status — they're always
present in the response, the value is just nullable. Descriptions
expanded to spell out when `null` is expected.

* docs(spec): clarify essentials_category presence rules

The previous description said "null for nodes that don't set
ESSENTIALS_CATEGORY (V1)" — that's wrong. server.py:739-740 uses
`hasattr` and OMITS the key when the V1 attribute isn't defined; null
only happens if the attribute is explicitly set to None. Spell out
all three legal shapes (string / null / absent) and which path
produces which.
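The three legal shapes can be sketched with the `hasattr` logic described above (the function and classes here are illustrative stand-ins, not the real server.py code):

```python
def node_info_v1(node_cls):
    """Sketch of the V1 serialization path: the key is OMITTED when the
    class attribute is not defined; null appears only when it is
    explicitly set to None."""
    info = {"name": node_cls.__name__}
    if hasattr(node_cls, "ESSENTIALS_CATEGORY"):
        info["essentials_category"] = node_cls.ESSENTIALS_CATEGORY
    return info

class SetsCategory:
    ESSENTIALS_CATEGORY = "Image Tools"   # shape 1: string

class SetsNone:
    ESSENTIALS_CATEGORY = None            # shape 2: null

class Unset:                              # shape 3: key absent
    pass

assert node_info_v1(SetsCategory)["essentials_category"] == "Image Tools"
assert node_info_v1(SetsNone)["essentials_category"] is None
assert "essentials_category" not in node_info_v1(Unset)
```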
2026-05-04 18:28:21 -07:00
15a4494a4e chore: Update display names and categories (CORE-151) (#13693)
* Standardize DEPRECATED label in node display name

* Promote category image/video to root level video/

* Update images and masks names and categories
2026-05-04 17:37:25 -07:00
1265955b34 ops: handle multi-compute of the same weight (#13705)
If the same weight is used multiple times within the same prefetch
window, it should only apply compute state mutations once. Mark the
weight as fully resident on the first pass accordingly.
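A minimal sketch of the once-only guard, with hypothetical names (the real change lives in the prefetch/cast path and flips a `resident` flag):

```python
def apply_compute_state(prefetch_state):
    """Apply compute-state mutations at most once per prefetch window.

    The 'resident' flag is set on the first pass, so a weight consumed
    multiple times in the same window mutates state only once.
    """
    if prefetch_state["resident"]:
        return False  # already applied in this window
    # ... perform the one-time mutations here ...
    prefetch_state["resident"] = True
    return True

state = {"resident": False}
assert apply_compute_state(state) is True   # first use mutates
assert apply_compute_state(state) is False  # later uses are no-ops
```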
2026-05-04 16:40:57 -07:00
1ac78180b3 make control-net load order deterministic (#13701)
Make control-net load order deterministic so speeds don't change based on load
order. Load them in reverse order so that whatever the caller lists first is
the top priority.
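The diff pairs this with an order-preserving dedup; the core trick (reliable since Python 3.7, where dicts preserve insertion order) looks like this with illustrative names:

```python
cnets = ["cn_depth", "cn_pose", "cn_depth", "cn_canny"]  # illustrative names

# A plain set() would randomize iteration order across runs;
# dict.fromkeys dedups while preserving first-seen order.
control_nets = list(dict.fromkeys(cnets))
assert control_nets == ["cn_depth", "cn_pose", "cn_canny"]

# Reversed, so whatever the caller lists first is loaded last (top priority).
load_order = list(reversed(control_nets))
assert load_order == ["cn_canny", "cn_pose", "cn_depth"]
```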
2026-05-04 12:58:06 -07:00
c47633f3be prefetch: guard against no offload (#13703)
cast_ returns no stream when there is no work to do; guard against this in
the consume logic.
2026-05-04 12:56:05 -07:00
c33d26c283 fix: Proper memory estimation for frame interpolation when not using dynamic VRAM (#13698) 2026-05-04 20:20:40 +03:00
f3ea976cba Fix a1111 typo in extra_model_paths.yaml (#2720) 2026-05-04 16:01:46 +08:00
5538f62b0b fix: Update ColorTransfer node ref_image to be mandatory (#13691) 2026-05-04 12:33:11 +08:00
2806163f6e Default control_after_generate to fixed in PrimitiveInt node (#13690) 2026-05-04 07:21:34 +08:00
cea8d0925f Refactor LoadImageMask to use LoadImage code. (#13687) 2026-05-03 16:18:27 -04:00
b138133ffa Enable triton comfy kitchen via cli-arg (#12730) 2026-05-03 14:07:21 -04:00
025e6792ee Batch broadcasting in JoinImageWithAlpha node (#13686)
* Batch broadcasting in JoinImageWithAlpha node
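The broadcasting rule in the diff (output batch size becomes the max of the two inputs, with the shorter batch repeated) can be sketched without torch; `repeat_to_batch_size` here is a plain-Python stand-in for the comfy.utils tensor helper:

```python
import itertools

def repeat_to_batch_size(batch, batch_size):
    # Cycle the batch until it reaches the requested size (stand-in for
    # comfy.utils.repeat_to_batch_size, which does the same on tensors).
    return list(itertools.islice(itertools.cycle(batch), batch_size))

images = ["img0", "img1", "img2"]
alphas = ["a0"]

batch_size = max(len(images), len(alphas))
alphas = repeat_to_batch_size(alphas, batch_size)
images = repeat_to_batch_size(images, batch_size)
assert alphas == ["a0", "a0", "a0"]
assert images == ["img0", "img1", "img2"]
```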
2026-05-03 16:30:00 +03:00
867b8d2408 fix: gracefully handle port-in-use error on server startup (#13001)
Catch EADDRINUSE OSError when binding the TCP site and exit with a clear error message instead of an unhandled traceback.
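A minimal sketch of the pattern, using plain sockets rather than the aiohttp TCP site the server actually binds (function name is illustrative):

```python
import errno
import socket
import sys

def bind_or_exit(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
    except OSError as e:
        if e.errno == errno.EADDRINUSE:
            # Clear message instead of an unhandled traceback.
            sys.exit(f"Port {port} is already in use.")
        raise
    return s

first = bind_or_exit(0)             # port 0: OS picks a free port
port = first.getsockname()[1]
raised = False
try:
    bind_or_exit(port)              # second bind on the same port fails
except SystemExit as e:
    raised = True
    assert "already in use" in str(e)
assert raised
first.close()
```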
2026-05-03 20:44:20 +08:00
25 changed files with 168 additions and 382 deletions

View File

@ -91,6 +91,7 @@ parser.add_argument("--directml", type=int, nargs="?", metavar="DIRECTML_DEVICE"
parser.add_argument("--oneapi-device-selector", type=str, default=None, metavar="SELECTOR_STRING", help="Sets the oneAPI device(s) this instance will use.")
parser.add_argument("--supports-fp8-compute", action="store_true", help="ComfyUI will act like if the device supports fp8 compute.")
parser.add_argument("--enable-triton-backend", action="store_true", help="ComfyUI will enable the use of Triton backend in comfy-kitchen. Is disabled at launch by default.")
class LatentPreviewMethod(enum.Enum):
NoPreviews = "none"

View File

@ -721,13 +721,15 @@ def load_models_gpu(models, memory_required=0, force_patch_weights=False, minimu
else:
minimum_memory_required = max(inference_memory, minimum_memory_required + extra_reserved_memory())
models_temp = set()
# Order-preserving dedup. A plain set() would randomize iteration order across runs
models_temp = {}
for m in models:
models_temp.add(m)
models_temp[m] = None
for mm in m.model_patches_models():
models_temp.add(mm)
models_temp[mm] = None
models = models_temp
models = list(models_temp)
models.reverse()
models_to_load = []

View File

@ -37,7 +37,8 @@ def prefetch_queue_pop(queue, device, module):
consumed = queue.pop(0)
if consumed is not None:
offload_stream, prefetch_state = consumed
offload_stream.wait_stream(comfy.model_management.current_stream(device))
if offload_stream is not None:
offload_stream.wait_stream(comfy.model_management.current_stream(device))
_, comfy_modules = prefetch_state
if comfy_modules is not None:
cleanup_prefetched_modules(comfy_modules)

View File

@ -253,6 +253,9 @@ def resolve_cast_module_with_vbar(s, dtype, device, bias_dtype, compute_dtype, w
if bias is not None:
bias = post_cast(s, "bias", bias, bias_dtype, prefetch["resident"], update_weight)
if prefetch["signature"] is not None:
prefetch["resident"] = True
return weight, bias

View File

@ -1,6 +1,8 @@
import torch
import logging
from comfy.cli_args import args
try:
import comfy_kitchen as ck
from comfy_kitchen.tensor import (
@ -21,7 +23,15 @@ try:
ck.registry.disable("cuda")
logging.warning("WARNING: You need pytorch with cu130 or higher to use optimized CUDA operations.")
ck.registry.disable("triton")
if args.enable_triton_backend:
try:
import triton
logging.info("Found triton %s. Enabling comfy-kitchen triton backend.", triton.__version__)
except ImportError as e:
logging.error(f"Failed to import triton, Error: {e}, the comfy-kitchen triton backend will not be available.")
ck.registry.disable("triton")
else:
ck.registry.disable("triton")
for k, v in ck.list_backends().items():
logging.info(f"Found comfy_kitchen backend {k}: {v}")
except ImportError as e:

View File

@ -89,7 +89,8 @@ def get_additional_models(conds, dtype):
gligen += get_models_from_cond(conds[k], "gligen")
add_models += get_models_from_cond(conds[k], "additional_models")
control_nets = set(cnets)
# Order-preserving dedup. A plain set() would randomize iteration order across runs
control_nets = list(dict.fromkeys(cnets))
inference_memory = 0
control_models = []

View File

@ -43,67 +43,7 @@ class UploadType(str, Enum):
model = "file_upload"
class RemoteItemSchema:
"""Describes how to map API response objects to rich dropdown items.
All *_field parameters use dot-path notation (e.g. ``"labels.gender"``).
``label_field`` and ``description_field`` additionally support template strings
with ``{field}`` placeholders (e.g. ``"{name} ({labels.accent})"``).
"""
def __init__(
self,
value_field: str,
label_field: str,
preview_url_field: str | None = None,
preview_type: Literal["image", "video", "audio"] = "image",
description_field: str | None = None,
search_fields: list[str] | None = None,
):
if preview_type not in ("image", "video", "audio"):
raise ValueError(
f"RemoteItemSchema: 'preview_type' must be 'image', 'video', or 'audio'; got {preview_type!r}."
)
if search_fields is not None:
for f in search_fields:
if "{" in f or "}" in f:
raise ValueError(
f"RemoteItemSchema: 'search_fields' must be dot-paths, not template strings (got {f!r})."
)
self.value_field = value_field
"""Dot-path to the unique identifier within each item.
This value is stored in the widget and passed to execute()."""
self.label_field = label_field
"""Dot-path to the display name, or a template string with {field} placeholders."""
self.preview_url_field = preview_url_field
"""Dot-path to a preview media URL. If None, no preview is shown."""
self.preview_type = preview_type
"""How to render the preview: "image", "video", or "audio"."""
self.description_field = description_field
"""Optional dot-path or template for a subtitle line shown below the label."""
self.search_fields = search_fields
"""Dot-paths to fields included in the search index. When unset, search falls back to
the resolved label (i.e. ``label_field`` after template substitution). Note that template
label strings (e.g. ``"{first} {last}"``) are not valid path entries here — list the
underlying paths (``["first", "last"]``) instead."""
def as_dict(self):
return prune_dict({
"value_field": self.value_field,
"label_field": self.label_field,
"preview_url_field": self.preview_url_field,
"preview_type": self.preview_type,
"description_field": self.description_field,
"search_fields": self.search_fields,
})
class RemoteOptions:
"""Plain remote combo: fetches a list of strings/objects and populates a standard dropdown.
Use this for lightweight lists from endpoints that return a bare array (or an array under
``response_key``). For rich dropdowns with previews, search, filtering, or pagination,
use :class:`RemoteComboOptions` and the ``remote_combo=`` parameter on ``Combo.Input``.
"""
def __init__(self, route: str, refresh_button: bool, control_after_refresh: Literal["first", "last"]="first",
timeout: int=None, max_retries: int=None, refresh: int=None):
self.route = route
@ -130,80 +70,6 @@ class RemoteOptions:
})
class RemoteComboOptions:
"""Rich remote combo: populates a Vue dropdown with previews, search, and filtering.
Attached to a :class:`Combo.Input` via ``remote_combo=`` (not ``remote=``). Requires an
``item_schema`` describing how to map API response objects to dropdown items.
Response-shape contract: the endpoint returns the full items array in a single response
(either at the top level, or at the dot-path given by ``response_key``). Backing endpoints
that paginate upstream are expected to aggregate and cache server-side.
"""
def __init__(
self,
route: str,
item_schema: RemoteItemSchema,
refresh_button: bool = True,
auto_select: Literal["first", "last"] | None = None,
timeout: int | None = None,
max_retries: int | None = None,
refresh: int | None = None,
response_key: str | None = None,
):
if auto_select is not None and auto_select not in ("first", "last"):
raise ValueError(
f"RemoteComboOptions: 'auto_select' must be 'first', 'last', or None; got {auto_select!r}."
)
if refresh is not None and 0 < refresh < 128:
raise ValueError(
f"RemoteComboOptions: 'refresh' must be >= 128 (ms TTL) or <= 0 (cache never expires); got {refresh}."
)
if timeout is not None and timeout < 0:
raise ValueError(
f"RemoteComboOptions: 'timeout' must be >= 0 (got {timeout})."
)
if max_retries is not None and max_retries < 0:
raise ValueError(
f"RemoteComboOptions: 'max_retries' must be >= 0 (got {max_retries})."
)
if not route.startswith("/"):
raise ValueError(
f"RemoteComboOptions: 'route' must be a relative path starting with '/'; got {route!r}."
)
self.route = route
"""Relative path to the remote source (must start with ``/``). The frontend resolves this
against the comfy-api base URL and injects auth headers; absolute URLs are rejected."""
self.item_schema = item_schema
"""Required: describes how each API response object maps to a dropdown item."""
self.refresh_button = refresh_button
"""Specifies whether to show a refresh button next to the widget."""
self.auto_select = auto_select
"""Fallback item to select when the widget's value is empty. Never overrides an existing
selection. Default None means no fallback."""
self.timeout = timeout
"""Maximum time to wait for a response, in milliseconds."""
self.max_retries = max_retries
"""Maximum number of retries before aborting the request. Default None uses the frontend's built-in limit."""
self.refresh = refresh
"""TTL of the cached value in milliseconds. Must be >= 128 (ms TTL) or <= 0 (cache never expires,
re-fetched only via the refresh button). Default None uses the frontend's built-in behavior."""
self.response_key = response_key
"""Dot-path to the items array within the response (when not at the top level)."""
def as_dict(self):
return prune_dict({
"route": self.route,
"item_schema": self.item_schema.as_dict(),
"refresh_button": self.refresh_button,
"auto_select": self.auto_select,
"timeout": self.timeout,
"max_retries": self.max_retries,
"refresh": self.refresh,
"response_key": self.response_key,
})
class NumberDisplay(str, Enum):
number = "number"
slider = "slider"
@ -493,16 +359,11 @@ class Combo(ComfyTypeIO):
upload: UploadType=None,
image_folder: FolderType=None,
remote: RemoteOptions=None,
remote_combo: RemoteComboOptions=None,
socketless: bool=None,
extra_dict=None,
raw_link: bool=None,
advanced: bool=None,
):
if remote is not None and remote_combo is not None:
raise ValueError("Combo.Input: pass either 'remote' or 'remote_combo', not both.")
if options is not None and remote_combo is not None:
raise ValueError("Combo.Input: pass either 'options' or 'remote_combo', not both.")
if isinstance(options, type) and issubclass(options, Enum):
options = [v.value for v in options]
if isinstance(default, Enum):
@ -514,7 +375,6 @@ class Combo(ComfyTypeIO):
self.upload = upload
self.image_folder = image_folder
self.remote = remote
self.remote_combo = remote_combo
self.default: str
def as_dict(self):
@ -525,7 +385,6 @@ class Combo(ComfyTypeIO):
**({self.upload.value: True} if self.upload is not None else {}),
"image_folder": self.image_folder.value if self.image_folder else None,
"remote": self.remote.as_dict() if self.remote else None,
"remote_combo": self.remote_combo.as_dict() if self.remote_combo else None,
})
class Output(Output):
@ -2362,9 +2221,7 @@ class NodeReplace:
__all__ = [
"FolderType",
"UploadType",
"RemoteItemSchema",
"RemoteOptions",
"RemoteComboOptions",
"NumberDisplay",
"ControlAfterGenerate",

View File

@ -33,7 +33,7 @@ class OpenAIVideoSora2(IO.ComfyNode):
def define_schema(cls):
return IO.Schema(
node_id="OpenAIVideoSora2",
display_name="OpenAI Sora - Video (Deprecated)",
display_name="OpenAI Sora - Video (DEPRECATED)",
category="api node/video/Sora",
description=(
"OpenAI video and audio generation.\n\n"

View File

@ -199,6 +199,9 @@ class FILMNet(nn.Module):
def get_dtype(self):
return self.extract.extract_sublevels.convs[0][0].conv.weight.dtype
def memory_used_forward(self, shape, dtype):
return 1700 * shape[1] * shape[2] * dtype.itemsize
def _build_warp_grids(self, H, W, device):
"""Pre-compute warp grids for all pyramid levels."""
if (H, W) in self._warp_grids:

View File

@ -74,6 +74,9 @@ class IFNet(nn.Module):
def get_dtype(self):
return self.encode.cnn0.weight.dtype
def memory_used_forward(self, shape, dtype):
return 300 * shape[1] * shape[2] * dtype.itemsize
def _build_warp_grids(self, H, W, device):
if (H, W) in self._warp_grids:
return

View File

@ -202,14 +202,11 @@ class JoinImageWithAlpha(io.ComfyNode):
@classmethod
def execute(cls, image: torch.Tensor, alpha: torch.Tensor) -> io.NodeOutput:
batch_size = min(len(image), len(alpha))
out_images = []
batch_size = max(len(image), len(alpha))
alpha = 1.0 - resize_mask(alpha, image.shape[1:])
for i in range(batch_size):
out_images.append(torch.cat((image[i][:,:,:3], alpha[i].unsqueeze(2)), dim=2))
return io.NodeOutput(torch.stack(out_images))
alpha = comfy.utils.repeat_to_batch_size(alpha, batch_size)
image = comfy.utils.repeat_to_batch_size(image, batch_size)
return io.NodeOutput(torch.cat((image[..., :3], alpha.unsqueeze(-1)), dim=-1))
class CompositingExtension(ComfyExtension):

View File

@ -37,7 +37,7 @@ class FrameInterpolationModelLoader(io.ComfyNode):
model = cls._detect_and_load(sd)
dtype = torch.float16 if model_management.should_use_fp16(model_management.get_torch_device()) else torch.float32
model.eval().to(dtype)
patcher = comfy.model_patcher.ModelPatcher(
patcher = comfy.model_patcher.CoreModelPatcher(
model,
load_device=model_management.get_torch_device(),
offload_device=model_management.unet_offload_device(),
@ -78,7 +78,7 @@ class FrameInterpolate(io.ComfyNode):
return io.Schema(
node_id="FrameInterpolate",
display_name="Frame Interpolate",
category="image/video",
category="video",
search_aliases=["rife", "film", "frame interpolation", "slow motion", "interpolate frames", "vfi"],
inputs=[
FrameInterpolationModel.Input("interp_model"),
@ -98,16 +98,13 @@ class FrameInterpolate(io.ComfyNode):
if num_frames < 2 or multiplier < 2:
return io.NodeOutput(images)
model_management.load_model_gpu(interp_model)
device = interp_model.load_device
dtype = interp_model.model_dtype()
inference_model = interp_model.model
# Free VRAM for inference activations (model weights + ~20x a single frame's worth)
H, W = images.shape[1], images.shape[2]
activation_mem = H * W * 3 * images.element_size() * 20
model_management.free_memory(activation_mem, device)
activation_mem = inference_model.memory_used_forward(images.shape, dtype)
model_management.load_models_gpu([interp_model], memory_required=activation_mem)
align = getattr(inference_model, "pad_align", 1)
H, W = images.shape[1], images.shape[2]
# Prepare a single padded frame on device for determining output dimensions
def prepare_frame(idx):

View File

@ -11,7 +11,7 @@ class ImageCompare(IO.ComfyNode):
def define_schema(cls):
return IO.Schema(
node_id="ImageCompare",
display_name="Image Compare",
display_name="Compare Images",
description="Compares two images side by side with a slider.",
category="image",
essentials_category="Image Tools",

View File

@ -24,7 +24,7 @@ class ImageCrop(IO.ComfyNode):
return IO.Schema(
node_id="ImageCrop",
search_aliases=["trim"],
display_name="Image Crop (Deprecated)",
display_name="Crop Image (DEPRECATED)",
category="image/transform",
is_deprecated=True,
essentials_category="Image Tools",
@ -56,7 +56,7 @@ class ImageCropV2(IO.ComfyNode):
return IO.Schema(
node_id="ImageCropV2",
search_aliases=["trim"],
display_name="Image Crop",
display_name="Crop Image",
category="image/transform",
essentials_category="Image Tools",
has_intermediate_output=True,
@ -109,6 +109,7 @@ class RepeatImageBatch(IO.ComfyNode):
return IO.Schema(
node_id="RepeatImageBatch",
search_aliases=["duplicate image", "clone image"],
display_name="Repeat Image Batch",
category="image/batch",
inputs=[
IO.Image.Input("image"),
@ -131,6 +132,7 @@ class ImageFromBatch(IO.ComfyNode):
return IO.Schema(
node_id="ImageFromBatch",
search_aliases=["select image", "pick from batch", "extract image"],
display_name="Get Image from Batch",
category="image/batch",
inputs=[
IO.Image.Input("image"),
@ -157,7 +159,8 @@ class ImageAddNoise(IO.ComfyNode):
return IO.Schema(
node_id="ImageAddNoise",
search_aliases=["film grain"],
category="image",
display_name="Add Noise to Image",
category="image/postprocessing",
inputs=[
IO.Image.Input("image"),
IO.Int.Input(
@ -259,7 +262,7 @@ class ImageStitch(IO.ComfyNode):
return IO.Schema(
node_id="ImageStitch",
search_aliases=["combine images", "join images", "concatenate images", "side by side"],
display_name="Image Stitch",
display_name="Stitch Images",
description="Stitches image2 to image1 in the specified direction.\n"
"If image2 is not provided, returns image1 unchanged.\n"
"Optional spacing can be added between images.",
@ -434,6 +437,7 @@ class ResizeAndPadImage(IO.ComfyNode):
return IO.Schema(
node_id="ResizeAndPadImage",
search_aliases=["fit to size"],
display_name="Resize And Pad Image",
category="image/transform",
inputs=[
IO.Image.Input("image"),
@ -485,6 +489,7 @@ class SaveSVGNode(IO.ComfyNode):
return IO.Schema(
node_id="SaveSVGNode",
search_aliases=["export vector", "save vector graphics"],
display_name="Save SVG",
description="Save SVG files on disk.",
category="image/save",
inputs=[
@ -591,7 +596,7 @@ class ImageRotate(IO.ComfyNode):
def define_schema(cls):
return IO.Schema(
node_id="ImageRotate",
display_name="Image Rotate",
display_name="Rotate Image",
search_aliases=["turn", "flip orientation"],
category="image/transform",
essentials_category="Image Tools",
@ -624,6 +629,7 @@ class ImageFlip(IO.ComfyNode):
return IO.Schema(
node_id="ImageFlip",
search_aliases=["mirror", "reflect"],
display_name="Flip Image",
category="image/transform",
inputs=[
IO.Image.Input("image"),
@ -650,6 +656,7 @@ class ImageScaleToMaxDimension(IO.ComfyNode):
def define_schema(cls):
return IO.Schema(
node_id="ImageScaleToMaxDimension",
display_name="Scale Image to Max Dimension",
category="image/upscaling",
inputs=[
IO.Image.Input("image"),

View File

@ -80,7 +80,8 @@ class ImageCompositeMasked(IO.ComfyNode):
def define_schema(cls):
return IO.Schema(
node_id="ImageCompositeMasked",
search_aliases=["paste image", "overlay", "layer"],
search_aliases=["overlay", "layer", "paste image", "images composition"],
display_name="Image Composite Masked",
category="image",
inputs=[
IO.Image.Input("destination"),
@ -201,6 +202,7 @@ class InvertMask(IO.ComfyNode):
return IO.Schema(
node_id="InvertMask",
search_aliases=["reverse mask", "flip mask"],
display_name="Invert Mask",
category="mask",
inputs=[
IO.Mask.Input("mask"),
@ -222,6 +224,7 @@ class CropMask(IO.ComfyNode):
return IO.Schema(
node_id="CropMask",
search_aliases=["cut mask", "extract mask region", "mask slice"],
display_name="Crop Mask",
category="mask",
inputs=[
IO.Mask.Input("mask"),
@ -247,7 +250,8 @@ class MaskComposite(IO.ComfyNode):
def define_schema(cls):
return IO.Schema(
node_id="MaskComposite",
search_aliases=["combine masks", "blend masks", "layer masks"],
search_aliases=["combine masks", "blend masks", "layer masks", "masks composition"],
display_name="Combine Masks",
category="mask",
inputs=[
IO.Mask.Input("destination"),
@ -298,6 +302,7 @@ class FeatherMask(IO.ComfyNode):
return IO.Schema(
node_id="FeatherMask",
search_aliases=["soft edge mask", "blur mask edges", "gradient mask edge"],
display_name="Feather Mask",
category="mask",
inputs=[
IO.Mask.Input("mask"),

View File

@ -59,7 +59,8 @@ class ImageRGBToYUV(io.ComfyNode):
return io.Schema(
node_id="ImageRGBToYUV",
search_aliases=["color space conversion"],
category="image/batch",
display_name="Image RGB to YUV",
category="image/color",
inputs=[
io.Image.Input("image"),
],
@ -81,7 +82,8 @@ class ImageYUVToRGB(io.ComfyNode):
return io.Schema(
node_id="ImageYUVToRGB",
search_aliases=["color space conversion"],
category="image/batch",
display_name="Image YUV to RGB",
category="image/color",
inputs=[
io.Image.Input("Y"),
io.Image.Input("U"),

View File

@ -20,7 +20,8 @@ class Blend(io.ComfyNode):
def define_schema(cls):
return io.Schema(
node_id="ImageBlend",
display_name="Image Blend",
search_aliases=["mix images"],
display_name="Blend Images",
category="image/postprocessing",
essentials_category="Image Tools",
inputs=[
@ -224,6 +225,7 @@ class ImageScaleToTotalPixels(io.ComfyNode):
def define_schema(cls):
return io.Schema(
node_id="ImageScaleToTotalPixels",
display_name="Scale Image to Total Pixels",
category="image/upscaling",
inputs=[
io.Image.Input("image"),
@ -568,7 +570,7 @@ class BatchImagesNode(io.ComfyNode):
return io.Schema(
node_id="BatchImagesNode",
display_name="Batch Images",
category="image",
category="image/batch",
essentials_category="Image Tools",
search_aliases=["batch", "image batch", "batch images", "combine images", "merge images", "stack images"],
inputs=[
@ -666,12 +668,13 @@ class ColorTransfer(io.ComfyNode):
def define_schema(cls):
return io.Schema(
node_id="ColorTransfer",
display_name="Color Transfer",
category="image/postprocessing",
description="Match the colors of one image to another using various algorithms.",
search_aliases=["color match", "color grading", "color correction", "match colors", "color transform", "mkl", "reinhard", "histogram"],
inputs=[
io.Image.Input("image_target", tooltip="Image(s) to apply the color transform to."),
io.Image.Input("image_ref", optional=True, tooltip="Reference image(s) to match colors to. If not provided, processing is skipped"),
io.Image.Input("image_ref", tooltip="Reference image(s) to match colors to."),
io.Combo.Input("method", options=['reinhard_lab', 'mkl_lab', 'histogram'],),
io.DynamicCombo.Input("source_stats",
tooltip="per_frame: each frame matched to image_ref individually. uniform: pool stats across all source frames as baseline, match to image_ref. target_frame: use one chosen frame as the baseline for the transform to image_ref, applied uniformly to all frames (preserves relative differences)",

View File

@ -49,7 +49,7 @@ class Int(io.ComfyNode):
display_name="Int",
category="utils/primitive",
inputs=[
io.Int.Input("value", min=-sys.maxsize, max=sys.maxsize, control_after_generate=True),
io.Int.Input("value", min=-sys.maxsize, max=sys.maxsize, control_after_generate=io.ControlAfterGenerate.fixed),
],
outputs=[io.Int.Output()],
)

View File

@ -17,7 +17,8 @@ class SaveWEBM(io.ComfyNode):
return io.Schema(
node_id="SaveWEBM",
search_aliases=["export webm"],
category="image/video",
display_name="Save WEBM",
category="video",
is_experimental=True,
inputs=[
io.Image.Input("images"),
@ -72,7 +73,7 @@ class SaveVideo(io.ComfyNode):
node_id="SaveVideo",
search_aliases=["export video"],
display_name="Save Video",
category="image/video",
category="video",
essentials_category="Basics",
description="Saves the input images to your ComfyUI output directory.",
inputs=[
@ -121,7 +122,7 @@ class CreateVideo(io.ComfyNode):
node_id="CreateVideo",
search_aliases=["images to video"],
display_name="Create Video",
category="image/video",
category="video",
description="Create a video from images.",
inputs=[
io.Image.Input("images", tooltip="The images to create a video from."),
@ -146,7 +147,7 @@ class GetVideoComponents(io.ComfyNode):
node_id="GetVideoComponents",
search_aliases=["extract frames", "split video", "video to images", "demux"],
display_name="Get Video Components",
category="image/video",
category="video",
description="Extracts all components from a video: frames, audio, and framerate.",
inputs=[
io.Video.Input("video", tooltip="The video to extract components from."),
@ -174,7 +175,7 @@ class LoadVideo(io.ComfyNode):
node_id="LoadVideo",
search_aliases=["import video", "open video", "video file"],
display_name="Load Video",
category="image/video",
category="video",
essentials_category="Basics",
inputs=[
io.Combo.Input("file", options=sorted(files), upload=io.UploadType.video),
@ -216,7 +217,7 @@ class VideoSlice(io.ComfyNode):
"frame load cap",
"start time",
],
category="image/video",
category="video",
essentials_category="Video Tools",
inputs=[
io.Video.Input("video"),

View File

@ -1016,10 +1016,6 @@ async def validate_inputs(prompt_id, prompt, item, validated, visiting=None):
if isinstance(input_type, list) or input_type == io.Combo.io_type:
if input_type == io.Combo.io_type:
# Skip validation for combos with remote options — options
# are fetched client-side and not available on the server.
if extra_info.get("remote_combo"):
continue
combo_options = extra_info.get("options", [])
else:
combo_options = input_type

View File

@ -28,7 +28,7 @@
#config for a1111 ui
#all you have to do is uncomment this (remove the #) and change the base_path to where yours is installed
#a111:
#a1111:
# base_path: path/to/stable-diffusion-webui/
# checkpoints: models/Stable-diffusion
# configs: models/Stable-diffusion

View File

@ -1754,57 +1754,49 @@ class LoadImage:
return True
class LoadImageMask:
class LoadImageMask(LoadImage):
ESSENTIALS_CATEGORY = "Image Tools"
SEARCH_ALIASES = ["import mask", "alpha mask", "channel mask"]
_color_channels = ["alpha", "red", "green", "blue"]
@classmethod
def INPUT_TYPES(s):
input_dir = folder_paths.get_input_directory()
files = [f for f in os.listdir(input_dir) if os.path.isfile(os.path.join(input_dir, f))]
return {"required":
{"image": (sorted(files), {"image_upload": True}),
"channel": (s._color_channels, ), }
}
types = super().INPUT_TYPES()
return {
"required": {
**types["required"],
"channel": (s._color_channels, )
}
}
CATEGORY = "mask"
RETURN_TYPES = ("MASK",)
FUNCTION = "load_image"
def load_image(self, image, channel):
image_path = folder_paths.get_annotated_filepath(image)
i = node_helpers.pillow(Image.open, image_path)
i = node_helpers.pillow(ImageOps.exif_transpose, i)
if i.getbands() != ("R", "G", "B", "A"):
if i.mode == 'I':
i = i.point(lambda i: i * (1 / 255))
i = i.convert("RGBA")
mask = None
FUNCTION = "load_image_mask"
def load_image_mask(self, image, channel):
image_tensor, mask_tensor = super().load_image(image)
c = channel[0].upper()
if c in i.getbands():
mask = np.array(i.getchannel(c)).astype(np.float32) / 255.0
mask = torch.from_numpy(mask)
if c == 'A':
mask = 1. - mask
if c == 'A':
return (mask_tensor,)
channel_idx = {'R': 0, 'G': 1, 'B': 2}.get(c, 0)
if channel_idx < image_tensor.shape[-1]:
return (image_tensor[..., channel_idx].clone(),)
else:
mask = torch.zeros((64,64), dtype=torch.float32, device="cpu")
return (mask.unsqueeze(0),)
empty_mask = torch.zeros(
image_tensor.shape[:-1],
dtype=image_tensor.dtype,
device=image_tensor.device
)
return (empty_mask,)
@classmethod
def IS_CHANGED(s, image, channel):
image_path = folder_paths.get_annotated_filepath(image)
m = hashlib.sha256()
with open(image_path, 'rb') as f:
m.update(f.read())
return m.digest().hex()
@classmethod
def VALIDATE_INPUTS(s, image):
if not folder_paths.exists_annotated_filepath(image):
return "Invalid image file: {}".format(image)
return True
return super().IS_CHANGED(image)
class LoadImageOutput(LoadImage):
@@ -1895,7 +1887,7 @@ class ImageInvert:
RETURN_TYPES = ("IMAGE",)
FUNCTION = "invert"
CATEGORY = "image"
CATEGORY = "image/color"
def invert(self, image):
s = 1.0 - image
@@ -1911,7 +1903,7 @@ class ImageBatch:
RETURN_TYPES = ("IMAGE",)
FUNCTION = "batch"
CATEGORY = "image"
CATEGORY = "image/batch"
DEPRECATED = True
def batch(self, image1, image2):
@@ -1968,7 +1960,7 @@ class ImagePadForOutpaint:
RETURN_TYPES = ("IMAGE", "MASK")
FUNCTION = "expand_image"
CATEGORY = "image"
CATEGORY = "image/transform"
def expand_image(self, image, left, top, right, bottom, feathering):
d1, d2, d3, d4 = image.size()
@@ -2111,7 +2103,7 @@ NODE_DISPLAY_NAME_MAPPINGS = {
"ConditioningSetArea": "Conditioning (Set Area)",
"ConditioningSetAreaPercentage": "Conditioning (Set Area with Percentage)",
"ConditioningSetMask": "Conditioning (Set Mask)",
"ControlNetApply": "Apply ControlNet (OLD)",
"ControlNetApply": "Apply ControlNet (DEPRECATED)",
"ControlNetApplyAdvanced": "Apply ControlNet",
# Latent
"VAEEncodeForInpaint": "VAE Encode (for Inpainting)",
@@ -2129,6 +2121,7 @@ NODE_DISPLAY_NAME_MAPPINGS = {
"LatentFromBatch" : "Latent From Batch",
"RepeatLatentBatch": "Repeat Latent Batch",
# Image
"EmptyImage": "Empty Image",
"SaveImage": "Save Image",
"PreviewImage": "Preview Image",
"LoadImage": "Load Image",
@@ -2136,15 +2129,15 @@ NODE_DISPLAY_NAME_MAPPINGS = {
"LoadImageOutput": "Load Image (from Outputs)",
"ImageScale": "Upscale Image",
"ImageScaleBy": "Upscale Image By",
"ImageInvert": "Invert Image",
"ImageInvert": "Invert Image Colors",
"ImagePadForOutpaint": "Pad Image for Outpainting",
"ImageBatch": "Batch Images",
"ImageCrop": "Image Crop",
"ImageStitch": "Image Stitch",
"ImageBlend": "Image Blend",
"ImageBlur": "Image Blur",
"ImageQuantize": "Image Quantize",
"ImageSharpen": "Image Sharpen",
"ImageBatch": "Batch Images (DEPRECATED)",
"ImageCrop": "Crop Image",
"ImageStitch": "Stitch Images",
"ImageBlend": "Blend Images",
"ImageBlur": "Blur Image",
"ImageQuantize": "Quantize Image",
"ImageSharpen": "Sharpen Image",
"ImageScaleToTotalPixels": "Scale Image to Total Pixels",
"GetImageSize": "Get Image Size",
# _for_testing
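The refactored channel selection in `LoadImageMask` above reduces to: map the requested channel to an index, slice that channel out if the image has it, otherwise return an all-zero mask of the image's spatial shape. A pure-Python sketch of that fallback logic (nested lists stand in for the torch tensors; the function name is illustrative, not ComfyUI API):

```python
def mask_from_channel(image, channel):
    """Mirror the LoadImageMask fallback on a [H][W][C] nested list.

    'A' maps to index 3 here purely for illustration; in the diff the
    alpha case returns the mask tensor LoadImage already computed.
    """
    c = channel[0].upper()
    idx = 3 if c == 'A' else {'R': 0, 'G': 1, 'B': 2}.get(c, 0)
    h, w, channels = len(image), len(image[0]), len(image[0][0])
    if idx < channels:
        # requested channel exists: slice it out as the mask
        return [[px[idx] for px in row] for row in image]
    # channel missing (e.g. alpha on an RGB image): empty mask, same H x W
    return [[0.0] * w for _ in range(h)]

# a 1x2 RGB image: red channel present, alpha absent
img = [[[0.2, 0.4, 0.6], [1.0, 0.0, 0.5]]]
print(mask_from_channel(img, "red"))    # [[0.2, 1.0]]
print(mask_from_channel(img, "alpha"))  # [[0.0, 0.0]]
```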

View File

@@ -1999,6 +1999,26 @@ components:
items:
type: string
description: List of node IDs to execute (partial graph execution)
workflow_id:
type: string
format: uuid
nullable: true
x-runtime: [cloud]
description: |
UUID identifying a hosted-cloud workflow entity to associate with this
job. Local ComfyUI doesn't track workflow entities and returns `null`
(or omits the field). The `x-runtime: [cloud]` extension marks this
as populated only by the hosted-cloud runtime; absence of the tag
means a field is populated by all runtimes.
workflow_version_id:
type: string
format: uuid
nullable: true
x-runtime: [cloud]
description: |
UUID identifying a hosted-cloud workflow version to associate with
this job. Local ComfyUI returns `null` (or omits the field). See
`workflow_id` above for `x-runtime` semantics.
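The `x-runtime` convention above is mechanical enough for readers to apply in code: an untagged field is populated by every runtime, a tagged field only by the runtimes listed. A minimal sketch of filtering a schema's properties this way (the helper and the inline property dict are illustrative, not part of the spec):

```python
def fields_for_runtime(properties, runtime):
    """Property names the given runtime populates, per x-runtime tags."""
    return [
        name for name, schema in properties.items()
        # an absent tag defaults to "all runtimes", so the test passes
        if runtime in schema.get("x-runtime", [runtime])
    ]

properties = {
    "workflow_id": {"type": "string", "x-runtime": ["cloud"]},
    "workflow_version_id": {"type": "string", "x-runtime": ["cloud"]},
    "prompt": {"type": "object"},  # untagged: populated by all runtimes
}

print(fields_for_runtime(properties, "local"))  # ['prompt']
print(fields_for_runtime(properties, "cloud"))  # all three names
```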
PromptResponse:
type: object
@@ -2347,7 +2367,12 @@ components:
description: Device type (cuda, mps, cpu, etc.)
index:
type: number
description: Device index
nullable: true
description: |
Device index within its type (e.g. CUDA ordinal for `cuda:0`,
`cuda:1`). `null` for devices with no index, including the CPU
device returned in `--cpu` mode (PyTorch's `torch.device('cpu').index`
is `None`).
vram_total:
type: number
description: Total VRAM in bytes
@@ -2503,7 +2528,18 @@ components:
description: Alternative search terms for finding this node
essentials_category:
type: string
description: Category override used by the essentials pack
nullable: true
description: |
Category override used by the essentials pack. The
`essentials_category` key may be present with a string value,
present and `null`, or absent entirely:
- V1 nodes: `essentials_category` is **omitted** when the node
class doesn't define an `ESSENTIALS_CATEGORY` attribute, and
**`null`** if the attribute is explicitly set to `None`.
- V3 nodes (`comfy_api.latest.io`): `essentials_category` is
**always present**, and **`null`** for nodes whose `Schema`
doesn't populate it.
# -------------------------------------------------------------------
# Models
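The three-way distinction documented for `essentials_category` above (key omitted vs. present-and-`null` vs. a string) hinges on whether a V1 node class defines the attribute at all. A sketch of that serialization rule (class and helper names are illustrative):

```python
class V1NodeNoAttr:
    """No ESSENTIALS_CATEGORY attribute: the key is omitted entirely."""

class V1NodeNull:
    """Attribute explicitly None: the key is present with a null value."""
    ESSENTIALS_CATEGORY = None

class V1NodeSet:
    ESSENTIALS_CATEGORY = "Image Tools"

def v1_node_info(cls):
    # Attribute absent -> key omitted; attribute None -> key maps to None,
    # which serializes to null in the JSON node-info payload.
    info = {}
    if hasattr(cls, "ESSENTIALS_CATEGORY"):
        info["essentials_category"] = cls.ESSENTIALS_CATEGORY
    return info

print(v1_node_info(V1NodeNoAttr))  # {}
print(v1_node_info(V1NodeNull))    # {'essentials_category': None}
print(v1_node_info(V1NodeSet))     # {'essentials_category': 'Image Tools'}
```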

View File

@@ -1,3 +1,4 @@
import errno
import os
import sys
import asyncio
@@ -1245,7 +1246,13 @@ class PromptServer():
address = addr[0]
port = addr[1]
site = web.TCPSite(runner, address, port, ssl_context=ssl_ctx)
await site.start()
try:
await site.start()
except OSError as e:
if e.errno == errno.EADDRINUSE:
logging.error(f"Port {port} is already in use on address {address}. Please close the other application or use a different port with --port.")
raise SystemExit(1)
raise
if not hasattr(self, 'address'):
self.address = address #TODO: remove this
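The `EADDRINUSE` guard added above can be exercised without aiohttp: binding a second plain socket to an already-bound port raises `OSError` with that errno. A stdlib-only sketch (`try_bind` is an illustrative helper, not server code):

```python
import errno
import socket

def try_bind(port):
    """Attempt a TCP bind, mirroring the server's EADDRINUSE check."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind(("127.0.0.1", port))
        return s
    except OSError as e:
        s.close()
        if e.errno == errno.EADDRINUSE:
            return None  # the server logs and exits with SystemExit(1) here
        raise

# occupy an OS-assigned free port, then try to bind it a second time
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))
port = holder.getsockname()[1]
holder.listen(1)

print(try_bind(port) is None)  # True: the port is in use
holder.close()
```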

View File

@@ -1,139 +0,0 @@
import pytest
from comfy_api.latest._io import (
Combo,
RemoteComboOptions,
RemoteItemSchema,
RemoteOptions,
)
def _schema(**overrides):
defaults = dict(value_field="id", label_field="name")
return RemoteItemSchema(**{**defaults, **overrides})
def _combo(**overrides):
defaults = dict(route="/proxy/foo", item_schema=_schema())
return RemoteComboOptions(**{**defaults, **overrides})
def test_item_schema_defaults_accepted():
d = _schema().as_dict()
assert d == {"value_field": "id", "label_field": "name", "preview_type": "image"}
def test_item_schema_full_config_accepted():
d = _schema(
preview_url_field="preview",
preview_type="audio",
description_field="desc",
search_fields=["first", "last", "profile.email"],
).as_dict()
assert d["preview_type"] == "audio"
assert d["search_fields"] == ["first", "last", "profile.email"]
@pytest.mark.parametrize(
"bad_fields",
[
["{first} {last}"],
["name", "{age}"],
["leading{"],
["trailing}"],
],
)
def test_item_schema_rejects_template_strings_in_search_fields(bad_fields):
with pytest.raises(ValueError, match="search_fields"):
_schema(search_fields=bad_fields)
@pytest.mark.parametrize("bad_preview_type", ["middle", "IMAGE", "", "gif"])
def test_item_schema_rejects_unknown_preview_type(bad_preview_type):
with pytest.raises(ValueError, match="preview_type"):
_schema(preview_type=bad_preview_type)
def test_combo_options_minimal_accepted():
d = _combo().as_dict()
assert d["route"] == "/proxy/foo"
assert d["refresh_button"] is True
assert "item_schema" in d
@pytest.mark.parametrize(
"route",
[
"/proxy/foo",
"/voices",
],
)
def test_combo_options_accepts_valid_routes(route):
_combo(route=route)
@pytest.mark.parametrize(
"route",
[
"",
"api.example.com/voices",
"voices",
"ftp-no-scheme",
"http://localhost:9000/voices",
"https://api.example.com/v1/voices",
],
)
def test_combo_options_rejects_non_relative_routes(route):
with pytest.raises(ValueError, match="'route'"):
_combo(route=route)
@pytest.mark.parametrize("bad_auto_select", ["middle", "FIRST", "", "firstlast"])
def test_combo_options_rejects_unknown_auto_select(bad_auto_select):
with pytest.raises(ValueError, match="auto_select"):
_combo(auto_select=bad_auto_select)
@pytest.mark.parametrize("bad_refresh", [1, 127])
def test_combo_options_refresh_in_forbidden_range_rejected(bad_refresh):
with pytest.raises(ValueError, match="refresh"):
_combo(refresh=bad_refresh)
@pytest.mark.parametrize("ok_refresh", [0, -1, 128])
def test_combo_options_refresh_valid_values_accepted(ok_refresh):
_combo(refresh=ok_refresh)
def test_combo_options_timeout_negative_rejected():
with pytest.raises(ValueError, match="timeout"):
_combo(timeout=-1)
def test_combo_options_max_retries_negative_rejected():
with pytest.raises(ValueError, match="max_retries"):
_combo(max_retries=-1)
def test_combo_options_as_dict_prunes_none_fields():
d = _combo().as_dict()
for pruned in ("response_key", "refresh", "timeout", "max_retries", "auto_select"):
assert pruned not in d
def test_combo_input_accepts_remote_combo_alone():
Combo.Input("voice", remote_combo=_combo())
def test_combo_input_rejects_remote_plus_remote_combo():
with pytest.raises(ValueError, match="remote.*remote_combo"):
Combo.Input(
"voice",
remote=RemoteOptions(route="/r", refresh_button=True),
remote_combo=_combo(),
)
def test_combo_input_rejects_options_plus_remote_combo():
with pytest.raises(ValueError, match="options.*remote_combo"):
Combo.Input("voice", options=["a", "b"], remote_combo=_combo())