25a1bfab4e
chore(api-nodes-bytedance): mark "seededit" as deprecated, adjust display name of Seedream ( #11490 )
2025-12-30 08:33:34 -08:00
d7111e426a
ResizeByLongerSide: support video ( #11555 )
...
(cherry picked from commit 98c6840aa4e5fd5407ba9ab113d209011e474bf6)
2025-12-29 17:07:29 -08:00
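Resize-by-longer-side scales media so the longer dimension hits a target while preserving aspect ratio. A minimal sketch of that dimension math (function name and rounding behaviour are assumptions, not the node's actual code):

```python
def resize_by_longer_side(width, height, target):
    # Hypothetical helper: scale so the longer side equals `target`,
    # preserving aspect ratio. The real node's rounding may differ.
    scale = target / max(width, height)
    return round(width * scale), round(height * scale)

# e.g. a 1920x1080 frame with target 1024 becomes 1024x576
```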
0e6221cc79
Add some warnings for pin and unpin errors. ( #11561 )
2025-12-29 18:26:42 -05:00
9ca7e143af
mm: discard async errors from pinning failures ( #10738 )
...
Pretty much every error cudaHostRegister can throw also queues the same
error on the async GPU queue. This was fixed for the repinning error case,
but the bad mmap and plain ENOMEM cases are harder to detect.
Do some dummy GPU work to clear the error state.
2025-12-29 18:19:34 -05:00
8fd07170f1
Comment out unused norm_final in lumina/z image model. ( #11545 )
2025-12-28 22:07:25 -05:00
2943093a53
Enable async offload by default for AMD. ( #11534 )
2025-12-27 18:54:15 -05:00
36deef2c57
chore(api-nodes): switch to credits instead of $ ( #11489 )
2025-12-26 19:56:52 -08:00
0d2e4bdd44
fix(api-nodes-gemini): always force enhance_prompt to be True ( #11503 )
2025-12-26 19:55:30 -08:00
eff4ea0b62
[V3] converted nodes_images.py to V3 schema ( #11206 )
...
* converted nodes_images.py to V3 schema
* fix test
2025-12-26 19:39:02 -08:00
865568b7fc
feat(api-nodes): add Kling Motion Control node ( #11493 )
2025-12-26 19:16:21 -08:00
1e4e342f54
Fix noise with ancestral samplers when running inference on the CPU. ( #11528 )
2025-12-26 22:03:01 -05:00
16fb6849d2
bump comfyui_manager version to 4.0.4 ( #11521 )
2025-12-27 08:55:59 +09:00
d9a76cf66e
Specify in readme that we only support pytorch 2.4 and up. ( #11512 )
2025-12-25 23:46:51 -05:00
532e285079
Add a ManualSigmas node. ( #11499 )
...
Can be used to manually set the sigmas for a model.
The node accepts a list of integer and floating-point numbers separated
by any non-numeric character.
2025-12-24 19:09:37 -05:00
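Per the description above, the ManualSigmas node accepts numbers separated by any non-numeric character. A hedged sketch of such a parser (the regex and function name are illustrative, not the node's implementation):

```python
import re

def parse_sigmas(text):
    # Extract integer/float tokens; any run of non-numeric characters
    # (commas, semicolons, spaces, slashes, ...) acts as a separator.
    tokens = re.findall(r'\d+(?:\.\d+)?', text)
    return [float(t) for t in tokens]

# parse_sigmas("14.61, 7.8; 3.0 1.0/0.0") -> [14.61, 7.8, 3.0, 1.0, 0.0]
```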
4f067b07fb
chore: update workflow templates to v0.7.64 ( #11496 )
2025-12-24 18:54:21 -05:00
650e716dda
Bump comfyui-frontend-package to 1.35.9 ( #11470 )
...
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-12-23 21:29:41 -08:00
e4c61d7555
ComfyUI v0.6.0
v0.6.0
2025-12-23 20:50:02 -05:00
22ff1bbfcb
chore: update workflow templates to v0.7.63 ( #11482 )
2025-12-23 20:48:45 -05:00
f4f44bb807
api-nodes: use new custom endpoint for Nano Banana ( #11311 )
2025-12-23 12:10:27 -08:00
33aa808713
Make denoised output on custom sampler nodes work with nested tensors. ( #11471 )
2025-12-22 16:43:24 -05:00
eb0e10aec4
Update workflow templates to v0.7.62 ( #11467 )
2025-12-22 16:02:41 -05:00
c176b214cc
extend possible duration range for Kling O1 StartEndFrame node ( #11451 )
2025-12-21 22:44:49 -08:00
91bf6b6aa3
Add node to create empty latents for qwen image layered model. ( #11460 )
2025-12-21 19:59:40 -05:00
807538fe6c
Core release process. ( #11447 )
2025-12-20 20:02:02 -05:00
bbb11e2608
fix(api-nodes): Topaz 4k video upscaling ( #11438 )
2025-12-20 08:48:28 -08:00
0899012ad6
chore(api-nodes): by default set Watermark generation to False ( #11437 )
2025-12-19 22:24:37 -08:00
fb478f679a
Only apply gemma quant config to gemma model for newbie. ( #11436 )
2025-12-20 01:02:43 -05:00
4c432c11ed
Implement Jina CLIP v2 and NewBie dual CLIP ( #11415 )
...
* Implement Jina CLIP v2
* Support quantized Gemma in NewBie dual CLIP
2025-12-20 00:57:22 -05:00
31e961736a
Fix issue with batches and newbie. ( #11435 )
2025-12-20 00:23:51 -05:00
767ee30f21
ZImageFunControlNet: Fix mask concatenation in --gpu-only ( #11421 )
...
This operation works on latents, which under --gpu-only may be off the GPU.
The two VAE results follow the --gpu-only defined behaviour, so follow
the inpaint image's device when calculating the mask in this path.
2025-12-20 00:22:17 -05:00
3ab9748903
Disable prompt weights on newbie te. ( #11434 )
2025-12-20 00:19:47 -05:00
0aa7fa464e
Implement sliding attention in Gemma3 ( #11409 )
2025-12-20 00:16:46 -05:00
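Sliding attention restricts each token to a local window of recent keys instead of the full causal context. A minimal mask sketch under that reading (the exact window semantics in Gemma3 are an assumption here):

```python
def sliding_window_mask(seq_len, window):
    # True where query i may attend key j: causal (j <= i) and at most
    # `window - 1` tokens back. Hypothetical helper, not ComfyUI's code.
    return [[(j <= i) and (j > i - window) for j in range(seq_len)]
            for i in range(seq_len)]

# With seq_len=5, window=2, query 3 attends only keys 2 and 3.
```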
514c24d756
Fix error from logging line ( #11423 )
...
Co-authored-by: ozbayb <17261091+ozbayb@users.noreply.github.com>
2025-12-19 20:22:45 -08:00
809ce68749
Support nested tensor denoise masks. ( #11431 )
2025-12-19 19:59:25 -05:00
cc4ddba1b6
Allow enabling use of MIOpen by setting COMFYUI_ENABLE_MIOPEN=1 as an env var ( #11366 )
2025-12-19 17:01:50 -05:00
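The MIOpen change above is an opt-in gate on an environment variable. A sketch of that check (whether values other than "1" are accepted is an assumption):

```python
import os

def miopen_enabled():
    # MIOpen code paths are used only when the user explicitly opts in
    # via COMFYUI_ENABLE_MIOPEN=1; unset or other values leave it off.
    return os.environ.get("COMFYUI_ENABLE_MIOPEN") == "1"
```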
8376ff6831
bump comfyui_manager version to 4.0.3b7 ( #11422 )
2025-12-19 10:41:56 -08:00
5b4d0664c8
add Flux2MaxImage API Node ( #11420 )
2025-12-19 10:02:49 -08:00
894802b0f9
Add LatentCutToBatch node. ( #11411 )
2025-12-18 22:21:40 -05:00
28eaab608b
Diffusion model part of Qwen Image Layered. ( #11408 )
...
The only thing missing after this is some nodes to make using it easier.
2025-12-18 20:21:14 -05:00
6a2678ac65
Trim/pad channels in VAE code. ( #11406 )
2025-12-18 18:22:38 -05:00
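Trimming or padding channels lets a VAE accept latents whose channel count differs from what it expects (cf. the Wan/Qwen in/out-channel entry below). A rough sketch of the idea on NCHW arrays; zero-padding and the helper name are assumptions, not the actual VAE code:

```python
import numpy as np

def match_channels(x, target_c):
    # x: array of shape (N, C, H, W). Trim extra channels, or pad
    # missing ones with zeros (padding value is an assumption).
    c = x.shape[1]
    if c > target_c:
        return x[:, :target_c]
    if c < target_c:
        pad = np.zeros((x.shape[0], target_c - c) + x.shape[2:], dtype=x.dtype)
        return np.concatenate([x, pad], axis=1)
    return x
```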
e4fb3a3572
Support loading Wan/Qwen VAEs with different in/out channels. ( #11405 )
2025-12-18 17:45:33 -05:00
e8ebbe668e
chore: update workflow templates to v0.7.60 ( #11403 )
2025-12-18 17:09:29 -05:00
1ca89b810e
Add unified jobs API with /api/jobs endpoints ( #11054 )
...
* feat: create a /jobs api to return queue and history jobs
* update unused vars
* include priority
* create jobs helper file
* fix ruff
* update how we set error message
* include execution error in both responses
* rename error -> failed, fix output shape
* re-use queue and history functions
* set workflow id
* allow sort by exec duration
* fix tests
* send priority and remove error msg
* use ws messages to get start and end times
* revert main.py fully
* refactor: move all /jobs business logic to jobs.py
* fix failing test
* remove some tests
* fix non-dict nodes
* address comments
* filter by workflow id and remove null fields
* add clearer typing - remove get("..") or ..
* refactor query params to top get_job(s) doc, add remove_sensitive_from_queue
* add brief comment explaining why we skip animated
* comment that format field is for frontend backward compatibility
* fix whitespace
---------
Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>
Co-authored-by: guill <jacob.e.segal@gmail.com>
2025-12-17 21:44:31 -08:00
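The unified jobs API above merges pending queue items and completed history into one job list, renames error to failed, and allows sorting by execution duration. A rough sketch of that merge-and-sort shape; every field name here is hypothetical, not the actual /api/jobs schema:

```python
def unified_jobs(queue, history, sort_by_duration=False):
    # Pending items carry no timings yet; completed ones get start/end
    # times (taken from websocket messages, per the commit notes).
    jobs = [{"id": q["id"], "status": "pending", "priority": q["priority"]}
            for q in queue]
    for h in history:
        jobs.append({
            "id": h["id"],
            "status": "failed" if h.get("error") else "completed",
            "duration": h["end"] - h["start"],
        })
    if sort_by_duration:
        # Pending jobs have no duration; sort them last.
        jobs.sort(key=lambda j: j.get("duration", float("inf")))
    return jobs
```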
bf7dc63bd6
skip_load_model -> force_full_load ( #11390 )
...
This should be a bit clearer and less prone to breakage if the
model-loading logic changes.
2025-12-17 23:29:32 -05:00
86dbb89fc9
Resolution bucketing and Trainer implementation refactoring ( #11117 )
2025-12-17 22:15:27 -05:00
ba6080bbab
ComfyUI v0.5.1
v0.5.1
2025-12-17 21:04:50 -05:00
16d85ea133
Better handle torch being imported by prestartup nodes. ( #11383 )
2025-12-17 19:43:18 -05:00
5d9ad0c6bf
Fix the last step with non-zero sigma in sa_solver ( #11380 )
2025-12-17 13:57:40 -05:00
c08f97f344
fix regression in V3 nodes processing ( #11375 )
2025-12-17 10:24:25 -08:00
887143854b
feat(api-nodes): add GPT-Image-1.5 ( #11368 )
2025-12-17 09:43:41 -08:00