eeb6be4c2a
bump templates
2025-11-20 18:12:26 -08:00
336b2a7086
bump templates
2025-11-20 18:11:47 -08:00
b75d349f25
fix(KlingLipSyncAudioToVideoNode): convert audio to mp3 format (#10811)
2025-11-20 16:33:54 -08:00
7b8389578e
feat(api-nodes): add Nano Banana Pro (#10814)
* feat(api-nodes): add Nano Banana Pro
* frontend bump to 1.28.9
2025-11-20 16:17:47 -08:00
9e00ce5b76
Make Batch Images node add alpha channel when one of the inputs has it (#10816)
* When one Batch Image input has alpha and one does not, add empty alpha channel
* Use torch.nn.functional.pad
2025-11-20 17:42:46 -05:00
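The fix above can be sketched as follows — a minimal, hypothetical helper (not the node's actual code), assuming ComfyUI-style channels-last `[B, H, W, C]` image batches and a fully opaque 1.0 alpha fill:

```python
import torch
import torch.nn.functional as F

def match_alpha(a: torch.Tensor, b: torch.Tensor):
    """Pad whichever batch lacks an alpha channel so both can be concatenated.

    Assumes channels-last image batches of shape [B, H, W, C] with C == 3
    (RGB) or C == 4 (RGBA). The 1.0 fill (fully opaque) is an assumption,
    not necessarily the value the node uses.
    """
    if a.shape[-1] == b.shape[-1]:
        return a, b

    def pad(x: torch.Tensor) -> torch.Tensor:
        # F.pad pads the last dimension first: (left, right) adds one
        # trailing channel filled with 1.0.
        return F.pad(x, (0, 1), value=1.0) if x.shape[-1] == 3 else x

    return pad(a), pad(b)
```

Using `F.pad` on the trailing dimension keeps the operation a single vectorized call instead of allocating and concatenating a separate alpha tensor by hand.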
f5e66d5e47
Fix ImageBatch with different channel count. (#10815)
2025-11-20 15:08:03 -05:00
87b0359392
Update server templates handler to use new multi-package distribution (comfyui-workflow-templates versions >=0.3) (#10791)
* update templates for monorepo
* refactor
2025-11-19 22:36:56 -08:00
cb96d4d18c
Disable workaround on newer cudnn. (#10807)
2025-11-19 23:56:23 -05:00
394348f5ca
feat(api-nodes): add Topaz API nodes (#10755)
2025-11-19 17:44:04 -08:00
7601e89255
Fix workflow name. (#10806)
2025-11-19 20:17:15 -05:00
6a1d3a1ae1
convert hunyuan3d.py to V3 schema (#10664)
2025-11-19 14:49:01 -08:00
65ee24c978
change display name of PreviewAny node to "Preview as Text" (#10796)
2025-11-19 01:25:28 -08:00
17027f2a6a
Add a way to disable the final norm in the llama based TE models. (#10794)
2025-11-18 22:36:03 -05:00
b5c8be8b1d
ComfyUI 0.3.70
v0.3.70
2025-11-18 19:37:20 -05:00
24fdb92edf
feat(api-nodes): add new Gemini model (#10789)
2025-11-18 14:26:44 -08:00
d526974576
Fix hunyuan 3d 2.0 (#10792)
2025-11-18 16:46:19 -05:00
e1ab6bb394
EasyCache: Fix for mismatch in input/output channels with some models (#10788)
Slices the model input down to the output channel count so the cache tracks only the noise channels; resolves the channel mismatch with models like WanVideo I2V.
Also fixes a slicing deprecation in PyTorch 2.9.
2025-11-18 07:00:21 -08:00
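The slicing idea above can be illustrated with a small, hypothetical helper (names and shapes are assumptions, not EasyCache's real code): when a model's input carries extra conditioning channels concatenated after the noise channels, slicing the input down to the output channel count lets the cache compare like with like.

```python
import torch

def slice_to_output_channels(x: torch.Tensor, out_channels: int) -> torch.Tensor:
    """Keep only the leading noise channels of a model input.

    Hypothetical helper: some video models (e.g. Wan I2V) take inputs with
    conditioning channels appended after the noise channels, while the model
    output contains only the noise channels. Assumes x is [B, C, ...] with
    C >= out_channels.
    """
    # narrow(dim, start, length) is an explicit alternative to x[:, :out_channels].
    return x.narrow(1, 0, out_channels)
```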
048f49adbd
chore(api-nodes): adjusted PR template; set min python version for pylint to 3.10 (#10787)
2025-11-18 03:59:27 -08:00
47bfd5a33f
Native block swap custom nodes considered harmful. (#10783)
2025-11-18 00:26:44 -05:00
fdf49a2861
Fix the portable download link for CUDA 12.6 (#10780)
2025-11-17 22:04:06 -05:00
f41e5f398d
Update README with new portable download link (#10778)
2025-11-17 19:59:19 -05:00
27cbac865e
Add release workflow for NVIDIA cu126 (#10777)
2025-11-17 19:04:04 -05:00
3d0003c24c
ComfyUI version 0.3.69
v0.3.69
2025-11-17 17:17:24 -05:00
7d6103325e
Change ROCm nightly install command to 7.1 (#10764)
2025-11-16 03:01:14 -05:00
2d4a08b717
Revert "chore(api-nodes): mark OpenAIDalle2 and OpenAIDalle3 nodes as deprecated (#10757)" (#10759)
This reverts commit 9a02382568.
2025-11-15 12:37:34 -08:00
9a02382568
chore(api-nodes): mark OpenAIDalle2 and OpenAIDalle3 nodes as deprecated (#10757)
2025-11-15 11:18:49 -08:00
bd01d9f7fd
Add left padding support to tokenizers. (#10753)
2025-11-15 06:54:40 -05:00
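For context, left padding can be sketched in a few lines — a hypothetical standalone helper, not ComfyUI's tokenizer API. Decoder-only LLM-style text encoders usually expect padding on the left when batching, so that the meaningful tokens sit at the rightmost positions.

```python
def pad_tokens(tokens: list[int], length: int, pad_id: int,
               pad_left: bool = False) -> list[int]:
    """Pad a token sequence to a fixed length.

    Hypothetical helper: with pad_left=True the pad tokens go before the
    sequence (left padding); otherwise they are appended (right padding).
    Sequences already at or over `length` are returned unchanged.
    """
    pad = [pad_id] * max(0, length - len(tokens))
    return pad + tokens if pad_left else tokens + pad
```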
443056c401
Fix custom nodes import error. (#10747)
This should fix the import errors but will break if the custom nodes actually try to use the class.
2025-11-14 03:26:05 -05:00
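One common pattern matching the caveat above — imports succeed, but actual use fails loudly — is a placeholder class that raises on instantiation. This is an illustrative sketch, not necessarily what the commit does:

```python
class RemovedClassStub:
    """Placeholder kept so `from some_module import OldClass` still imports.

    Hypothetical pattern: the import itself works, but any custom node that
    actually tries to use the class gets a clear error instead of a silent
    ImportError at startup.
    """

    def __init__(self, *args, **kwargs):
        raise NotImplementedError(
            "This class was removed; please update your custom node."
        )
```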
f60923590c
Use same code for chroma and flux blocks so that optimizations are shared. (#10746)
2025-11-14 01:28:05 -05:00
1ef328c007
Better instructions for the portable. (#10743)
2025-11-13 21:32:39 -05:00
94c298f962
flux: reduce VRAM usage (#10737)
Clean up a bunch of stacked tensors in Flux. This takes me from B=19 to B=22
for 1600x1600 on an RTX 5090.
2025-11-13 16:02:03 -08:00
2fde9597f4
feat: add create_time dict to prompt field in /history and /queue (#10741)
2025-11-13 15:11:52 -08:00
f91078b1ff
add PR template for API-Nodes (#10736)
2025-11-13 10:05:26 -08:00
3b3ef9a77a
Quantized Ops fixes (#10715)
* offload support, bug fixes, remove mixins
* add readme
2025-11-12 18:26:52 -05:00
8b0b93df51
Update Python 3.14 compatibility notes in README (#10730)
2025-11-12 17:04:41 -05:00
1c7eaeca10
qwen: reduce VRAM usage (#10725)
Clean up a bunch of stacked and no-longer-needed tensors at the QWEN
VRAM peak (currently the FFN).
With this I go from OOMing at B=37x1328x1328 to successfully running
B=47 (RTX 5090).
2025-11-12 16:20:53 -05:00
18e7d6dba5
mm/mp: always unload re-used but modified models (#10724)
The partial unloader path in the model re-use flow skips straight to the
actual unload without any check of the patching UUID. This means that
if you do an upscale flow with a model patch on an existing model, it
will not apply your patches.
Fix by delaying the partial_unload until after the UUID checks. This
is done by making partial_unload a mode of partial_load where extra_mem
is negative.
2025-11-12 16:19:53 -05:00
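The "partial_unload as a partial_load with negative extra memory" idea above can be modeled with a toy class. Names and units here are illustrative assumptions, not ComfyUI's real model-management API:

```python
class FakeLoadedModel:
    """Toy model of expressing partial unload as a partial load.

    Hypothetical sketch: a single partial_load entry point handles both
    directions, so any checks (e.g. patching-UUID checks) that run before
    it naturally run before an unload too.
    """

    def __init__(self, loaded_mb: float):
        self.loaded_mb = loaded_mb

    def partial_load(self, extra_mb: float) -> float:
        # Negative extra_mb means "free this much", i.e. a partial unload.
        self.loaded_mb = max(0.0, self.loaded_mb + extra_mb)
        return self.loaded_mb
```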
e1d85e7577
Update README.md for Intel Arc GPU installation, remove IPEX (#10729)
IPEX is no longer needed for Intel Arc GPUs; this removes the instructions to set it up.
2025-11-12 15:21:05 -05:00
1199411747
Don't pin tensor if not a torch.nn.parameter.Parameter (#10718)
2025-11-11 19:33:30 -05:00
5ebcab3c7d
Update CI workflow to remove dead macOS runner. (#10704)
* Update CI workflow to remove dead macOS runner.
* revert
* revert
2025-11-10 15:35:29 -05:00
c350009236
ops: Put weight cast on the offload stream (#10697)
The weight cast needs to be on the offload stream. Without this, a black screen
was reproduced with low-resolution images on a slow bus when using FP8.
2025-11-09 22:52:11 -05:00
dea899f221
Unload weights if vram usage goes up between runs. (#10690)
2025-11-09 18:51:33 -05:00
e632e5de28
Add logging for model unloading. (#10692)
2025-11-09 18:06:39 -05:00
2abd2b5c20
Make ScaleROPE node work on Flux. (#10686)
2025-11-08 15:52:02 -05:00
a1a70362ca
Only unpin tensor if it was pinned by ComfyUI (#10677)
2025-11-07 11:15:05 -05:00
cf97b033ee
mm: guard against double pin and unpin explicitly (#10672)
As commented in the code: if you let CUDA be the one to detect double pinning/unpinning, it actually creates an async GPU error.
2025-11-06 21:20:48 -05:00
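Explicit guarding as described above amounts to tracking pinned buffers on the host side rather than letting the driver error. A minimal sketch (hypothetical helper names; real code would pin/unpin actual buffers, not just record ids):

```python
_pinned_ids: set = set()

def pin(tensor_id: int) -> bool:
    """Pin a buffer at most once; return True only on a new pin.

    Illustrative sketch: the guard refuses a second pin of the same buffer
    instead of asking CUDA to detect the double pin, which (per the commit)
    can surface as an async GPU error.
    """
    if tensor_id in _pinned_ids:
        return False
    _pinned_ids.add(tensor_id)
    return True

def unpin(tensor_id: int) -> bool:
    """Unpin only buffers we pinned ourselves; ignore everything else."""
    if tensor_id not in _pinned_ids:
        return False
    _pinned_ids.discard(tensor_id)
    return True
```

Keeping the bookkeeping on the Python side also lines up with the earlier "Only unpin tensor if it was pinned by ComfyUI" change: buffers pinned by someone else are simply never touched.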
eb1c42f649
Tell users they need to upload their logs in bug reports. (#10671)
2025-11-06 20:24:28 -05:00
e05c907126
Clarify release cycle. (#10667)
2025-11-06 04:11:30 -05:00
09dc24c8a9
Pinned mem also seems to work on AMD. (#10658)
2025-11-05 19:11:15 -05:00
1d69245981
Enable pinned memory by default on Nvidia. (#10656)
Removed the --fast pinned_memory flag.
You can use --disable-pinned-memory to disable it. Please report if it
causes any issues.
2025-11-05 18:08:13 -05:00