adccfb2dfd
Remove populate_db_with_asset from load_torch_file for now, as nothing yet uses the hashes
2025-09-26 20:33:46 -07:00
9f4c0f3afe
Merge branch 'master' into asset-management
2025-09-26 20:24:25 -07:00
196954ab8c
Add 'input_cond' and 'input_uncond' to the args dictionary passed into sampler_cfg_function ( #10044 )
2025-09-26 19:55:03 -07:00
1e098d6132
Don't add template to qwen2.5vl when template is in prompt. ( #10043 )
...
Make the hunyuan image refiner template_end 36.
2025-09-26 18:34:17 -04:00
cd66d72b46
convert CLIPTextEncodeSDXL nodes to V3 schema ( #9716 )
2025-09-26 14:15:44 -07:00
2103e39335
convert nodes_post_processing to V3 schema ( #9491 )
2025-09-26 14:14:42 -07:00
d20576e6a3
convert nodes_sag.py to V3 schema ( #9940 )
2025-09-26 14:13:52 -07:00
a061b06321
convert nodes_tcfg.py to V3 schema ( #9942 )
2025-09-26 14:13:05 -07:00
80718908a9
convert nodes_sdupscale.py to V3 schema ( #9943 )
2025-09-26 14:12:38 -07:00
7ea173c187
convert nodes_fresca.py to V3 schema ( #9951 )
2025-09-26 14:12:04 -07:00
76eb1d72c3
convert nodes_rebatch.py to V3 schema ( #9945 )
2025-09-26 14:10:49 -07:00
c4a46e943c
Add @kosinkadink as code owner ( #10041 )
...
Updated CODEOWNERS to include @kosinkadink as a code owner.
2025-09-26 17:08:16 -04:00
2b7f9a8196
Fix the failing unit test. ( #10037 )
2025-09-26 14:12:43 -04:00
ce4cb2389c
Make LatentCompositeMasked work with basic video latents. ( #10023 )
2025-09-25 17:20:13 -04:00
ca39552954
Merge branch 'master' into asset-management
2025-09-24 23:44:57 -07:00
c8d2117f02
Fix memory leak by properly detaching model finalizer ( #9979 )
...
When unloading models in load_models_gpu(), the model finalizer was not
being explicitly detached, leading to a memory leak. This caused a
linear increase in memory consumption over time as models were
repeatedly loaded and unloaded.
This change prevents orphaned finalizer references from accumulating in
memory during model-switching operations.
2025-09-24 22:35:12 -04:00
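The finalizer-detach fix above can be illustrated with a minimal sketch. This is not ComfyUI's actual code: the `Model` class, `finalizer` attribute, and `_cleanup` helper are hypothetical stand-ins showing how `weakref.finalize` keeps an internal registry entry alive until it either fires or is explicitly detached.

```python
import weakref

class Model:
    """Hypothetical stand-in for a loaded model object."""
    def __init__(self, name):
        self.name = name

def _cleanup(name):
    # Cleanup side effect that would otherwise run at garbage collection.
    print(f"cleaning up {name}")

def load(name):
    m = Model(name)
    # weakref.finalize registers the callback in a module-level registry;
    # that registry entry persists until the finalizer fires or is
    # detached, so forgetting detach() accumulates entries over time.
    m.finalizer = weakref.finalize(m, _cleanup, name)
    return m

def unload(m):
    # Explicitly detach instead of relying on GC: this removes the
    # registry entry immediately and prevents the leak described above.
    m.finalizer.detach()

m = load("unet")
unload(m)
# After detach(), the finalizer is dead and will never run.
assert not m.finalizer.alive
```

Detaching on unload, rather than waiting for garbage collection, keeps the registry bounded no matter how many load/unload cycles occur.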
fccab99ec0
Fix issue with .view() in HuMo. ( #10014 )
2025-09-24 20:09:42 -04:00
fd79d32f38
Add new audio nodes ( #9908 )
...
* Add new audio nodes
- TrimAudioDuration
- SplitAudioChannels
- AudioConcat
- AudioMerge
- AudioAdjustVolume
* Update nodes_audio.py
* Add EmptyAudio node
* Change duration to Float (allows sub-second values)
2025-09-24 18:59:29 -04:00
341b4adefd
Rodin3D - add [Rodin3D Gen-2 generate] api-node ( #9994 )
...
* update Rodin api node
* update rodin3d gen2 api node
* fix images limited bug
2025-09-24 14:05:37 -04:00
b8730510db
ComfyUI version 0.3.60
v0.3.60
2025-09-23 11:50:33 -04:00
e808790799
feat(api-nodes): add wan t2i, t2v, i2v nodes ( #9996 )
2025-09-23 11:36:47 -04:00
145b0e4f79
update template to 0.1.86 ( #9998 )
...
* update template to 0.1.84
* update template to 0.1.85
* Update template to 0.1.86
2025-09-23 11:22:35 -04:00
707b2638ec
Fix bug with WanAnimateToVideo. ( #9990 )
2025-09-22 17:34:33 -04:00
8a5ac527e6
Fix bug with WanAnimateToVideo node. ( #9988 )
2025-09-22 17:26:58 -04:00
e3206351b0
add offset param ( #9977 )
2025-09-22 17:12:32 -04:00
1fee8827cb
Support for qwen edit plus model. Use the new TextEncodeQwenImageEditPlus. ( #9986 )
2025-09-22 16:49:48 -04:00
27bc181c49
Set some wan nodes as no longer experimental. ( #9976 )
2025-09-21 19:48:31 -04:00
d1d9eb94b1
Lower wan memory estimation value a bit. ( #9964 )
...
Previous PR reduced the peak memory requirement.
2025-09-20 22:09:35 -04:00
7be2b49b6b
Fix LoRA Trainer bugs with FP8 models. ( #9854 )
...
* Fix adapter weight init
* Fix fp8 model training
* Avoid inference tensor
2025-09-20 21:24:48 -04:00
9ed3c5cc09
[Reviving #5709 ] Add strength input to Differential Diffusion ( #9957 )
...
* Update nodes_differential_diffusion.py
* Update nodes_differential_diffusion.py
* Make strength optional to avoid validation errors when loading old workflows, adjust step
---------
Co-authored-by: ThereforeGames <eric@sparknight.io>
2025-09-20 21:10:39 -04:00
66241cef31
Add inputs for character replacement to the WanAnimateToVideo node. ( #9960 )
2025-09-20 02:24:10 -04:00
e8df53b764
Update WanAnimateToVideo to more easily extend videos. ( #9959 )
2025-09-19 18:48:56 -04:00
852704c81a
fix(seedream4): add flag to ignore error on partial success ( #9952 )
2025-09-19 16:04:51 -04:00
9fdf8c25ab
api_nodes: reduce default timeout from 7 days to 2 hours ( #9918 )
2025-09-19 16:02:43 -04:00
dc95b6acc0
Basic WIP support for the wan animate model. ( #9939 )
2025-09-19 03:07:17 -04:00
711bcf33ee
Bump frontend to 1.26.13 ( #9933 )
2025-09-19 03:03:30 -04:00
24b0fce099
Do padding of audio embed in model for humo for more flexibility. ( #9935 )
2025-09-18 19:54:16 -04:00
1ea8c54064
make kernel of same type as image to avoid mismatch issues ( #9932 )
2025-09-18 19:51:16 -04:00
8d6653fca6
Enable fp8 ops by default on gfx1200 ( #9926 )
2025-09-18 19:50:37 -04:00
4dd843d36f
Merge branch 'master' into asset-management
2025-09-18 14:08:20 -07:00
46fdd636de
Merge pull request #9545 from bigcat88/asset-management
...
[Assets] Initial implementation
2025-09-18 14:07:18 -07:00
283cd27bdc
final adjustments
2025-09-18 10:05:32 +03:00
dd611a7700
Support the HuMo 17B model. ( #9912 )
2025-09-17 18:39:24 -04:00
1a37d1476d
refactor(6): fully batched initial scan
2025-09-17 20:29:29 +03:00
f9602457d6
optimization: initial scan speed(batching metadata[filename])
2025-09-17 16:47:27 +03:00
85ef08449d
optimization: initial scan speed(batching tags)
2025-09-17 14:08:57 +03:00
5b6810a2c6
fixed hash calculation during model loading in ComfyUI
2025-09-17 13:25:56 +03:00
621faaa195
refactor(5): use less DB queries to create seed asset
2025-09-17 10:46:21 +03:00
9288c78fc5
Support the HuMo model. ( #9903 )
2025-09-17 00:12:48 -04:00
e42682b24e
Reduce Peak WAN inference VRAM usage ( #9898 )
...
* flux: Do the xq and xk ropes one at a time
This was doing independent interleaved tensor math on the q and k
tensors, holding more than the minimum number of intermediates in
VRAM. On a bad day, it could OOM VRAM on the xk intermediates.
Do everything for q and then everything for k, so torch can garbage
collect all of q's intermediates before k allocates its own.
This reduces peak VRAM usage for some WAN2.2 inferences (at least).
* wan: Optimize qkv intermediates in attention
As commented. The former logic computed independent pieces of QKV in
parallel, which held more inference intermediates in VRAM and spiked
peak usage. Fully roping Q and garbage collecting its intermediates
before touching K reduces peak inference VRAM usage.
2025-09-16 19:21:14 -04:00
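The reordering described in the commit above can be sketched in NumPy. This is not the flux/wan code; `rope_interleaved`, `rope_sequential`, and `rotate_half` are illustrative names. Both functions compute the same RoPE result; the difference is only in *when* intermediates are live, which is what drives peak VRAM on a GPU.

```python
import numpy as np

def rotate_half(x):
    # Standard RoPE helper: swap and negate the two halves of the last dim.
    x1, x2 = np.split(x, 2, axis=-1)
    return np.concatenate((-x2, x1), axis=-1)

def rope_interleaved(q, k, cos, sin):
    # Interleaved pattern (what the commit moved away from): q and k
    # intermediates are alive at the same time, raising peak memory.
    q_rot = q * cos
    k_rot = k * cos          # k intermediates allocated while q's are live
    q_out = q_rot + rotate_half(q) * sin
    k_out = k_rot + rotate_half(k) * sin
    return q_out, k_out

def rope_sequential(q, k, cos, sin):
    # Sequential pattern (the fix): finish q entirely so its intermediates
    # can be freed before any k intermediates are allocated.
    q_out = q * cos + rotate_half(q) * sin
    k_out = k * cos + rotate_half(k) * sin
    return q_out, k_out
```

The two orderings are numerically identical; on a framework with a caching allocator such as PyTorch, the sequential form simply lets the q intermediates be reclaimed before k's are allocated, lowering the high-water mark.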