Commit Graph

1418 Commits

SHA1 Message Date
01015bff16 Add er_sde sampler (#7187) 2025-03-12 02:42:37 -04:00
ca8efab79f Support control loras on Wan. 2025-03-10 17:23:13 -04:00
9aac21f894 Fix issues with new hunyuan img2vid model and bump version to v0.3.26 2025-03-09 05:07:22 -04:00
528d1b3563 When cached_hook_patches contain weights for hooks, only use hook_backup for unused keys (#7067) 2025-03-09 04:26:31 -04:00
7395b0c0d1 Support new hunyuan video i2v model.
Use the new "v2 (replace)" guidance type in HunyuanImageToVideo and set
image_interleave to 4 on the "Text Encode Hunyuan Video" node.
2025-03-08 20:34:47 -05:00
0952569493 Fix stable cascade VAE on some lowvram machines. 2025-03-08 20:24:04 -05:00
be4e760648 Add an image_interleave option to the Hunyuan image to video encode node.
See the tooltip for what it does.
2025-03-07 19:56:26 -05:00
11b1f27cb1 Set WAN default compute dtype to fp16. 2025-03-07 04:52:36 -05:00
70e15fd743 No need for scale_input when fp8 matrix mult is disabled. 2025-03-07 04:49:20 -05:00
e1474150de Support fp8_scaled diffusion models that don't use fp8 matrix mult. 2025-03-07 04:39:21 -05:00
e62d72e8ca Typo in node_typing.py (#7092) 2025-03-06 15:24:04 -05:00
dfa36e6855 Fix some things breaking when embeddings fail to apply. 2025-03-06 13:31:55 -05:00
29a70ca101 Support HunyuanVideo image to video model. 2025-03-06 03:07:15 -05:00
0bef826a98 Support llava clip vision model. 2025-03-06 00:24:43 -05:00
85ef295069 Make applying embeddings more efficient.
Adding new tokens no longer makes a whole copy of the embeddings weight,
which can be massive on certain models (see the sketch below).
2025-03-05 17:34:38 -05:00
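A minimal sketch of the idea behind this change, assuming a PyTorch-style embedding lookup; the function and variable names are hypothetical and this is not ComfyUI's actual implementation. The newly added token vectors live in a small separate tensor and are looked up on demand, so the original weight is never copied:

    import torch

    def embed_with_extra_tokens(weight, extra_rows, token_ids):
        # weight:     [vocab, dim]  original embedding matrix (never copied)
        # extra_rows: [n_new, dim]  vectors for the newly added tokens
        # token_ids:  [seq]         ids >= vocab index into extra_rows
        vocab = weight.shape[0]
        out = torch.empty(token_ids.shape[0], weight.shape[1],
                          dtype=weight.dtype, device=weight.device)
        base = token_ids < vocab
        out[base] = weight[token_ids[base]]
        out[~base] = extra_rows[token_ids[~base] - vocab]
        return out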
5d84607bf3 Add type hint for FileLocator (#6968)
* Add type hint for FileLocator

* nit
2025-03-05 15:35:26 -05:00
c1909f350f Better argument handling of front-end-root (#7043)
* Better argument handling of front-end-root

Improves handling of the front-end-root launch argument. In several cases users set it, yet ComfyUI launched as normal and completely disregarded the argument, which doesn't make sense. It is better to indicate to the user that something is incorrect.

* Removed unused import

There was no real reason to use "Optional" typing in the front-end-root argument.
2025-03-05 15:34:22 -05:00
52b3469606 [NodeDef] Explicitly add control_after_generate to seed/noise_seed (#7059)
* [NodeDef] Explicitly add control_after_generate to seed/noise_seed

* Update comfy/comfy_types/node_typing.py

Co-authored-by: filtered <176114999+webfiltered@users.noreply.github.com>
2025-03-05 15:33:23 -05:00
369b079ff6 Fix lowvram issue with ltxv vae. 2025-03-05 05:26:08 -05:00
9c9a7f012a Adjust ltxv memory factor. 2025-03-05 05:16:05 -05:00
93fedd92fe Support LTXV 0.9.5.
Credits: Lightricks team.
2025-03-05 00:13:49 -05:00
65042f7d39 Make it easier to set a custom template for hunyuan video. 2025-03-04 09:26:05 -05:00
7c7c70c400 Refactor skyreels i2v code. 2025-03-04 00:15:45 -05:00
f86c724ef2 Temporal area composition.
New ConditioningSetAreaPercentageVideo node.
2025-03-03 06:50:31 -05:00
9af6320ec9 Make 2d area composition nodes work on video models. 2025-03-02 08:19:16 -05:00
4dc6709307 Rename argument in last commit and document the options. 2025-03-01 02:43:49 -05:00
4d55f16ae8 Use enum list for --fast options (#7024) 2025-03-01 02:37:35 -05:00
cf0b549d48 --fast now takes a number as an argument to indicate how fast you want it.
The idea is that you can indicate how much of a quality vs speed trade-off you want.

At the moment:

--fast 2 enables fp16 accumulation if your pytorch supports it.
--fast 5 enables fp8 matrix mult on fp8 models and the optimization above.

--fast without a number enables all optimizations (see the sketch below).
2025-02-28 02:48:20 -05:00
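A minimal sketch of how such a flag could be parsed, assuming Python's argparse; the const value and the derived variable names are assumptions chosen to match the behaviour described in the commit message, not ComfyUI's actual cli_args code:

    import argparse

    parser = argparse.ArgumentParser()
    # nargs="?" lets --fast appear with or without a number; the bare flag
    # falls back to const, which here stands in for "all optimizations".
    parser.add_argument("--fast", nargs="?", const=10, default=0, type=int)

    args = parser.parse_args(["--fast", "2"])
    fp16_accumulation = args.fast >= 2  # --fast 2 and above
    fp8_matrix_mult = args.fast >= 5    # --fast 5 and above, on fp8 models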
eb4543474b Use fp16 for intermediate for fp8 weights with --fast if supported. 2025-02-28 02:17:50 -05:00
1804397952 Use fp16 if checkpoint weights are fp16 and the model supports it. 2025-02-27 16:39:57 -05:00
f4dac8ab6f Wan code small cleanup. 2025-02-27 07:22:42 -05:00
89253e9fe5 Support Cambricon MLU (#6964)
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
3ea3bc8546 Fix wan issues when prompt length is long. 2025-02-26 20:34:02 -05:00
0270a0b41c Reduce artifacts on Wan by doing the patch embedding in fp32. 2025-02-26 16:59:26 -05:00
c37f15f98e Add fast preview support for Wan models. 2025-02-26 08:56:23 -05:00
4bca7367f3 Don't try to use clip_fea on t2v model. 2025-02-26 08:38:09 -05:00
b6fefe686b Better wan memory estimation. 2025-02-26 07:51:22 -05:00
fa62287f1f More code reuse in wan.
Fix bug when changing the compute dtype on wan.
2025-02-26 05:22:29 -05:00
0844998db3 Slightly better wan i2v mask implementation. 2025-02-26 03:49:50 -05:00
4ced06b879 WIP support for Wan I2V model. 2025-02-26 01:49:43 -05:00
cb06e9669b Wan seems to work with fp16. 2025-02-25 21:37:12 -05:00
9a66bb972d Make wan work with all latent resolutions.
Cleanup some code.
2025-02-25 19:56:04 -05:00
ea0f939df3 Fix issue with wan and other attention implementations. 2025-02-25 19:13:39 -05:00
f37551c1d2 Change wan rope implementation to the flux one.
Should be more compatible.
2025-02-25 19:11:14 -05:00
63023011b9 WIP support for Wan t2v model. 2025-02-25 17:20:35 -05:00
f40076096e Cleanup some lumina te code. 2025-02-25 04:10:26 -05:00
96d891cb94 Speedup on some models by not upcasting bfloat16 to float32 on mac. 2025-02-24 05:41:32 -05:00
ace899e71a Prioritize fp16 compute when using allow_fp16_accumulation 2025-02-23 04:45:54 -05:00
aff16532d4 Remove some useless code. 2025-02-22 04:45:14 -05:00
072db3bea6 Assume the mac black image bug won't be fixed before v16. 2025-02-21 20:24:07 -05:00