Commit Graph

91 Commits

SHA1 Message Date
497db6212f Alternative fix for #5767 2024-11-26 17:53:04 -05:00
5818f6cf51 Remove print. 2024-11-22 10:49:15 -05:00
5e16f1d24b Support Lightricks LTX-Video model. 2024-11-22 08:46:39 -05:00
8f0009aad0 Support new flux model variants. 2024-11-21 08:38:23 -05:00
b699a15062 Refactor inpaint/ip2p code. 2024-11-19 03:25:25 -05:00
5cbb01bc2f Basic Genmo Mochi video model support.
To use:
- "Load CLIP" node with t5xxl + type mochi.
- "Load Diffusion Model" node with the mochi dit file.
- "Load VAE" node with the mochi vae file.
- EmptyMochiLatentVideo node for the latent.
- euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
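The node wiring above can be sketched as a minimal graph description. The node and sampler names come from the commit message; the dict layout and file names are illustrative stand-ins, not ComfyUI's actual workflow format:

```python
# Illustrative sketch of the Mochi workflow described above.
# Node names are from the commit message; the checkpoint file
# names ("mochi_dit.safetensors", "mochi_vae.safetensors") are
# hypothetical placeholders.
mochi_workflow = {
    "clip": {"node": "Load CLIP",
             "inputs": {"clip_name": "t5xxl", "type": "mochi"}},
    "model": {"node": "Load Diffusion Model",
              "inputs": {"model_name": "mochi_dit.safetensors"}},
    "vae": {"node": "Load VAE",
            "inputs": {"vae_name": "mochi_vae.safetensors"}},
    "latent": {"node": "EmptyMochiLatentVideo", "inputs": {}},
    "sampler": {"node": "KSampler",
                "inputs": {"sampler_name": "euler",
                           "scheduler": "linear_quadratic"}},
}
```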
0075c6d096 Mixed precision diffusion models with scaled fp8.
This change adds support for diffusion models where all the linear layers are
scaled fp8 while the other weights stay in the original precision.
2024-10-21 18:12:51 -04:00
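The idea behind "scaled" low-precision weights is that each tensor is stored in a narrow format together with a per-tensor scale, and dequantized on the fly at compute time. A conceptual sketch, using an int8 grid purely as a stand-in for fp8 (this is not ComfyUI's implementation):

```python
# Conceptual sketch of scaled low-precision weight storage: a
# per-tensor scale maps the weight range onto the narrow format's
# range. An integer grid stands in for fp8 for illustration only.

def quantize_scaled(weights):
    """Store weights on a [-127, 127] integer grid plus a float scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_scaled(q, scale):
    """Recover approximate full-precision weights at compute time."""
    return [v * scale for v in q]

w = [0.5, -1.0, 0.25, 0.75]
q, s = quantize_scaled(w)
w_approx = dequantize_scaled(q, s)
```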
a68bbafddb Support diffusion models with scaled fp8 weights. 2024-10-19 23:47:42 -04:00
e38c94228b Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
9953f22fce Add --fast argument to enable experimental optimizations.
Optimizations that might break things/lower quality will be put behind
this flag first and might be enabled by default in the future.

Currently the only optimization is float8_e4m3fn matrix multiplication on
4000/ADA series Nvidia cards or later. If you have one of these cards you
will see a speed boost when using fp8_e4m3fn flux for example.
2024-08-20 11:55:51 -04:00
e9589d6d92 Add a way to set model dtype and ops from load_checkpoint_guess_config. 2024-08-11 08:50:34 -04:00
b334605a66 Fix OOMs happening in some cases.
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
2024-08-06 13:36:04 -04:00
0a6b008117 Fix issue with some custom nodes. 2024-08-04 10:03:33 -04:00
ba9095e5bd Automatically use fp8 for diffusion model weights if:
- The checkpoint contains weights in fp8.
- There isn't enough memory to load the diffusion model in GPU vram.
2024-08-03 13:45:19 -04:00
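The fallback rule described in that commit can be sketched as a small decision function. The function name and the assumption that both conditions must hold are illustrative, not ComfyUI's actual code:

```python
# Hypothetical sketch of the automatic fp8 fallback described above:
# keep the checkpoint's fp8 weights when the fp16 model would not fit
# in free VRAM. Assumes both listed conditions must hold; names are
# made up for illustration.

GIB = 1024 ** 3

def pick_weight_dtype(checkpoint_has_fp8, model_bytes_fp16, free_vram_bytes):
    if checkpoint_has_fp8 and model_bytes_fp16 > free_vram_bytes:
        return "fp8"
    return "fp16"
```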
ea03c9dcd2 Better per model memory usage estimations. 2024-08-02 18:09:24 -04:00
3a9ee995cf Tweak regular SD memory formula. 2024-08-02 17:34:30 -04:00
47da42d928 Better Flux vram estimation. 2024-08-02 17:02:35 -04:00
d420bc792a Tweak the memory usage formulas for Flux and SD. 2024-08-01 17:53:45 -04:00
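The memory-estimation commits above all tune a formula of roughly this shape: parameter bytes for the weight dtype times an overhead factor for activations. A sketch with a made-up overhead factor (not ComfyUI's actual formula):

```python
# Illustrative per-model VRAM estimate of the kind these commits
# tune: parameter bytes plus an activation overhead factor. The
# 1.3 factor is a placeholder, not ComfyUI's real constant.

DTYPE_BYTES = {"fp32": 4, "fp16": 2, "bf16": 2, "fp8": 1}

def estimate_vram_bytes(num_params, dtype, overhead=1.3):
    return int(num_params * DTYPE_BYTES[dtype] * overhead)
```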
1589b58d3e Basic Flux Schnell and Flux Dev model implementation. 2024-08-01 09:49:29 -04:00
a5f4292f9f Basic hunyuan dit implementation. (#4102)
* Let tokenizers return weights to be stored in the saved checkpoint.

* Basic hunyuan dit implementation.

* Fix some resolutions not working.

* Support hydit checkpoint save.

* Init with right dtype.

* Switch to optimized attention in pooler.

* Fix black images on hunyuan dit.
2024-07-25 18:21:08 -04:00
9f291d75b3 AuraFlow model implementation. 2024-07-11 16:52:26 -04:00
8ceb5a02a3 Support saving stable audio checkpoint that can be loaded back. 2024-06-27 11:06:52 -04:00
bb1969cab7 Initial support for the stable audio open model. 2024-06-15 12:14:56 -04:00
0ec513d877 Add a --force-channels-last flag to run inference models in channels-last mode. 2024-06-15 01:08:12 -04:00
1ddf512fdc Don't auto convert clip and vae weights to fp16 when saving checkpoint. 2024-06-12 01:07:58 -04:00
694e0b48e0 SD3 better memory usage estimation. 2024-06-12 00:49:00 -04:00
9424522ead Reuse code. 2024-06-11 07:20:26 -04:00
8c4a9befa7 SD3 Support. 2024-06-10 14:06:23 -04:00
cd07340d96 Typo fix. 2024-05-08 18:36:56 -04:00
1088d1850f Support for CosXL models. 2024-04-05 10:53:41 -04:00
575acb69e4 IP2P model loading support.
This is the code to load the model and run inference on it with only a text
prompt. This commit does not contain the nodes to properly use it with an
image input.

This supports both the original SD1 instructpix2pix model and the
diffusers SDXL one.
2024-03-31 03:10:28 -04:00
94a5a67c32 Cleanup to support different types of inpaint models. 2024-03-29 14:44:13 -04:00
40e124c6be SV3D support. 2024-03-18 16:54:13 -04:00
0ed72befe1 Change log levels.
Logging level now defaults to info. --verbose sets it to debug.
2024-03-11 13:54:56 -04:00
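The behavior described in that commit (INFO by default, DEBUG with --verbose) is a standard argparse/logging pattern; a minimal sketch, with the flag name taken from the commit message:

```python
# Minimal sketch of the logging behavior described above: level
# defaults to INFO, and a --verbose flag lowers it to DEBUG.
import argparse
import logging

def setup_logging(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--verbose", action="store_true",
                        help="Enable debug-level logging.")
    args = parser.parse_args(argv)
    level = logging.DEBUG if args.verbose else logging.INFO
    logging.basicConfig(level=level)
    return level
```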
65397ce601 Replace prints with logging and add --verbose argument. 2024-03-10 12:14:23 -04:00
51df846598 Let conditioning specify custom concat conds. 2024-03-02 11:44:06 -05:00
cb7c3a2921 Allow image_only_indicator to be None. 2024-02-29 13:11:30 -05:00
8daedc5bf2 Auto detect playground v2.5 model. 2024-02-27 18:03:03 -05:00
0d0fbabd1d Pass pooled CLIP to stage b. 2024-02-20 04:24:45 -05:00
667c92814e Stable Cascade Stage B. 2024-02-16 13:02:03 -05:00
f83109f09b Stable Cascade Stage C. 2024-02-16 10:55:08 -05:00
25a4805e51 Add a way to set different conditioning for the controlnet. 2024-02-09 14:13:31 -05:00
4871a36458 Cleanup some unused imports. 2024-01-21 21:51:22 -05:00
d76a04b6ea Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.
This node is unfinished; SVD checkpoints saved with it will work with
ComfyUI but not with anything else.
2024-01-17 19:46:21 -05:00
2395ae740a Make unclip more deterministic.
Passes a seed argument; note that this might make old unclip images different.
2024-01-14 17:28:31 -05:00
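The reason a seed argument makes sampling deterministic, and why old outputs may change, is that the noise sequence now depends only on the seed. A pure-Python stand-in for the model's noise generator:

```python
# Sketch of seeded determinism: the same seed yields the same
# noise sequence, so results are reproducible. Illustrative
# stand-in, not ComfyUI's noise generator.
import random

def make_noise(seed, n):
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]
```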
10f2609fdd Add InpaintModelConditioning node.
This is an alternative to VAE Encode for inpaint that should work with
lower denoise.

This is a different take on #2501
2024-01-11 03:15:27 -05:00
8c6493578b Implement noise augmentation for SD 4X upscale model. 2024-01-03 14:27:11 -05:00
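Noise augmentation for an upscale model means perturbing the low-resolution conditioning image with noise whose strength is a user-controlled level. A conceptual pure-Python sketch (not ComfyUI's implementation):

```python
# Conceptual sketch of noise augmentation: the low-res conditioning
# image is perturbed by Gaussian noise scaled by a user-controlled
# level. Pure-Python illustration on a flat pixel list.
import random

def noise_augment(pixels, level, seed=0):
    rng = random.Random(seed)
    return [p + level * rng.gauss(0.0, 1.0) for p in pixels]
```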
a7874d1a8b Add support for the stable diffusion x4 upscaling model.
This is an old model.

Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
2024-01-03 03:37:56 -05:00
36a7953142 Greatly improve lowvram sampling speed by getting rid of accelerate.
Let me know if this breaks anything.
2023-12-22 14:38:45 -05:00
8cf1daa108 Fix SDXL area composition sometimes not using the right pooled output. 2023-12-18 12:54:23 -05:00