58b2364f58
Properly support SDXL diffusers unet with UNETLoader node.
2023-07-21 14:38:56 -04:00
0115018695
Print errors and continue when lora weights are not compatible.
2023-07-20 19:56:22 -04:00
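A minimal sketch of the print-and-continue behavior this commit describes (the function and variable names here are illustrative, not ComfyUI's actual internals):

```python
# Illustrative sketch: apply each lora patch independently so one
# incompatible key does not abort the whole load.
def apply_lora_patches(model_weights, lora_patches):
    for key, patch in lora_patches.items():
        try:
            if key not in model_weights:
                raise KeyError(f"lora key {key} not present in model")
            model_weights[key] += patch.to(model_weights[key].dtype)
        except Exception as e:
            # Print the error and keep going instead of failing the load.
            print(f"ERROR applying lora patch for {key}: {e}")
```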
3ded1a3a04
Refactor of sampler code to deal more easily with different model types.
2023-07-17 01:22:12 -04:00
5f57362613
Lower lora ram usage when in normal vram mode.
2023-07-16 02:59:04 -04:00
490771b7f4
Speed up lora loading a bit.
2023-07-15 13:25:22 -04:00
50b1180dde
Fix CLIPSetLastLayer not reverting when removed.
2023-07-15 01:41:21 -04:00
6fb084f39d
Reduce floating point rounding errors in loras.
2023-07-15 00:53:00 -04:00
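One standard way to reduce rounding error when applying loras, and roughly the idea here, is to accumulate the delta in fp32 and cast back only at the end (a sketch of the general technique, not necessarily this exact diff; up, down, and alpha are the usual lora factors):

```python
import torch

def apply_lora(weight, up, down, alpha):
    # Compute the low-rank delta in fp32 so fp16 rounding errors do
    # not accumulate, then cast back to the weight's original dtype.
    delta = alpha * torch.mm(up.float(), down.float())
    return (weight.float() + delta).to(weight.dtype)
```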
91ed2815d5
Add a node to merge CLIP models.
2023-07-14 02:41:18 -04:00
6ad0a6d7e2
Don't patch weights when multiplier is zero.
2023-07-09 17:46:56 -04:00
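The optimization is a simple early exit (illustrative sketch):

```python
def patch_weight(weight, delta, multiplier):
    # A zero multiplier leaves the weight unchanged, so skip the
    # arithmetic (and any dtype round-trips) entirely.
    if multiplier == 0.0:
        return weight
    return weight + multiplier * delta
```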
a9a4ba7574
Fix merging not working when the model2 input of the model merge node was itself a merged model.
2023-07-08 22:31:10 -04:00
e7bee85df8
Add arguments to run the VAE in fp16 or bf16 for testing.
2023-07-06 23:23:46 -04:00
ddc6f12ad5
Disable autocast in unet for increased speed.
2023-07-05 21:58:29 -04:00
af7a49916b
Support loading unet files in diffusers format.
2023-07-05 17:38:59 -04:00
acf95191ff
Properly support SDXL diffusers loras for unet.
2023-07-04 21:15:23 -04:00
c3e96e637d
Pass device to CLIP model.
2023-07-03 16:09:37 -04:00
2c4e0b49b7
Switch to fp16 on some cards when the model is too big.
2023-07-02 10:00:57 -04:00
1c1b0e7299
--gpu-only now keeps the VAE on the device.
2023-07-01 15:22:40 -04:00
3b6fe51c1d
Leave the text_encoder on the CPU when the CPU can handle it.
2023-07-01 14:38:51 -04:00
b6a60fa696
Try to keep text encoders loaded and patched to increase speed.
...
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2023-07-01 13:28:07 -04:00
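A rough sketch of the idea with simplified names (the real load_model_gpu() does considerably more memory management):

```python
loaded_models = []

def load_model_gpu(model, device="cuda"):
    # Keep already-loaded models resident, so the text encoder is
    # loaded once and then reused between prompts, the same way the
    # unet is.
    if model in loaded_models:
        return model
    model.to(device)
    loaded_models.append(model)
    return model
```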
97ee230682
Make highvram and normalvram shift the text encoders to vram and back.
...
This is faster for big text encoder models than running them on the CPU.
2023-07-01 12:37:23 -04:00
5a9ddf94eb
LoraLoader node now caches the lora file between executions.
2023-06-29 23:40:51 -04:00
62db11683b
Move unet to device right after loading on highvram mode.
2023-06-29 20:43:06 -04:00
2c7c14de56
Support for SDXL text encoder lora.
2023-06-28 02:22:49 -04:00
9b93b920be
Add CheckpointSave node to save checkpoints.
...
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.
Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32 use: --force-fp32
Anything that patches the model weights like merging or loras will be
saved.
The output directory is currently set to output/checkpoints, but that might
change in the future.
2023-06-26 12:22:27 -04:00
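A rough sketch of what saving a patched checkpoint with embedded workflow metadata can look like (using safetensors; the helper is illustrative, not ComfyUI's actual implementation):

```python
import json
import torch
from safetensors.torch import save_file

def save_checkpoint(model, path, workflow, force_fp32=False):
    dtype = torch.float32 if force_fp32 else torch.float16
    # state_dict() already reflects any lora or merge patches that
    # were applied to the weights.
    sd = {k: v.to(dtype) if v.is_floating_point() else v
          for k, v in model.state_dict().items()}
    # Embed the workflow so the file can later be dragged onto the UI.
    save_file(sd, path, metadata={"workflow": json.dumps(workflow)})
```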
b72a7a835a
Support loras based on the stability unet implementation.
2023-06-26 02:56:11 -04:00
20f579d91d
Add DualClipLoader to load clip models for SDXL.
...
Update LoadClip to load clip models for SDXL refiner.
2023-06-25 01:40:38 -04:00
b7933960bb
Fix CLIPLoader node.
2023-06-24 13:56:46 -04:00
05676942b7
Add some more transformer hooks and move tomesd to comfy_extras.
...
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
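The gist, as a simplified sketch: the token-merging similarity is now computed from the attention queries q instead of the raw hidden states x (this pairs alternating tokens; tomesd's real bipartite matching is more involved):

```python
import torch
import torch.nn.functional as F

def merge_similarity(q):
    # q: (batch, tokens, channels) attention queries.
    # Cosine similarity between the two halves of an alternating
    # split decides which token pairs are candidates for merging.
    q = F.normalize(q, dim=-1)
    a, b = q[:, ::2, :], q[:, 1::2, :]
    return a @ b.transpose(-1, -2)  # (batch, tokens/2, tokens/2)
```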
8607c2d42d
Move latent scale factor from VAE to model.
2023-06-23 02:33:31 -04:00
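For context, the scale factor is the constant the latents are multiplied by between the VAE and the diffusion model (0.18215 for SD1.x). A sketch of the new ownership, with hypothetical helper names:

```python
# The model, not the VAE, now owns the latent scale factor.
SCALE_FACTOR = 0.18215  # SD1.x value

def process_latent_in(latent):
    # Applied on the model side when latents enter the diffusion process.
    return latent * SCALE_FACTOR

def process_latent_out(latent):
    # Applied on the model side before handing latents back to the VAE.
    return latent / SCALE_FACTOR
```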
30a3861946
Fix bug when yaml config has no clip params.
2023-06-23 01:12:59 -04:00
9e37f4c7d5
Fix error with ClipVision loader node.
2023-06-23 01:08:05 -04:00
9f83b098c9
Don't merge weights when shapes don't match and print a warning.
2023-06-22 19:08:31 -04:00
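A minimal sketch of the guard (names illustrative):

```python
def merge_weight(w1, w2, ratio):
    if w1.shape != w2.shape:
        # Incompatible shapes (e.g. different architectures): warn
        # and keep the first model's weight instead of crashing.
        print(f"WARNING: shape mismatch {tuple(w1.shape)} vs "
              f"{tuple(w2.shape)}, not merging")
        return w1
    return w1 * ratio + w2 * (1.0 - ratio)
```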
f87ec10a97
Support base SDXL and SDXL refiner models.
...
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
51581dbfa9
Fix an issue with the text encoder lora introduced by the last commits.
2023-06-20 19:44:39 -04:00
8125b51a62
Keep a set of model_keys for faster add_patches.
2023-06-20 19:08:48 -04:00
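The speedup is an O(1) membership test against a set built once, instead of touching the state dict on every call (a simplified sketch of the pattern):

```python
class ModelPatcher:
    def __init__(self, model):
        self.model = model
        self.patches = {}
        # Built once; "key in set" is O(1) per patch afterwards.
        self.model_keys = set(model.state_dict().keys())

    def add_patches(self, patches, strength=1.0):
        applied = []
        for key, patch in patches.items():
            if key in self.model_keys:
                self.patches[key] = (strength, patch)
                applied.append(key)
        return applied
```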
45beebd33c
Add a type of model patch useful for model merging.
2023-06-20 17:34:11 -04:00
8883cb0f67
Add a way to set patches that modify the attn2 output.
...
Change the transformer patches function format to be more future proof.
2023-06-18 22:58:22 -04:00
fb4bf7f591
This is not needed anymore and causes issues with alphas_cumprod.
2023-06-18 03:18:25 -04:00
f7edcfd927
Add a --gpu-only argument to keep and run everything on the GPU.
...
Make the CLIP model work on the GPU.
2023-06-15 15:38:52 -04:00
6b774589a5
Set the model to fp16 before loading the state dict to lower the temporary RAM spike.
2023-06-14 12:48:02 -04:00
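The trick, sketched: convert the freshly constructed model to fp16 before load_state_dict, so a full fp32 copy of the parameters never coexists with the loaded weights:

```python
import torch

def load_model_fp16(model_class, state_dict):
    model = model_class()
    model.half()  # parameters become fp16 first...
    # ...so load_state_dict copies (and casts) into fp16 storage and
    # peak RAM never holds fp32 parameters alongside the checkpoint.
    model.load_state_dict(state_dict)
    return model
```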
388567f20b
sampler_cfg_function now uses a dict for the argument.
...
This means new arguments can be added without breaking existing functions.
2023-06-13 16:10:36 -04:00
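With the dict-style argument, a custom CFG function unpacks only the keys it needs, so new keys can be added later without breaking existing callbacks. A sketch using the key names from this change (cond, uncond, cond_scale):

```python
def my_cfg_function(args):
    # Unpack only what we need; future keys can be added to args
    # without breaking this function.
    cond = args["cond"]
    uncond = args["uncond"]
    cond_scale = args["cond_scale"]
    return uncond + (cond - uncond) * cond_scale
```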
ff9b22d79e
Turn on safe load for a few models.
2023-06-13 10:12:03 -04:00
f0a2b81cd0
Cleanup: Remove a bunch of useless files.
2023-06-13 02:19:08 -04:00
f8c5931053
Split the batch in VAEEncode if there's not enough memory.
2023-06-12 00:21:50 -04:00
c069fc0730
Auto switch to tiled VAE encode if the regular one runs out of memory.
2023-06-11 23:25:39 -04:00
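The fallback pattern, sketched (tiled_encode stands in for whatever tiled implementation is used):

```python
import torch

def encode_with_fallback(vae, pixels, tiled_encode):
    try:
        return vae.encode(pixels)
    except torch.cuda.OutOfMemoryError:
        # The regular encode ran out of VRAM: free what we can and
        # retry with the tiled implementation.
        torch.cuda.empty_cache()
        print("Warning: out of memory, retrying VAE encode with tiles")
        return tiled_encode(vae, pixels)
```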
de142eaad5
Simpler base model code.
2023-06-09 12:31:16 -04:00
0e425603fb
Small refactor.
2023-06-06 13:23:01 -04:00
700491d81a
Implement global average pooling for controlnet.
2023-06-03 01:49:03 -04:00
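Global average pooling here collapses each control feature map to its spatial mean (needed by some controlnet variants, such as the shuffle model); a one-line sketch:

```python
import torch

def global_average_pool(control):
    # control: (batch, channels, height, width) controlnet output.
    # Average over the spatial dims, keeping them so the result still
    # broadcasts against the unet's feature maps.
    return torch.mean(control, dim=(2, 3), keepdim=True)
```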
03da8a3426
This is useless for inference.
2023-05-31 13:03:24 -04:00
eb448dd8e1
Auto load the model in lowvram mode if there is not enough memory.
2023-05-30 12:36:41 -04:00