e7bee85df8
Add arguments to run the VAE in fp16 or bf16 for testing.
2023-07-06 23:23:46 -04:00
608fcc2591
Fix a bug with weights when the prompt is long.
2023-07-06 02:43:40 -04:00
ddc6f12ad5
Disable autocast in unet for increased speed.
2023-07-05 21:58:29 -04:00
603f02d613
Fix loras not working when loading a checkpoint with a config.
2023-07-05 19:42:24 -04:00
af7a49916b
Support loading unet files in diffusers format.
2023-07-05 17:38:59 -04:00
e57cba4c61
Add gpu variations of the sde samplers that are less deterministic but faster.
2023-07-05 01:39:38 -04:00
f81b192944
Add logit scale parameter so it's present when saving the checkpoint.
2023-07-04 23:01:28 -04:00
acf95191ff
Properly support SDXL diffusers loras for unet.
2023-07-04 21:15:23 -04:00
8d694cc450
Fix issue with OSX.
2023-07-04 02:09:02 -04:00
c3e96e637d
Pass device to CLIP model.
2023-07-03 16:09:37 -04:00
5e6bc824aa
Allow passing a custom path to clip-g and clip-h.
2023-07-03 15:45:04 -04:00
dc9d1f31c8
Improvements for OSX.
2023-07-03 00:08:30 -04:00
103c487a89
Cleanup.
2023-07-02 11:58:23 -04:00
2c4e0b49b7
Switch to fp16 on some cards when the model is too big.
2023-07-02 10:00:57 -04:00
6f3d9f52db
Add a --force-fp16 argument to force fp16 for testing.
2023-07-01 22:42:35 -04:00
1c1b0e7299
--gpu-only now keeps the VAE on the device.
2023-07-01 15:22:40 -04:00
ce35d8c659
Lower latency by batching some text encoder inputs.
2023-07-01 15:07:39 -04:00
3b6fe51c1d
Leave the text_encoder on the CPU when the CPU can handle it.
2023-07-01 14:38:51 -04:00
b6a60fa696
Try to keep text encoders loaded and patched to increase speed.
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2023-07-01 13:28:07 -04:00
97ee230682
Make highvram and normalvram shift the text encoders to vram and back.
This is faster on big text encoder models than running them on the CPU.
2023-07-01 12:37:23 -04:00
5a9ddf94eb
LoraLoader node now caches the lora file between executions.
2023-06-29 23:40:51 -04:00
9920367d3c
Fix embeddings not working with --gpu-only.
2023-06-29 20:43:06 -04:00
62db11683b
Move unet to device right after loading in highvram mode.
2023-06-29 20:43:06 -04:00
4376b125eb
Remove useless code.
2023-06-29 00:26:33 -04:00
89120f1fbe
This is unused but it should be 1280.
2023-06-28 18:04:23 -04:00
2c7c14de56
Support for SDXL text encoder lora.
2023-06-28 02:22:49 -04:00
fcef47f06e
Fix bug.
2023-06-28 00:38:07 -04:00
8248babd44
Use pytorch attention by default on nvidia when xformers isn't present.
Add a new argument --use-quad-cross-attention.
2023-06-26 13:03:44 -04:00
9b93b920be
Add CheckpointSave node to save checkpoints.
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.
Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32, use --force-fp32.
Anything that patches the model weights, like merging or loras, will be
saved.
The output directory is currently set to output/checkpoints, but that might
change in the future.
2023-06-26 12:22:27 -04:00
b72a7a835a
Support loras based on the stability unet implementation.
2023-06-26 02:56:11 -04:00
c71a7e6b20
Fix ddim + inpainting not working.
2023-06-26 00:48:48 -04:00
4eab00e14b
Set the seed in the SDE samplers to make them more reproducible.
2023-06-25 03:04:57 -04:00
cef6aa62b2
Add support for TAESD decoder for SDXL.
2023-06-25 02:38:14 -04:00
20f579d91d
Add DualClipLoader to load clip models for SDXL.
Update LoadClip to load clip models for SDXL refiner.
2023-06-25 01:40:38 -04:00
b7933960bb
Fix CLIPLoader node.
2023-06-24 13:56:46 -04:00
78d8035f73
Fix bug with controlnet.
2023-06-24 11:02:38 -04:00
05676942b7
Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2023-06-24 03:30:22 -04:00
fa28d7334b
Remove useless code.
2023-06-23 12:35:26 -04:00
8607c2d42d
Move latent scale factor from VAE to model.
2023-06-23 02:33:31 -04:00
30a3861946
Fix bug when yaml config has no clip params.
2023-06-23 01:12:59 -04:00
9e37f4c7d5
Fix error with ClipVision loader node.
2023-06-23 01:08:05 -04:00
9f83b098c9
Don't merge weights when shapes don't match; print a warning instead.
2023-06-22 19:08:31 -04:00
f87ec10a97
Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2023-06-22 13:03:50 -04:00
9fccf4aa03
Add original_shape parameter to transformer patch extra_options.
2023-06-21 13:22:01 -04:00
51581dbfa9
Fix an issue with the text encoder lora caused by the last commits.
2023-06-20 19:44:39 -04:00
8125b51a62
Keep a set of model_keys for faster add_patches.
2023-06-20 19:08:48 -04:00
45beebd33c
Add a type of model patch useful for model merging.
2023-06-20 17:34:11 -04:00
036a22077c
Fix k_diffusion math being off by a tiny bit during txt2img.
2023-06-19 15:28:54 -04:00
8883cb0f67
Add a way to set patches that modify the attn2 output.
Change the transformer patches function format to be more future-proof.
2023-06-18 22:58:22 -04:00
cd930d4e7f
Pop clip vision keys after loading them.
2023-06-18 21:21:17 -04:00