824e4935f5
Add dtype parameter to VAE object.
2023-12-12 12:03:29 -05:00
32b7e7e769
Add manual cast to controlnet.
2023-12-12 11:32:42 -05:00
3152023fbc
Use inference dtype for unet memory usage estimation.
2023-12-11 23:50:38 -05:00
77755ab8db
Refactor comfy.ops
...
comfy.ops -> comfy.ops.disable_weight_init
This should make it more clear what they actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
b0aab1e4ea
Add an option --fp16-unet to force using fp16 for the unet.
2023-12-11 18:36:29 -05:00
ba07cb748e
Use faster manual cast for fp8 in unet.
2023-12-11 18:24:44 -05:00
57926635e8
Switch text encoder to manual cast.
...
Use fp16 text encoder weights for CPU inference to lower memory usage.
2023-12-10 23:00:54 -05:00
340177e6e8
Disable non blocking on mps.
2023-12-10 01:30:35 -05:00
614b7e731f
Implement GLora.
2023-12-09 18:15:26 -05:00
cb63e230b4
Make lora code a bit cleaner.
2023-12-09 14:15:09 -05:00
174eba8e95
Use own clip vision model implementation.
2023-12-09 11:56:31 -05:00
97015b6b38
Cleanup.
2023-12-08 16:02:08 -05:00
a4ec54a40d
Add linear_start and linear_end to model_config.sampling_settings
2023-12-08 02:49:30 -05:00
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of CPU memory.
2023-12-08 02:35:45 -05:00
efb704c758
Support attention masking in CLIP implementation.
2023-12-07 02:51:02 -05:00
fbdb14d4c4
Cleaner CLIP text encoder implementation.
...
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
2db86b4676
Slightly faster lora applying.
2023-12-06 05:13:14 -05:00
1bbd65ab30
Missed this one.
2023-12-05 12:48:41 -05:00
9b655d4fd7
Fix memory issue with control loras.
2023-12-04 21:55:19 -05:00
26b1c0a771
Fix control lora on fp8.
2023-12-04 13:47:41 -05:00
be3468ddd5
Less useless downcasting.
2023-12-04 12:53:46 -05:00
ca82ade765
Use .itemsize to get dtype size for fp8.
2023-12-04 11:52:06 -05:00
31b0f6f3d8
UNET weights can now be stored in fp8.
...
--fp8_e4m3fn-unet and --fp8_e5m2-unet select the two different fp8 formats
supported by pytorch.
2023-12-04 11:10:00 -05:00
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
2023-12-04 03:12:18 -05:00
61a123a1e0
A different way of handling multiple images passed to SVD.
...
Previously, when a list of 3 images [0, 1, 2] was used for a 6 frame video
they were concatenated like this:
[0, 1, 2, 0, 1, 2]
Now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
2023-12-03 03:31:47 -05:00
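The change above can be illustrated with a minimal sketch. The function names below are hypothetical stand-ins for the old and new behavior, assuming a plain list of image frames:

```python
# Hypothetical sketch of the two interleaving strategies described in
# commit 61a123a1e0 (not the actual SVD implementation).

def tile_frames(images, num_frames):
    # Old behavior: cycle through the list -> [0, 1, 2, 0, 1, 2]
    return [images[i % len(images)] for i in range(num_frames)]

def repeat_frames(images, num_frames):
    # New behavior: repeat each image in place -> [0, 0, 1, 1, 2, 2]
    reps = num_frames // len(images)
    return [img for img in images for _ in range(reps)]
```

With 3 images and 6 frames, `tile_frames` yields `[0, 1, 2, 0, 1, 2]` while `repeat_frames` yields `[0, 0, 1, 1, 2, 2]`, so each input image spans a contiguous run of frames.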
c97be4db91
Support SD2.1 turbo checkpoint.
2023-11-30 19:27:03 -05:00
983ebc5792
Use smart model management for VAE to decrease latency.
2023-11-28 04:58:51 -05:00
c45d1b9b67
Add a function to load a unet from a state dict.
2023-11-27 17:41:29 -05:00
f30b992b18
.sigma and .timestep now return tensors on the same device as the input.
2023-11-27 16:41:33 -05:00
13fdee6abf
Try to free memory for both cond+uncond before inference.
2023-11-27 14:55:40 -05:00
be71bb5e13
Tweak memory inference calculations a bit.
2023-11-27 14:04:16 -05:00
39e75862b2
Fix regression from last commit.
2023-11-26 03:43:02 -05:00
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
...
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
5d6dfce548
Fix importing diffusers unets.
2023-11-24 20:35:29 -05:00
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
2023-11-24 03:55:35 -05:00
871cc20e13
Support SVD img2vid model.
2023-11-23 19:41:33 -05:00
410bf07771
Make VAE memory estimation take dtype into account.
2023-11-22 18:17:19 -05:00
32447f0c39
Add sampling_settings so models can specify specific sampling settings.
2023-11-22 17:24:00 -05:00
c3ae99a749
Allow controlling downscale and upscale methods in PatchModelAddDownscale.
2023-11-22 03:23:16 -05:00
72741105a6
Remove useless code.
2023-11-21 17:27:28 -05:00
6a491ebe27
Allow model config to preprocess the vae state dict on load.
2023-11-21 16:29:18 -05:00
cd4fc77d5f
Add taesd and taesdxl to VAELoader node.
...
They will show up if the taesd_encoder and taesd_decoder model files, or the
taesdxl equivalents, are present in the models/vae_approx directory.
2023-11-21 12:54:19 -05:00
ce67dcbcda
Make it easy for models to process the unet state dict on load.
2023-11-20 23:17:53 -05:00
d9d8702d8d
percent_to_sigma now returns a float instead of a tensor.
2023-11-18 23:20:29 -05:00
0cf4e86939
Add some command line arguments to store text encoder weights in fp8.
...
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
2023-11-17 02:56:59 -05:00
107e78b1cb
Add support for loading SSD1B diffusers unet version.
...
Improve diffusers model detection.
2023-11-16 23:12:55 -05:00
7e3fe3ad28
Make deep shrink behave like it should.
2023-11-16 15:26:28 -05:00
9f00a18095
Fix potential issues.
2023-11-16 14:59:54 -05:00
7ea6bb038c
Print warning when controlnet can't be applied instead of crashing.
2023-11-16 12:57:12 -05:00
dcec1047e6
Invert the start and end percentages in the code.
...
This doesn't affect how percentages behave in the frontend but breaks
things if you relied on them in the backend.
percent_to_sigma now goes from 0 to 1.0 instead of 1.0 to 0 for less confusion.
percent 0 now returns an extremely large sigma and percent 1.0 returns a
sigma of zero to fix imprecision.
2023-11-16 04:23:44 -05:00
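The boundary handling described above can be sketched as follows. This is a simplified illustration, not the actual code: `sigma_lookup` is a hypothetical stand-in for the model's real percent-to-sigma conversion, and the large constant is an assumed placeholder for "effectively infinite":

```python
# Hypothetical sketch of the boundary clamping described in commit dcec1047e6.
# percent now runs from 0.0 (start of sampling) to 1.0 (end of sampling).

def percent_to_sigma(percent, sigma_lookup):
    if percent <= 0.0:
        # Extremely large sigma so percent 0 always covers the very start,
        # avoiding floating point imprecision at the boundary.
        return 999999999.9
    if percent >= 1.0:
        # Sigma of zero so percent 1.0 always reaches the very end.
        return 0.0
    return sigma_lookup(percent)
```

Clamping both ends means callers that pass exactly 0 or 1.0 get unambiguous "whole range" behavior instead of a value that depends on schedule precision.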