2fd9c1308a
Fix mask issue in some attention functions.
2024-11-22 02:10:09 -05:00
07f6eeaa13
Fix mask issue with attention_xformers.
2024-11-20 17:07:46 -05:00
fabf449feb
Mochi VAE encoder.
2024-11-01 17:33:09 -04:00
33fb282d5c
Fix issue.
2024-08-14 02:51:47 -04:00
bb1969cab7
Initial support for the stable audio open model.
2024-06-15 12:14:56 -04:00
0920e0e5fe
Remove some unused imports.
2024-05-27 19:08:27 -04:00
8508df2569
Work around black image bug on Mac 14.5 by forcing attention upcasting.
2024-05-21 16:56:33 -04:00
83d969e397
Disable xformers when tracing model.
2024-05-21 13:55:49 -04:00
1900e5119f
Fix potential issue.
2024-05-20 08:19:54 -04:00
0bdc2b15c7
Cleanup.
2024-05-18 10:11:44 -04:00
98f828fad9
Remove unnecessary code.
2024-05-18 09:36:44 -04:00
46daf0a9a7
Add debug options to force on and off attention upcasting.
2024-05-16 04:09:41 -04:00
ec6f16adb6
Fix SAG.
2024-05-14 18:02:27 -04:00
bb4940d837
Only enable attention upcasting on models that actually need it.
2024-05-14 17:00:50 -04:00
b0ab31d06c
Refactor attention upcasting code part 1.
2024-05-14 12:47:31 -04:00
2aed53c4ac
Workaround xformers bug.
2024-04-30 21:23:40 -04:00
2a813c3b09
Switch some more prints to logging.
2024-03-11 16:34:58 -04:00
6bcf57ff10
Fix attention masks properly for multiple batches.
2024-02-17 16:15:18 -05:00
f8706546f3
Fix attention mask batch size in some attention functions.
2024-02-17 15:22:21 -05:00
3b9969c1c5
Properly fix attention masks in CLIP with batches.
2024-02-17 12:13:13 -05:00
89507f8adf
Remove some unused imports.
2024-01-25 23:42:37 -05:00
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
2024-01-09 13:46:52 -05:00
c6951548cf
Update optimized_attention_for_device function for new functions that support masked attention.
2024-01-07 13:52:08 -05:00
aaa9017302
Add attention mask support to sub quad attention.
2024-01-07 04:13:58 -05:00
0c2c9fbdfa
Support attention mask in split attention.
2024-01-06 13:16:48 -05:00
3ad0191bfb
Implement attention mask on xformers.
2024-01-06 04:33:03 -05:00
a5056cfb1f
Remove useless code.
2023-12-15 01:28:16 -05:00
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it more clear what they actually do.
Some unused code has also been removed.
2023-12-11 23:27:13 -05:00
fbdb14d4c4
Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
2023-12-06 23:50:03 -05:00
1bbd65ab30
Missed this one.
2023-12-05 12:48:41 -05:00
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
2023-12-04 03:12:18 -05:00
39e75862b2
Fix regression from last commit.
2023-11-26 03:43:02 -05:00
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
2023-11-26 03:13:56 -05:00
3e5ea74ad3
Make buggy xformers fall back on pytorch attention.
2023-11-24 03:55:35 -05:00
871cc20e13
Support SVD img2vid model.
2023-11-23 19:41:33 -05:00
c837a173fa
Fix some memory issues in sub quad attention.
2023-10-30 15:30:49 -04:00
125b03eead
Fix some OOM issues with split attention.
2023-10-30 13:14:11 -04:00
a373367b0c
Fix some OOM issues with split and sub quad attention.
2023-10-25 20:17:28 -04:00
8b65f5de54
attention_basic now works with hypertile.
2023-10-22 03:59:53 -04:00
e6bc42df46
Make sub_quad and split work with hypertile.
2023-10-22 03:51:29 -04:00
9906e3efe3
Make xformers work with hypertile.
2023-10-21 13:23:03 -04:00
bb064c9796
Add a separate optimized_attention_masked function.
2023-10-16 02:31:24 -04:00
ac7d8cfa87
Allow attn_mask in attention_pytorch.
2023-10-11 20:38:48 -04:00
1a4bd9e9a6
Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
2023-10-11 20:38:48 -04:00
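The refactor rationale above can be sketched as follows. This is a hypothetical illustration, not ComfyUI's actual code: one shared CrossAttention wrapper takes a pluggable attention operation, so only the middle step varies per backend. All names here are illustrative.

```python
# Hypothetical sketch: instead of duplicating the whole CrossAttention
# object per backend, keep one wrapper and swap only the inner operation.
def attention_basic(q, k, v):
    # stand-in for one attention backend; real ones would be
    # split, sub-quad, xformers, pytorch, etc.
    return [qi * ki * vi for qi, ki, vi in zip(q, k, v)]

class CrossAttention:
    def __init__(self, attention_op):
        # shared projection/reshape logic would live here; only the
        # operation in the middle differs between implementations
        self.attention_op = attention_op

    def forward(self, q, k, v):
        return self.attention_op(q, k, v)

attn = CrossAttention(attention_basic)
print(attn.forward([1, 2], [3, 4], [5, 6]))  # [15, 48]
```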
fff491b032
Model patches can now know which batch is positive and negative.
2023-09-27 12:04:07 -04:00
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
2023-09-04 00:58:18 -04:00
4a0c4ce4ef
Some fixes to generalize CUDA specific functionality to Intel or other GPUs.
2023-09-02 18:22:10 -07:00
0e3b641172
Remove xformers related print.
2023-09-01 02:12:03 -04:00
b80c3276dc
Fix issue with gligen.
2023-08-18 16:32:23 -04:00
d6e4b342e6
Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of the
matrices.
These controlnets are used just like regular ones.
2023-08-18 11:59:51 -04:00
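The weight reconstruction described in the commit above can be sketched in a few lines. This is a hypothetical illustration of the stated formula (not ComfyUI's actual loader): the "lora" format stores an up and a down low rank matrix whose product, added to the unet weight, yields the controlnet weight.

```python
# Hypothetical sketch: controlnet weight = unet weight + up @ down,
# where up (m x r) and down (r x n) are low rank, so storage is small
# compared to the full (m x n) weight when r is small.
def matmul(a, b):
    # naive matrix multiply: (m x r) @ (r x n) -> (m x n)
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def controlnet_weight(unet_w, up, down):
    delta = matmul(up, down)  # low rank delta
    return [[unet_w[i][j] + delta[i][j] for j in range(len(unet_w[0]))]
            for i in range(len(unet_w))]

# rank-1 example: a 2x2 unet weight plus a (2x1) @ (1x2) delta
unet_w = [[1.0, 0.0], [0.0, 1.0]]
up = [[1.0], [2.0]]
down = [[3.0, 4.0]]
print(controlnet_weight(unet_w, up, down))  # [[4.0, 4.0], [6.0, 9.0]]
```

With a full (m x n) weight, storing only the two rank-r factors needs r*(m+n) values instead of m*n, which is where the smaller memory footprint comes from.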