ab885b33ba
Skip layer guidance node now works on LTX-Video.
2024-11-23 10:33:05 -05:00
839ed3368e
Some improvements to the lowvram unloading.
2024-11-22 20:59:15 -05:00
6e8cdcd3cb
Fix some tiled VAE decoding issues with LTX-Video.
2024-11-22 18:00:34 -05:00
e5c3f4b87f
LTXV lowvram fixes.
2024-11-22 17:17:11 -05:00
bc6be6c11e
Some fixes to the lowvram system.
2024-11-22 16:40:04 -05:00
5818f6cf51
Remove print.
2024-11-22 10:49:15 -05:00
5e16f1d24b
Support Lightricks LTX-Video model.
2024-11-22 08:46:39 -05:00
2fd9c1308a
Fix mask issue in some attention functions.
2024-11-22 02:10:09 -05:00
8f0009aad0
Support new flux model variants.
2024-11-21 08:38:23 -05:00
41444b5236
Add some new weight patching functionality.
...
Add a way to reshape lora weights.
Allow weight patches on all weights, not just .weight and .bias.
Add a way for a lora to set a weight to a specific value.
2024-11-21 07:19:17 -05:00
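The weight-patching commit above boils down to the standard LoRA update: the low-rank product of the up and down matrices, scaled, is added to the base weight, with an optional reshape for weights whose stored shape differs from the flattened product. A minimal PyTorch sketch (function name and signature are illustrative, not ComfyUI's actual patching API):

```python
import torch

def apply_lora_patch(weight: torch.Tensor, down: torch.Tensor,
                     up: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    # Standard LoRA update: add the scaled low-rank product to the base
    # weight. reshape() covers weights whose stored shape differs from
    # the (out_features, in_features) product.
    return weight + alpha * (up @ down).reshape(weight.shape)

# Rank-4 patch on an (8, 16) weight.
w = torch.zeros(8, 16)
patched = apply_lora_patch(w, torch.randn(4, 16), torch.randn(8, 4), alpha=0.5)
```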
07f6eeaa13
Fix mask issue with attention_xformers.
2024-11-20 17:07:46 -05:00
22535d0589
Skip layer guidance now works on stable audio model.
2024-11-20 07:33:06 -05:00
b699a15062
Refactor inpaint/ip2p code.
2024-11-19 03:25:25 -05:00
d9f90965c8
Support block replace patches in auraflow.
2024-11-17 08:19:59 -05:00
41886af138
Add transformer options blocks replace patch to mochi.
2024-11-16 20:48:14 -05:00
3b9a6cf2b1
Fix issue with 3d masks.
2024-11-13 07:18:30 -05:00
8ebf2d8831
Add block replace transformer_options to flux.
2024-11-12 08:00:39 -05:00
eb476e6ea9
Allow 1D masks for 1D latents.
2024-11-11 14:44:52 -05:00
8b275ce5be
Support auto-detecting some zsnr anime checkpoints.
2024-11-11 05:34:11 -05:00
2a18e98ccf
Refactor so that zsnr can be set in the sampling_settings.
2024-11-11 04:55:56 -05:00
bdeb1c171c
Fast previews for mochi.
2024-11-10 03:39:35 -05:00
8b90e50979
Properly handle and reshape masks when used on 3d latents.
2024-11-09 15:30:19 -05:00
2865f913f7
Free memory before doing tiled decode.
2024-11-07 04:01:24 -05:00
b49616f951
Make VAEDecodeTiled node work with video VAEs.
2024-11-07 03:47:12 -05:00
5e29e7a488
Remove scaled_fp8 key after reading it to silence warning.
2024-11-06 04:56:42 -05:00
8afb97cd3f
Fix unknown VAE being detected as the mochi VAE.
2024-11-05 03:43:27 -05:00
69694f40b3
Fix dynamic shape export (#5490)
2024-11-04 14:59:28 -05:00
6c9dbde7de
Fix mochi all in one checkpoint t5xxl key names.
2024-11-03 01:40:42 -05:00
fabf449feb
Mochi VAE encoder.
2024-11-01 17:33:09 -04:00
1c8286a44b
Avoid SyntaxWarning in UniPC docstring (#5442)
2024-10-31 15:17:26 -04:00
1af4a47fd1
Bump up mac version for attention upcast bug workaround.
2024-10-31 15:15:31 -04:00
daa1565b93
Fix diffusers flux controlnet regression.
2024-10-30 13:11:34 -04:00
09fdb2b269
Support SD3.5 medium diffusers format weights and loras.
2024-10-30 04:24:00 -04:00
30c0c81351
Add a way to patch blocks in SD3.
2024-10-29 00:48:32 -04:00
13b0ff8a6f
Update SD3 code.
2024-10-28 21:58:52 -04:00
c320801187
Remove useless line.
2024-10-28 17:41:12 -04:00
669d9e4c67
Set default shift on mochi to 6.0.
2024-10-27 22:21:04 -04:00
9ee0a6553a
float16 inference is a bit broken on mochi.
2024-10-27 04:56:40 -04:00
5cbb01bc2f
Basic Genmo Mochi video model support.
...
To use:
"Load CLIP" node with t5xxl + type mochi
"Load Diffusion Model" node with the mochi dit file.
"Load VAE" with the mochi vae file.
EmptyMochiLatentVideo node for the latent.
euler + linear_quadratic in the KSampler node.
2024-10-26 06:54:00 -04:00
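The usage steps above map onto a ComfyUI API-format workflow JSON. A rough sketch of the relevant nodes (filenames, default resolution, and input names are assumptions; the model/conditioning wiring between nodes is omitted):

```json
{
  "1": {"class_type": "CLIPLoader",
        "inputs": {"clip_name": "t5xxl_fp16.safetensors", "type": "mochi"}},
  "2": {"class_type": "UNETLoader",
        "inputs": {"unet_name": "mochi_dit.safetensors", "weight_dtype": "default"}},
  "3": {"class_type": "VAELoader",
        "inputs": {"vae_name": "mochi_vae.safetensors"}},
  "4": {"class_type": "EmptyMochiLatentVideo",
        "inputs": {"width": 848, "height": 480, "length": 25, "batch_size": 1}},
  "5": {"class_type": "KSampler",
        "inputs": {"sampler_name": "euler", "scheduler": "linear_quadratic"}}
}
```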
c3ffbae067
Make LatentUpscale nodes work on 3d latents.
2024-10-26 01:50:51 -04:00
d605677b33
Make euler_ancestral work on flow models (credit: Ashen).
2024-10-25 19:53:44 -04:00
af8cf79a2d
Support SimpleTuner lycoris lora for SD3 (#5340)
2024-10-24 01:18:32 -04:00
66b0961a46
Fix ControlLora issue with last commit.
2024-10-23 17:02:40 -04:00
754597c8a9
Clean up some controlnet code.
...
Remove self.device, which was unused.
2024-10-23 14:19:05 -04:00
915fdb5745
Fix lowvram edge case.
2024-10-22 16:34:50 -04:00
5a8a48931a
Remove attention abstraction (#5324)
2024-10-22 14:02:38 -04:00
8ce2a1052c
Optimizations to --fast and scaled fp8.
2024-10-22 02:12:28 -04:00
f82314fcfc
Fix duplicate sigmas on beta scheduler.
2024-10-21 20:19:45 -04:00
0075c6d096
Mixed precision diffusion models with scaled fp8.
...
This change adds support for diffusion models where all the linear layers are
scaled fp8 while the other weights remain in their original precision.
2024-10-21 18:12:51 -04:00
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00