33e5203a2a
Don't cache index.html ( #4211 )
2024-08-05 12:25:28 -04:00
a178e25912
Fix Flux FP64 math on XPU ( #4210 )
2024-08-05 01:26:20 -04:00
78e133d041
Support simple diffusers Flux loras.
2024-08-04 22:05:48 -04:00
7afa985fba
Correct spelling 'token_weight_pars_t5' to 'token_weight_pairs_t5' ( #4200 )
2024-08-04 17:10:02 -04:00
ddb6a9f47c
Set the step in EmptySD3LatentImage to 16.
...
These models work better when the res is a multiple of 16.
2024-08-04 15:59:02 -04:00
3b71f84b50
ONNX tracing fixes.
2024-08-04 15:45:43 -04:00
0a6b008117
Fix issue with some custom nodes.
2024-08-04 10:03:33 -04:00
56f3c660bf
ModelSamplingFlux now takes a resolution and adjusts the shift with it.
...
If you want to sample Flux dev exactly how the reference code does, use
the same resolution as your image in this node.
2024-08-04 04:06:00 -04:00
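The resolution-dependent shift described above can be sketched as follows. This is a hedged illustration of how the Flux dev reference code interpolates the shift from the latent sequence length; the function name `flux_shift` is hypothetical, and the constants are the reference implementation's published defaults, not values stated in this commit:

```python
import math

# Assumed defaults from the Flux dev reference code (not from this commit):
BASE_SEQ_LEN, MAX_SEQ_LEN = 256, 4096
BASE_SHIFT, MAX_SHIFT = 0.5, 1.15

def flux_shift(width: int, height: int) -> float:
    """Sketch: shift grows with resolution via the latent sequence length."""
    # One latent token covers a 16x16 pixel patch (8x VAE downscale,
    # then 2x2 patchify), so sequence length scales with resolution.
    seq_len = (width // 16) * (height // 16)
    # Linearly interpolate mu between the base and max shift, then
    # exponentiate to get the sigma-shift multiplier.
    m = (MAX_SHIFT - BASE_SHIFT) / (MAX_SEQ_LEN - BASE_SEQ_LEN)
    b = BASE_SHIFT - m * BASE_SEQ_LEN
    return math.exp(m * seq_len + b)
```

Under these assumed defaults, a 1024x1024 image (4096 tokens) lands exactly on the max shift, which is why matching the node's resolution to your image reproduces the reference sampling.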
f7a5107784
Fix crash.
2024-08-03 16:55:38 -04:00
91be9c2867
Tweak lowvram memory formula.
2024-08-03 16:44:50 -04:00
03c5018c98
Lower lowvram memory to 1/3 of free memory.
2024-08-03 15:14:07 -04:00
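Together with the neighboring commits, the lowvram budget is being tuned as a fraction of currently free memory. A minimal, hypothetical sketch of such a cap (the function name and exact formula are assumptions, not ComfyUI's real code):

```python
def lowvram_model_budget(model_size_bytes: int,
                         free_memory_bytes: int,
                         fraction: float = 1.0 / 3.0) -> int:
    """Sketch: cap how many model weights stay resident on the GPU.

    Assumption: in lowvram mode, resident weights may use at most
    `fraction` of currently free memory (1/3 per this commit, 1/2 in
    the earlier one); the remainder is offloaded.
    """
    return min(model_size_bytes, int(free_memory_bytes * fraction))
```

A model that already fits under the cap is loaded whole; otherwise only the capped portion stays in VRAM.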
2ba5cc8b86
Fix some issues.
2024-08-03 15:06:40 -04:00
1e68002b87
Cap lowvram to half of free memory.
2024-08-03 14:50:20 -04:00
ba9095e5bd
Automatically use fp8 for diffusion model weights if:
...
Checkpoint contains weights in fp8.
There isn't enough memory to load the diffusion model in GPU vram.
2024-08-03 13:45:19 -04:00
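The two conditions above can be sketched as a small decision function. This is a hedged illustration only; the function and parameter names are hypothetical, and the commit does not state how the conditions combine, so this sketch assumes either one alone triggers fp8:

```python
def should_load_diffusion_in_fp8(checkpoint_has_fp8_weights: bool,
                                 model_size_bytes: int,
                                 free_vram_bytes: int) -> bool:
    """Sketch of the fp8 auto-selection rule described in the commit.

    Assumption: use fp8 when the checkpoint already stores fp8 weights,
    or when the full-precision model would not fit in free GPU VRAM.
    """
    if checkpoint_has_fp8_weights:
        return True
    if model_size_bytes > free_vram_bytes:
        return True
    return False
```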
f123328b82
Load T5 in fp8 if it's in fp8 in the Flux checkpoint.
2024-08-03 12:39:33 -04:00
63a7e8edba
More aggressive batch splitting.
2024-08-03 11:53:30 -04:00
0eea47d580
Add ModelSamplingFlux to experiment with the shift value.
...
Default shift on Flux Schnell is 0.0.
2024-08-03 03:54:38 -04:00
7cd0cdfce6
Add advanced model merge node for Flux model.
2024-08-02 23:20:53 -04:00
ea03c9dcd2
Better per model memory usage estimations.
2024-08-02 18:09:24 -04:00
3a9ee995cf
Tweak regular SD memory formula.
2024-08-02 17:34:30 -04:00
47da42d928
Better Flux vram estimation.
2024-08-02 17:02:35 -04:00
17bbd83176
Fix bug loading flac workflows when they contain the = character.
2024-08-02 13:14:28 -04:00
bfb52de866
Lower SAG scale step for finer control ( #4158 )
...
* Lower SAG step for finer control
Since the introduction of cfg++, which uses very low cfg values, a step of 0.1 in SAG might be too high for finer control. Even a SAG of 0.1 can be too high when cfg is only 0.6, so I changed the step to 0.01.
* Lower PAG step as well.
* Update nodes_sag.py
2024-08-02 10:29:03 -04:00
eca962c6da
Add FluxGuidance node.
...
This lets you adjust the guidance on the dev model, a parameter that is
passed to the diffusion model.
2024-08-02 10:25:49 -04:00
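Conceptually, the node attaches a guidance scalar to the conditioning so that it reaches the diffusion model as an extra input. A hypothetical, framework-free sketch (the list-of-pairs layout is an assumption about the conditioning format, not ComfyUI's actual API):

```python
def apply_flux_guidance(conditioning: list, guidance: float) -> list:
    """Sketch: return conditioning with a guidance value attached.

    Assumption: each conditioning entry is a (tensor, options_dict)
    pair; the guidance scalar is stored in the options so it can later
    be handed to the diffusion model.
    """
    out = []
    for cond, opts in conditioning:
        new_opts = dict(opts)  # copy so the input is left untouched
        new_opts["guidance"] = guidance
        out.append((cond, new_opts))
    return out
```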
c1696cd1b5
Add missing import ( #4174 )
2024-08-02 09:34:12 -04:00
369f459b20
Fix ComfyUI no longer working on old pytorch.
2024-08-01 22:20:24 -04:00
ce9ac2fe05
Fix clip_g/clip_l mixup ( #4168 )
2024-08-01 21:40:56 -04:00
e638f2858a
Hack to make all resolutions work on Flux models.
2024-08-01 21:39:18 -04:00
a531001cc7
Add CLIPTextEncodeFlux.
2024-08-01 18:53:25 -04:00
d420bc792a
Tweak the memory usage formulas for Flux and SD.
2024-08-01 17:53:45 -04:00
d965474aaa
Make ComfyUI split batches a higher priority than weight offload.
2024-08-01 16:39:59 -04:00
1c61361fd2
Fast preview support for Flux.
2024-08-01 16:28:11 -04:00
a6decf1e62
Fix bfloat16 potentially not being enabled on mps.
2024-08-01 16:18:44 -04:00
48eb1399c0
Try to fix mac issue.
2024-08-01 13:41:27 -04:00
b4f6ebb2e8
Rename UNETLoader node to "Load Diffusion Model".
2024-08-01 13:33:30 -04:00
d7430a1651
Add a way to load the diffusion model in fp8 with UNETLoader node.
2024-08-01 13:30:51 -04:00
f2b80f95d2
Better Mac support on flux model.
2024-08-01 13:10:50 -04:00
1aa9cf3292
Make lowvram more aggressive on low memory machines.
2024-08-01 12:11:57 -04:00
2f88d19ef3
Add link to Flux examples to readme.
2024-08-01 11:48:19 -04:00
eb96c3bd82
Fix .sft file loading (they are safetensors files).
2024-08-01 11:32:58 -04:00
5f98de7697
Load flux t5 in fp8 if weights are in fp8.
2024-08-01 11:05:56 -04:00
8d34211a7a
Fix ComfyUI no longer working on old python versions.
2024-08-01 09:57:20 -04:00
1589b58d3e
Basic Flux Schnell and Flux Dev model implementation.
2024-08-01 09:49:29 -04:00
7ad574bffd
Mac supports bf16; just make sure you are using the latest pytorch.
2024-08-01 09:42:17 -04:00
e2382b6adb
Make lowvram less aggressive when there are large amounts of free memory.
2024-08-01 03:58:58 -04:00
c24f897352
Fix to get fp8 working on T5 base.
2024-07-31 02:00:19 -04:00
a5991a7aa6
Fix hunyuan dit text encoder weights always being in fp32.
2024-07-31 01:34:57 -04:00
2c038ccef0
Lower CLIP memory usage by a bit.
2024-07-31 01:32:35 -04:00
b85216a3c0
Lower T5 memory usage by a few hundred MB.
2024-07-31 00:52:34 -04:00
82cae45d44
Fix potential issue with non clip text embeddings.
2024-07-30 14:41:13 -04:00