61196d8857
Add an option to run inference on the diffusion model in fp32 and fp64.
2024-11-25 05:00:23 -05:00
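A minimal sketch of what running the diffusion model at a higher precision can look like (an illustration with hypothetical names, not the actual ComfyUI code):

    import torch

    def run_in_precision(model: torch.nn.Module, latents: torch.Tensor,
                         dtype: torch.dtype = torch.float32) -> torch.Tensor:
        # Upcast the weights; fp64 would use torch.float64 instead.
        model = model.to(dtype=dtype)
        # Inputs must match the weight dtype or the matmuls will fail.
        latents = latents.to(dtype=dtype)
        with torch.no_grad():
            return model(latents)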
1af4a47fd1
Bump the macOS version ceiling for the attention upcast bug workaround.
2024-10-31 15:15:31 -04:00
471cd3eace
fp8 casting is fast on GPUs that support fp8 compute.
2024-10-20 00:54:47 -04:00
67158994a4
Use the lowvram cast_to function for everything.
2024-10-17 17:25:56 -04:00
4b2f0d9413
Increase maximum macOS version to 15.0.1 when forcing upcast attention (#5191)
2024-10-09 22:21:41 -04:00
e38c94228b
Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
...
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
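As an illustration of the idea (not the node's actual implementation): fp8 storage uses roughly half the bytes of fp16, and on hardware without fp8 compute the weights can be dequantized just-in-time, whereas fp8_e4m3fn_fast keeps the matrix multiplication itself in fp8 on supported GPUs.

    import torch

    # fp8 storage: ~half the bytes of fp16, a quarter of fp32.
    w_fp8 = torch.randn(4096, 4096).to(torch.float8_e4m3fn)
    x = torch.randn(1, 4096)
    # Fallback path for hardware without fp8 compute:
    # dequantize just-in-time and do the matmul at a higher precision.
    y = x @ w_fp8.to(torch.float32).t()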
dc96a1ae19
Load controlnet in fp8 if weights are in fp8.
2024-09-21 04:50:12 -04:00
de8e8e3b0d
Stop the XPU PyTorch nightly build from calling optimize(), which doesn't exist there. (#4978)
2024-09-19 05:11:42 -04:00
c7427375ee
Prioritize freeing partially offloaded models.
2024-09-04 19:47:32 -04:00
8d31a6632f
Speed up inference on Nvidia 10 series GPUs on Linux.
2024-09-01 17:29:31 -04:00
b643eae08b
Make minimum_inference_memory() depend on --reserve-vram
2024-09-01 01:18:34 -04:00
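A plausible shape of this change (an assumption; the baseline value and names are hypothetical):

    def minimum_inference_memory(reserve_bytes: int = 0) -> int:
        # The fixed inference floor grows by whatever --reserve-vram requests.
        base = 1024 ** 3  # hypothetical 1GB baseline
        return base + reserve_bytes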
935ae153e1
Cleanup.
2024-08-30 12:53:59 -04:00
38c22e631a
Fix a case where a model was not properly unloaded in merging workflows.
2024-08-27 19:03:51 -04:00
5d8bbb7281
Cleanup.
2024-08-23 04:06:27 -04:00
2c1d2375d6
Fix.
2024-08-23 04:04:55 -04:00
64ccb3c7e3
Rework the IPEX check ahead of XPU's inclusion in PyTorch upstream and do a bit more optimization of ipex.optimize(). (#4562)
2024-08-23 03:59:57 -04:00
7c6bb84016
Code cleanups.
2024-08-22 17:05:12 -04:00
c54d3ed5e6
Fix issue with models staying loaded in memory.
2024-08-22 15:58:20 -04:00
7b70b266d8
Generalize macOS version check for force-upcast-attention (#4548)
...
This code automatically forces attention upcasting for macOS versions 14.5 and 14.6. My machine returns the string "14.6.1" from `platform.mac_ver()[0]`, so this generalizes the comparison to catch point releases as well.
I am running macOS Sonoma 14.6.1 (the latest version) and was seeing black images from previously functional workflows after recent software updates. This PR solved the issue for me.
See comfyanonymous/ComfyUI#3521
2024-08-22 13:24:21 -04:00
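The generalized check described above might look like this (an illustrative sketch using a tuple comparison, not the project's exact code):

    import platform

    def needs_attention_upcast() -> bool:
        ver = platform.mac_ver()[0]  # e.g. "14.6.1" on Sonoma
        if not ver:
            return False             # not running on macOS
        parts = tuple(int(p) for p in ver.split("."))
        # Match any 14.5.x / 14.6.x release instead of only "14.5" / "14.6".
        return (14, 5) <= parts[:2] <= (14, 6)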
843a7ff70c
fp16 is actually faster than fp32 on a GTX 1080.
2024-08-21 23:23:50 -04:00
a60620dcea
Fix slow performance on 10 series Nvidia GPUs.
2024-08-21 16:39:02 -04:00
03ec517afb
Remove a useless line and adjust the default reserved VRAM on Windows.
2024-08-21 00:47:19 -04:00
9953f22fce
Add --fast argument to enable experimental optimizations.
...
Optimizations that might break things or lower quality will be put behind
this flag first and may be enabled by default in the future.
Currently the only optimization is float8_e4m3fn matrix multiplication on
Nvidia 4000 series (Ada) cards or later. If you have one of these cards you
will see a speed boost when using the fp8_e4m3fn flux model, for example.
2024-08-20 11:55:51 -04:00
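The gating described above presumably combines the flag with a hardware capability check; an assumed sketch (Ada corresponds to CUDA compute capability 8.9):

    import torch

    def fp8_matmul_enabled(fast_flag: bool) -> bool:
        if not fast_flag or not torch.cuda.is_available():
            return False
        major, minor = torch.cuda.get_device_capability()
        return (major, minor) >= (8, 9)  # 4000 series (Ada) and later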
1b3eee672c
Fix potential issue with multi devices.
2024-08-20 10:46:36 -04:00
045377ea89
Add a --reserve-vram argument for when you don't want ComfyUI to use all of your VRAM.
...
--reserve-vram 1.0, for example, will make ComfyUI try to keep 1GB of VRAM free.
This can also be useful if workflows are failing because of OOM errors; in
that case, please report whether --reserve-vram improves your situation.
2024-08-19 17:16:18 -04:00
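In practice the reserve amounts to subtracting a fixed number of bytes from the free VRAM reported by the driver; a rough sketch (function name hypothetical):

    import torch

    def usable_vram(reserve_gb: float = 1.0) -> int:
        free_bytes, _total_bytes = torch.cuda.mem_get_info()
        reserved = int(reserve_gb * 1024 ** 3)  # --reserve-vram is given in GB
        return max(free_bytes - reserved, 0)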
be0726c1ed
Remove duplication.
2024-08-19 15:26:50 -04:00
39fb74c5bd
Fix a bug when a model cannot be partially unloaded.
2024-08-13 03:57:55 -04:00
74e124f4d7
Fix some issues with TE being in lowvram mode.
2024-08-12 23:42:21 -04:00
b8ffb2937f
Memory tweaks.
2024-08-12 15:07:11 -04:00
ad76574cb8
Fix some potential issues with the previous commits.
2024-08-12 00:23:29 -04:00
5c69cde037
Load TE model straight to vram if certain conditions are met.
2024-08-11 23:52:43 -04:00
1de69fe4d5
Fix some issues with inference slowing down.
2024-08-10 16:21:25 -04:00
55ad9d5f8c
Fix regression.
2024-08-09 03:36:40 -04:00
037c38eb0f
Try to improve inference speed on some machines.
2024-08-08 17:29:27 -04:00
66d4233210
Fix.
2024-08-08 15:16:51 -04:00
08f92d55e9
Partial model shift support.
2024-08-08 14:45:06 -04:00
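A minimal sketch of partial model shifting (purely illustrative): instead of moving the model as a single unit, keep as many blocks on the GPU as a memory budget allows and leave the rest on the CPU.

    import torch

    def partial_load(blocks: torch.nn.ModuleList, budget_bytes: int) -> int:
        used = 0
        for block in blocks:
            size = sum(p.numel() * p.element_size() for p in block.parameters())
            if used + size > budget_bytes:
                block.to("cpu")   # over budget: this block stays offloaded
            else:
                block.to("cuda")
                used += size
        return used               # bytes actually placed on the GPU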
6969fc9ba4
Make supported_dtypes a priority list.
2024-08-07 15:00:06 -04:00
b334605a66
Fix OOMs happening in some cases.
...
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
2024-08-06 13:36:04 -04:00
c14ac98fed
Unload models and load them back in lowvram mode if there is no free VRAM.
2024-08-06 03:22:39 -04:00
8edbcf5209
Improve performance on some lowend GPUs.
2024-08-05 16:24:04 -04:00
f7a5107784
Fix crash.
2024-08-03 16:55:38 -04:00
91be9c2867
Tweak lowvram memory formula.
2024-08-03 16:44:50 -04:00
03c5018c98
Lower lowvram memory to 1/3 of free memory.
2024-08-03 15:14:07 -04:00
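A worked example of the budget change in this commit (illustrative numbers):

    free_vram_gb = 12
    lowvram_budget_gb = free_vram_gb / 3  # 4GB, down from the earlier
                                          # half-of-free cap of 6GB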
2ba5cc8b86
Fix some issues.
2024-08-03 15:06:40 -04:00
1e68002b87
Cap lowvram to half of free memory.
2024-08-03 14:50:20 -04:00
ba9095e5bd
Automatically use fp8 for diffusion model weights if:
...
The checkpoint contains weights in fp8.
There isn't enough memory to load the diffusion model into GPU VRAM.
2024-08-03 13:45:19 -04:00
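A sketch of the decision described above (assumed structure, hypothetical names): fp8 weights are kept only when both conditions hold.

    import torch

    def pick_weight_dtype(checkpoint_dtype: torch.dtype,
                          model_bytes: int, free_vram_bytes: int) -> torch.dtype:
        ckpt_is_fp8 = checkpoint_dtype in (torch.float8_e4m3fn, torch.float8_e5m2)
        fits = model_bytes <= free_vram_bytes
        if ckpt_is_fp8 and not fits:
            return checkpoint_dtype  # keep the fp8 weights as-is
        return torch.float16         # otherwise load at the usual precision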
d965474aaa
Make ComfyUI prioritize splitting batches over offloading weights.
2024-08-01 16:39:59 -04:00
a6decf1e62
Fix bfloat16 potentially not being enabled on mps.
2024-08-01 16:18:44 -04:00
1aa9cf3292
Make lowvram more aggressive on low memory machines.
2024-08-01 12:11:57 -04:00
5f98de7697
Load flux t5 in fp8 if weights are in fp8.
2024-08-01 11:05:56 -04:00