9953f22fce
Add --fast argument to enable experimental optimizations.
...
Optimizations that might break things or lower quality will be put behind
this flag first, and may be enabled by default in the future.
Currently the only optimization is float8_e4m3fn matrix multiplication on
RTX 4000/Ada series Nvidia cards or later. If you have one of these cards you
will see a speed boost when using flux in fp8_e4m3fn, for example.
2024-08-20 11:55:51 -04:00
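A minimal sketch of the hardware gate such a flag implies (the function name is hypothetical, not ComfyUI's actual code): float8_e4m3fn matmul needs compute capability 8.9 (Ada) or newer.

```python
# Hypothetical sketch: fp8 matmul requires an Ada-class (SM 8.9+) NVIDIA GPU.
import torch

def supports_fp8_compute(device=None):
    if not torch.cuda.is_available():
        return False
    props = torch.cuda.get_device_properties(device)
    return (props.major, props.minor) >= (8, 9)  # Ada is (8, 9), Hopper (9, 0)
```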
1b3eee672c
Fix potential issue with multi devices.
2024-08-20 10:46:36 -04:00
045377ea89
Add a --reserve-vram argument for when you don't want ComfyUI to use all of your vram.
...
--reserve-vram 1.0, for example, will make ComfyUI try to keep 1GB of vram free.
This can also be useful if workflows are failing because of OOM errors, but
in that case please report whether --reserve-vram improves your situation.
2024-08-19 17:16:18 -04:00
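A minimal sketch of the flag's arithmetic (the names are assumptions, not ComfyUI's internals): the gigabyte value is converted to bytes and withheld from the loader's free-VRAM budget.

```python
# Hypothetical sketch: subtract the reserved amount from the free-VRAM estimate.
def reserved_vram_bytes(reserve_vram_gb):
    return int(reserve_vram_gb * 1024 ** 3)

def usable_vram(free_vram_bytes, reserve_vram_gb):
    # --reserve-vram 1.0 keeps roughly 1GB out of the model-loading budget
    return max(0, free_vram_bytes - reserved_vram_bytes(reserve_vram_gb))
```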
be0726c1ed
Remove duplication.
2024-08-19 15:26:50 -04:00
39fb74c5bd
Fix bug when model cannot be partially unloaded.
2024-08-13 03:57:55 -04:00
74e124f4d7
Fix some issues with TE being in lowvram mode.
2024-08-12 23:42:21 -04:00
b8ffb2937f
Memory tweaks.
2024-08-12 15:07:11 -04:00
ad76574cb8
Fix some potential issues with the previous commits.
2024-08-12 00:23:29 -04:00
5c69cde037
Load the TE model straight to vram if certain conditions are met.
2024-08-11 23:52:43 -04:00
1de69fe4d5
Fix some issues with inference slowing down.
2024-08-10 16:21:25 -04:00
55ad9d5f8c
Fix regression.
2024-08-09 03:36:40 -04:00
037c38eb0f
Try to improve inference speed on some machines.
2024-08-08 17:29:27 -04:00
66d4233210
Fix.
2024-08-08 15:16:51 -04:00
08f92d55e9
Partial model shift support.
2024-08-08 14:45:06 -04:00
6969fc9ba4
Make supported_dtypes a priority list.
2024-08-07 15:00:06 -04:00
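Interpreted as a priority list, the first dtype the device supports wins. A hedged sketch (helper names are assumed):

```python
# Hypothetical sketch: walk the list in preference order, return the first
# dtype the device can actually run.
import torch

def pick_dtype(supported_dtypes, device_supports):
    for dtype in supported_dtypes:  # e.g. [torch.bfloat16, torch.float16]
        if device_supports(dtype):
            return dtype
    return torch.float32  # safe fallback
```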
b334605a66
Fix OOMs happening in some cases.
...
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
2024-08-06 13:36:04 -04:00
c14ac98fed
Unload models and load them back in lowvram mode when there is no free vram.
2024-08-06 03:22:39 -04:00
8edbcf5209
Improve performance on some lowend GPUs.
2024-08-05 16:24:04 -04:00
f7a5107784
Fix crash.
2024-08-03 16:55:38 -04:00
91be9c2867
Tweak lowvram memory formula.
2024-08-03 16:44:50 -04:00
03c5018c98
Lower lowvram memory to 1/3 of free memory.
2024-08-03 15:14:07 -04:00
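This commit and the "Cap lowvram to half of free memory" commit below tweak the same budget; a hedged sketch of the formula (the function name is hypothetical):

```python
# Hypothetical sketch: cap the weights kept resident in VRAM at a third of
# free memory (previously half), leaving more headroom for activations.
def lowvram_weight_budget(free_vram_bytes):
    return free_vram_bytes // 3  # earlier commit: free_vram_bytes // 2
```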
2ba5cc8b86
Fix some issues.
2024-08-03 15:06:40 -04:00
1e68002b87
Cap lowvram to half of free memory.
2024-08-03 14:50:20 -04:00
ba9095e5bd
Automatically use fp8 for diffusion model weights if:
...
* The checkpoint contains weights in fp8.
* There isn't enough memory to load the diffusion model in GPU vram.
2024-08-03 13:45:19 -04:00
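A hedged sketch of the decision described above (names are hypothetical): fall back to the checkpoint's fp8 dtype only when both conditions hold.

```python
# Hypothetical sketch of the auto-fp8 decision.
import torch

FP8_DTYPES = (torch.float8_e4m3fn, torch.float8_e5m2)

def pick_diffusion_dtype(weight_dtype, model_size_bytes, free_vram_bytes):
    if weight_dtype in FP8_DTYPES and model_size_bytes > free_vram_bytes:
        return weight_dtype  # keep the fp8 storage dtype to fit in VRAM
    return torch.float16
```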
d965474aaa
Make ComfyUI split batches a higher priority than weight offload.
2024-08-01 16:39:59 -04:00
a6decf1e62
Fix bfloat16 potentially not being enabled on mps.
2024-08-01 16:18:44 -04:00
1aa9cf3292
Make lowvram more aggressive on low memory machines.
2024-08-01 12:11:57 -04:00
5f98de7697
Load flux t5 in fp8 if weights are in fp8.
2024-08-01 11:05:56 -04:00
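A hedged sketch of the dtype detection (the probe key is hypothetical): keep the T5 text encoder in fp8 when the checkpoint already stores fp8 weights, instead of upcasting on load.

```python
# Hypothetical sketch: pick the T5 load dtype from what the checkpoint stores.
import torch

def t5_load_dtype(state_dict, probe_key):
    w = state_dict.get(probe_key)
    if w is not None and w.dtype in (torch.float8_e4m3fn, torch.float8_e5m2):
        return w.dtype
    return torch.float16
```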
7ad574bffd
Mac supports bf16; just make sure you are using the latest pytorch.
2024-08-01 09:42:17 -04:00
e2382b6adb
Make lowvram less aggressive when there are large amounts of free memory.
2024-08-01 03:58:58 -04:00
6425252c4f
Use fp16 as the default dtype for the audio VAE.
2024-06-16 13:12:54 -04:00
0ec513d877
Add a --force-channels-last argument to run inference models in channels-last mode.
2024-06-15 01:08:12 -04:00
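What the flag implies, as a minimal sketch (the model here is a stand-in, not ComfyUI's loader code):

```python
# Hypothetical sketch: convert weights to channels-last memory format, which
# some backends execute faster for convolution-heavy models.
import torch

model = torch.nn.Conv2d(4, 4, 3)  # stand-in for the inference model
model.to(memory_format=torch.channels_last)
```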
5eb98f0092
Exempt IPEX from non_blocking previews, fixing segmentation faults. ( #3708 )
2024-06-13 18:51:14 -04:00
0e49211a11
Load the SD3 T5xxl model in the same dtype stored in the checkpoint.
2024-06-11 17:03:26 -04:00
104fcea0c8
Add function to get the list of currently loaded models.
2024-06-05 23:25:16 -04:00
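A hedged sketch of such an accessor (names are assumptions, not the actual ComfyUI API):

```python
# Hypothetical sketch: expose the module-level registry of loaded models.
current_loaded_models = []  # stand-in for the internal list of loaded entries

def loaded_models():
    return [entry.model for entry in current_loaded_models]
```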
b1fd26fe9e
Make pytorch xpu use flash or mem-efficient attention.
2024-06-04 17:44:14 -04:00
b249862080
Add an annoying print to a function I want to remove.
2024-06-01 12:47:31 -04:00
bf3e334d46
Disable non_blocking when using --deterministic or directml.
2024-05-30 11:07:38 -04:00
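A hedged sketch of the gating (flag names are assumed):

```python
# Hypothetical sketch: skip non_blocking copies on paths where they misbehave.
def supports_non_blocking(device, deterministic=False, is_directml=False):
    if deterministic or is_directml:
        return False  # non_blocking transfers caused issues on these paths
    return device.type == "cuda"
```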
0920e0e5fe
Remove some unused imports.
2024-05-27 19:08:27 -04:00
6c23854f54
Fix OSX latent2rgb previews.
2024-05-22 13:56:28 -04:00
8508df2569
Work around black image bug on macOS 14.5 by forcing attention upcasting.
2024-05-21 16:56:33 -04:00
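A hedged sketch of what "forcing attention upcasting" means (not the actual workaround code): run the attention matmuls and softmax in fp32, then cast back to the working dtype.

```python
# Hypothetical sketch: compute attention in fp32 to avoid the black-image bug.
import torch

def attention_upcast(q, k, v):
    out_dtype = q.dtype
    q, k, v = q.float(), k.float(), v.float()
    scores = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)
    return (torch.softmax(scores, dim=-1) @ v).to(out_dtype)
```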
09e069ae6c
Log the pytorch version.
2024-05-20 06:22:29 -04:00
19300655dd
Don't automatically switch to lowvram mode on GPUs with low memory.
2024-05-17 00:31:32 -04:00
f509c6fe21
Fix Intel GPU memory allocation accuracy and documentation update. ( #3459 )
...
* Change calculation of memory total to be more accurate, allocated is actually smaller than reserved.
* Update README.md install documentation for Intel GPUs.
2024-05-12 06:36:30 -04:00
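A hedged sketch of the more accurate estimate (the torch.xpu calls are assumed available via IPEX): base free memory on reserved bytes, which include the caching allocator's pool, rather than allocated bytes.

```python
# Hypothetical sketch: free XPU memory as total minus *reserved* bytes, since
# allocated undercounts what the caching allocator actually holds.
import torch

def xpu_free_memory(dev):
    stats = torch.xpu.memory_stats(dev)
    reserved = stats["reserved_bytes.all.current"]
    total = torch.xpu.get_device_properties(dev).total_memory
    return total - reserved
```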
fa6dd7e5bb
Fix lowvram issue with saving checkpoints.
...
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
2024-05-12 06:13:45 -04:00
49c20cdc70
No longer necessary.
2024-05-12 05:34:43 -04:00
e1489ad257
Fix issue with lowvram mode breaking model saving.
2024-05-11 21:55:20 -04:00
a56d02efc7
Change torch.xpu to ipex.optimize, xpu device initialization and remove workaround for text node issue from older IPEX. ( #3388 )
2024-05-02 03:26:50 -04:00
258dbc06c3
Fix some memory related issues.
2024-04-14 12:08:58 -04:00
0a03009808
Fix issue with controlnet models getting loaded multiple times.
2024-04-06 18:38:39 -04:00