4d55f16ae8
Use enum list for --fast options ( #7024 )
2025-03-01 02:37:35 -05:00
cf0b549d48
--fast now takes a number as an argument to indicate how fast you want it to be.
...
The idea is that you can indicate how much quality vs speed you want.
At the moment:
--fast 2 enables fp16 accumulation if your pytorch supports it.
--fast 5 enables fp8 matrix mult on fp8 models and the optimization above.
--fast without a number enables all optimizations (sketched below).
2025-02-28 02:48:20 -05:00
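The level gating described in the --fast entry above could look roughly like the following sketch. It is purely illustrative: the function name and the constants are hypothetical, and the actual ComfyUI implementation may differ.

```python
# Illustrative sketch of the --fast level gating (hypothetical names, not the
# actual ComfyUI code). A bare --fast is treated as "enable everything".
FAST_MAX = 10  # assumed sentinel used when --fast is given without a number

def enabled_optimizations(fast_level: int) -> set:
    opts = set()
    if fast_level >= 2:
        opts.add("fp16_accumulation")   # only takes effect if your pytorch supports it
    if fast_level >= 5:
        opts.add("fp8_matrix_mult")     # only applies to fp8 models
    return opts

print(enabled_optimizations(2))         # {'fp16_accumulation'}
print(enabled_optimizations(FAST_MAX))  # both optimizations enabled
```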
eb4543474b
Use fp16 as the intermediate dtype for fp8 weights with --fast if supported.
2025-02-28 02:17:50 -05:00
1804397952
Use fp16 if checkpoint weights are fp16 and the model supports it.
2025-02-27 16:39:57 -05:00
89253e9fe5
Support Cambricon MLU ( #6964 )
...
Co-authored-by: huzhan <huzhan@cambricon.com>
2025-02-26 20:45:13 -05:00
96d891cb94
Speed up some models by not upcasting bfloat16 to float32 on mac.
2025-02-24 05:41:32 -05:00
ace899e71a
Prioritize fp16 compute when using allow_fp16_accumulation
2025-02-23 04:45:54 -05:00
072db3bea6
Assume the mac black image bug won't be fixed before v16.
2025-02-21 20:24:07 -05:00
a6deca6d9a
Latest mac still has the black image bug.
2025-02-21 20:14:30 -05:00
41c30e92e7
Let all model memory be offloaded on nvidia.
2025-02-21 06:32:21 -05:00
12da6ef581
Apparently directml supports fp16.
2025-02-20 09:30:24 -05:00
b07258cef2
Fix typo.
...
Let me know if this slows things down on 2000 series and below.
2025-02-18 07:28:33 -05:00
31e54b7052
Improve AMD arch detection.
2025-02-17 04:53:40 -05:00
8c0bae50c3
bf16 manual cast works on old AMD.
2025-02-17 04:42:40 -05:00
530412cb9d
Refactor torch version checks to be more future proof.
2025-02-17 04:36:45 -05:00
e2919d38b4
Disable bf16 on AMD GPUs that don't support it.
2025-02-16 05:46:10 -05:00
1cd6cd6080
Disable pytorch attention in VAE for AMD.
2025-02-14 05:42:14 -05:00
d7b4bf21a2
Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
...
I'm not sure which arches are supported yet. If you see improvements in
memory usage while using --use-pytorch-cross-attention on your AMD GPU let
me know and I will add it to the list.
2025-02-14 04:18:14 -05:00
8773ccf74d
Better memory estimation for ROCm cards that support mem efficient attention.
...
There is no way to check whether the card actually supports it, so it is
assumed to be supported if you use --use-pytorch-cross-attention.
2025-02-13 08:32:36 -05:00
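A minimal sketch of the assumption described in the entry above (the helper name is hypothetical; the real check lives in ComfyUI's model management code):

```python
# Sketch: ROCm gives no reliable way to query memory efficient attention support,
# so trust the user's flag and size the memory estimate accordingly.
def assume_mem_efficient_attention(is_rocm: bool, use_pytorch_cross_attention: bool) -> bool:
    return is_rocm and use_pytorch_cross_attention
```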
1d5d6586f3
Fix ruff.
2025-02-12 06:49:16 -05:00
35740259de
mix_ascend_bf16_infer_err ( #6794 )
2025-02-12 06:48:11 -05:00
b124256817
Fix for running via DirectML ( #6542 )
...
* Fix for running via DirectML
Fix DirectML empty image generation issue with Flux1. Add CPU fallback for unsupported path. Verified the model works on AMD GPUs.
* fix formatting
* update causal mask calculation
2025-02-11 17:11:32 -05:00
af4b7c91be
Make --force-fp16 actually force the diffusion model to be fp16.
2025-02-11 08:33:09 -05:00
43a74c0de1
Allow FP16 accumulation with --fast ( #6453 )
...
Currently only applies to PyTorch nightly releases (>= 20250208).
2025-02-08 17:00:56 -05:00
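A defensive sketch of turning the feature on, assuming the allow_fp16_accumulation backend flag mentioned elsewhere in this log and exposed by recent nightlies (older builds simply reject the attribute):

```python
import torch

# Sketch: enable fp16 accumulation for matmuls when the running torch build
# exposes the flag (recent nightlies); leave older builds untouched.
def try_enable_fp16_accumulation() -> bool:
    try:
        torch.backends.cuda.matmul.allow_fp16_accumulation = True
        return True
    except Exception:
        return False  # older torch: the backend rejects the unknown attribute
```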
255edf2246
Lower minimum ratio of loaded weights on Nvidia.
2025-01-27 05:26:51 -05:00
67feb05299
Remove redundant code.
2025-01-25 19:04:53 -05:00
d45ebb63f6
Remove old unused function.
2025-01-04 07:20:54 -05:00
9e9c8a1c64
Clear cache as often on AMD as on Nvidia.
...
I think the issue this was working around has been solved.
If you notice that this change slows things down or causes stutters on
your AMD GPU with ROCm on Linux please report it.
2025-01-02 08:44:16 -05:00
160ca08138
Use python 3.9 in launch test instead of 3.8
...
Fix ruff check.
2024-12-26 20:05:54 -05:00
c4bfdba330
Support ascend npu ( #5436 )
...
* support ascend npu
Co-authored-by: YukMingLaw <lymmm2@163.com>
Co-authored-by: starmountain1997 <guozr1997@hotmail.com>
Co-authored-by: Ginray <ginray0215@gmail.com>
2024-12-26 19:36:50 -05:00
19a64d6291
Cleanup some mac related code.
2024-12-25 05:32:51 -05:00
b486885e08
Disable bfloat16 on older mac.
2024-12-25 05:18:50 -05:00
0229228f3f
Clean up the VAE dtypes code.
2024-12-25 04:50:34 -05:00
15564688ed
Add a try/except block so a weird torch version won't cause a crash.
2024-12-23 03:22:48 -05:00
c6b9c11ef6
Add oneAPI device selector for xpu and some other changes. ( #6112 )
...
* Add oneAPI device selector and some other minor changes.
* Fix device selector variable name.
* Flip minor version check sign.
* Undo changes to README.md.
2024-12-23 03:18:32 -05:00
e44d0ac7f7
Make --novram completely offload weights.
...
This flag is mainly used for testing the weight offloading; it shouldn't
actually be used in practice.
Remove useless import.
2024-12-23 01:51:08 -05:00
57f330caf9
Relax minimum ratio of weights loaded in memory on nvidia.
...
This should make it possible to do higher res images/longer videos by
further offloading weights to CPU memory.
Please report an issue if this slows down things on your system.
2024-12-22 03:06:37 -05:00
d7969cb070
Replace print with logging ( #6138 )
...
* Replace print with logging
* nit
* nit
* nit
* nit
* nit
* nit
2024-12-20 16:24:55 -05:00
2dda7c11a3
More proper fix for the memory issue.
2024-12-19 16:21:56 -05:00
3ad3248ad7
Fix lowvram bug when using a model multiple times in a row.
...
The memory system would load an extra 64MB each time until either the
model was completely in memory or it ran out of memory (OOM).
2024-12-19 16:04:56 -05:00
37e5390f5f
Add --use-sage-attention to enable SageAttention.
...
You need to have the library installed first.
2024-12-18 01:56:10 -05:00
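A sketch of how an optional dependency like this is typically gated (the sageattn call signature below is an assumption based on the library's q/k/v interface, not taken from the ComfyUI code):

```python
import torch

# Sketch: use SageAttention when the library is installed, otherwise fall back
# to PyTorch's built-in scaled_dot_product_attention.
try:
    from sageattention import sageattn
    HAVE_SAGE = True
except ImportError:
    HAVE_SAGE = False

def attention(q, k, v, is_causal=False):
    if HAVE_SAGE:
        return sageattn(q, k, v, is_causal=is_causal)  # assumed signature
    return torch.nn.functional.scaled_dot_product_attention(q, k, v, is_causal=is_causal)
```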
d9d7f3c619
Lint all unused variables ( #5989 )
...
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
fd5dfb812c
Set initial load devices for te and model to mps device on mac.
2024-12-12 06:00:31 -05:00
57e8bf6a9f
Fix case where a memory leak could cause crash.
...
Now, if code mistakenly keeps references to a model object when it should
not, the only symptom will be endless prints in the log instead of the
next workflow crashing ComfyUI.
2024-12-02 19:49:49 -05:00
79d5ceae6e
Improved memory management. ( #5450 )
...
* Less fragile memory management.
* Fix issue.
* Remove useless function.
* Prevent and detect some types of memory leaks.
* Run garbage collector when switching workflow if needed.
* Fix issue.
2024-12-02 14:39:34 -05:00
61196d8857
Add option to run inference on the diffusion model in fp32 and fp64.
2024-11-25 05:00:23 -05:00
1af4a47fd1
Bump up mac version for attention upcast bug workaround.
2024-10-31 15:15:31 -04:00
471cd3eace
fp8 casting is fast on GPUs that support fp8 compute.
2024-10-20 00:54:47 -04:00
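fp8 tensor-core compute is available starting with NVIDIA Ada (sm_89) and Hopper (sm_90); a hypothetical capability check for the entry above could look like this (not the actual ComfyUI code):

```python
import torch

# Sketch: fp8 compute is available on compute capability 8.9 (Ada) and 9.0+
# (Hopper) NVIDIA GPUs. Hypothetical helper, not the actual ComfyUI check.
def supports_fp8_compute(device=None) -> bool:
    if not torch.cuda.is_available():
        return False
    major, minor = torch.cuda.get_device_capability(device)
    return major > 8 or (major == 8 and minor >= 9)
```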
67158994a4
Use the lowvram cast_to function for everything.
2024-10-17 17:25:56 -04:00
4b2f0d9413
Increase maximum macOS version to 15.0.1 when forcing upcast attention ( #5191 )
2024-10-09 22:21:41 -04:00