5a8a48931a
remove attention abstraction ( #5324 )
2024-10-22 14:02:38 -04:00
8ce2a1052c
Optimizations to --fast and scaled fp8.
v0.2.4
2024-10-22 02:12:28 -04:00
f82314fcfc
Fix duplicate sigmas on beta scheduler.
2024-10-21 20:19:45 -04:00
0075c6d096
Mixed precision diffusion models with scaled fp8.
This change adds support for diffusion models where all the linear layers are scaled fp8 while the other weights stay in the original precision (see the sketch below).
2024-10-21 18:12:51 -04:00
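The entry above keeps only the linear weights in scaled fp8 (an fp8 tensor plus a separate scale) while everything else stays at its original precision. A minimal sketch of that idea, assuming per-tensor scales; ScaledFp8Linear and dequant_scaled_fp8 are illustrative names, not ComfyUI's actual API:

```python
import torch

def dequant_scaled_fp8(weight_fp8, scale, compute_dtype=torch.bfloat16):
    # A scaled-fp8 weight is stored as float8_e4m3fn values plus a scale tensor;
    # upcasting and multiplying by the scale recovers an approximation of the
    # original weight at compute time.
    return weight_fp8.to(compute_dtype) * scale.to(compute_dtype)

class ScaledFp8Linear(torch.nn.Module):
    """Illustrative linear layer: weight in scaled fp8, bias (and every
    non-linear weight elsewhere in the model) kept at the original precision."""
    def __init__(self, weight_fp8, scale, bias=None):
        super().__init__()
        self.weight_fp8 = weight_fp8   # torch.float8_e4m3fn tensor
        self.scale = scale             # e.g. a scalar float32 tensor
        self.bias = bias               # stays in the original precision

    def forward(self, x):
        weight = dequant_scaled_fp8(self.weight_fp8, self.scale, x.dtype)
        return torch.nn.functional.linear(x, weight, self.bias)
```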
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00
f9f9faface
Fixed model merging issue with scaled fp8.
2024-10-20 06:24:31 -04:00
471cd3eace
fp8 casting is fast on GPUs that support fp8 compute.
2024-10-20 00:54:47 -04:00
a68bbafddb
Support diffusion models with scaled fp8 weights.
2024-10-19 23:47:42 -04:00
73e3a9e676
Clamp output when rounding weights to prevent NaN.
2024-10-19 19:07:10 -04:00
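A minimal sketch of the fix described above, assuming the weights are being rounded to float8_e4m3fn (which has no inf, so out-of-range values end up as NaN); the helper name is made up for illustration:

```python
import torch

def round_weight_to_fp8(weight, dtype=torch.float8_e4m3fn):
    # Clamp to the dtype's finite range before casting; values outside the
    # representable range would otherwise overflow and show up as NaN.
    finfo = torch.finfo(dtype)
    return weight.clamp(min=finfo.min, max=finfo.max).to(dtype)
```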
518c0dc2fe
Add tooltips to LoraSave node.
2024-10-18 06:01:09 -04:00
ce0542e10b
Add a note to the README that Python 3.13 is not yet supported.
2024-10-17 19:27:37 -04:00
8473019d40
Pytorch can be shipped with numpy 2 now.
2024-10-17 19:15:17 -04:00
89f15894dd
Ignore more network related errors during websocket communication. ( #5269 )
Intermittent network issues during websocket communication should not crash the ComfyUI process.
Co-authored-by: Xiaodong Xie <xie.xiaodong@frever.com>
2024-10-17 18:31:45 -04:00
67158994a4
Use the lowvram cast_to function for everything.
2024-10-17 17:25:56 -04:00
7390ff3b1e
Add missing import.
2024-10-16 14:58:30 -04:00
0bedfb26af
Revert "Fix Transformers FutureWarning ( #5140 )"
This reverts commit 95b7cf9bbe.
2024-10-16 12:36:19 -04:00
f71cfd2687
Add an experimental node to sharpen latents.
Can be used with LatentApplyOperationCFG for interesting results.
2024-10-16 05:25:31 -04:00
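The commit does not spell out the exact kernel; as a rough illustration only, an unsharp-mask style sharpen on a latent could look like the following (function name and defaults are hypothetical):

```python
import torch.nn.functional as F

def sharpen_latent(latent, strength=0.5):
    # latent: [batch, channels, height, width]
    # Unsharp mask: subtract a blurred copy, then add the difference back.
    blurred = F.avg_pool2d(latent, kernel_size=3, stride=1, padding=1)
    return latent + strength * (latent - blurred)
```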
c695c4af7f
Frontend Manager: avoid redundant gh calls for static versions ( #5152 )
* Frontend Manager: avoid redundant gh calls for static versions
* Actually, removing the old tmpdir isn't needed; I tested, and the downloader code already handles this case well (also, rmdir was the wrong function anyway; shutil.rmtree would have been needed if the directory had content).
* add code comment
2024-10-16 03:35:37 -04:00
0dbba9f751
Add some latent operation nodes.
This is a port of the ModelSamplerTonemapNoiseTest from the experiments
repo.
To replicate that node use LatentOperationTonemapReinhard and
LatentApplyOperationCFG together.
2024-10-15 15:00:36 -04:00
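As a rough sketch of what a Reinhard-style tonemap on the CFG delta (cond - uncond) can look like; the real LatentOperationTonemapReinhard may normalize differently, so the details below are assumptions:

```python
import torch

def tonemap_reinhard(noise_pred, multiplier=1.0):
    # noise_pred: the CFG delta (cond - uncond), shape [batch, ...]
    flat = noise_pred.reshape(noise_pred.shape[0], -1)
    magnitude = flat.norm(dim=1, keepdim=True) + 1e-10   # per-sample magnitude
    direction = flat / magnitude
    scaled = magnitude * multiplier
    tonemapped = scaled / (1.0 + scaled)                  # Reinhard: x / (1 + x)
    return (direction * tonemapped).reshape(noise_pred.shape)
```

As the entry notes, plugging such an operation into LatentApplyOperationCFG reproduces the behavior of the old experimental node.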
f584758271
Cleanup some useless lines.
2024-10-14 21:02:39 -04:00
95b7cf9bbe
Fix Transformers FutureWarning ( #5140 )
* Update sd1_clip.py
Fix Transformers FutureWarning
* Update sd1_clip.py
Fix comment
2024-10-14 20:12:20 -04:00
191a0d56b4
Switch default packaging workflows to python 3.12
2024-10-13 06:59:31 -04:00
3c60ecd7a8
Fix fp8 ops staying enabled.
2024-10-12 14:10:13 -04:00
7ae6626723
Remove useless argument.
2024-10-12 07:16:21 -04:00
6632365e16
model_options consistency between functions.
weight_dtype -> dtype
2024-10-11 20:51:19 -04:00
ad07796777
🐛 Add device to variable c ( #5210 )
2024-10-11 20:37:50 -04:00
1b80895285
Make clip loader nodes support loading sd3 t5xxl in lower precision.
Add attention mask support in the SD3 text encoder code.
2024-10-10 15:06:15 -04:00
5f9d5a244b
Hotfix for the division-by-zero that occurs when memory_used_encode is 0 ( #5121 )
https://github.com/comfyanonymous/ComfyUI/issues/5069#issuecomment-2382656368
v0.2.3
2024-10-09 23:34:34 -04:00
14eba07acd
Update web content to release v1.3.11 ( #5189 )
* Update web content to release v1.3.11
* nit
2024-10-09 22:37:04 -04:00
4b2f0d9413
Increase maximum macOS version to 15.0.1 when forcing upcast attention ( #5191 )
2024-10-09 22:21:41 -04:00
25eac1d780
Change runner label for the new runners ( #5197 )
2024-10-09 20:08:57 -04:00
e38c94228b
Add a weight_dtype fp8_e4m3fn_fast to the Diffusion Model Loader node.
This is used to load weights in fp8 and use fp8 matrix multiplication.
2024-10-09 19:43:17 -04:00
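A minimal sketch of what loading weights in fp8 and using fp8 matrix multiplication can look like in PyTorch, assuming per-tensor scales and an fp8-capable GPU; torch._scaled_mm's exact signature and return value vary across PyTorch versions, and the helper below is illustrative, not the loader's actual code:

```python
import torch

def fp8_fast_linear(x, weight_fp8, scale_w, out_dtype=torch.bfloat16):
    # x: [M, K] activations; weight_fp8: [N, K] in torch.float8_e4m3fn;
    # scale_w: per-tensor float32 scale for the weight.
    if torch.cuda.is_available() and torch.cuda.get_device_capability() >= (8, 9):
        x_fp8 = x.to(torch.float8_e4m3fn)
        scale_x = torch.ones((), device=x.device)          # placeholder scale
        # _scaled_mm wants the second operand column-major, hence the .t()
        return torch._scaled_mm(x_fp8, weight_fp8.t(),
                                scale_a=scale_x, scale_b=scale_w,
                                out_dtype=out_dtype)
    # Fallback on hardware without fp8 compute: dequantize and matmul normally.
    return x.to(out_dtype) @ (weight_fp8.to(out_dtype) * scale_w).t()
```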
203942c8b2
Fix flux doras with diffusers keys.
2024-10-08 19:03:40 -04:00
3c72c89a52
Update folder_paths.py - try/catch for special file_name values ( #5187 )
Somehow managed to drop a file called "nul" into a Windows checkpoints subdirectory. This caused all sorts of havoc with many nodes that needed the list of checkpoints.
2024-10-08 15:04:32 -04:00
614377abd6
Update web content to release v1.2.64 ( #5124 )
2024-10-07 17:15:29 -04:00
8dfa0cc552
Make SD3 fast previews a little better.
2024-10-07 09:19:59 -04:00
e5ecdfdd2d
Make fast previews for SDXL a little better by adding a bias.
2024-10-06 19:27:04 -04:00
7d29fbf74b
Slightly improve the fast previews for flux by adding a bias.
2024-10-06 17:55:46 -04:00
2c641e64ad
IS_CHANGED should be a classmethod ( #5159 )
2024-10-06 05:47:51 -04:00
7d2467e830
Some minor cleanups.
2024-10-05 13:22:39 -04:00
6f021d8aa0
Let --verbose have an argument for the log level.
2024-10-04 10:05:34 -04:00
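A minimal sketch of that kind of argparse change; the choices and defaults below are assumptions rather than the flag's guaranteed behavior:

```python
import argparse
import logging

parser = argparse.ArgumentParser()
# "--verbose" alone bumps the level to DEBUG; "--verbose WARNING" (etc.)
# selects an explicit log level.
parser.add_argument("--verbose", nargs="?", const="DEBUG", default="INFO",
                    choices=["CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG"])

args = parser.parse_args()
logging.basicConfig(level=getattr(logging, args.verbose))
```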
d854ed0bcf
Allow using SD3-type text encoder output on the Flux model.
2024-10-03 09:44:54 -04:00
abcd006b8c
Allow more permutations of clip/t5 in dual clip loader.
2024-10-03 09:26:11 -04:00
d985d1d7dc
CLIP Loader node now supports clip_l and clip_g only for SD3.
2024-10-02 04:25:17 -04:00
d1cdf51e1b
Refactor some of the TE detection code.
2024-10-01 07:08:41 -04:00
b4626ab93e
Add simpletuner lycoris format for SD unet.
2024-09-30 06:03:27 -04:00
a9e459c2a4
Use torch.nn.functional.linear in RGB preview code.
Add an optional bias to the latent RGB preview code.
2024-09-29 11:27:49 -04:00
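A rough sketch of the approach; the projection factors, value range, and helper name below are assumptions, not the actual preview code:

```python
import torch.nn.functional as F

def latent_to_rgb_preview(latent, rgb_factors, rgb_bias=None):
    # latent: [batch, channels, height, width]
    # rgb_factors: [3, channels] projection matrix; rgb_bias: optional [3]
    rgb = F.linear(latent.movedim(1, -1), rgb_factors, rgb_bias)  # channels last
    rgb = ((rgb + 1.0) / 2.0).clamp(0.0, 1.0)                     # roughly map to [0, 1]
    return rgb.movedim(-1, 1)                                     # back to [batch, 3, H, W]
```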
3bb4dec720
Fix issue with loras, lowvram and --fast fp8.
2024-09-28 14:42:32 -04:00
8733191563
Flux torch.compile fix ( #5082 )
2024-09-27 22:07:51 -04:00
83b01f960a
Add backend option to TorchCompileModel.
If you want to use the cudagraphs backend you need to launch with --disable-cuda-malloc.
If you get other backends working, feel free to make a PR to add them.
2024-09-27 02:12:37 -04:00
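A minimal illustration of passing a backend to torch.compile; this is a toy module, not the TorchCompileModel node itself:

```python
import torch

class ToyBlock(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.gelu(x) * 2.0

model = ToyBlock().cuda()

# "inductor" is the default backend; "cudagraphs" is one of the alternatives.
# As noted above, using cudagraphs with ComfyUI also requires launching with
# --disable-cuda-malloc so the allocator plays nicely with CUDA graphs.
compiled = torch.compile(model, backend="cudagraphs")
out = compiled(torch.randn(4, 16, device="cuda"))
```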