0fc483dcfd
Refactor diffusers model convert code to be able to reuse it.
2023-05-28 01:55:40 -04:00
eb4bd7711a
Remove einops.
2023-05-25 18:42:56 -04:00
87ab25fac7
Do operations in the same order as the code it replaces.
2023-05-25 18:31:27 -04:00
2b1fac9708
Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI
2023-05-25 14:44:16 -04:00
e1278fa925
Support old pytorch versions that don't have weights_only.
2023-05-25 13:30:59 -04:00
8b4b0c3188
Vectorized bislerp.
2023-05-25 19:23:47 +02:00
b8ccbec6d8
Various improvements to bislerp.
2023-05-23 11:40:24 -04:00
34887b8885
Add experimental bislerp algorithm for latent upscaling.
...
It's like bilinear but with slerp.
2023-05-23 03:12:56 -04:00
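The "bilinear but with slerp" idea above replaces linear interpolation between neighbouring latent pixels with spherical linear interpolation. A minimal sketch of the slerp half of that idea (the actual bislerp in the commit also does the bilinear neighbour weighting; this function name and the norm handling are illustrative, not the repo's code):

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between vectors a and b.

    Interpolates along the arc between the normalized directions,
    then linearly interpolates the magnitudes, so vector norms are
    better preserved than with plain lerp.
    """
    a_norm = np.linalg.norm(a)
    b_norm = np.linalg.norm(b)
    a_dir = a / a_norm
    b_dir = b / b_norm
    dot = np.clip(np.dot(a_dir, b_dir), -1.0, 1.0)
    omega = np.arccos(dot)          # angle between the two directions
    if omega < 1e-6:                # nearly parallel: plain lerp is fine
        return (1 - t) * a + t * b
    so = np.sin(omega)
    direction = (np.sin((1 - t) * omega) / so) * a_dir \
              + (np.sin(t * omega) / so) * b_dir
    return direction * ((1 - t) * a_norm + t * b_norm)
```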
6cc450579b
Auto transpose images from exif data.
2023-05-22 00:22:24 -04:00
dc198650c0
sample_dpmpp_2m_sde no longer crashes when step == 1.
2023-05-21 11:34:29 -04:00
069657fbf3
Add DPM-Solver++(2M) SDE and exponential scheduler.
...
exponential scheduler is the one recommended with this sampler.
2023-05-21 01:46:03 -04:00
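The exponential scheduler spaces the noise levels uniformly in log-space. A sketch in the style of k-diffusion's schedule functions (the trailing zero and exact signature are assumptions for illustration):

```python
import numpy as np

def get_sigmas_exponential(n, sigma_min, sigma_max):
    """Noise levels spaced uniformly in log-space, decreasing from
    sigma_max to sigma_min, with a trailing zero for the final step."""
    sigmas = np.exp(np.linspace(np.log(sigma_max), np.log(sigma_min), n))
    return np.append(sigmas, 0.0)
```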
b8636a44aa
Make scaled_dot_product switch to sliced attention on OOM.
2023-05-20 16:01:02 -04:00
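The fallback above catches the GPU out-of-memory error from the fused scaled_dot_product kernel and retries with sliced attention, which processes the query in chunks so only a small score matrix is live at once. A NumPy sketch of the sliced path (2-D tensors only; the real code works on batched torch tensors and catches `torch.cuda.OutOfMemoryError`):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sliced_attention(q, k, v, slice_size):
    """Attention computed over query chunks: only a
    (slice_size x key_len) score matrix is materialised at a time."""
    scale = q.shape[-1] ** -0.5
    out = np.empty((q.shape[0], v.shape[1]))
    for i in range(0, q.shape[0], slice_size):
        scores = (q[i:i + slice_size] @ k.T) * scale
        out[i:i + slice_size] = softmax(scores) @ v
    return out
```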
797c4e8d3b
Simplify and improve some vae attention code.
2023-05-20 15:07:21 -04:00
ef815ba1e2
Switch default scheduler to normal.
2023-05-15 00:29:56 -04:00
68d12b530e
Merge branch 'tiled_sampler' of https://github.com/BlenderNeko/ComfyUI
2023-05-14 15:39:39 -04:00
3a1f47764d
Print the torch device that is used on startup.
2023-05-13 17:11:27 -04:00
1201d2eae5
Make nodes map over input lists (#579)
...
* allow nodes to map over lists
* make work with IS_CHANGED and VALIDATE_INPUTS
* give list outputs distinct socket shape
* add rebatch node
* add batch index logic
* add repeat latent batch
* deal with noise mask edge cases in latentfrombatch
2023-05-13 11:15:45 -04:00
19c014f429
comment out annoying print statement
2023-05-12 23:57:40 +02:00
d9e088ddfd
minor changes for tiled sampler
2023-05-12 23:49:09 +02:00
f7c0f75d1f
Auto batching improvements.
...
Try batching when cond sizes don't match with smart padding.
2023-05-10 13:59:24 -04:00
314e526c5c
Not needed anymore because sampling works with any latent size.
2023-05-09 12:18:18 -04:00
c6e34963e4
Make t2i adapter work with any latent resolution.
2023-05-08 18:15:19 -04:00
a1f12e370d
Merge branch 'autostart' of https://github.com/EllangoK/ComfyUI
2023-05-07 17:19:03 -04:00
6fc4917634
Make maximum_batch_area take into account pytorch 2.0 attention function.
...
More conservative xformers maximum_batch_area.
2023-05-06 19:58:54 -04:00
678f933d38
maximum_batch_area for xformers.
...
Remove useless code.
2023-05-06 19:28:46 -04:00
8e03c789a2
auto-launch CLI arg
2023-05-06 16:59:40 -04:00
cb1551b819
Lowvram mode for gligen and fix some lowvram issues.
2023-05-05 18:11:41 -04:00
af9cc1fb6a
Search recursively in subfolders for embeddings.
2023-05-05 01:28:48 -04:00
6ee11d7bc0
Fix import.
2023-05-05 00:19:35 -04:00
bae4fb4a9d
Fix imports.
2023-05-04 18:10:29 -04:00
fcf513e0b6
Refactor.
2023-05-03 17:48:35 -04:00
a74e176a24
Merge branch 'tiled-progress' of https://github.com/pythongosssss/ComfyUI
2023-05-03 16:24:56 -04:00
5eeecf3fd5
remove unused import
2023-05-03 18:21:23 +01:00
8912623ea9
use comfy progress bar
2023-05-03 18:19:22 +01:00
908dc1d5a8
Add a total_steps value to sampler callback.
2023-05-03 12:58:10 -04:00
fdf57325f4
Merge remote-tracking branch 'origin/master' into tiled-progress
2023-05-03 17:33:42 +01:00
27df74101e
reduce duplication
2023-05-03 17:33:19 +01:00
93c64afaa9
Use sampler callback instead of tqdm hook for progress bar.
2023-05-02 23:00:49 -04:00
06ad35b493
added progress to encode + upscale
2023-05-02 19:18:07 +01:00
ba8a4c3667
Change latent resolution step to 8.
2023-05-02 14:17:51 -04:00
66c8aa5c3e
Make unet work with any input shape.
2023-05-02 13:31:43 -04:00
9c335a553f
LoKR support.
2023-05-01 18:18:23 -04:00
d3293c8339
Properly disable all progress bars when disable_pbar=True
2023-05-01 15:52:17 -04:00
a2e18b1504
allow disabling of progress bar when sampling
2023-04-30 18:59:58 +02:00
071011aebe
Mask strength should be separate from area strength.
2023-04-29 20:06:53 -04:00
870fae62e7
Merge branch 'condition_by_mask_node' of https://github.com/guill/ComfyUI
2023-04-29 15:05:18 -04:00
af02393c2a
Default to sampling entire image
...
By default, when applying a mask to a condition, the entire image will
still be used for sampling. The new "set_area_to_bounds" option on the
node will allow the user to automatically limit conditioning to the
bounds of the mask.
I've also removed the dependency on torchvision for calculating bounding
boxes. I've taken the opportunity to fix some frustrating details in the
other version:
1. An all-0 mask will no longer cause an error.
2. Indices are returned as integers instead of floats so they can be
used to index into tensors.
2023-04-29 00:16:58 -07:00
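Computing a mask's bounding box without torchvision, handling the two details called out above (an all-zero mask must not error, and indices come back as integers usable for tensor slicing), can be sketched like this (NumPy stand-in for the torch tensors; the function name is illustrative):

```python
import numpy as np

def mask_bounds(mask):
    """Bounding box of the nonzero region of a 2-D mask.

    Returns integer (y0, y1, x0, x1) half-open bounds suitable for
    slicing. An all-zero mask yields the full-image bounds instead
    of raising.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:  # all-zero mask: fall back to the whole image
        return 0, mask.shape[0], 0, mask.shape[1]
    return int(ys.min()), int(ys.max()) + 1, int(xs.min()), int(xs.max()) + 1
```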
056e5545ff
Don't try to get vram from xpu or cuda when directml is enabled.
2023-04-29 00:28:48 -04:00
2ca934f7d4
You can now select the device index with: --directml id
...
For example: --directml 1
2023-04-28 16:51:35 -04:00
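An optional device index like `--directml id` (flag alone for the default device, flag plus integer for a specific one) maps naturally onto argparse's `nargs="?"` with a `const` sentinel. A minimal sketch, assuming `-1` as the "default device" sentinel; ComfyUI's real CLI parser may differ in details:

```python
import argparse

parser = argparse.ArgumentParser()
# nargs="?" lets --directml appear with or without a device index;
# const=-1 means "use the default device" when no index is given,
# and default=None means the flag was not passed at all.
parser.add_argument("--directml", type=int, nargs="?",
                    const=-1, default=None, metavar="DEVICE_ID")
```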
3baded9892
Basic torch_directml support. Use --directml to use it.
2023-04-28 14:28:57 -04:00