324273fff2
Fix embeddings not working when placed on a new line.
2023-02-09 14:12:02 -05:00
1f6a467e92
Update ldm dir with latest upstream stable diffusion changes.
2023-02-09 13:47:36 -05:00
642516a3a6
Create output dir if none is present.
2023-02-09 12:49:31 -05:00
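The fix here is presumably the standard `os.makedirs(..., exist_ok=True)` pattern; a minimal sketch (the function name and layout are illustrative, not the repo's actual code):

```python
import os

def ensure_output_dir(base, subdir="output"):
    """Return the output directory under `base`, creating it if missing."""
    path = os.path.join(base, subdir)
    os.makedirs(path, exist_ok=True)  # no error if the dir already exists
    return path
```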
773cdabfce
Apply the same fix in the other places where it's used.
2023-02-09 12:43:29 -05:00
df40d4f3bf
torch.cuda.OutOfMemoryError is not present on older pytorch versions.
2023-02-09 12:33:27 -05:00
1d9ec62cfb
Use absolute output directory path.
2023-02-09 09:59:43 -05:00
05d571fe7f
Merge branch 'master' of https://github.com/bazettfraga/ComfyUI into merge_pr2
2023-02-09 00:44:38 -05:00
e8c499ddd4
Split optimization for VAE attention block.
2023-02-08 22:04:20 -05:00
5b4e312749
Use in-place operations for fewer OOM issues.
2023-02-08 22:04:13 -05:00
e58887dfa7
Forgot that Windows paths use double backslashes because backslash is its escape character.
2023-02-09 01:30:06 +01:00
81082045c2
Add recursive_search and swap in the relevant os.listdir calls.
2023-02-09 01:22:33 +01:00
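A flat `os.listdir` misses files in subfolders; a recursive walk covers them, and normalizing the separator to `/` sidesteps the Windows backslash issue from the commit above. A minimal sketch, not the repo's exact implementation:

```python
import os

def recursive_search(directory):
    """Recursively list files under `directory` as relative paths,
    with forward slashes so results compare equal across platforms."""
    result = []
    for root, _, files in os.walk(directory):
        for name in files:
            rel = os.path.relpath(os.path.join(root, name), directory)
            result.append(rel.replace(os.sep, "/"))  # normalize separator
    return sorted(result)
```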
3fd87cbd21
Slightly smarter batching behaviour.
...
Try to keep batch sizes more consistent, which seems to improve things on
AMD GPUs.
2023-02-08 17:28:43 -05:00
bbdcf0b737
Use relative imports for k_diffusion.
2023-02-08 16:51:19 -05:00
3e22815a9a
Fix k_diffusion not getting imported from the folder.
2023-02-08 16:29:22 -05:00
708138c77d
Remove print.
2023-02-08 14:51:18 -05:00
047775615b
Lower the chances of an OOM.
2023-02-08 14:24:27 -05:00
853e96ada3
Increase it/s by batching together some of the work sent to the unet.
2023-02-08 14:24:00 -05:00
c92633eaa2
Auto-calculate the amount of memory to use for --lowvram.
2023-02-08 11:42:37 -05:00
534736b924
Add some low-VRAM modes: --lowvram and --novram.
2023-02-08 11:37:10 -05:00
a84cd0d1ad
Don't unload/reload model from CPU uselessly.
2023-02-08 03:40:43 -05:00
e3e65947f2
Add a --help to main.py
2023-02-07 22:13:42 -05:00
1f18221e17
Add --port to set custom port.
2023-02-07 21:57:17 -05:00
6e40393b6b
Fix delete sometimes not properly refreshing queue state.
2023-02-07 00:07:31 -05:00
d71d0c88e5
Add some simple queue management to the GUI.
2023-02-06 23:40:38 -05:00
b1a7c9ebf6
Embeddings/textual inversion support for SD2.x
2023-02-05 15:49:03 -05:00
1de5aa6a59
Add a CLIPLoader node to load standalone clip weights.
...
Put them in models/clip
2023-02-05 15:20:18 -05:00
56d802e1f3
Use transformers CLIP instead of open_clip for SD2.x
...
This should make things a bit cleaner.
2023-02-05 14:36:28 -05:00
bf9ccffb17
Small fix for SD2.x loras.
2023-02-05 11:38:25 -05:00
678105fade
SD2.x CLIP support for Loras.
2023-02-05 01:54:09 -05:00
3f3d77a324
Fix image node always executing instead of only when the image changed.
2023-02-04 16:08:29 -05:00
4225d1cb9f
Add a basic ImageScale node.
...
It's pretty much the same as the LatentUpscale node for now but for images
in pixel space.
2023-02-04 16:01:01 -05:00
bff0e11941
Add a LatentCrop node.
2023-02-04 15:21:46 -05:00
43c795f462
Add a --listen argument to listen on 0.0.0.0
2023-02-04 12:01:53 -05:00
41a7532c15
A bit bigger.
2023-02-03 13:56:00 -05:00
7bc3f91bd6
Add some instructions on how to use the venv from another SD install.
2023-02-03 13:54:45 -05:00
149a4de3f2
Fix potential issue if exception happens when patching model.
2023-02-03 03:55:50 -05:00
ef90e9c376
Add a LoraLoader node to apply loras to models and clip.
...
The models are modified in place before being used and unpatched after.
I think this is better than monkeypatching since it might make it easier
to use faster non-pytorch unet inference in the future.
2023-02-03 02:46:24 -05:00
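The modify-in-place-then-unpatch idea described above can be sketched as follows, with floats standing in for weight tensors (a minimal stand-in, not ComfyUI's actual patcher class):

```python
class PatchedModel:
    """Apply additive weight patches in place, keeping backups so the
    original weights can be restored after use."""

    def __init__(self, weights):
        self.weights = weights  # name -> float, stand-in for tensors
        self.backup = {}

    def patch(self, patches, strength=1.0):
        for name, delta in patches.items():
            self.backup[name] = self.weights[name]  # save the original
            self.weights[name] += strength * delta  # modify in place

    def unpatch(self):
        self.weights.update(self.backup)  # restore the originals
        self.backup = {}
```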
96664f5d5e
Web interface bug fix for multiple inputs from the same node.
2023-02-03 00:39:28 -05:00
1d84a44b08
Fix some small annoyances with the UI.
2023-02-02 14:36:11 -05:00
e65a20e62a
Add a button to queue prompts to the front of the queue.
2023-02-01 22:34:59 -05:00
4b08314257
Add more features to the backend queue code.
...
The queue can now be queried, entries can be deleted and prompts easily
queued to the front of the queue.
Just need to expose it in the UI next.
2023-02-01 22:33:10 -05:00
9d611a90e8
Small web interface fixes.
2023-01-31 03:37:34 -05:00
fef41d0a72
Add LatentComposite node.
...
This can be used to "paste" one latent image on top of the other.
2023-01-31 03:35:03 -05:00
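The "paste" operation amounts to writing one grid into a region of another, cropped at the destination's edges. A toy version on nested lists (the real node operates on latent tensors):

```python
def composite(dest, src, x, y):
    """Paste 2-D grid `src` onto a copy of `dest` at (x, y), cropping
    whatever falls outside the destination. Illustrative only."""
    out = [row[:] for row in dest]
    for j, src_row in enumerate(src):
        if y + j >= len(out):  # below the bottom edge
            break
        for i, v in enumerate(src_row):
            if x + i >= len(out[0]):  # past the right edge
                break
            out[y + j][x + i] = v
    return out
```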
3fa009f4cc
Add a LatentFlip node.
2023-01-31 03:28:38 -05:00
69df7eba94
Add KSamplerAdvanced node.
...
This node exposes more sampling options and makes it possible, for example,
to sample the first few steps on the latent image, do some operations on it,
and then do the rest of the sampling steps. This can be achieved using the
start_at_step and end_at_step options.
2023-01-31 03:09:38 -05:00
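The start_at_step/end_at_step idea is to run only a slice of the denoising schedule, hand the intermediate latent off, and resume later. A toy illustration with a fake one-step-at-a-time sampler (names and the step function are hypothetical):

```python
def sample_range(latent, total_steps, start_at_step, end_at_step, step_fn):
    """Run only steps [start_at_step, end_at_step) of a schedule."""
    end = min(end_at_step, total_steps)
    for step in range(start_at_step, end):
        latent = step_fn(latent, step)
    return latent

# Toy step function: append the step index (stands in for one denoise step).
step_fn = lambda latent, step: latent + [step]
```

Sampling steps 0-3, then resuming with 3-10 on the intermediate result, covers the same schedule as a single 0-10 run.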
f8f165e2c3
Add a LatentRotate node.
2023-01-31 02:28:07 -05:00
1daccf3678
Run softmax in place if it OOMs.
2023-01-30 19:55:01 -05:00
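The pattern here is catch-the-OOM-and-retry with a lower-memory variant. Sketched without torch, using `MemoryError` as a stand-in for the CUDA OOM exception the real code would catch:

```python
import math

def softmax(scores):
    """Plain softmax that allocates a new output list."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_(scores):
    """In-place softmax: overwrites `scores`, avoiding a second buffer."""
    m = max(scores)
    for i, s in enumerate(scores):
        scores[i] = math.exp(s - m)
    total = sum(scores)
    for i in range(len(scores)):
        scores[i] /= total
    return scores

def attention_softmax(scores, oom_exc=MemoryError):
    """Try the allocating path first; retry in place if it OOMs."""
    try:
        return softmax(scores)
    except oom_exc:
        return softmax_(scores)
```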
0d8ad93852
Add link to examples github page.
2023-01-30 01:09:35 -05:00
f73e57d881
Add support for textual inversion embedding for SD1.x CLIP.
2023-01-29 18:46:44 -05:00
702ac43d0c
Readme formatting.
2023-01-29 13:23:57 -05:00