Commit Graph

41 Commits

cef2cc3cb0 Support for inpaint models. 2023-02-15 16:38:20 -05:00
07db00355f Add masks to samplers code for inpainting. 2023-02-15 13:16:38 -05:00
e3451cea4f uni_pc now works with KSamplerAdvanced return_with_leftover_noise. 2023-02-13 12:29:21 -05:00
f542f248f1 Show the right amount of steps in the progress bar for uni_pc.
The extra step doesn't actually call the unet, so it doesn't belong in
the progress bar.
2023-02-11 14:59:42 -05:00
f10b8948c3 768-v support for uni_pc sampler. 2023-02-11 04:34:58 -05:00
ce0aeb109e Remove print. 2023-02-11 03:41:40 -05:00
5489d5af04 Add uni_pc sampler to KSampler* nodes. 2023-02-11 03:34:09 -05:00
1a4edd19cd Fix overflow issue with inplace softmax. 2023-02-10 11:47:41 -05:00
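The overflow fix above most likely amounts to the standard max-subtraction trick for softmax. A minimal sketch in pure Python (the real code operates on torch tensors, and the function name here is hypothetical):

```python
import math

def stable_softmax(scores):
    # Subtracting the row max before exponentiating keeps exp() from
    # overflowing on large attention scores (e.g. when running in fp16).
    # The result is mathematically identical to the naive softmax.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

With the naive form, `math.exp(1000.0)` would overflow; the shifted form handles it without changing the output distribution.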
509c7dfc6d Use real softmax in split op to fix issue with some images. 2023-02-10 03:13:49 -05:00
7e1e193f39 Automatically enable lowvram mode if vram is less than 4GB.
Use --normalvram to disable it.
2023-02-10 00:47:56 -05:00
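The auto-lowvram behaviour described above can be reduced to a simple threshold check. A hedged sketch (the helper name is hypothetical; in PyTorch the total VRAM would typically come from `torch.cuda.get_device_properties(0).total_memory`):

```python
def pick_vram_mode(total_vram_bytes, normalvram_flag=False):
    # Hypothetical helper mirroring the commit message: enable lowvram
    # mode when the GPU has less than 4GB of VRAM, unless the user
    # explicitly passes --normalvram to opt out.
    FOUR_GB = 4 * 1024 * 1024 * 1024
    if normalvram_flag:
        return "normal"
    return "lowvram" if total_vram_bytes < FOUR_GB else "normal"
```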
324273fff2 Fix embedding not working when on new line. 2023-02-09 14:12:02 -05:00
1f6a467e92 Update ldm dir with latest upstream stable diffusion changes. 2023-02-09 13:47:36 -05:00
773cdabfce Same thing but for the other places where it's used. 2023-02-09 12:43:29 -05:00
df40d4f3bf torch.cuda.OutOfMemoryError is not present on older pytorch versions. 2023-02-09 12:33:27 -05:00
e8c499ddd4 Split optimization for VAE attention block. 2023-02-08 22:04:20 -05:00
5b4e312749 Use inplace operations for less OOM issues. 2023-02-08 22:04:13 -05:00
3fd87cbd21 Slightly smarter batching behaviour.
Try to keep batch sizes more consistent, which seems to improve things on
AMD GPUs.
2023-02-08 17:28:43 -05:00
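One plausible reading of "more consistent batch sizes" is avoiding a large batch followed by a tiny remainder. A hypothetical sketch of that idea (function name and exact strategy are assumptions, not the commit's actual code):

```python
def split_into_batches(n_items, max_batch):
    # Instead of greedily filling batches and leaving a small remainder
    # (e.g. 9 items, max 4 -> [4, 4, 1]), keep batch sizes close to each
    # other (9 -> [3, 3, 3]), which the commit suggests behaves better on
    # some GPUs.
    n_batches = -(-n_items // max_batch)  # ceiling division
    base = n_items // n_batches
    rem = n_items % n_batches
    return [base + 1] * rem + [base] * (n_batches - rem)
```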
bbdcf0b737 Use relative imports for k_diffusion. 2023-02-08 16:51:19 -05:00
708138c77d Remove print. 2023-02-08 14:51:18 -05:00
047775615b Lower the chances of an OOM. 2023-02-08 14:24:27 -05:00
853e96ada3 Increase it/s by batching together some stuff sent to unet. 2023-02-08 14:24:00 -05:00
c92633eaa2 Auto calculate amount of memory to use for --lowvram 2023-02-08 11:42:37 -05:00
534736b924 Add some low vram modes: --lowvram and --novram 2023-02-08 11:37:10 -05:00
a84cd0d1ad Don't unload/reload model from CPU uselessly. 2023-02-08 03:40:43 -05:00
b1a7c9ebf6 Embeddings/textual inversion support for SD2.x 2023-02-05 15:49:03 -05:00
1de5aa6a59 Add a CLIPLoader node to load standalone clip weights.
Put them in models/clip.
2023-02-05 15:20:18 -05:00
56d802e1f3 Use transformers CLIP instead of open_clip for SD2.x
This should make things a bit cleaner.
2023-02-05 14:36:28 -05:00
bf9ccffb17 Small fix for SD2.x loras. 2023-02-05 11:38:25 -05:00
678105fade SD2.x CLIP support for Loras. 2023-02-05 01:54:09 -05:00
ef90e9c376 Add a LoraLoader node to apply loras to models and clip.
The models are modified in place before being used and unpatched after.
I think this is better than monkeypatching since it might make it easier
to use faster non-PyTorch unet inference in the future.
2023-02-03 02:46:24 -05:00
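The patch-in-place-then-restore scheme described above can be sketched as follows. This is a simplified illustration with plain Python lists standing in for weight tensors; the class and method names are hypothetical, not the repository's actual API:

```python
class ModelPatcher:
    # Sketch of the commit's idea: weights are modified in place before
    # sampling and restored afterwards, instead of wrapping forward()
    # calls (monkeypatching).
    def __init__(self, weights):
        self.weights = weights  # name -> list of floats (stand-in for tensors)
        self.backups = {}

    def patch(self, deltas, strength=1.0):
        for name, delta in deltas.items():
            w = self.weights[name]
            self.backups[name] = list(w)  # keep originals for unpatch()
            for i, d in enumerate(delta):
                w[i] += strength * d      # in-place weight update

    def unpatch(self):
        for name, original in self.backups.items():
            self.weights[name][:] = original  # restore in place
        self.backups.clear()
```

Because the model sees ordinary patched weights rather than wrapped functions, any backend that consumes those weights directly would work unchanged, which matches the commit's stated motivation.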
69df7eba94 Add KSamplerAdvanced node.
This node exposes more sampling options and makes it possible, for example,
to sample the first few steps on the latent image, do some operations on it,
and then do the rest of the sampling steps. This can be achieved using the
start_at_step and end_at_step options.
2023-01-31 03:09:38 -05:00
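The two-stage workflow the commit describes boils down to partitioning the step schedule. A minimal sketch (the helper name is hypothetical; the real node also manages noise and sigmas):

```python
def plan_steps(total_steps, start_at_step=0, end_at_step=None):
    # Return the sub-range of sampling steps this sampler invocation
    # should run. Two KSamplerAdvanced nodes can then split one schedule:
    # the first runs [0, k), the latent is edited, and the second node
    # finishes [k, total_steps).
    if end_at_step is None or end_at_step > total_steps:
        end_at_step = total_steps
    return list(range(start_at_step, end_at_step))
```

Chaining `plan_steps(20, 0, 10)` and `plan_steps(20, 10)` covers exactly the same 20 steps as a single full run, which is what lets intermediate latent edits slot in between.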
1daccf3678 Run softmax in place if it OOMs. 2023-01-30 19:55:01 -05:00
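The commit above describes a try-fast, fall-back-to-in-place pattern. A generic sketch of that control flow (in the real code the exception would be `torch.cuda.OutOfMemoryError`; plain `MemoryError` stands in for it here, and the function names are hypothetical):

```python
def softmax_with_fallback(x, softmax, softmax_inplace):
    # Try the faster out-of-place softmax first; only if allocating its
    # output fails do we pay for the slower in-place variant, which needs
    # no extra buffer.
    try:
        return softmax(x)
    except MemoryError:
        return softmax_inplace(x)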
f73e57d881 Add support for textual inversion embedding for SD1.x CLIP. 2023-01-29 18:46:44 -05:00
50db297cf6 Try to fix OOM issues with cards that have less vram than mine. 2023-01-29 00:50:46 -05:00
73f60740c8 Slightly cleaner code. 2023-01-28 02:14:22 -05:00
0108616b77 Fix issue with some models. 2023-01-28 01:38:42 -05:00
2973ff24c5 Round CLIP position ids to fix float issues in some checkpoints. 2023-01-28 00:19:33 -05:00
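The position-id fix above guards against checkpoints that store CLIP position ids as floats that are not exactly integral, where a plain integer cast would truncate to the wrong index. A small sketch (hypothetical function name; the real code operates on a tensor):

```python
def fix_position_ids(position_ids):
    # A value like 2.9999999 would truncate to 2 under int(); rounding
    # first recovers the intended index 3.
    return [int(round(p)) for p in position_ids]
```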
c4b02059d0 Add ConditioningSetArea node to apply conditioning/prompts only
to a specific area of the image.

Add ConditioningCombine node so that multiple conditioning/prompts can be
applied to the image at the same time.
2023-01-26 12:06:48 -05:00
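Conceptually, area conditioning attaches a region to each conditioning entry, and combining just concatenates entries so the sampler sees all of them. A hedged sketch of that data flow (the tuple layout, option keys, and function names are assumptions for illustration, not the nodes' actual signatures):

```python
def set_area(conditioning, width, height, x, y, strength=1.0):
    # Attach a rectangular region and a strength to every conditioning
    # entry; a sampler could then apply this prompt only inside the region.
    out = []
    for cond, opts in conditioning:
        opts = dict(opts)  # copy so the input list is left untouched
        opts["area"] = (height, width, y, x)
        opts["strength"] = strength
        out.append((cond, opts))
    return out

def combine(cond_a, cond_b):
    # ConditioningCombine-style merge: both prompts are handed to the
    # sampler together and contribute to the same image.
    return cond_a + cond_b
```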
acdc6f42e0 Fix loading some malformed checkpoints? 2023-01-25 15:20:55 -05:00
051f472e8f Fix sub quadratic attention for SD2 and make it the default optimization. 2023-01-25 01:22:43 -05:00
220afe3310 Initial commit. 2023-01-16 22:37:14 -05:00