Commit Graph

1092 Commits

SHA1 Message Date
cf80d28689 Support loading controlnets with different input. 2024-09-13 09:54:37 -04:00
b962db9952 Add cli arg to override user directory (#4856)
* Override user directory.

* Use overridden user directory.

* Remove prints.

* Remove references to global user_files.

* Remove unused replace_folder function.

* Remove newline.

* Remove global during get_user_directory.

* Add validation.
2024-09-12 08:10:27 -04:00
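The steps listed in PR #4856 (add the flag, use the override, drop the globals, add validation) can be sketched roughly as below. The flag name, default location, and validation are assumptions for illustration, not ComfyUI's actual `cli_args.py` code.

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--user-directory", type=str, default=None,
                    help="Override the location of the user/ directory.")

def get_user_directory(override):
    # Default: a "user" folder under the current working directory
    # (a stand-in for the real default; an assumption in this sketch).
    if override is None:
        return os.path.join(os.getcwd(), "user")
    # The PR's final commit adds validation; normalizing to an absolute
    # path is one plausible form of it.
    return os.path.abspath(override)

args = parser.parse_args(["--user-directory", "/tmp/comfy_user"])
print(get_user_directory(args.user_directory))  # /tmp/comfy_user
```

Passing the parsed value through a function (rather than a module-level global) matches the PR's "remove global" commits.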
9d720187f1 types -> comfy_types to fix import issue. 2024-09-12 03:57:46 -04:00
9f4daca9d9 Doesn't really make sense for cfg_pp sampler to call regular one. 2024-09-11 02:51:36 -04:00
b5d0f2a908 Add CFG++ to DPM++ 2S Ancestral (#3871)
* Update sampling.py

* Update samplers.py

* my bad

* "fix" the sampler

* Update samplers.py

* i named it wrong

* minor sampling improvements

mainly using a dynamic rho value (hey this sounds a lot like smea!!!)

* revert rho change

rho? r? its just 1/2
2024-09-11 02:49:44 -04:00
9c5fca75f4 Fix lora issue. 2024-09-08 10:10:47 -04:00
32a60a7bac Support onetrainer text encoder Flux lora. 2024-09-08 09:31:41 -04:00
bb52934ba4 Fix import issue (#4815) 2024-09-07 05:28:32 -04:00
ea77750759 Support a generic Comfy format for text encoder loras.
This is a format with keys like:
text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.v_proj.lora_up.weight

Instead of waiting for me to add support for specific lora formats you can
convert your text encoder loras to this format instead.

If you want to see an example save a text encoder lora with the SaveLora
node with the commit right after this one.
2024-09-07 02:20:39 -04:00
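The generic key layout described in the commit above can be composed mechanically from its parts. The helper below is a hypothetical sketch (the function name is not part of ComfyUI's API); it only reproduces the key pattern shown in the commit message.

```python
# Sketch: build generic Comfy text encoder lora keys of the form
#   text_encoders.<encoder>.transformer.<module path>.<lora_up|lora_down>.weight

def generic_te_lora_key(encoder, module_path, direction):
    # direction is "lora_up" or "lora_down"; other formats' keys would be
    # renamed into this layout when converting a lora.
    assert direction in ("lora_up", "lora_down")
    return f"text_encoders.{encoder}.transformer.{module_path}.{direction}.weight"

key = generic_te_lora_key(
    "clip_l", "text_model.encoder.layers.9.self_attn.v_proj", "lora_up"
)
print(key)
# text_encoders.clip_l.transformer.text_model.encoder.layers.9.self_attn.v_proj.lora_up.weight
```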
c27ebeb1c2 Fix onnx export not working on flux. 2024-09-06 03:21:52 -04:00
5cbaa9e07c Mistoline flux controlnet support. 2024-09-05 00:05:17 -04:00
c7427375ee Prioritize freeing partially offloaded models first. 2024-09-04 19:47:32 -04:00
f04229b84d Add emb_patch support to UNetModel forward (#4779) 2024-09-04 14:35:15 -04:00
f067ad15d1 Make live preview size a configurable launch argument (#4649)
* Make live preview size a configurable launch argument

* Remove import from testing phase

* Update cli_args.py
2024-09-03 19:16:38 -04:00
483004dd1d Support newer glora format. 2024-09-03 17:02:19 -04:00
00a5d08103 Lower fp8 lora memory usage. 2024-09-03 01:25:05 -04:00
d043997d30 Flux onetrainer lora. 2024-09-02 08:22:15 -04:00
8d31a6632f Speed up inference on nvidia 10 series on Linux. 2024-09-01 17:29:31 -04:00
b643eae08b Make minimum_inference_memory() depend on --reserve-vram 2024-09-01 01:18:34 -04:00
935ae153e1 Cleanup. 2024-08-30 12:53:59 -04:00
e91662e784 Get logs endpoint & system_stats additions (#4690)
* Add route for getting output logs

* Include ComfyUI version

* Move to own function

* Changed to memory logger

* Unify logger setup logic

* Fix get version git fallback

---------

Co-authored-by: pythongosssss <125205205+pythongosssss@users.noreply.github.com>
2024-08-30 12:46:37 -04:00
63fafaef45 Fix potential issue with hydit controlnets. 2024-08-30 04:58:41 -04:00
6eb5d64522 Fix glora lowvram issue. 2024-08-29 19:07:23 -04:00
10a79e9898 Implement model part of flux union controlnet. 2024-08-29 18:41:22 -04:00
ea3f39bd69 InstantX depth flux controlnet. 2024-08-29 02:14:19 -04:00
b33cd61070 InstantX canny controlnet. 2024-08-28 19:02:50 -04:00
d31e226650 Unify RMSNorm code. 2024-08-28 16:56:38 -04:00
38c22e631a Fix case where model was not properly unloaded in merging workflows. 2024-08-27 19:03:51 -04:00
6bbdcd28ae Support weight padding on diff weight patch (#4576) 2024-08-27 13:55:37 -04:00
ab130001a8 Do RMSNorm in native type. 2024-08-27 02:41:56 -04:00
2ca8f6e23d Make the stochastic fp8 rounding reproducible. 2024-08-26 15:12:06 -04:00
7985ff88b9 Use less memory in float8 lora patching by doing calculations in fp16. 2024-08-26 14:45:58 -04:00
c6812947e9 Fix potential memory leak. 2024-08-26 02:07:32 -04:00
9230f65823 Fix some controlnets OOMing when loading. 2024-08-25 05:54:29 -04:00
8ae23d8e80 Fix onnx export. 2024-08-23 17:52:47 -04:00
7df42b9a23 Fix dora. 2024-08-23 04:58:59 -04:00
5d8bbb7281 Cleanup. 2024-08-23 04:06:27 -04:00
2c1d2375d6 Fix. 2024-08-23 04:04:55 -04:00
64ccb3c7e3 Rework IPEX check for future inclusion of XPU into Pytorch upstream and do a bit more optimization of ipex.optimize(). (#4562) 2024-08-23 03:59:57 -04:00
9465b23432 Added SD15_Inpaint_Diffusers model support for unet_config_from_diffusers_unet function (#4565) 2024-08-23 03:57:08 -04:00
c0b0da264b Missing imports. 2024-08-22 17:20:51 -04:00
c26ca27207 Move calculate function to comfy.lora 2024-08-22 17:12:00 -04:00
7c6bb84016 Code cleanups. 2024-08-22 17:05:12 -04:00
c54d3ed5e6 Fix issue with models staying loaded in memory. 2024-08-22 15:58:20 -04:00
c7ee4b37a1 Try to fix some lora issues. 2024-08-22 15:32:18 -04:00
7b70b266d8 Generalize MacOS version check for force-upcast-attention (#4548)
This code automatically forces upcasting attention for MacOS versions 14.5 and 14.6. My computer returns the string "14.6.1" for `platform.mac_ver()[0]`, so this generalizes the comparison to catch more versions.

I am running MacOS Sonoma 14.6.1 (latest version) and was seeing black image generation on previously functional workflows after recent software updates. This PR solved the issue for me.

See comfyanonymous/ComfyUI#3521
2024-08-22 13:24:21 -04:00
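The generalization described in PR #4548 amounts to parsing `platform.mac_ver()[0]` into a numeric tuple instead of comparing version strings exactly, so that patch releases like "14.6.1" are also caught. The sketch below is illustrative; the function name and the 14.5 threshold are assumptions, not ComfyUI's exact code.

```python
# Sketch: treat any macOS 14.5+ release (14.5, 14.6, 14.6.1, ...) as
# needing forced upcast attention, per the PR description above.

def needs_upcast_attention(mac_ver):
    # mac_ver is the string from platform.mac_ver()[0], e.g. "14.6.1".
    try:
        parts = tuple(int(p) for p in mac_ver.split("."))
    except ValueError:
        return False  # empty or malformed version string
    # Tuple comparison handles "14.6.1" vs "14.5" correctly,
    # unlike a string equality check against "14.5"/"14.6".
    return parts[:2] >= (14, 5)

print(needs_upcast_attention("14.6.1"))  # True
print(needs_upcast_attention("14.4"))    # False
```

On a real machine you would pass `platform.mac_ver()[0]` directly.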
8f60d093ba Fix issue. 2024-08-22 10:38:24 -04:00
843a7ff70c fp16 is actually faster than fp32 on a GTX 1080. 2024-08-21 23:23:50 -04:00
a60620dcea Fix slow performance on 10 series Nvidia GPUs. 2024-08-21 16:39:02 -04:00
015f73dc49 Try a different type of flux fp16 fix. 2024-08-21 16:17:15 -04:00