136c93cb47
Fix bug with workflow not registering change.
...
There was an issue when only the class type of a node changed with all the
inputs staying the same.
2024-07-15 20:01:49 -04:00
276f8fce9f
Print error when node is missing.
2024-05-20 07:04:08 -04:00
4bc1884478
Provide a better error message when attempting to execute the workflow with a missing node. ( #3517 )
2024-05-20 06:58:46 -04:00
5d08802f78
Sync some minor changes from the other repo.
2024-04-19 03:43:09 -04:00
5d8898c056
Fix some performance issues with weight loading and unloading.
...
Lower peak memory usage when changing model.
Fix case where model weights would be unloaded and reloaded.
2024-03-28 18:04:42 -04:00
6a32c06f06
Move cleanup_models to improve performance.
2024-03-23 17:27:10 -04:00
314d28c251
Pass extra_pnginfo as None when not in input data.
2024-03-07 15:07:47 -05:00
f81dbe26e2
FIX recursive_will_execute performance (simple ~300x performance increase) ( #2852 )
...
* FIX recursive_will_execute performance
* Minimize code changes
* memo must be created outside lambda
2024-02-21 20:21:24 -05:00
2d105066df
Cleanups.
2024-01-26 21:31:13 -05:00
fad02dc2df
Don't use PEP 604 type hints, to stay compatible with Python<3.10.
2024-01-17 17:16:34 -05:00
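The compatibility point above can be illustrated with a small sketch: PEP 604 union syntax (`list | None`) is only accepted by Python 3.10+, so `typing.Optional` is used instead. The function and names here are purely illustrative, not code from this repository.

```python
# Pre-3.10-compatible hint: typing.Optional instead of the PEP 604 "list | None".
from typing import Optional

def lookup_output(node_id: str, outputs: dict) -> Optional[list]:  # not: list | None
    # Returns the cached output list for a node id, or None if absent.
    return outputs.get(node_id)
```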
56d9496b18
Rename status notes to status messages.
...
I think message describes them better.
2024-01-12 18:17:06 -05:00
bcc0bde2af
Clear status notes on execution start.
2024-01-12 17:21:22 -05:00
1b3d65bd84
Add error, status to /history endpoint
2024-01-11 10:16:42 -05:00
6d281b4ff4
Add a /free route to unload models or free all memory.
...
A POST request to /free with: {"unload_models":true}
will unload models from vram.
A POST request to /free with: {"free_memory":true}
will unload models and free all cached data from the last run workflow.
2024-01-04 17:15:22 -05:00
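The /free route described above can be exercised as sketched below, using only the request bodies the commit documents. The base URL and helper name are assumptions for illustration; only the route and JSON keys come from the entry.

```python
import json
from urllib import request

def build_free_request(unload_models=False, free_memory=False,
                       base="http://127.0.0.1:8188"):  # base URL is an assumption
    # Build the POST body documented above: {"unload_models": true} unloads
    # models from vram; {"free_memory": true} also frees cached run data.
    payload = {}
    if unload_models:
        payload["unload_models"] = True
    if free_memory:
        payload["free_memory"] = True
    body = json.dumps(payload).encode("utf-8")
    return request.Request(base + "/free", data=body,
                           headers={"Content-Type": "application/json"},
                           method="POST")
```

A caller would pass the returned object to `urllib.request.urlopen` to actually hit the server.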
04b713dda1
Fix VALIDATE_INPUTS getting called multiple times.
...
Allow VALIDATE_INPUTS to only validate specific inputs.
2023-12-29 17:36:40 -05:00
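A node-side sketch of the behavior above: a `VALIDATE_INPUTS` hook that validates only specific inputs, returning `True` on success or an error string. The node class shape and parameter selection convention are assumptions for illustration.

```python
class RangeNode:
    @classmethod
    def VALIDATE_INPUTS(cls, value):
        # Only "value" is validated here; other inputs are left unchecked.
        if not (0 <= value <= 100):
            return f"value {value} out of range [0, 100]"
        return True
```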
a252963f95
--disable-smart-memory now unloads everything like it did originally.
2023-12-23 04:25:06 -05:00
6b769bca01
Do a garbage collect after the interval even if nothing is running.
2023-11-30 15:22:32 -05:00
2dd5b4dd78
Only show last 200 elements in the UI history tab.
2023-11-20 16:56:29 -05:00
a03dde190e
Cap maximum history size at 10000. Delete oldest entry when reached.
2023-11-20 16:38:39 -05:00
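The cap-and-evict policy above can be sketched with an ordered mapping: once the cap is reached, the oldest entry is dropped before the new one is stored. The helper name and `cap` parameter are illustrative; only the 10000 limit and oldest-first eviction come from the entry.

```python
from collections import OrderedDict

def add_history(history: OrderedDict, prompt_id, entry, cap=10000):
    # Evict the oldest history entry once the maximum size is reached.
    if len(history) >= cap:
        history.popitem(last=False)
    history[prompt_id] = entry
```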
20d3852aa1
Pull some small changes from the other repo.
2023-10-11 20:38:48 -04:00
62799c8585
fix crash on node with VALIDATE_INPUTS and actual inputs
2023-09-07 18:42:21 +01:00
89a0767abf
Smarter memory management.
...
Try to keep models in vram when possible.
Better lowvram mode for controlnets.
2023-08-17 01:06:34 -04:00
90b0163524
fix(execution): Fix support for input-less nodes
2023-08-01 12:29:01 -07:00
7785d073f0
chore: Fix typo
2023-08-01 12:27:50 -07:00
09386a3697
Fix issue with lora in some cases when combined with model merging.
2023-07-21 21:27:27 -04:00
6e9f28401f
Persist node instances between executions instead of deleting them.
...
If the same node id with the same class exists between two executions the
same instance will be used.
This means you can now cache things in nodes for more efficiency.
2023-06-29 23:38:56 -04:00
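The caching opportunity described above can be sketched as follows: because the same instance is reused when node id and class match between executions, instance attributes persist, so expensive work can be memoized on `self`. The class and counter are illustrative, not a real node.

```python
class CachedLoaderNode:
    def __init__(self):
        self._cache = {}     # survives between executions along with the instance
        self.load_count = 0  # illustrative counter, not part of any real node

    def load(self, path):
        # Only pay the loading cost the first time a given path is seen.
        if path not in self._cache:
            self.load_count += 1
            self._cache[path] = f"model:{path}"
        return self._cache[path]
```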
d52ed407a7
Send websocket message only when prompt is actually done executing.
2023-06-13 13:38:43 -04:00
af91df85c2
Add a /history/{prompt_id} endpoint.
2023-06-12 14:34:30 -04:00
ad81fd682a
Fix issue with cancelling prompt.
2023-05-28 00:32:26 -04:00
03f2d0a764
Rename exception message field
2023-05-27 21:06:07 -05:00
52c9590b7b
Exception message
2023-05-27 21:06:07 -05:00
62bdd9d26a
Catch typecast errors
2023-05-27 21:06:07 -05:00
a9e7e23724
Fix
2023-05-27 21:06:07 -05:00
e2d080b694
Return null for value format
2023-05-27 21:06:07 -05:00
6b2a8a3845
Show message in the frontend if prompt execution raises an exception
2023-05-27 21:06:07 -05:00
ffec815257
Send back more information about exceptions that happen during execution
2023-05-27 21:06:07 -05:00
0d834e3a2b
Add missing input name/config
2023-05-27 21:06:07 -05:00
c33b7c5549
Improve invalid prompt error message
2023-05-27 21:06:07 -05:00
73e85fb3f4
Improve error output for failed nodes
2023-05-27 21:06:07 -05:00
48fcc5b777
Parsing error crash.
2023-05-22 20:51:30 -04:00
ffc56c53c9
Add a node_errors to the /prompt error json response.
...
"node_errors" contains a dict keyed by node ids. The contents are a message
and a list of dependent outputs.
2023-05-22 13:22:38 -04:00
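A hypothetical error payload illustrating the "node_errors" shape described above: a dict keyed by node id, each entry carrying a message and a list of dependent outputs. The node ids, field names, and values shown are assumptions beyond that description.

```python
# Hypothetical /prompt error response; only the overall shape follows the entry above.
error_response = {
    "error": "invalid prompt",
    "node_errors": {
        "5": {"message": "required input is missing", "dependent_outputs": ["9"]},
    },
}

# Collect one message per failing node id.
failed = {nid: info["message"] for nid, info in error_response["node_errors"].items()}
```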
516119ad83
Print min and max values in validation error message.
2023-05-21 00:24:28 -04:00
1dd846a7ba
Fix outputs gone from history.
2023-05-15 00:27:28 -04:00
9bf67c4c5a
Print prompt execution time.
2023-05-14 01:34:25 -04:00
44f9f9baf1
Add the prompt id to some websocket messages.
2023-05-13 11:17:16 -04:00
1201d2eae5
Make nodes map over input lists ( #579 )
...
* allow nodes to map over lists
* make work with IS_CHANGED and VALIDATE_INPUTS
* give list outputs distinct socket shape
* add rebatch node
* add batch index logic
* add repeat latent batch
* deal with noise mask edge cases in latentfrombatch
2023-05-13 11:15:45 -04:00
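The map-over-lists semantics above can be sketched as element-wise application of the node function across its list inputs. Extending shorter lists by repeating their last element is an assumption about the broadcast behavior, as are the helper names.

```python
def map_node(fn, *inputs):
    # Apply fn element-wise across list inputs; shorter lists repeat their
    # last element (assumed broadcast rule, for illustration only).
    n = max(len(lst) for lst in inputs)

    def pick(lst, idx):
        return lst[idx] if idx < len(lst) else lst[-1]

    return [fn(*(pick(lst, k) for lst in inputs)) for k in range(n)]
```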
dfc74c19d9
Add the prompt_id to some websocket messages.
2023-05-11 01:22:40 -04:00
3a7c3acc72
Send websocket message with list of cached nodes right before execution.
2023-05-10 15:59:24 -04:00
602095f614
Send execution_error message on websocket on execution exception.
2023-05-10 15:49:49 -04:00
d6dee8af1d
Only validate each input once.
2023-05-10 00:29:31 -04:00