5f2117528a
Force min length 1 when tokenizing for text generation. ( #12538 )
2026-02-19 22:57:44 -05:00
0301ccf745
Small cleanup and try to get qwen 3 to work with the text gen. ( #12537 )
2026-02-19 22:42:28 -05:00
6d11cc7354
feat: Add basic text generation support with native models, initially supporting Gemma3 ( #12392 )
2026-02-18 20:49:43 -05:00
831351a29e
Support generating attention masks for left padded text encoders. ( #12454 )
2026-02-13 20:15:23 -05:00
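The commit above adds attention masks for left-padded text encoders. A minimal sketch of the idea, assuming a plain list-of-lists token layout and a known pad token id (the helper name and signature are hypothetical, not the repo's API):

```python
def left_pad_attention_mask(token_ids, pad_token_id):
    """Return a 0/1 mask that ignores the leading pad tokens of each sequence."""
    masks = []
    for seq in token_ids:
        # Find the first non-pad position; everything before it is left padding.
        content_start = 0
        while content_start < len(seq) and seq[content_start] == pad_token_id:
            content_start += 1
        masks.append([0] * content_start + [1] * (len(seq) - content_start))
    return masks
```

With left padding the real tokens sit at the end of the sequence, so the mask zeroes the leading positions rather than the trailing ones.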
3c1a1a2df8
Basic support for the ace step 1.5 model. ( #12237 )
2026-02-03 00:06:18 -05:00
f8acd9c402
Reduce RAM usage, fix VRAM OOMs, and fix Windows shared memory spilling with adaptive model loading ( #11845 )
2026-02-01 01:01:11 -05:00
2129e7d278
Fix mistral 3 tokenizer code failing on latest transformers version and other breakage. ( #12095 )
...
* Fix mistral 3 tokenizer code failing on latest transformers version.
* Add requests to the requirements
2026-01-26 11:39:00 -05:00
3ab9748903
Disable prompt weights on newbie te. ( #11434 )
2025-12-20 00:19:47 -05:00
43071e3de3
Make old scaled fp8 format use the new mixed quant ops system. ( #11000 )
2025-12-05 14:35:42 -05:00
ea17add3c6
Fix case where text encoders were running on the CPU instead of GPU. ( #11095 )
2025-12-03 23:15:15 -05:00
d196a905bb
Lower vram usage for flux 2 text encoder. ( #10887 )
2025-11-25 14:58:39 -05:00
25022e0b09
Cleanup and fix issues with text encoder quants. ( #10872 )
2025-11-25 01:48:53 -05:00
bd01d9f7fd
Add left padding support to tokenizers. ( #10753 )
2025-11-15 06:54:40 -05:00
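Left padding, added in the commit above, pads sequences at the start instead of the end. A minimal sketch under the assumption that sequences are plain lists of ids (the function is illustrative, not the repo's tokenizer code):

```python
def pad_tokens(seqs, pad_token_id, padding_side="left"):
    """Pad all sequences to the longest length, on the left or the right."""
    max_len = max(len(s) for s in seqs)
    out = []
    for s in seqs:
        pad = [pad_token_id] * (max_len - len(s))
        out.append(pad + s if padding_side == "left" else s + pad)
    return out
```

Left padding is the usual choice for decoder-style models, since it keeps the most recent tokens aligned at the end of every sequence in a batch.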
5a8f502db5
Disable prompt weights for qwen. ( #9438 )
2025-08-20 01:08:11 -04:00
4977f203fa
P2 of qwen edit model. ( #9412 )
...
* P2 of qwen edit model.
* Typo.
* Fix normal qwen.
* Fix.
* Make the TextEncodeQwenImageEdit also set the ref latent.
If you don't want it to set the ref latent and want to use the
ReferenceLatent node with your custom latent instead, just disconnect the
VAE.
2025-08-18 22:38:34 -04:00
ec70ed6aea
Omnigen2 model implementation. ( #8669 )
2025-06-25 19:35:57 -04:00
e1c6dc720e
Allow setting min_length with tokenizer_data. ( #8547 )
2025-06-16 13:43:52 -04:00
23e39f2ba7
Add a T5TokenizerOptions node to set options for the T5 tokenizer. ( #7803 )
2025-04-25 19:36:00 -04:00
3e8155f7a3
More flexible long clip support.
...
Add clip g long clip support.
Text encoder refactor.
Support llama models with different vocab sizes.
2025-04-15 10:32:21 -04:00
dfa36e6855
Fix some things breaking when embeddings fail to apply.
2025-03-06 13:31:55 -05:00
0bef826a98
Support llava clip vision model.
2025-03-06 00:24:43 -05:00
85ef295069
Make applying embeddings more efficient.
...
Adding new tokens no longer makes a whole copy of the embeddings weight
which can be massive on certain models.
2025-03-05 17:34:38 -05:00
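The commit above avoids copying the whole embedding weight when new tokens are added. One way to get that effect, sketched with a hypothetical helper (the real code path differs; the names here are assumptions): look rows up per token and substitute the extra vectors only where they are referenced, instead of concatenating them onto a possibly massive weight matrix.

```python
import torch

def embed_with_extra_tokens(weight, ids, extra):
    """weight: (vocab, dim) tensor; extra: dict of token id -> (dim,) tensor.

    Embeds ids without ever resizing or copying the base weight matrix.
    """
    rows = [extra[i] if i in extra else weight[i] for i in ids]
    return torch.stack(rows)
```

Only the rows actually used are materialized, so adding an embedding costs memory proportional to the prompt, not to the vocabulary.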
65042f7d39
Make it easier to set a custom template for hunyuan video.
2025-03-04 09:26:05 -05:00
e5ea112a90
Support Lumina 2 model.
2025-02-04 04:16:30 -05:00
cba58fff0b
Remove unsafe embedding load for very old pytorch.
2025-01-15 04:32:23 -05:00
d7969cb070
Replace print with logging ( #6138 )
...
* Replace print with logging
* nit
2024-12-20 16:24:55 -05:00
bddb02660c
Add PixArt model support ( #6055 )
...
* PixArt initial version
* PixArt Diffusers convert logic
* pos_emb and interpolation logic
* Reduce duplicate code
* Formatting
* Use optimized attention
* Edit empty token logic
* Basic PixArt LoRA support
* Fix aspect ratio logic
* PixArtAlpha text encode with conds
* Use same detection key logic for PixArt diffusers
2024-12-20 15:25:00 -05:00
ca457f7ba1
Properly tokenize the template for hunyuan video.
2024-12-17 16:22:02 -05:00
bda1482a27
Basic Hunyuan Video model support.
2024-12-16 19:35:40 -05:00
d9d7f3c619
Lint all unused variables ( #5989 )
...
* Enable F841
* Autofix
* Remove all unused variable assignment
2024-12-12 17:59:16 -05:00
44db978531
Fix a few things in the text encoder code for models with no eos token.
2024-12-10 23:07:26 -05:00
1c8d11e48a
Support different types of tokenizers.
...
Support tokenizers without an eos token.
Pass full sentences to tokenizer for more efficient tokenizing.
2024-12-10 15:03:39 -05:00
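The commit above supports tokenizers that have no eos token. A minimal sketch of that shape, with illustrative names and defaults (the min_length handling mirrors the later "min_length" commits but is an assumption here):

```python
def finalize_tokens(ids, eos_token_id=None, pad_token_id=0, min_length=1):
    """Append eos only if the tokenizer defines one, then enforce a minimum length."""
    out = list(ids)
    if eos_token_id is not None:
        out.append(eos_token_id)
    while len(out) < min_length:
        out.append(pad_token_id)
    return out
```

Guarding the eos append behind a None check is what lets the same code path serve both classic CLIP-style tokenizers and eos-less ones.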
83ca891118
Support scaled fp8 t5xxl model.
2024-10-20 22:27:00 -04:00
0bedfb26af
Revert "Fix Transformers FutureWarning ( #5140 )"
...
This reverts commit 95b7cf9bbe.
2024-10-16 12:36:19 -04:00
95b7cf9bbe
Fix Transformers FutureWarning ( #5140 )
...
* Update sd1_clip.py
Fix Transformers FutureWarning
* Update sd1_clip.py
Fix comment
2024-10-14 20:12:20 -04:00
7ae6626723
Remove useless argument.
2024-10-12 07:16:21 -04:00
e813abbb2c
Long CLIP L support for SDXL, SD3 and Flux.
...
Use the *CLIPLoader nodes.
2024-09-15 07:59:38 -04:00
83dbac28eb
Properly set whether the clip text model uses a pooled projection instead of using a hack.
2024-08-20 10:46:36 -04:00
fca42836f2
Add model_options for text encoder.
2024-08-17 11:17:20 -04:00
e1c528196e
Fix bundled embed.
2024-08-07 13:30:45 -04:00
1c08bf35b4
Support format for embeddings bundled in loras.
2024-08-07 03:45:25 -04:00
2c038ccef0
Lower CLIP memory usage by a bit.
2024-07-31 01:32:35 -04:00
82cae45d44
Fix potential issue with non clip text embeddings.
2024-07-30 14:41:13 -04:00
f87810cd3e
Let tokenizers return weights to be stored in the saved checkpoint.
2024-07-25 10:52:09 -04:00
10c919f4c7
Make it possible to load tokenizer data from checkpoints.
2024-07-24 16:43:53 -04:00
391c1046cf
More flexibility with text encoder return values.
...
Text encoders can now return other values to the CONDITIONING than the cond
and pooled output.
2024-07-10 20:06:50 -04:00
e44fa5667f
Support returning text encoder attention masks.
2024-07-10 19:31:22 -04:00
bb663bcd6c
Rename clip_t5base to t5base for stable audio text encoder.
2024-07-08 08:53:55 -04:00
80c4590998
Allow specifying the padding token for the tokenizer.
2024-07-06 00:06:49 -04:00
ce649d61c0
Allow zeroing out of embeds with unused attention mask.
2024-07-05 23:48:17 -04:00
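The last commit allows zeroing out embeddings at positions the attention mask marks unused. A minimal sketch, assuming torch tensors and a 0/1 mask (the helper name is hypothetical):

```python
import torch

def zero_masked_embeds(embeds, attention_mask):
    """Zero out embedding vectors at positions the mask marks as unused.

    embeds: (batch, seq, dim); attention_mask: (batch, seq) of 0/1.
    """
    return embeds * attention_mask.unsqueeze(-1).to(embeds.dtype)
```

Broadcasting the mask over the embedding dimension wipes padded positions to zero vectors, so downstream code that ignores the mask still sees no signal from them.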