| Field | Value |
| --- | --- |
| Name | compel |
| Version | 2.0.3 |
| Summary | A prompting enhancement library for transformers-type text embedding systems. |
| upload_time | 2024-06-29 17:44:58 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.7 |
| license | None |
# Compel
A text prompt weighting and blending library for transformers-type text embedding systems, by [@damian0815](https://github.com/damian0815).
With a flexible and intuitive syntax, you can re-weight different parts of a prompt string and thus re-weight the different parts of the embedding tensor produced from the string.
Tested and developed against Hugging Face's `StableDiffusionPipeline`, but it should work with any diffusers-based system that uses a `Tokenizer` and a `Text Encoder` of some kind.
Adapted from the [InvokeAI](https://github.com/invoke-ai) prompting code (also by [@damian0815](https://github.com/damian0815)).
Note that cross-attention control `.swap()` is currently ignored by Compel, but you can use it by calling `build_conditioning_tensor_for_prompt_object()` yourself, and implementing cross-attention control in your diffusion loop.
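A rough sketch of that manual route, assuming a `compel` instance as created in the Quickstart below, and assuming `parse_prompt_string()` returns a `Conjunction` (per the 1.1.0 changelog notes) whose `prompts` attribute holds the parsed prompt objects (that attribute name is an assumption for illustration):

```python
# Hedged sketch, not the documented API: build the conditioning tensor for a
# parsed prompt object containing .swap() yourself, then implement
# cross-attention control in your own diffusion loop.
conjunction = compel.parse_prompt_string('a cat.swap("dog") playing with a ball in the forest')
conditioning = compel.build_conditioning_tensor_for_prompt_object(conjunction.prompts[0])
```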
### Installation
`pip install compel`
### Documentation
Documentation is [here](doc/).
### Demo
See [compel-demo.ipynb](compel-demo.ipynb)
<a target="_blank" href="https://colab.research.google.com/github/damian0815/compel/blob/main/compel-demo.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
### Quickstart
With Hugging Face diffusers >=0.12:
```python
from diffusers import StableDiffusionPipeline
from compel import Compel
pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)
# upweight "ball"
prompt = "a cat playing with a ball++ in the forest"
conditioning = compel.build_conditioning_tensor(prompt)
# or: conditioning = compel([prompt])
# generate image
images = pipeline(prompt_embeds=conditioning, num_inference_steps=20).images
images[0].save("image.jpg")
```
For batched input, use compel's `__call__` interface:
```python
from diffusers import StableDiffusionPipeline
from compel import Compel
pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder)
prompts = ["a cat playing with a ball++ in the forest", "a dog playing with a ball in the forest"]
prompt_embeds = compel(prompts)
images = pipeline(prompt_embeds=prompt_embeds).images
images[0].save("image0.jpg")
images[1].save("image1.jpg")
```
### Textual Inversion support
If you want access to 🤗diffusers textual inversions, instantiate a `DiffusersTextualInversionManager` and pass it to `Compel` on init:
```python
from diffusers import StableDiffusionPipeline
from compel import Compel, DiffusersTextualInversionManager

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
textual_inversion_manager = DiffusersTextualInversionManager(pipeline)
compel = Compel(tokenizer=pipeline.tokenizer, text_encoder=pipeline.text_encoder,
                textual_inversion_manager=textual_inversion_manager)
```
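A usage sketch: the `sd-concepts-library/cat-toy` repository and its `<cat-toy>` trigger token are illustrative stand-ins for whatever inversion you actually load:

```python
# Load a textual inversion embedding via diffusers, then use its trigger
# token in a Compel prompt like any other (weightable) word.
pipeline.load_textual_inversion("sd-concepts-library/cat-toy")
conditioning = compel.build_conditioning_tensor("a <cat-toy>++ on a wooden table")
image = pipeline(prompt_embeds=conditioning, num_inference_steps=20).images[0]
```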
## Memory usage/VRAM leaks
If you run into memory issues, make sure you're running compel inside `with torch.no_grad():` blocks.
If that doesn't help, you could try this advice offered by @kshieh1:
> After image generation, you should explicitly de-reference the tensor object (i.e. `prompt_embeds = None`) and call `gc.collect()`

See https://github.com/damian0815/compel/issues/24 for more details. Thanks @kshieh1!
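Putting both pieces of advice together, a minimal sketch of the pattern (assuming `compel`, `pipeline`, and `prompt` are set up as in the Quickstart; the `torch.cuda.empty_cache()` call is an extra precaution, not part of the advice above):

```python
import gc
import torch

# Build embeddings and generate without tracking gradients.
with torch.no_grad():
    conditioning = compel.build_conditioning_tensor(prompt)
    images = pipeline(prompt_embeds=conditioning, num_inference_steps=20).images

# Explicitly de-reference the tensor and collect garbage between generations.
conditioning = None
gc.collect()
torch.cuda.empty_cache()
```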
## Changelog
#### 2.0.3 - include contributed fixes #64, #80 and fix license in pyproject.toml/pypi
#### 2.0.2 - fix for `pipeline.enable_sequential_cpu_offloading()` with SDXL models (you need to pass `device='cuda'` on compel init)
#### 2.0.1 - fix for [#45](https://github.com/damian0815/compel/issues/45) padding issue with SDXL non-truncated prompts and `.and()`
### 2.0.0 - SDXL Support
With big thanks to Patrick von Platen from Hugging Face for [the pull request](https://github.com/damian0815/compel/pull/41), Compel now supports SDXL. Use it like this:
```py
from compel import Compel, ReturnedEmbeddingsType
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", variant="fp16", use_safetensors=True, torch_dtype=torch.float16).to("cuda")
compel = Compel(tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2],
                text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2],
                returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED,
                requires_pooled=[False, True])
# upweight "ball"
prompt = "a cat playing with a ball++ in the forest"
conditioning, pooled = compel(prompt)
# generate image
image = pipeline(prompt_embeds=conditioning, pooled_prompt_embeds=pooled, num_inference_steps=30).images[0]
```
Please note that this is a **breaking change** if you've been using clip skip: the old boolean arg `use_penultimate_clip_layer` has been replaced with an enum `ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NORMALIZED`.
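For example, a single-encoder (SD1.x-style) clip-skip setup now looks something like this sketch:

```python
from compel import Compel, ReturnedEmbeddingsType

# The enum replaces the old use_penultimate_clip_layer=True flag.
compel = Compel(tokenizer=pipeline.tokenizer,
                text_encoder=pipeline.text_encoder,
                returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NORMALIZED)
```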
#### 1.2.1 - actually apply `.and()` weights
### 1.2.0 - Concatenate embeddings using `.and()`
For Stable Diffusion 2.1 I've been experimenting with a new feature: concatenated embeddings. What I noticed is that for more complex prompts, image generation quality becomes wildly better when the prompt is broken into multiple parts and each part is fed to OpenCLIP separately.
TL;DR: you can now experiment with breaking up your prompts into segments, which for SD2.1 appears to improve the generated image. The syntax is `("prompt part 1", "prompt part 2").and()`. You can have more than two parts, and you can also weight them, e.g. `("a man eating an apple", "sitting on the roof of a car", "high quality, trending on artstation, 8K UHD").and(1, 0.5, 0.5)`, which assigns weight `1` to `a man eating an apple` and `0.5` to each of `sitting on the roof of a car` and `high quality, trending on artstation, 8K UHD`.
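An `.and()` prompt goes through Compel like any other prompt string; a sketch reusing the weighted example above, assuming `compel` and `pipeline` are set up as in the Quickstart:

```python
prompt = ('("a man eating an apple", "sitting on the roof of a car", '
          '"high quality, trending on artstation, 8K UHD").and(1, 0.5, 0.5)')
conditioning = compel.build_conditioning_tensor(prompt)
image = pipeline(prompt_embeds=conditioning).images[0]
```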
Here's a nonsense example from the InvokeAI discord #garbage-bin channel, created by gogurt enjoyer's incredible [nightmare prompt generator](https://huggingface.co/cactusfriend/nightmare-invokeai-prompts):
```
a moist sloppy pindlesackboy sloppy hamblin' bogomadong, Clem Fandango is pissed-off, Wario's Woods in background, making a noise like ga-woink-a
```
Plugging this straight into SD2.1 we get this, which is really not a good image:
![](images/000075.6dfd7adf.466129594.png)
However, if the prompt is broken up into chunks, fed into OpenCLIP as four separate prompts, and the results concatenated:
```
a moist sloppy pindlesackboy sloppy hamblin' bogomadong
Clem Fandango is pissed-off
Wario's Woods in background
making a noise like ga-woink-a
```
then the output image with the same seed is *so much* better:
![](images/000076.68b1c320.466129594.png)
In the new `.and()` syntax you would prompt this as follows:
```
("a moist sloppy pindlesackboy sloppy hamblin' bogomadong", "Clem Fandango is pissed-off", "Wario's Woods in background", "making a noise like ga-woink-a").and()
```
The effect can be more or less subtle. Here for example is
```
A dream of a distant galaxy, by Caspar David Friedrich, matte painting, trending on artstation, HQ
```
![](images/000129.1b33b559.2793529321.png)
And the same split into two parts:
```
A dream of a distant galaxy, by Caspar David Friedrich, matte painting
trending on artstation, HQ
```
![](images/000128.b5d5cd62.2793529321.png)
The Compel prompt for this is:
```
("A dream of a distant galaxy, by Caspar David Friedrich, matte painting", "trending on artstation, HQ").and()
```
#### 1.1.6 - misc small fixes
- add `DiffusersTextualInversionManager` (thanks @pdoane)
- fix batch embedding generation with truncated/non-truncated prompt lengths (#18, thanks @abassino)
- add note about memory leakage (ref #24, thanks @kshieh1)
- fix incorrect parsing when commas are not followed by whitespace (#34, thanks @moono)
#### 1.1.5 - fix for compel turning numbers into floats for text inside parentheses
#### 1.1.4 - fixes for #23 (sequential offload) and InvokeAI issue #3442 (allow hyphens in LoRA names)
#### 1.1.3 - enable fetching the penultimate CLIP hidden layer (aka "clip skip")
To use, pass `use_penultimate_clip_layer=True` when initializing your `Compel` instance. Note that there's no need to pass this flag for SD2.0/SD2.1 because diffusers already throws away the last hidden layer when loading the SD2.0+ text encoder.
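A minimal sketch of the 1.1.3-era setup (this boolean flag was later replaced by the `ReturnedEmbeddingsType` enum in 2.0.0, as noted above):

```python
# Pre-2.0.0 clip skip: return the penultimate CLIP hidden layer.
compel = Compel(tokenizer=pipeline.tokenizer,
                text_encoder=pipeline.text_encoder,
                use_penultimate_clip_layer=True)
```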
#### 1.1.2 - fix for #21 (crash when parsing long prompts with truncation enabled if there are weighted fragments beyond the truncation boundary)
#### 1.1.1 - fix for #22 (issues parsing `.` characters inside parentheses)
#### 1.1.0 - support for parsing `withLora`/`useLora` in `parse_prompt_string()`
* `Compel.parse_prompt_string()` now returns a `Conjunction`
* any appearances of `withLora(name[, weight])` or `useLora(name[, weight])` anywhere in the prompt string will be parsed to `LoraWeight` instances, and returned on the outermost `Conjunction` returned by `parse_prompt_string()`.
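A sketch of inspecting the parse result; the `lora_weights`, `model`, and `weight` attribute names here are assumptions for illustration, not confirmed API:

```python
# Hypothetical attribute names: the changelog only guarantees that a
# Conjunction is returned and that it carries LoraWeight instances.
conjunction = compel.parse_prompt_string("a portrait withLora(style_lora, 0.8)")
for lora in conjunction.lora_weights:
    print(lora.model, lora.weight)
```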
#### 1.0.5 - fix incorrect parsing when passing invalid (auto1111) syntax that has a float
also fix test case for default swap parameters
#### 1.0.4 - fix embeddings for empty swap target (e.g. `cat.swap("")`) when truncation is disabled
#### 1.0.3 - better defaults for .swap (https://github.com/damian0815/compel/issues/8)
#### 1.0.2 - fix padding for non-truncated batched embeddings (https://github.com/damian0815/compel/issues/9)
#### 1.0.1 - fix for InvokeAI's `--free_gpu_mem` option
### 1.0.0 - new downweighting algorithm
Downweighting now works by applying an attention mask to remove the downweighted tokens, rather than literally removing them from the sequence. This behaviour is the default, but the old behaviour can be re-enabled by passing `downweight_mode=DownweightMode.REMOVE` on init of the `Compel` instance.
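Restoring the old behaviour is a one-liner on init; a minimal sketch, assuming `DownweightMode` is importable from the compel package alongside `Compel`:

```python
from compel import Compel, DownweightMode

# Re-enable the pre-1.0.0 behaviour: remove downweighted tokens from the
# sequence instead of masking them out with an attention mask.
compel = Compel(tokenizer=pipeline.tokenizer,
                text_encoder=pipeline.text_encoder,
                downweight_mode=DownweightMode.REMOVE)
```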
Formerly, downweighting a token worked by both multiplying the weighting of the token's embedding and doing an inverse-weighted blend with a copy of the token sequence that had the downweighted tokens removed. The intuition is that as the weight approaches zero, the downweighted tokens should actually be removed from the sequence. However, removing the tokens shifted the positions of all downstream tokens, so the blend ended up blending a lot more than just the tokens in question.
As of v1.0.0, taking advice from @keturn and @bonlime (https://github.com/damian0815/compel/issues/7), the default procedure is different. Downweighting still involves a blend, but what is blended is a version of the token sequence with the downweighted tokens masked out rather than removed. This correctly preserves the position embeddings of the other tokens.
Also a bugfix: fix black images on weight 0 (https://github.com/invoke-ai/InvokeAI/issues/2832)
### 0.1.10 - add support for prompts longer than the model's max token length.
To enable, initialize `Compel` with `truncate_long_prompts=False` (default is True). Prompts that are longer than the model's `max_token_length` will be chunked and padded out to an integer multiple of `max_token_length`.
Note that even if you don't use a negative prompt, you'll need to build a conditioning tensor for a negative prompt of at least `""` and use `compel.pad_conditioning_tensors_to_same_length()`, otherwise you'll get an error about mismatched conditioning tensor lengths:
```python
compel = Compel(..., truncate_long_prompts=False)
prompt = "a cat playing with a ball++ in the forest, amazing, exquisite, stunning, masterpiece, skilled, powerful, incredible, amazing, trending on gregstation, greg, greggy, greggs greggson, greggy mcgregface, ..." # very long prompt
conditioning = compel.build_conditioning_tensor(prompt)
negative_prompt = "" # it's necessary to create an empty prompt - it can also be very long, if you want
negative_conditioning = compel.build_conditioning_tensor(negative_prompt)
[conditioning, negative_conditioning] = compel.pad_conditioning_tensors_to_same_length([conditioning, negative_conditioning])
```
#### 0.1.9 - broken
#### 0.1.8 - downgrade Python min version to 3.7
#### 0.1.7 - InvokeAI compatibility