# Cross Attention Map Visualization
[Hugging Face Space demo](https://huggingface.co/spaces/We-Want-GPU/diffusers-cross-attention-map-SDXL-t2i)
Thanks to the Hugging Face [Diffusers](https://github.com/huggingface/diffusers) team for the GPU sponsorship!

This repository extracts and visualizes cross-attention maps, based on the latest [Diffusers](https://github.com/huggingface/diffusers) code (`v0.32.0`).

For error reports or feature requests, feel free to raise an issue.
## Update Log.
[2024-12-22] Now compatible with _"Stable Diffusion 3.5"_, _"Flux-dev"_, and _"Flux-schnell"_! ("Sana" will be the focus of the next update.)

[2024-12-17] Refactored the package and added `setup.py`.

[2024-11-12] _"Stable Diffusion 3"_ is now compatible and supports _batch operations_! (Flux and "Stable Diffusion 3.5" are not compatible yet.)

[2024-07-04] Added features for _saving attention maps based on timesteps and layers_.
## Compatible models.
<!-- Compatible with various models, including both UNet/DiT based models listed below. -->
Compatible with the UNet- and DiT-based models listed below.
- [black-forest-labs/FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell)
- [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
- [stabilityai/stable-diffusion-3.5-medium](https://huggingface.co/stabilityai/stable-diffusion-3.5-medium)
- [stabilityai/stable-diffusion-3-medium-diffusers](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers)
- [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
- [stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
- ...
<!-- - [sdxl-turbo](https://huggingface.co/stabilityai/sdxl-turbo) -->
<!-- - [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) -->
## Example.
<div style="text-align: center;">
<img src="./assets/sd3.png" alt="Image 1" width="400" height="400">
<img src="./assets/4--bara>.png" alt="Image 2" width="400" height="400">
</div>
<details>
<summary>cap-</summary>
<div markdown="1">
<div style="text-align: center;">
<img src="./assets/sd3.png" alt="Image 1" width="400" height="400">
<img src="./assets/2-<cap-.png" alt="<cap-" width="400" height="400">
</div>
</div>
</details>
<details>
<summary>-y-</summary>
<div markdown="1">
<div style="text-align: center;">
<img src="./assets/sd3.png" alt="Image 1" width="400" height="400">
<img src="./assets/3--y-.png" alt="-y-" width="400" height="400">
</div>
</div>
</details>
<details>
<summary>-bara</summary>
<div markdown="1">
<div style="text-align: center;">
<img src="./assets/sd3.png" alt="Image 1" width="400" height="400">
<img src="./assets/4--bara>.png" alt="-bara>" width="400" height="400">
</div>
</div>
</details>
<details>
<summary>hello</summary>
<div markdown="1">
<div style="text-align: center;">
<img src="./assets/sd3.png" alt="Image 1" width="400" height="400">
<img src="./assets/10-<hello>.png" alt="<hello>" width="400" height="400">
</div>
</div>
</details>
<details>
<summary>world</summary>
<div markdown="1">
<div style="text-align: center;">
<img src="./assets/sd3.png" alt="Image 1" width="400" height="400">
<img src="./assets/11-<world>.png" alt="<world>>" width="400" height="400">
</div>
</div>
</details>
## Demo.

Install from source:
```bash
git clone https://github.com/wooyeolBaek/attention-map-diffusers.git
cd attention-map-diffusers
pip install -e .
```
or install from PyPI:
```bash
pip install attention_map_diffusers
```
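A quick import check (a minimal sketch) confirms that the package and its Diffusers dependency are importable; the three names below are exactly the ones used in the examples that follow.

```python
# Sanity check: both the package and Diffusers should import cleanly.
import diffusers
from attention_map_diffusers import attn_maps, init_pipeline, save_attention_maps

print(diffusers.__version__)  # this repository targets Diffusers v0.32.0
```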
### Flux-dev
```python
import torch
from diffusers import FluxPipeline
from attention_map_diffusers import (
    attn_maps,
    init_pipeline,
    save_attention_maps
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
)
# pipe.enable_model_cpu_offload()  # saves some VRAM by offloading the model to the CPU; remove this if you have enough GPU memory
pipe.to('cuda')

##### 1. Replace modules and Register hook #####
pipe = init_pipeline(pipe)
################################################

# Batch operations are not recommended for Flux, as CPU memory could be exceeded.
prompts = [
    # "A photo of a puppy wearing a hat.",
    "A capybara holding a sign that reads Hello World.",
]

images = pipe(
    prompts,
    num_inference_steps=15,
    guidance_scale=4.5,
).images

for batch, image in enumerate(images):
    image.save(f'{batch}-flux-dev.png')

##### 2. Process and Save attention map #####
save_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps-flux-dev', unconditional=False)
#############################################
```
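Before writing anything to disk, the collected maps can be inspected in memory: `attn_maps` is the store that the hooks registered by `init_pipeline` fill while `pipe(...)` runs. The snippet below is only an illustrative peek (the exact key structure, e.g. timestep and layer names, is an implementation detail of the package), and it applies equally to the other pipelines below.

```python
# Illustrative only: peek at what the hooks collected during generation.
# The exact keys (e.g. timesteps / layer names) are package implementation details.
print(type(attn_maps), len(attn_maps))
for key in list(attn_maps)[:5]:
    print(key)
```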
### Flux-schnell
```python
import torch
from diffusers import FluxPipeline
from attention_map_diffusers import (
    attn_maps,
    init_pipeline,
    save_attention_maps
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16
)
# pipe.enable_model_cpu_offload()  # saves some VRAM by offloading the model to the CPU; remove this if you have enough GPU memory
pipe.to('cuda')

##### 1. Replace modules and Register hook #####
pipe = init_pipeline(pipe)
################################################

# Batch operations are not recommended for Flux, as CPU memory could be exceeded.
prompts = [
    # "A photo of a puppy wearing a hat.",
    "A capybara holding a sign that reads Hello World.",
]

images = pipe(
    prompts,
    num_inference_steps=15,
    guidance_scale=4.5,
).images

for batch, image in enumerate(images):
    image.save(f'{batch}-flux-schnell.png')

##### 2. Process and Save attention map #####
save_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps-flux-schnell', unconditional=False)
#############################################
```
### Stable Diffusion 3.5
```python
import torch
from diffusers import StableDiffusion3Pipeline
from attention_map_diffusers import (
    attn_maps,
    init_pipeline,
    save_attention_maps
)

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",
    torch_dtype=torch.bfloat16
)
pipe = pipe.to("cuda")

##### 1. Replace modules and Register hook #####
pipe = init_pipeline(pipe)
################################################

# recommend not using batch operations for sd3, as cpu memory could be exceeded.
prompts = [
    # "A photo of a puppy wearing a hat.",
    "A capybara holding a sign that reads Hello World.",
]

images = pipe(
    prompts,
    num_inference_steps=15,
    guidance_scale=4.5,
).images

for batch, image in enumerate(images):
    image.save(f'{batch}-sd3-5.png')

##### 2. Process and Save attention map #####
save_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps-sd3-5', unconditional=True)
#############################################
```
### Stable Diffusion 3.0
```python
import torch
from diffusers import StableDiffusion3Pipeline
from attention_map_diffusers import (
    attn_maps,
    init_pipeline,
    save_attention_maps
)

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.bfloat16
)
pipe = pipe.to("cuda")

##### 1. Replace modules and Register hook #####
pipe = init_pipeline(pipe)
################################################

# recommend not using batch operations for sd3, as cpu memory could be exceeded.
prompts = [
    # "A photo of a puppy wearing a hat.",
    "A capybara holding a sign that reads Hello World.",
]

images = pipe(
    prompts,
    num_inference_steps=15,
    guidance_scale=4.5,
).images

for batch, image in enumerate(images):
    image.save(f'{batch}-sd3.png')

##### 2. Process and Save attention map #####
save_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps', unconditional=True)
#############################################
```
### Stable Diffusion XL
```python
import torch
from diffusers import DiffusionPipeline
from attention_map_diffusers import (
    attn_maps,
    init_pipeline,
    save_attention_maps
)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

##### 1. Replace modules and Register hook #####
pipe = init_pipeline(pipe)
################################################

prompts = [
    "A photo of a puppy wearing a hat.",
    "A capybara holding a sign that reads Hello World.",
]

images = pipe(
    prompts,
    num_inference_steps=15,
).images

for batch, image in enumerate(images):
    image.save(f'{batch}-sdxl.png')

##### 2. Process and Save attention map #####
save_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps', unconditional=True)
#############################################
```
### Stable Diffusion 2.1
```python
import torch
from diffusers import DiffusionPipeline
from attention_map_diffusers import (
    attn_maps,
    init_pipeline,
    save_attention_maps
)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

##### 1. Replace modules and Register hook #####
pipe = init_pipeline(pipe)
################################################

prompts = [
    "A photo of a puppy wearing a hat.",
    "A capybara holding a sign that reads Hello World.",
]

images = pipe(
    prompts,
    num_inference_steps=15,
).images

for batch, image in enumerate(images):
    image.save(f'{batch}-sd2-1.png')

##### 2. Process and Save attention map #####
save_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps', unconditional=True)
#############################################
```
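Since `save_attention_maps` writes the per-token maps as image files under `base_dir` (like the PNGs shown in the Example section above), they can be overlaid on the generated image with standard tooling. The sketch below uses Pillow; the file paths are illustrative placeholders only, because the actual names and directory layout depend on the prompts, timesteps, and layers that were saved, so point them at whatever ended up under your `base_dir`.

```python
from PIL import Image

# Illustrative placeholder paths: use an image generated above and one of the
# attention-map PNGs that save_attention_maps wrote under base_dir.
generated_path = "0-sd2-1.png"
map_path = "attn_maps/example-token.png"  # replace with a real file under base_dir

generated = Image.open(generated_path).convert("RGB")
attn_map = Image.open(map_path).convert("L").resize(generated.size)

# Blend the grayscale map over the generated image as a simple heatmap-style overlay.
overlay = Image.blend(generated, Image.merge("RGB", (attn_map,) * 3), alpha=0.5)
overlay.save("overlay.png")
```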