# tipo-kgen

- **Name:** tipo-kgen
- **Version:** 0.2.0
- **Summary:** TIPO: Text to Image with text Presampling for Optimal prompting
- **Upload time:** 2025-01-22 02:39:44
- **Requires Python:** >=3.10
- **License:** Apache-2.0
- **Homepage:** https://github.com/KohakuBlueleaf/KGen
# KGen - A System for Prompt Generation to Improve Text-to-Image Performance

KGen is a project that utilizes Large Language Models (LLMs) to generate prompts for Text-to-Image (T2I) models.

The goal is to let T2I models be trained on more complex and detailed captions while remaining easy to use, avoiding the need for multi-level captions or caption dropout to achieve "better flexibility".

This project contains implementations of several prompt-generation models, frameworks, and sampling methods.

The implemented frameworks and sampling methods can also be applied in other domains, but for now this project focuses only on prompt generation.

## Usage

Installation:

```bash
pip install tipo-kgen
```

Use in code:
Read the [Example code](scripts/example.py) or [TIPO-test script](scripts/tipo-test.py) for more information.

## TGTS

TGTS: Token Group Tree Sampling for Efficient Multi-Variants Sampling in Language Model

Implementation can be found in [sampling module](src/kgen/sampling/). <br>
Experiment scripts can be found in [scripts](scripts/exp/tgts/).
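The sketch below is a hypothetical toy of the general idea behind tree-based multi-variant sampling, not the repository's actual TGTS implementation: variants that share a prefix store that prefix in a tree, so the model's next-token distribution is computed only once per unique prefix. The node layout and the toy model here are invented for illustration.

```python
import random

# Hypothetical toy next-token model: given a prefix (tuple of tokens),
# return candidate tokens with probabilities. Not the real kgen model.
def toy_next_token_dist(prefix):
    return ["a", "b", "<eos>"], [0.5, 0.3, 0.2]

def sample_variants(n_variants, max_len=4, seed=0):
    """Sample n variants, sharing prefix computation via a tree of nodes.

    Each node caches the next-token distribution for its prefix, so
    variants sharing a prefix only query the model once for that prefix.
    """
    rng = random.Random(seed)
    root = {"prefix": (), "dist": None, "children": {}}
    variants = []
    for _ in range(n_variants):
        node = root
        while len(node["prefix"]) < max_len:
            if node["dist"] is None:  # query the model once per unique prefix
                node["dist"] = toy_next_token_dist(node["prefix"])
            vocab, weights = node["dist"]
            tok = rng.choices(vocab, weights)[0]
            if tok == "<eos>":
                break
            if tok not in node["children"]:
                node["children"][tok] = {"prefix": node["prefix"] + (tok,),
                                         "dist": None, "children": {}}
            node = node["children"][tok]
        variants.append(" ".join(node["prefix"]))
    return variants

print(sample_variants(3))
```

In a real LLM the cached quantity would be the transformer's KV-cache for the shared prefix, which is where the efficiency gain comes from.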

## TIPO

TIPO: Text to Image with text Presampling for Optimal prompting

Arxiv Paper: https://arxiv.org/abs/2411.08127

TIPO is an LLM-based system designed to generate detailed prompts from input tags or captions. Unlike DTG, TIPO can handle both tags and natural language. In theory, you can also compose your own tags linguistically (for example, "long blue hair" is an acceptable tag in TIPO and will not break the model).
The main differences between TIPO and DTG are:

1. TIPO is trained on both natural-language captions and Danbooru tags; the "NL + tags" data come not only from Danbooru but also from general text-image datasets such as Coyo-11M.
2. TIPO is trained with a better-designed format, which gives it some ability to "generate meta info" such as artists/characters (in other words, TIPO can "choose" which artist tag fits the current content).
3. TIPO is trained on a 30M-entry dataset, of which more than 25M entries have NL captions and more than 18M have tags.
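As a quick sanity check on the counts in point 3, inclusion-exclusion gives a lower bound on how many entries carry both an NL caption and tags (this bound is derived here, not stated by the authors):

```python
# Inclusion-exclusion on the quoted counts. The inputs are lower bounds
# (">25M", ">18M"), so the overlap figure is itself a lower bound.
total = 30_000_000       # dataset entries
with_nl = 25_000_000     # entries with an NL caption
with_tags = 18_000_000   # entries with tags

both_at_least = with_nl + with_tags - total
print(both_at_least)  # 13000000
```

So at least ~13M entries provide both modalities, which is consistent with the claim that TIPO learns the tag and NL formats jointly.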

### Model card

|                   | TIPO-200M                                                                      | TIPO-200M-ft                                                                         | TIPO-500M                                                                      |
| ----------------- | ------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------ |
| Arch              | LLaMA                                                                          | LLaMA                                                                                | LLaMA                                                                          |
| Max ctx length    | 1024                                                                           | 1024                                                                                 | 1024                                                                           |
| Batch Size        | 2048                                                                           | 2048                                                                                 | 3584                                                                           |
| Training dataset  | Danbooru, GBC10M, 5epoch<br />Danbooru, GBC10M, Coyo11M, 3epoch              | Danbooru(pixtral), Coyo11M, 2epoch                                                   | Danbooru, GBC10M, Coyo11M, 5epoch                                            |
| Real Token Seen*  | 40B token                                                                      | 50B (10B more from TIPO-200M)                                                       | 30B token                                                                      |
| Training Hardware | RTX 3090 x 4                                                                   | RTX 3090 x 4                                                                         | H100 x 8                                                                       |
| Training Time     | 420 hours†                                                                     | 120 hours†                                                                           | 100 hours†                                                                     |
| Huggingface       | [KBlueLeaf/TIPO-200M · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-200M) | [KBlueLeaf/TIPO-200M-ft · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-200M-ft) | [KBlueLeaf/TIPO-500M · Hugging Face](https://huggingface.co/KBlueLeaf/TIPO-500M) |

\*: We only count non-padding tokens in "tokens seen", since the training data span a very wide length range.<br/>
†: Because the training samples are fairly short, reaching the same number of tokens seen takes longer than in typical LLM pretraining.<br/>
For reference, with a max ctx length of 4096 and almost all samples reaching that length, a 200M model would need only about 2 days to see 10B tokens on RTX 3090 x 4.
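To make the reference figure concrete (10B tokens in roughly 2 days on 4x RTX 3090 with a 200M model), the implied throughput works out as follows; the arithmetic below is a back-of-envelope derivation, not a number reported by the authors:

```python
# Throughput implied by the reference scenario above.
tokens = 10e9                 # tokens seen
seconds = 2 * 24 * 3600       # ~2 days of wall-clock time
aggregate = tokens / seconds  # tokens/sec across all 4 GPUs
per_gpu = aggregate / 4
print(f"~{aggregate:,.0f} tok/s total, ~{per_gpu:,.0f} tok/s per GPU")
```

With the real data's short, padded sequences, the effective (non-padding) rate is much lower, which is why the quoted training times are longer than this figure suggests.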

### Usage

A simple demo for TIPO (with t2i functionality included):
https://huggingface.co/spaces/KBlueLeaf/TIPO-DEMO

TIPO-extension: https://github.com/KohakuBlueleaf/z-tipo-extension

#### Cite

```bibtex
@misc{yeh2024tipotextimagetext,
      title={TIPO: Text to Image with Text Presampling for Prompt Optimization}, 
      author={Shih-Ying Yeh and Sang-Hyun Park and Giyeong Oh and Min Song and Youngjae Yu},
      year={2024},
      eprint={2411.08127},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.08127}, 
}
```

## DanTagGen

DanTagGen is an early project under KGen, trained on the Danbooru tag system. Danbooru tags often have "overlaps" or "duplicates", such as:

- "long hair" and "very long hair"
- "thighhighs", "black thighhighs", and "black legwears"

Although users naturally avoid such duplication, the model may benefit from receiving the complete set of tags, since that matches the data it was trained on.
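One way to picture this is expanding a user's tags into the "complete" set the model saw during training. The implication map and helper below are invented for illustration; they are not part of kgen's API:

```python
# Hypothetical tag-implication map (invented for this sketch): each key
# implies the more general tags a Danbooru post would also carry.
IMPLIES = {
    "very long hair": ["long hair"],
    "black thighhighs": ["thighhighs", "black legwear"],
}

def expand_tags(tags):
    """Return tags plus all implied (more general) tags, deduplicated."""
    out = list(tags)
    seen = set(tags)
    queue = list(tags)
    while queue:
        for implied in IMPLIES.get(queue.pop(), []):
            if implied not in seen:
                seen.add(implied)
                out.append(implied)
                queue.append(implied)
    return out

print(expand_tags(["very long hair", "black thighhighs"]))
```

A learned model like DanTagGen plays this role without a hand-written map, and can also add plausible new tags rather than only implied ones.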

In addition to overlapping tags, "character tags" also need to be mentioned. For simplicity, a "DreamBooth style" prompt can be used to illustrate this:

- Original: a dog
- DreamBooth: a \[V\] dog
- What the user wants: \[V\]

As shown above, users tend to ignore all the "descriptions" that directly point to the character. In this situation, utilizing LLMs to "connect" the "trigger word" and "related description" is a promising approach. DanTagGen is an experimental project to prove this concept.

### Architecture

DanTagGen uses the LLaMA architecture with 400M parameters.

### Training

DanTagGen is trained on Danbooru posts within the top 75% by favorite count, which amounts to 5 million entries.

More details about the architecture and training can be found on the Hugging Face page: [KBlueLeaf/DanTagGen-beta · Hugging Face](https://huggingface.co/KBlueLeaf/DanTagGen-beta)

### Usage

* Hugging Face Space: [DTG Demo - a Hugging Face Space by KBlueLeaf](https://huggingface.co/spaces/KBlueLeaf/DTG-demo)
* SD-WebUI Extension: [KohakuBlueleaf/z-a1111-sd-webui-dtg: A sd-webui extension for utilizing DanTagGen to "upsample prompts"](https://github.com/KohakuBlueleaf/z-a1111-sd-webui-dtg)
* ComfyUI Node: [toyxyz/a1111-sd-webui-dtg_comfyui: A sd-webui extension for utilizing DanTagGen to "upsample prompts"](https://github.com/toyxyz/a1111-sd-webui-dtg_comfyui)

            
