cheesechaser

- Name: cheesechaser
- Version: 0.1.5
- Home page: https://github.com/deepghs/cheesechaser
- Summary: Swiftly get tons of images from indexed tars on Huggingface.
- Upload time: 2024-11-02 11:53:56
- Author: narugo1992
- Requires Python: >=3.8
- License: Apache License, Version 2.0
- Keywords: utilities of images
- Requirements: hfutils (>=0.4.3), hbutils (>=0.9.0), huggingface_hub (>=0.22), tqdm, requests, click (>=7), pillow, httpx, random_user_agent, pandas, pyrate_limiter, pyarrow
# cheesechaser

[![PyPI](https://img.shields.io/pypi/v/cheesechaser)](https://pypi.org/project/cheesechaser/)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/cheesechaser)
![Loc](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/narugo1992/eedf334ff9d7ff02e7ec9535e43a1faa/raw/loc.json)
![Comments](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/narugo1992/eedf334ff9d7ff02e7ec9535e43a1faa/raw/comments.json)

[![Code Test](https://github.com/deepghs/cheesechaser/workflows/Code%20Test/badge.svg)](https://github.com/deepghs/cheesechaser/actions?query=workflow%3A%22Code+Test%22)
[![Package Release](https://github.com/deepghs/cheesechaser/workflows/Package%20Release/badge.svg)](https://github.com/deepghs/cheesechaser/actions?query=workflow%3A%22Package+Release%22)
[![codecov](https://codecov.io/gh/deepghs/cheesechaser/branch/main/graph/badge.svg?token=XJVDP4EFAT)](https://codecov.io/gh/deepghs/cheesechaser)

[![Discord](https://img.shields.io/discord/1157587327879745558?style=social&logo=discord&link=https%3A%2F%2Fdiscord.gg%2FTwdHJ42N72)](https://discord.gg/TwdHJ42N72)
![GitHub Org's stars](https://img.shields.io/github/stars/deepghs)
[![GitHub stars](https://img.shields.io/github/stars/deepghs/cheesechaser)](https://github.com/deepghs/cheesechaser/stargazers)
[![GitHub forks](https://img.shields.io/github/forks/deepghs/cheesechaser)](https://github.com/deepghs/cheesechaser/network)
![GitHub commit activity](https://img.shields.io/github/commit-activity/m/deepghs/cheesechaser)
[![GitHub issues](https://img.shields.io/github/issues/deepghs/cheesechaser)](https://github.com/deepghs/cheesechaser/issues)
[![GitHub pulls](https://img.shields.io/github/issues-pr/deepghs/cheesechaser)](https://github.com/deepghs/cheesechaser/pulls)
[![Contributors](https://img.shields.io/github/contributors/deepghs/cheesechaser)](https://github.com/deepghs/cheesechaser/graphs/contributors)
[![GitHub license](https://img.shields.io/github/license/deepghs/cheesechaser)](https://github.com/deepghs/cheesechaser/blob/master/LICENSE)

Swiftly get tons of images from indexed tars on Huggingface

## Installation

```shell
pip install cheesechaser
```

## How this library works

This library is built on top of mirror datasets hosted on Huggingface.

For the Gelbooru mirror dataset repository, such
as [deepghs/gelbooru_full](https://huggingface.co/datasets/deepghs/gelbooru_full), each data packet includes a tar
archive file and a corresponding JSON index file. The JSON index file contains detailed information about the files
within the tar archive, including file size, offset, and file fingerprint.

The files in this dataset repository are organized according to a fixed pattern based on their IDs. For example, a file
with the ID 114514 will have a modulus result of 4514 when divided by 10000. Consequently, it is stored
in `images/4/0514.tar`.
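One consistent reading of the example above is that the 4-digit bucket (`id % 10000`) is split into a one-digit subdirectory and a zero-padded tar name. A minimal sketch of that mapping (the helper name is ours, not part of the library, and the exact layout may differ between mirror repositories):

```python
def tar_path_for_id(file_id: int) -> str:
    """Compute the tar archive path for a file ID, assuming the
    bucket-splitting layout described above (illustrative only)."""
    bucket = file_id % 10000   # 114514 -> 4514
    subdir = bucket // 1000    # -> 4
    name = bucket % 1000       # -> 514, zero-padded back to four digits
    return f"images/{subdir}/{name:04d}.tar"

print(tar_path_for_id(114514))  # images/4/0514.tar
```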

Utilizing the quick download feature
from [hfutils.index](https://deepghs.github.io/hfutils/main/api_doc/index/index.html), users can instantly access
individual files. Since the download service is provided through Huggingface's LFS service and not the original website
or an image CDN, there is no risk of IP or account blocking. **The only limitations to your download speed are your
network bandwidth and disk read/write speeds.**

This efficient system ensures seamless and reliable access to the dataset without any restrictions.

## Batch Download Images

* Danbooru

```python
from cheesechaser.datapool import DanbooruNewestDataPool

pool = DanbooruNewestDataPool()

# download danbooru #2010000-2010300, to directory /data/exp2
pool.batch_download_to_directory(
    resource_ids=range(2010000, 2010300),
    dst_dir='/data/exp2',
    max_workers=12,
)
```

* Danbooru With Tags Query

```python
from cheesechaser.datapool import DanbooruNewestDataPool
from cheesechaser.query import DanbooruIdQuery

pool = DanbooruNewestDataPool()
my_waifu_ids = DanbooruIdQuery(['surtr_(arknights)', 'solo'])

# download danbooru images with surtr+solo, to directory /data/exp2_surtr
pool.batch_download_to_directory(
    resource_ids=my_waifu_ids,
    dst_dir='/data/exp2_surtr',
    max_workers=12,
)
```

* Konachan (gated dataset; you must be granted access first and set the `HF_TOKEN` environment variable)

```python
from cheesechaser.datapool import KonachanDataPool

pool = KonachanDataPool()

# download konachan #210000-210300, to directory /data/exp2
pool.batch_download_to_directory(
    resource_ids=range(210000, 210300),
    dst_dir='/data/exp2',
    max_workers=12,
)
```
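For gated datasets, the token can also be set from Python before constructing the pool. `HF_TOKEN` is the standard Hugging Face environment variable; the value below is a placeholder, not a real token:

```python
import os

# Request access on the dataset's Hugging Face page first, then expose a
# token with read permission before creating the datapool.
os.environ["HF_TOKEN"] = "hf_your_token_here"  # placeholder; use your own
```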

* Civitai (this mirror repository on Huggingface is currently private; you need the HF token of an authorized account)

```python
from cheesechaser.datapool import CivitaiDataPool

pool = CivitaiDataPool()

# download civitai #7810000-7810300, to directory /data/exp2
# should contain one image and one json metadata file
pool.batch_download_to_directory(
    resource_ids=range(7810000, 7810300),
    dst_dir='/data/exp2',
    max_workers=12,
)
```

More supported:

* `RealbooruDataPool` (Gated Dataset)
* `ThreedbooruDataPool` (Gated Dataset)
* `FancapsDataPool` (Gated Dataset)
* `BangumiBaseDataPool` (Gated Dataset)
* `AnimePicturesDataPool` (Gated Dataset)
* `KonachanDataPool` (Gated Dataset)
* `YandeDataPool` (Gated Dataset)
* `ZerochanDataPool` (Gated Dataset)
* `GelbooruDataPool` and `GelbooruWebpDataPool` (Gated Dataset)
* `DanbooruNewestDataPool` and `DanbooruNewestWebpDataPool`

## Batch Retrieving Images

```python
from itertools import islice

from cheesechaser.datapool import DanbooruNewestDataPool
from cheesechaser.pipe import SimpleImagePipe, PipeItem

pool = DanbooruNewestDataPool()
pipe = SimpleImagePipe(pool)

# select from danbooru 7349990-7359990
ids = range(7349990, 7359990)
with pipe.batch_retrieve(ids) as session:
    # only need 20 images
    for i, item in enumerate(islice(session, 20)):
        item: PipeItem
        print(i, item)

```

            
