| Name | glutamate |
| Version | 0.0.1a6 |
| Summary | Python library for querying and downloading posts from e621 |
| upload_time | 2023-12-02 18:09:25 |
| home_page | |
| maintainer | |
| docs_url | None |
| author | |
| requires_python | >=3.10 |
| license | |
| keywords | e621, glutamate |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# glutamate
An easy-to-use Python library for querying and downloading posts from e621.net.
## Installation
```bash
pip install glutamate
```
## Example
```python
from pathlib import Path

from glutamate.database import E621, E621Data, E621PostsCSV, E621TagsCSV, Query, autoinit_from_directory
from glutamate.dataset import get_captions, write_captions, write_stats
from glutamate.download import download_posts

# Init with qualified file paths
e621_data_directory = Path('./e621-data/')
posts_csv = e621_data_directory / 'posts.csv'
posts = E621PostsCSV(posts_csv)
tags_csv = e621_data_directory / 'tags.csv'
tags = E621TagsCSV(tags_csv)
e621: E621 = E621Data(posts, tags)

# or simply auto-init from a directory containing the CSVs
e621_data_directory = Path('./e621-data/')
e621 = autoinit_from_directory(e621_data_directory)

query = Query(("kisha", "solo"))
kisha_dataset = e621.select(query)

target_directory = Path.cwd() / 'tmp' / 'kisha_solo'
target_directory.mkdir(parents=True, exist_ok=True)

# Download the selected posts, naming files by post id
results = download_posts(kisha_dataset, target_directory, naming='id')
failed = [result for result in results if not result.ok]
if failed:
    print(f"Failed to download {len(failed)} posts")

captions = kisha_dataset.get_captions(
    naming='id',
    remove_underscores=True,
    tags_to_head=('kisha', 'kisha (character)'),
)
write_captions(captions, target_directory)

counts_csv = target_directory / 'tags.csv'
counts = kisha_dataset.get_tags_stats()
write_stats(counts, counts_csv)
```
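The caption options used above (`remove_underscores` and `tags_to_head`) can be illustrated in plain Python. The `format_caption` helper below is a hypothetical sketch of that behavior, not part of glutamate's API: head tags come first, underscores optionally become spaces, and the result is one comma-separated caption line.

```python
def format_caption(tags, tags_to_head=(), remove_underscores=False):
    """Build a caption line: listed head tags first, then the rest,
    optionally with underscores replaced by spaces."""
    head = [tag for tag in tags_to_head if tag in tags]
    tail = [tag for tag in tags if tag not in tags_to_head]
    ordered = head + tail
    if remove_underscores:
        ordered = [tag.replace('_', ' ') for tag in ordered]
    return ', '.join(ordered)


caption = format_caption(
    ["solo", "kisha", "looking_at_viewer"],
    tags_to_head=("kisha", "kisha (character)"),
    remove_underscores=True,
)
print(caption)  # kisha, solo, looking at viewer
```

Pinning character tags to the front like this is a common convention for training captions, where the leading tokens carry the most weight.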
## Raw data

```json
{
  "_id": null,
  "home_page": "",
  "name": "glutamate",
  "maintainer": "",
  "docs_url": null,
  "requires_python": ">=3.10",
  "maintainer_email": "",
  "keywords": "e621,glutamate",
  "author": "",
  "author_email": "jorektheglitch <jorektheglitch@yandex.ru>",
  "download_url": "https://files.pythonhosted.org/packages/9b/a2/21a1a6847001c5426ffbcfc5f96ebed5e951d90fc84f8bdef97f2b89b3ab/glutamate-0.0.1a6.tar.gz",
  "platform": null,
  "description": "# glutamate\n\nEasy to use Python library for querying and downloading posts from e621.net.\n\n## Installation\n\n```bash\npip install glutamate\n```\n\n## Example\n\n```python\nfrom pathlib import Path\n\nfrom glutamate.database import E621, E621Data, E621PostsCSV, E621TagsCSV, Query, autoinit_from_directory\nfrom glutamate.dataset import get_captions, write_captions, write_stats\nfrom glutamate.download import download_posts\n\n\n# Init with qualified file paths\ne621_data_directory = Path('./e621-data/')\nposts_csv = e621_data_directory / 'posts.csv'\nposts = E621PostsCSV(posts_csv)\ntags_csv = e621_data_directory / 'tags.csv'\ntags = E621TagsCSV(tags_csv)\ne621: E621 = E621Data(posts, tags)\n\n# or simple automatic init with directory contains CSVs\ne621_data_directory = Path('./e621-data/')\ne621 = autoinit_from_directory(e621_data_directory)\n\nquery = Query((\"kisha\", \"solo\"))\nkisha_dataset = e621.select(query)\n\ntarget_directory = Path().cwd() / 'tmp' / 'kisha_solo'\ntarget_directory.mkdir(parents=True, exist_ok=True)\n\nresults = download_posts(posts, target_directory, naming='id')\nfailed = [result for result in results if not result.ok]\nif failed:\n print(f\"Failed to download {len(failed)} posts\")\n\ncaptions = kisha_dataset.get_captions(\n naming='id',\n remove_underscores=True,\n tags_to_head=('kisha', 'kisha (character)')\n)\nwrite_captions(captions, target_directory)\n\ncounts_csv = target_directory / 'tags.csv'\ncounts = kisha_dataset.get_tags_stats()\nwrite_stats(counts, counts_csv)\n\n```\n",
  "bugtrack_url": null,
  "license": "",
  "summary": "Python library for querying and downloading posts from e621",
  "version": "0.0.1a6",
  "project_urls": {
    "Homepage": "https://github.com/jorektheglitch/glutamate/"
  },
  "split_keywords": [
    "e621",
    "glutamate"
  ],
  "urls": [
    {
      "comment_text": "",
      "digests": {
        "blake2b_256": "2e627ee57e2002ef43b45f142a1d9786ac280812ec23d91d706c1e5d61e1064a",
        "md5": "ab705eafa657c436230f9f7fb276acec",
        "sha256": "a175c53ba3dc7323dad182601718a7de265487103fb859e34c95b744a8d0ca56"
      },
      "downloads": -1,
      "filename": "glutamate-0.0.1a6-py3-none-any.whl",
      "has_sig": false,
      "md5_digest": "ab705eafa657c436230f9f7fb276acec",
      "packagetype": "bdist_wheel",
      "python_version": "py3",
      "requires_python": ">=3.10",
      "size": 14841,
      "upload_time": "2023-12-02T18:09:24",
      "upload_time_iso_8601": "2023-12-02T18:09:24.343510Z",
      "url": "https://files.pythonhosted.org/packages/2e/62/7ee57e2002ef43b45f142a1d9786ac280812ec23d91d706c1e5d61e1064a/glutamate-0.0.1a6-py3-none-any.whl",
      "yanked": false,
      "yanked_reason": null
    },
    {
      "comment_text": "",
      "digests": {
        "blake2b_256": "9ba221a1a6847001c5426ffbcfc5f96ebed5e951d90fc84f8bdef97f2b89b3ab",
        "md5": "b3a82ac7a07970dbd290b663f989c0c7",
        "sha256": "4b15ffdf9b6970779d899ddd6f1cd6e0b73b75e4a44e0d3f78ac8b270795ae33"
      },
      "downloads": -1,
      "filename": "glutamate-0.0.1a6.tar.gz",
      "has_sig": false,
      "md5_digest": "b3a82ac7a07970dbd290b663f989c0c7",
      "packagetype": "sdist",
      "python_version": "source",
      "requires_python": ">=3.10",
      "size": 13457,
      "upload_time": "2023-12-02T18:09:25",
      "upload_time_iso_8601": "2023-12-02T18:09:25.855461Z",
      "url": "https://files.pythonhosted.org/packages/9b/a2/21a1a6847001c5426ffbcfc5f96ebed5e951d90fc84f8bdef97f2b89b3ab/glutamate-0.0.1a6.tar.gz",
      "yanked": false,
      "yanked_reason": null
    }
  ],
  "upload_time": "2023-12-02 18:09:25",
  "github": true,
  "gitlab": false,
  "bitbucket": false,
  "codeberg": false,
  "github_user": "jorektheglitch",
  "github_project": "glutamate",
  "travis_ci": false,
  "coveralls": false,
  "github_actions": false,
  "requirements": [],
  "lcname": "glutamate"
}
```