# StarGAN-PyTorch
## Contents
- [Introduction](#introduction)
- [Getting Started](#getting-started)
- [Requirements](#requirements)
- [From PyPI](#from-pypi)
- [Local Install](#local-install)
- [All pretrained model weights](#all-pretrained-model-weights)
- [Test (e.g. CelebA-128x128)](#test-eg-celeba-128x128)
- [Train](#train)
- [Contributing](#contributing)
- [Credit](#credit)
- [StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation](#stargan-unified-generative-adversarial-networks-for-multi-domain-image-to-image-translation)
## Introduction
This repository contains an op-for-op PyTorch reimplementation of [StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation](https://arxiv.org/abs/1711.09020v3).
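The key idea in StarGAN is that a single generator handles every domain by being conditioned on a target-domain label: the label vector is spatially replicated and concatenated with the input image along the channel dimension. The snippet below is a minimal illustrative sketch of that conditioning step in plain PyTorch; the function name and shapes are hypothetical and not this repository's exact module code.

```python
import torch

def concat_domain_label(image: torch.Tensor, label: torch.Tensor) -> torch.Tensor:
    """Spatially replicate a target-domain label vector and concatenate it
    to the image along the channel dimension (the conditioning used by StarGAN).

    image: (N, 3, H, W) input batch
    label: (N, C) binary / one-hot target-domain vector
    """
    n, c = label.size()
    h, w = image.size(2), image.size(3)
    label_map = label.view(n, c, 1, 1).expand(n, c, h, w)
    return torch.cat([image, label_map], dim=1)  # (N, 3 + C, H, W)

# Example: a 128x128 batch with 5 target attributes
x = torch.randn(4, 3, 128, 128)
c_trg = torch.tensor([[1., 0., 0., 1., 0.]]).repeat(4, 1)
print(concat_domain_label(x, c_trg).shape)  # torch.Size([4, 8, 128, 128])
```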
## Getting Started
### Requirements
- Python 3.10+
- PyTorch 2.1.0+
- CUDA 11.8+
- Ubuntu 22.04+
### From PyPI
```bash
pip install stargan_pytorch -i https://pypi.org/simple
```
### Local Install
```bash
git clone https://github.com/Lornatang/StarGAN-PyTorch.git
cd StarGAN-PyTorch
pip install -r requirements.txt
pip install -e .
```
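To sanity-check the install, you can try importing the package. The `__version__` attribute below is an assumption and may not exist in this distribution:

```python
# Quick check after `pip install -e .` or the PyPI install.
# NOTE: `__version__` is an assumption; fall back to a plain message if absent.
import stargan_pytorch

print(getattr(stargan_pytorch, "__version__", "stargan_pytorch imported successfully"))
```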
## All pretrained model weights
- [g_celeba128](https://huggingface.co/goodfellowliu/StarGAN-PyTorch/resolve/main/g_celeba128.pth.tar?download=true)
- [g_celeba256](https://huggingface.co/goodfellowliu/StarGAN-PyTorch/resolve/main/g_celeba256.pth.tar?download=true)
- [d_celeba128](https://huggingface.co/goodfellowliu/StarGAN-PyTorch/resolve/main/d_celeba128.pth.tar?download=true)
- [d_celeba256](https://huggingface.co/goodfellowliu/StarGAN-PyTorch/resolve/main/d_celeba256.pth.tar?download=true)
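The files above are ordinary PyTorch checkpoints. Once one has been downloaded (see the Test section below), you can inspect it with `torch.load`; note that the internal key layout used here (a possible `"state_dict"` entry) is an assumption, and the actual loading is handled by the test script:

```python
import torch

# Load a downloaded checkpoint on CPU and inspect its contents.
# NOTE: the "state_dict" key below is an assumption; the generator weights
# may also be stored at the top level of the checkpoint.
ckpt = torch.load("./results/pretrained_models/g_celeba128.pth.tar", map_location="cpu")
state_dict = ckpt["state_dict"] if isinstance(ckpt, dict) and "state_dict" in ckpt else ckpt
print(f"{len(state_dict)} tensors, e.g. {list(state_dict)[:3]}")
```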
## Test (e.g. CelebA-128x128)
```shell
# Download g_celeba128 model weights to `./results/pretrained_models`
mkdir -p ./results/pretrained_models
wget "https://huggingface.co/goodfellowliu/StarGAN-PyTorch/resolve/main/g_celeba128.pth.tar?download=true" -O ./results/pretrained_models/g_celeba128.pth.tar
python ./tools/test.py ./configs/CelebA128.yaml
# Result will be saved to `./results/test/celeba128`
```
<div align="center">
<img src="figure/celeba_128.jpg" width="768">
</div>
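The YAML config drives everything in `./tools/test.py`. If you would rather call a loaded generator directly, the sketch below shows roughly how inference works; the generator's call signature `generator(x, c_trg)`, the attribute order in the comment, and the preprocessing are assumptions taken from the original paper's setup, not this repository's exact test code:

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

def translate(generator, image_path: str, target_attrs: list[float], out_path: str) -> None:
    """Hypothetical single-image translation with a trained StarGAN generator."""
    preprocess = transforms.Compose([
        transforms.Resize(128),
        transforms.CenterCrop(128),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # map pixels to [-1, 1]
    ])
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    # e.g. [Black_Hair, Blond_Hair, Brown_Hair, Male, Young] (assumed attribute order)
    c_trg = torch.tensor(target_attrs).unsqueeze(0)
    with torch.no_grad():
        y = generator(x, c_trg)
    save_image((y + 1) / 2, out_path)  # undo the [-1, 1] normalization before saving
```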
## Train
Please refer to the `README.md` in the `data` directory for instructions on preparing the dataset.
```shell
# If you want to train StarGAN-CelebA-128x128, run this command
python3 ./tools/train.py ./configs/CelebA128.yaml
# If you want to train StarGAN-CelebA-256x256, run this command
python3 ./tools/train.py ./configs/CelebA256.yaml
```
The training results will be saved to `./results/train/celeba128` or `./results/train/celeba256`.
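For orientation, one StarGAN training step combines an adversarial loss, a domain-classification loss, and an L1 cycle-reconstruction loss. The sketch below is a simplified illustration of that step; the `G`/`D` interfaces, the plain BCE adversarial loss (the official implementation uses WGAN-GP instead), and the default loss weights are assumptions, and the repository's actual loop lives in `./tools/train.py` and the YAML configs:

```python
import torch
import torch.nn.functional as F

def train_step(G, D, x_real, c_org, c_trg, g_opt, d_opt,
               lambda_cls: float = 1.0, lambda_rec: float = 10.0):
    """One simplified StarGAN step: adversarial + domain classification +
    cycle reconstruction. D is assumed to return (real/fake logits, class logits)."""
    # ---- Discriminator: distinguish real/fake and classify the real image's domain
    out_src, out_cls = D(x_real)
    d_loss_real = F.binary_cross_entropy_with_logits(out_src, torch.ones_like(out_src))
    d_loss_cls = F.binary_cross_entropy_with_logits(out_cls, c_org)
    x_fake = G(x_real, c_trg).detach()
    out_src, _ = D(x_fake)
    d_loss_fake = F.binary_cross_entropy_with_logits(out_src, torch.zeros_like(out_src))
    d_loss = d_loss_real + d_loss_fake + lambda_cls * d_loss_cls
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # ---- Generator: fool D, land in the target domain, and reconstruct the input
    x_fake = G(x_real, c_trg)
    out_src, out_cls = D(x_fake)
    g_loss_adv = F.binary_cross_entropy_with_logits(out_src, torch.ones_like(out_src))
    g_loss_cls = F.binary_cross_entropy_with_logits(out_cls, c_trg)
    x_rec = G(x_fake, c_org)
    g_loss_rec = torch.mean(torch.abs(x_real - x_rec))  # L1 cycle reconstruction
    g_loss = g_loss_adv + lambda_cls * g_loss_cls + lambda_rec * g_loss_rec
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```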
## Contributing
If you find a bug, please open a GitHub issue, or better yet, submit a pull request. Questions are welcome as GitHub issues as well.
I look forward to seeing what the community does with these models!
## Credit
#### StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
_Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, Jaegul Choo_ <br>
**Abstract** <br>
Recent studies have shown remarkable success in image-to-image translation for two domains. However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on facial attribute transfer and facial expression synthesis tasks.
[[Paper]](https://arxiv.org/pdf/1711.09020v3) [[Code(PyTorch)]](https://github.com/yunjey/stargan)
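For quick reference, the objective described in the paper combines an adversarial term, a domain-classification term on real and generated images, and an L1 cycle-reconstruction term:

```latex
\begin{aligned}
\mathcal{L}_{adv} &= \mathbb{E}_{x}\left[\log D_{src}(x)\right]
  + \mathbb{E}_{x,c}\left[\log\bigl(1 - D_{src}(G(x,c))\bigr)\right] \\
\mathcal{L}_{cls}^{r} &= \mathbb{E}_{x,c'}\left[-\log D_{cls}(c' \mid x)\right], \qquad
\mathcal{L}_{cls}^{f} = \mathbb{E}_{x,c}\left[-\log D_{cls}(c \mid G(x,c))\right] \\
\mathcal{L}_{rec} &= \mathbb{E}_{x,c,c'}\left[\lVert x - G(G(x,c), c') \rVert_{1}\right] \\
\mathcal{L}_{D} &= -\mathcal{L}_{adv} + \lambda_{cls}\,\mathcal{L}_{cls}^{r}, \qquad
\mathcal{L}_{G} = \mathcal{L}_{adv} + \lambda_{cls}\,\mathcal{L}_{cls}^{f} + \lambda_{rec}\,\mathcal{L}_{rec}
\end{aligned}
```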
```bibtex
@misc{choi2018stargan,
  title={StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation},
  author={Yunjey Choi and Minje Choi and Munyoung Kim and Jung-Woo Ha and Sunghun Kim and Jaegul Choo},
  year={2018},
  eprint={1711.09020},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```