scepter

- Version: 1.3.1
- Author: Tongyi Lab
- Uploaded: 2024-12-05 08:10:54
- Keywords: computer vision, framework, generation, image editing
<h1 align="center">🪄SCEPTER</h1>

<p align="center">
<img src="https://img.shields.io/badge/python-%E2%89%A53.8-5be.svg">
<img src="https://img.shields.io/badge/pytorch-%E2%89%A51.12%20%7C%20%E2%89%A52.0-orange.svg">
<a href="https://pypi.org/project/scepter/"><img src="https://badge.fury.io/py/scepter.svg"></a>
<a href="https://github.com/modelscope/scepter/blob/main/LICENSE"><img src="https://img.shields.io/github/license/modelscope/scepter"></a>
<a href="https://github.com/modelscope/scepter/"><img src="https://img.shields.io/badge/scepter-Build from source-6FEBB9.svg"></a>
</p>

🪄SCEPTER is an open-source code repository dedicated to generative training, fine-tuning, and inference, encompassing a suite of downstream tasks such as image generation, transfer, and editing.
SCEPTER integrates popular community-driven implementations as well as proprietary methods by Tongyi Lab of Alibaba Group, offering a comprehensive toolkit for researchers and practitioners in the field of AIGC. This versatile library is designed to facilitate innovation and accelerate development in the rapidly evolving domain of generative models.

SCEPTER offers 3 core components:
- [Generative training and inference framework](#tutorials)
- [Easy implementation of popular approaches](#currently-supported-approaches)
- [Interactive user interface: SCEPTER Studio & Comfy UI](#launch)


## 🎉 News
- [🔥🔥🔥2024.11]: We're excited to announce the upcoming release of the [ACE-0.6b-1024px](https://huggingface.co/scepter-studio/ACE-0.6B-1024px) model, which significantly improves image generation quality over [ACE-0.6b-512px](https://huggingface.co/scepter-studio/ACE-0.6B-512px). Detailed documentation can be found in the [ACE repo](https://github.com/ali-vilab/ACE.git). In addition, image editing quality can be further enhanced by refining ACE's editing results with the powerful text-to-image [FLUX-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) model via SDEdit.
- [🔥2024.11]: Added support for video files, video annotation, and caption translation in data management, as well as inference and training of [CogVideoX](https://arxiv.org/abs/2408.06072).
- [2024.10]: We are pleased to announce the release of the code for [ACE](https://arxiv.org/abs/2410.00086), supporting customized training, a ComfyUI workflow, and a Gradio-based chatbot interface.
- [2024.10]: Support for inference and tuning with [FLUX](https://huggingface.co/black-forest-labs/FLUX.1-dev), as well as for building [ComfyUI](https://github.com/comfyanonymous/ComfyUI) workflows using this framework.
- [2024.09]: We introduce **ACE**, an **A**ll-round **C**reator and **E**ditor adept at executing a diverse array of image editing tasks tailored to your specifications. Built upon the cutting-edge Diffusion Transformer architecture, ACE has been extensively trained on a comprehensive dataset to seamlessly interpret and execute any natural language instruction. For further information, please consult the [project page](https://ali-vilab.github.io/ace-page/).
- [2024.07]: Added support for inference and training of open-source generative models based on the [DiT](https://arxiv.org/abs/2212.09748) architecture, such as [SD3](https://arxiv.org/pdf/2403.03206) and [PixArt](https://arxiv.org/abs/2310.00426).
- [2024.05]: Introducing SCEPTER v1, supporting customized image edit tasks! Simply provide 10 image pairs, and SCEPTER will tune an edit tuner for your own image-to-image tasks, such as `Clay Style`, `De-Text`, and `Segmentation`.
- [2024.04]: New [StyleBooth](https://ali-vilab.github.io/stylebooth-page/) demo on SCEPTER Studio for `Text-Based Style Editing`.
- [2024.03]: We optimized the training UI and checkpoint management. The new [LAR-Gen](https://arxiv.org/abs/2403.19534) model has been added to SCEPTER Studio, supporting `zoom-out`, `virtual try-on`, and `inpainting`.
- [2024.02]: We released new SCEdit controllable image synthesis models for SD v2.1 and SD XL, and applied multiple strategies to accelerate inference in SCEPTER Studio.
- [2024.01]: We release **SCEPTER Studio**, an integrated toolkit for data management, model training and inference based on [Gradio](https://www.gradio.app/).
- [2024.01]: [SCEdit](https://arxiv.org/abs/2312.11392) supports controllable image synthesis for both training and inference.
- [2023.12]: We propose [SCEdit](https://arxiv.org/abs/2312.11392), an efficient and controllable generation framework.
- [2023.12]: We release [🪄SCEPTER](https://github.com/modelscope/scepter/) library.




## 🪄ACE

ACE is a unified foundational model framework that supports a wide range of visual generation tasks. By defining the Condition Unit (CU) to unify multi-modal inputs across different tasks, and incorporating a long-context CU, we introduce historical contextual information into visual generation tasks, paving the way for ChatGPT-like dialog systems in visual generation.

[![Watch the demo](https://ali-vilab.github.io/ace-page/static/images/tasks.png)](https://ali-vilab.github.io/ace-page/)

### ACE Models
|    **Model**     |                                                                                                                                                                                                            **Status**                                                                                                                                                                                                             | 
|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
|  ACE-0.6B-512px  |          [![Demo link](https://img.shields.io/badge/Demo-ACE_Chat-purple)](https://huggingface.co/spaces/scepter-studio/ACE-Chat)<br>[![ModelScope link](https://img.shields.io/badge/ModelScope-Model-blue)](https://www.modelscope.cn/models/iic/ACE-0.6B-512px)  [![HuggingFace link](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-yellow)](https://huggingface.co/scepter-studio/ACE-0.6B-512px)          |
| ACE-0.6B-1024px  | [![Demo link](https://img.shields.io/badge/Demo-ACE_Refiner_Chat-purple)](https://huggingface.co/spaces/scepter-studio/ACE-Refiner-Chat)<br>[![ModelScope link](https://img.shields.io/badge/ModelScope-Model-blue)](https://www.modelscope.cn/models/iic/ACE-0.6B-1024px)  [![HuggingFace link](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-yellow)](https://huggingface.co/scepter-studio/ACE-0.6B-1024px) |
| ACE-12B-FLUX-dev |                                                                                                                                                                                                            Coming Soon                                                                                                                                                                                                            |
### ACE Training

We offer a demonstration training YAML that enables the end-to-end training of ACE using a toy dataset. For a comprehensive overview of the hyperparameter configurations, please consult `scepter/methods/edit/dit_ace_0.6b_512.yaml`.

#### Prepare datasets

Please find the dataset class located in `scepter/modules/data/dataset/ms_dataset.py`,
designed to facilitate end-to-end training using an open-source toy dataset.
Download the dataset zip file from [ModelScope](https://www.modelscope.cn/models/iic/scepter/resolve/master/datasets/hed_pair.zip), then extract its contents into the `cache/datasets/` directory.
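As an illustration, the download-and-extract step can be scripted. This is a minimal sketch using only the Python standard library; the URL is the toy dataset linked above, and the target directory follows the convention in this section:

```python
import urllib.request
import zipfile
from pathlib import Path

DATASET_URL = ("https://www.modelscope.cn/models/iic/scepter/"
               "resolve/master/datasets/hed_pair.zip")
TARGET_DIR = Path("cache/datasets")


def fetch_and_extract(url: str, target: Path) -> Path:
    """Download a zip archive and extract its contents under `target`."""
    target.mkdir(parents=True, exist_ok=True)
    archive = target / url.rsplit("/", 1)[-1]
    if not archive.exists():  # skip re-download if the archive is cached
        urllib.request.urlretrieve(url, archive)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)
    return target


# Uncomment to run the actual download:
# fetch_and_extract(DATASET_URL, TARGET_DIR)
```

The actual call is commented out so the snippet does not hit the network when pasted; run it once before starting training.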

Should you wish to prepare your own datasets, we recommend consulting `scepter/modules/data/dataset/ms_dataset.py` for detailed guidance on the required data format.

#### Prepare initial weight
The ACE checkpoint has been uploaded to both ModelScope and HuggingFace platforms:
* [ModelScope](https://www.modelscope.cn/models/iic/ACE-0.6B-512px)
* [HuggingFace](https://huggingface.co/scepter-studio/ACE-0.6B-512px)

In the provided training YAML configuration, the ModelScope URL is set as the default checkpoint URL. To switch to Hugging Face, simply modify the PRETRAINED_MODEL value within the YAML file (replace the prefix "ms://iic" with "hf://scepter-studio").
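Since the switch is a plain prefix substitution, it can be scripted. A minimal sketch, assuming the checkpoint URL appears as literal text in the YAML file (the path below is the training config named in this section):

```python
from pathlib import Path


def switch_to_huggingface(yaml_path: str) -> None:
    """Rewrite the checkpoint prefix from ModelScope to Hugging Face."""
    cfg = Path(yaml_path)
    text = cfg.read_text()
    cfg.write_text(text.replace("ms://iic", "hf://scepter-studio"))


# switch_to_huggingface("scepter/methods/edit/dit_ace_0.6b_512.yaml")
```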


#### Start training

You can start the training procedure by executing the following commands:
```bash
# ACE-0.6B-512px
PYTHONPATH=. python scepter/tools/run_train.py --cfg scepter/methods/edit/dit_ace_0.6b_512.yaml
# ACE-0.6B-1024px
PYTHONPATH=. python scepter/tools/run_train.py --cfg scepter/methods/edit/dit_ace_0.6b_1024.yaml
```

### ACE Chat Bot

We have developed a chatbot interface with Gradio that converts natural-language input into visually captivating images that align semantically with the given instructions. You can access this functionality by launching SCEPTER Studio with the following command:
```bash
PYTHONPATH=. python scepter/tools/webui.py --cfg scepter/methods/studio/scepter_ui.yaml --language zh --tab chatbot
```
Upon starting, you will find a "ChatBot" tab within the Gradio application, which serves as a chat-based interface to handle any requests related to image editing or generation.

### ACE ComfyUI Workflow

![Workflow](https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_example.jpg)

<table><tbody>
  <tr>
    <th align="center" colspan="4">ACE Workflow Examples</th>
  </tr>
  <tr>
    <th align="center" colspan="1">Control</th>
    <th align="center" colspan="1">Semantic</th>
    <th align="center" colspan="1">Element</th>
  </tr>
  <tr>
    <td>
      <a href="https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_control.png" target="_blank">
        <img src="https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_control.png" width="200">
      </a>
    </td>
    <td>
      <a href="https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_semantic.png" target="_blank">
        <img src="https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_semantic.png" width="200">
      </a>
    </td>
    <td>
      <a href="https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_element.png" target="_blank">
        <img src="https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_element.png" width="200">
      </a>
    </td>
  </tr>
</tbody>
</table>

## 🖼 Gallery for Recent Works

### FLUX Tuners

<table><tbody>
  <tr>
    <th align="center" colspan="3">Yarn Style</th>
    <th align="center" colspan="3">Soft Watercolor Style</th>
  </tr>
  <tr>
    <td><img src="asset/images/flux_tuner/flux_tuner_2_1.webp" width="200"></td>
    <td><img src="asset/images/flux_tuner/flux_tuner_2_2.webp" width="200"></td>
    <td><img src="asset/images/flux_tuner/flux_tuner_2_3.webp" width="200"></td>
    <td><img src="asset/images/flux_tuner/flux_tuner_1_1.webp" width="200"></td>
    <td><img src="asset/images/flux_tuner/flux_tuner_1_2.webp" width="200"></td>
    <td><img src="asset/images/flux_tuner/flux_tuner_1_3.webp" width="200"></td>
  </tr>
  <tr>
    <th align="center" colspan="3">Travel Style</th>
    <th align="center" colspan="3">WuKong Style</th>
  </tr>
  <tr>
    <td><img src="asset/images/flux_tuner/flux_tuner_3_1.webp" width="200"></td>
    <td><img src="asset/images/flux_tuner/flux_tuner_3_2.webp" width="200"></td>
    <td><img src="asset/images/flux_tuner/flux_tuner_3_3.webp" width="200"></td>
    <td><img src="asset/images/flux_tuner/flux_tuner_4_1.webp" width="200"></td>
    <td><img src="asset/images/flux_tuner/flux_tuner_4_2.webp" width="200"></td>
    <td><img src="asset/images/flux_tuner/flux_tuner_4_3.webp" width="200"></td>
  </tr>
</tbody>
</table>

### ComfyUI Workflow

![Workflow](asset/workflow/workflow.jpg)

<table><tbody>
  <tr>
    <th align="center" colspan="4">Example Workflow Case</th>
  </tr>
  <tr>
    <th align="center" colspan="1">Base</th>
    <th align="center" colspan="1">+Mantra</th>
    <th align="center" colspan="1">+Tuner</th>
    <th align="center" colspan="1">+Control</th>
  </tr>
  <tr>
    <td>
      <a href="asset/workflow/sdxl_base.json" target="_blank">
        <img src="asset/workflow/sdxl_base.jpg" width="200">
      </a>
    </td>
    <td>
      <a href="asset/workflow/sdxl_base_mantra.json" target="_blank">
        <img src="asset/workflow/sdxl_base_mantra.jpg" width="200">
      </a>
    </td>
    <td>
      <a href="asset/workflow/sdxl_base_mantra_tuner.json" target="_blank">
        <img src="asset/workflow/sdxl_base_mantra_tuner.jpg" width="200">
      </a>
    </td>
    <td>
      <a href="asset/workflow/sdxl_base_mantra_tuner_control.json" target="_blank">
        <img src="asset/workflow/sdxl_base_mantra_tuner_control.jpg" width="200">
      </a>
    </td>
  </tr>
</tbody>
</table>


## 🛠️ Installation

- Create new environment with `conda` command:

```shell
conda env create -f environment.yaml
conda activate scepter
```

- Install with `pip` command:

We recommend installing specific versions of PyTorch together with the acceleration library [xFormers](https://pypi.org/project/xformers/). You can install the recommended versions with pip:
```shell
pip install -r requirements/recommended.txt
pip install scepter
```

## 🧩 Generative Framework

### Tutorials

| Documentation                                      | Key Features                      |
|:---------------------------------------------------|:----------------------------------|
| [Train](docs/en/tutorials/train.md)                | DDP / FSDP / FairScale / Xformers |
| [Inference](docs/en/tutorials/inference.md)        | Dynamic load/unload               |
| [Dataset Management](docs/en/tutorials/dataset.md) | Local / Http / OSS / Modelscope   |


## 📝 Popular Approaches

### Currently supported approaches

|            Tasks             |                    Methods                     | Links                                                                                                                                                                                                                                                       |
|:----------------------------:|:----------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|   Text-to-image Generation   |                    SD v1.5                     | [![Hugging Face Repo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Repo-blue)](https://huggingface.co/runwayml/stable-diffusion-v1-5)                                                                                                         |
|   Text-to-image Generation   |                    SD v2.1                     | [![Hugging Face Repo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Repo-blue)](https://huggingface.co/stabilityai/stable-diffusion-2-1)                                                                                                       |
|   Text-to-image Generation   |                     SD-XL                      | [![Hugging Face Repo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Repo-blue)](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)                                                                                               |
|   Text-to-image Generation   |                      FLUX                      | [![Hugging Face Repo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Repo-blue)](https://huggingface.co/black-forest-labs/FLUX.1-dev)                                                                                               |
|       Efficient Tuning       |                      LoRA                      | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=LoRA&color=red&logo=arxiv)](https://arxiv.org/abs/2106.09685)                                                                                                                         |
|       Efficient Tuning       |             Res-Tuning(NeurIPS23)              | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=Res-Tuning&color=red&logo=arxiv)](https://arxiv.org/abs/2310.19859) [![Page link](https://img.shields.io/badge/Page-ResTuning-Gree)](https://res-tuning.github.io/)                    |
| Controllable Image Synthesis |  [🌟SCEdit(CVPR24)](docs/en/tasks/scedit.md)   | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=SCEdit&color=red&logo=arxiv)](https://arxiv.org/abs/2312.11392)   [![Page link](https://img.shields.io/badge/Page-SCEdit-Gree)](https://scedit.github.io/)                            |
|        Image Editing         |      [🌟LAR-Gen](docs/en/tasks/largen.md)      | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=LARGen&color=red&logo=arxiv)](https://arxiv.org/abs/2403.19534)   [![Page link](https://img.shields.io/badge/Page-LARGen-Gree)](https://ali-vilab.github.io/largen-page/)             |
|        Image Editing         |  [🌟StyleBooth](docs/en/tasks/stylebooth.md)   | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=StyleBooth&color=red&logo=arxiv)](https://arxiv.org/abs/2404.12154)   [![Page link](https://img.shields.io/badge/Page-StyleBooth-Gree)](https://ali-vilab.github.io/stylebooth-page/) |
| Image Generation and Editing | [🌟ACE](https://ali-vilab.github.io/ace-page/) | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=ACE&color=red&logo=arxiv)](https://arxiv.org/abs/2410.00086)   [![Page link](https://img.shields.io/badge/Page-ACE-Gree)](https://ali-vilab.github.io/ace-page/) [![Demo link](https://img.shields.io/badge/Demo-ACE-purple)](https://huggingface.co/spaces/scepter-studio/ACE-Chat) <br> [![ModelScope link](https://img.shields.io/badge/ModelScope-Model-blue)](https://www.modelscope.cn/models/iic/ACE-0.6B-512px)  [![HuggingFace link](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-yellow)](https://huggingface.co/scepter-studio/ACE-0.6B-512px) |


## 🖥️ SCEPTER Studio

### Launch

To fully experience **SCEPTER Studio**, run the following commands:

```shell
pip install scepter
python -m scepter.tools.webui
```
or run it from a clone of the repository:
```shell
git clone https://github.com/modelscope/scepter.git
PYTHONPATH=. python scepter/tools/webui.py --cfg scepter/methods/studio/scepter_ui.yaml
```

Starting **SCEPTER Studio** requires no manual downloading or organizing of models; the corresponding models are loaded automatically and stored in a local directory.
Depending on network and hardware conditions, the initial startup usually takes 15-60 minutes, primarily to download and process the SD v1.5, SD v2.1, and SDXL models.
Subsequent startups are much faster (about one minute), as no further downloads are required.

### Usage Demo

|              [Image Editing](https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fimage_editing_20240419.webm)              |                [Training](https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Ftraining_20240419.webm)                 |              [Model Sharing](https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fmodel_sharing_20240419.webm)               |             [Model Inference](https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fmodel_inference_20240419.webm)              |             [Data Management](https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fdata_management_20240419.webm)              |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------:|
| <video src="https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fimage_editing_20240419.webm" width="240" controls></video> | <video src="https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Ftraining_20240419.webm" width="240" controls></video> | <video src="https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fmodel_sharing_20240419.webm" width="240" controls></video>  | <video src="https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fmodel_inference_20240419.webm" width="240" controls></video>  | <video src="https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fdata_management_20240419.webm" width="240" controls></video>  |

### Modelscope Studio & Huggingface Space

We deploy a studio on ModelScope that includes only the inference tab; please refer to [ms_scepter_studio](https://www.modelscope.cn/studios/iic/scepter_studio/summary) and [hf_scepter_studio](https://huggingface.co/spaces/modelscope/scepter_studio).



## ⚙️️ ComfyUI Workflow

We support the use of all models in ComfyUI workflows through either of the following methods:

1) Automatic installation via the ComfyUI Manager by searching for the **ComfyUI-Scepter** node.
2) Manual installation by copying SCEPTER's `workflow` directory into ComfyUI's `custom_nodes`:
```shell
git clone https://github.com/modelscope/scepter.git
cd path/to/scepter
pip install -e .
cp -r path/to/scepter/workflow/ path/to/ComfyUI/custom_nodes/ComfyUI-Scepter
cd path/to/ComfyUI
python main.py
```

**Note**: You can load the example workflows by dragging the sample images into ComfyUI. Additionally, our nodes can automatically pull models from ModelScope or HuggingFace by selecting the *model_source* field, or you can place already-downloaded models in a local path.

## 🔍 Learn More

- [Alibaba TongYi Vision Intelligence Lab](https://github.com/ali-vilab)

  Discover more about open-source projects on image generation, video generation, and editing tasks.

- [ModelScope library](https://github.com/modelscope/modelscope/)

  The ModelScope Library is the model library of the ModelScope project, containing a large number of popular models.

- [SWIFT library](https://github.com/modelscope/swift/)

  SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning) is an extensible framework designed to facilitate lightweight model fine-tuning and inference.


## BibTeX
If our work is useful for your research, please consider citing:
```bibtex
@misc{scepter,
    title = {SCEPTER},
    author = {SCEPTER},
    howpublished = {\url{https://github.com/modelscope/scepter}},
    year = {2023}
}
```


## License

This project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).


## Acknowledgement
Thanks to [Stability-AI](https://github.com/Stability-AI), [SWIFT library](https://github.com/modelscope/swift/), [Fooocus](https://github.com/lllyasviel/Fooocus) and [ComfyUI](https://github.com/comfyanonymous/ComfyUI) for their awesome work.

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "scepter",
    "maintainer": null,
    "docs_url": null,
    "requires_python": null,
    "maintainer_email": null,
    "keywords": "compute vision, framework, generation, image edition.",
    "author": "Tongyi Lab",
    "author_email": null,
    "download_url": null,
    "platform": null,
    "description": "<h1 align=\"center\">\ud83e\ude84SCEPTER</h1>\n\n<p align=\"center\">\n<img src=\"https://img.shields.io/badge/python-%E2%89%A53.8-5be.svg\">\n<img src=\"https://img.shields.io/badge/pytorch-%E2%89%A51.12%20%7C%20%E2%89%A52.0-orange.svg\">\n<a href=\"https://pypi.org/project/scepter/\"><img src=\"https://badge.fury.io/py/scepter.svg\"></a>\n<a href=\"https://github.com/modelscope/scepter/blob/main/LICENSE\"><img src=\"https://img.shields.io/github/license/modelscope/scepter\"></a>\n<a href=\"https://github.com/modelscope/scepter/\"><img src=\"https://img.shields.io/badge/scepter-Build from source-6FEBB9.svg\"></a>\n</p>\n\n\ud83e\ude84SCEPTER is an open-source code repository dedicated to generative training, fine-tuning, and inference, encompassing a suite of downstream tasks such as image generation, transfer, editing.\nSCEPTER integrates popular community-driven implementations as well as proprietary methods by Tongyi Lab of Alibaba Group, offering a comprehensive toolkit for researchers and practitioners in the field of AIGC. This versatile library is designed to facilitate innovation and accelerate development in the rapidly evolving domain of generative models.\n\nSCEPTER offers 3 core components:\n- [Generative training and inference framework](#tutorials)\n- [Easy implementation of popular approaches](#currently-supported-approaches)\n- [Interactive user interface: SCEPTER Studio & Comfy UI](#launch)\n\n\n## \ud83c\udf89 News\n- [\ud83d\udd25\ud83d\udd25\ud83d\udd252024.11]: We're excited to announce the upcoming release of the [ACE-0.6b-1024px](https://huggingface.co/scepter-studio/ACE-0.6B-1024px) model, \nwhich significantly enhances image generation quality compared with [ACE-0.6b-512px](https://huggingface.co/scepter-studio/ACE-0.6B-512px). 
The detailed documents can be found at [ACE repo](https://github.com/ali-vilab/ACE.git).\nAt the same time, based on the editing results of ACE, combined with the powerful text-to-image capabilities of the [FLUX-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) model through SDEdit as an image quality refiner, the quality of image editing can be further enhanced.\n- [\ud83d\udd252024.11]: Supports video files, video annotation, caption translation in data management, and inference & training of the [CogVideoX](https://arxiv.org/abs/2408.06072).\n- [2024.10]: We are pleased to announce the release of the code for [ACE](https://arxiv.org/abs/2410.00086), supporting Customized Training / Comfy UI Workflow / gradio-based ChatBot Interface. \n- [2024.10]: Support for inference and tuning with [FLUX](https://huggingface.co/black-forest-labs/FLUX.1-dev), as well as for building [ComfyUI](https://github.com/comfyanonymous/ComfyUI) workflows using this framework.\n- [2024.09]: We introduce **ACE**, an **A**ll-round **C**reator and **E**ditor adept at executing a diverse array of image editing tasks tailored to your specifications. Built upon the cutting-edge Diffusion Transformer architecture, ACE has been extensively trained on a comprehensive dataset to seamlessly interpret and execute any natural language instruction. For further information, please consult the [project page](https://ali-vilab.github.io/ace-page/).\n- [2024.07]: Support the inference and training of open-source generative models based on the [DiT](https://arxiv.org/abs/2212.09748) architecture, such as [SD3](https://arxiv.org/pdf/2403.03206) and [PixArt](https://arxiv.org/abs/2310.00426).\n- [2024.05]: Introducing SCEPTER v1, supporting customized image edit tasks! 
Simply provide 10 image pairs, SCEPTER will tune an edit tuner for your own Image-to-Image tasks, like `Clay Style`, `De-Text`, `Segmentation`, etc.\n- [2024.04]: New [StyleBooth](https://ali-vilab.github.io/stylebooth-page/) demo on SCEPTER Studio for`Text-Based Style Editing`.\n- [2024.03]: We optimize the training UI and checkpoint management. New [LAR-Gen](https://arxiv.org/abs/2403.19534) model has been added on SCEPTER Studio, supporting `zoom-out`, `virtual try on`, `inpainting`.\n- [2024.02]: We release new SCEdit controllable image synthesis models for SD v2.1 and SD XL. Multiple strategies applied to accelerate inference time for SCEPTER Studio.\n- [2024.01]: We release **SCEPTER Studio**, an integrated toolkit for data management, model training and inference based on [Gradio](https://www.gradio.app/).\n- [2024.01]: [SCEdit](https://arxiv.org/abs/2312.11392) support controllable image synthesis for training and inference.\n- [2023.12]: We propose [SCEdit](https://arxiv.org/abs/2312.11392), an efficient and controllable generation framework.\n- [2023.12]: We release [\ud83e\ude84SCEPTER](https://github.com/modelscope/scepter/) library.\n\n\n\n\n## \ud83e\ude84ACE\n\nACE is a unified foundational model framework that supports a wide range of visual generation tasks. 
By defining CU for unifying multi-modal inputs across different tasks and incorporating long-context CU, we introduce historical contextual information into visual generation tasks, paving the way for ChatGPT-like dialog systems in visual generation.\n\n[![Watch the demo](https://ali-vilab.github.io/ace-page/static/images/tasks.png)](https://ali-vilab.github.io/ace-page/)\n\n### ACE Models\n|    **Model**     |                                                                                                                                                                                                            **Status**                                                                                                                                                                                                             | \n|:----------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|\n|  ACE-0.6B-512px  |          [![Demo link](https://img.shields.io/badge/Demo-ACE_Chat-purple)](https://huggingface.co/spaces/scepter-studio/ACE-Chat)<br>[![ModelScope link](https://img.shields.io/badge/ModelScope-Model-blue)](https://www.modelscope.cn/models/iic/ACE-0.6B-512px)  [![HuggingFace link](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-yellow)](https://huggingface.co/scepter-studio/ACE-0.6B-512px)          |\n| ACE-0.6B-1024px  | [![Demo link](https://img.shields.io/badge/Demo-ACE_Refiner_Chat-purple)](https://huggingface.co/spaces/scepter-studio/ACE-Refiner-Chat)<br>[![ModelScope 
link](https://img.shields.io/badge/ModelScope-Model-blue)](https://www.modelscope.cn/models/iic/ACE-0.6B-1024px)  [![HuggingFace link](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-yellow)](https://huggingface.co/scepter-studio/ACE-0.6B-1024px) |             |\n| ACE-12B-FLUX-dev |                                                                                                                                                                                                            Coming Soon                                                                                                                                                                                                            |\n### ACE Training\n\nWe offer a demonstration training YAML that enables the end-to-end training of ACE using a toy dataset. For a comprehensive overview of the hyperparameter configurations, please consult `scepter/methods/edit/dit_ace_0.6b_512.yaml`.\n\n#### Prepare datasets\n\nPlease find the dataset class located in `scepter/modules/data/dataset/ms_dataset.py`,\ndesigned to facilitate end-to-end training using an open-source toy dataset.\nDownload a dataset zip file from [modelscope](https://www.modelscope.cn/models/iic/scepter/resolve/master/datasets/hed_pair.zip), and then extract its contents into the `cache/datasets/` directory.\n\nShould you wish to prepare your own datasets, we recommend consulting `scepter/modules/data/dataset/ms_dataset.py` for detailed guidance on the required data format.\n\n#### Prepare initial weight\nThe ACE checkpoint has been uploaded to both ModelScope and HuggingFace platforms:\n* [ModelScope](https://www.modelscope.cn/models/iic/ACE-0.6B-512px)\n* [HuggingFace](https://huggingface.co/scepter-studio/ACE-0.6B-512px)\n\nIn the provided training YAML configuration, we have designated the Modelscope URL as the default checkpoint URL. 
Should you wish to transition to Hugging Face, you can achieve this by modifying the PRETRAINED_MODEL value within the YAML file (replace the prefix \"ms://iic\" with \"hf://scepter-studio\").\n\n#### Start training\n\nYou can start the training procedure by executing the following command:\n```bash\n# ACE-0.6B-512px\nPYTHONPATH=. python scepter/tools/run_train.py --cfg scepter/methods/edit/dit_ace_0.6b_512.yaml\n# ACE-0.6B-1024px\nPYTHONPATH=. python scepter/tools/run_train.py --cfg scepter/methods/edit/dit_ace_0.6b_1024.yaml\n```\n\n### ACE Chat Bot\n\nWe have developed a chatbot interface using Gradio, designed to convert natural-language user input into visually captivating images that align semantically with the specified instructions. You can access this functionality by launching SCEPTER Studio with the following command:\n```bash\nPYTHONPATH=. python scepter/tools/webui.py --cfg scepter/methods/studio/scepter_ui.yaml --language zh --tab chatbot\n```\nUpon starting, you will find a \"ChatBot\" tab within the Gradio application, which serves as a chat-based interface for any requests related to image editing or generation.\n\n### ACE ComfyUI Workflow\n\n![Workflow](https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_example.jpg)\n\n<table><tbody>\n  <tr>\n    <th align=\"center\" colspan=\"3\">ACE Workflow Examples</th>\n  </tr>\n  <tr>\n    <th align=\"center\" colspan=\"1\">Control</th>\n    <th align=\"center\" colspan=\"1\">Semantic</th>\n    <th align=\"center\" colspan=\"1\">Element</th>\n  </tr>\n  <tr>\n    <td>\n      <a href=\"https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_control.png\" target=\"_blank\">\n        <img src=\"https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_control.png\" width=\"200\">\n      </a>\n    </td>\n    <td>\n      <a href=\"https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_semantic.png\" target=\"_blank\">\n        <img 
src=\"https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_semantic.png\" width=\"200\">\n      </a>\n    </td>\n    <td>\n      <a href=\"https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_element.png\" target=\"_blank\">\n        <img src=\"https://github.com/ali-vilab/ace-page/raw/main/assets/comfyui/ace_element.png\" width=\"200\">\n      </a>\n    </td>\n  </tr>\n</tbody>\n</table>\n\n## \ud83d\uddbc Gallery for Recent Works\n\n### FLUX Tuners\n\n<table><tbody>\n  <tr>\n    <th align=\"center\" colspan=\"3\">Yarn Style</th>\n    <th align=\"center\" colspan=\"3\">Soft Watercolor Style</th>\n  </tr>\n  <tr>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_2_1.webp\" width=\"200\"></td>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_2_2.webp\" width=\"200\"></td>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_2_3.webp\" width=\"200\"></td>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_1_1.webp\" width=\"200\"></td>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_1_2.webp\" width=\"200\"></td>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_1_3.webp\" width=\"200\"></td>\n  </tr>\n  <tr>\n    <th align=\"center\" colspan=\"3\">Travel Style</th>\n    <th align=\"center\" colspan=\"3\">WuKong Style</th>\n  </tr>\n  <tr>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_3_1.webp\" width=\"200\"></td>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_3_2.webp\" width=\"200\"></td>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_3_3.webp\" width=\"200\"></td>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_4_1.webp\" width=\"200\"></td>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_4_2.webp\" width=\"200\"></td>\n    <td><img src=\"asset/images/flux_tuner/flux_tuner_4_3.webp\" width=\"200\"></td>\n  </tr>\n</tbody>\n</table>\n\n### ComfyUI Workflow\n\n![Workflow](asset/workflow/workflow.jpg)\n\n<table><tbody>\n  <tr>\n    <th align=\"center\" colspan=\"4\">Example 
Workflow Case</th>\n  </tr>\n  <tr>\n    <th align=\"center\" colspan=\"1\">Base</th>\n    <th align=\"center\" colspan=\"1\">+Mantra</th>\n    <th align=\"center\" colspan=\"1\">+Tuner</th>\n    <th align=\"center\" colspan=\"1\">+Control</th>\n  </tr>\n  <tr>\n    <td>\n      <a href=\"asset/workflow/sdxl_base.json\" target=\"_blank\">\n        <img src=\"asset/workflow/sdxl_base.jpg\" width=\"200\">\n      </a>\n    </td>\n    <td>\n      <a href=\"asset/workflow/sdxl_base_mantra.json\" target=\"_blank\">\n        <img src=\"asset/workflow/sdxl_base_mantra.jpg\" width=\"200\">\n      </a>\n    </td>\n    <td>\n      <a href=\"asset/workflow/sdxl_base_mantra_tuner.json\" target=\"_blank\">\n        <img src=\"asset/workflow/sdxl_base_mantra_tuner.jpg\" width=\"200\">\n      </a>\n    </td>\n    <td>\n      <a href=\"asset/workflow/sdxl_base_mantra_tuner_control.json\" target=\"_blank\">\n        <img src=\"asset/workflow/sdxl_base_mantra_tuner_control.jpg\" width=\"200\">\n      </a>\n    </td>\n  </tr>\n</tbody>\n</table>\n\n\n## \ud83d\udee0\ufe0f Installation\n\n- Create a new environment with the `conda` command:\n\n```shell\nconda env create -f environment.yaml\nconda activate scepter\n```\n\n- Install with the `pip` command:\n\nWe recommend installing the specific versions of PyTorch and the acceleration toolbox [xFormers](https://pypi.org/project/xformers/). 
You can install these recommended versions via pip:\n```shell\npip install -r requirements/recommended.txt\npip install scepter\n```\n\n## \ud83e\udde9 Generative Framework\n\n### Tutorials\n\n| Documentation                                      | Key Features                      |\n|:---------------------------------------------------|:----------------------------------|\n| [Train](docs/en/tutorials/train.md)                | DDP / FSDP / FairScale / xFormers |\n| [Inference](docs/en/tutorials/inference.md)        | Dynamic load/unload               |\n| [Dataset Management](docs/en/tutorials/dataset.md) | Local / HTTP / OSS / ModelScope   |\n\n\n## \ud83d\udcdd Popular Approaches\n\n### Currently supported approaches\n\n|            Tasks             |                    Methods                     | Links |\n|:----------------------------:|:----------------------------------------------:|:------|\n|   Text-to-image Generation   |                    SD v1.5                     | [![Hugging Face Repo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Repo-blue)](https://huggingface.co/runwayml/stable-diffusion-v1-5) |\n|   Text-to-image Generation   |                    SD v2.1                     | [![Hugging Face Repo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Repo-blue)](https://huggingface.co/stabilityai/stable-diffusion-2-1)
                                                                            |\n|   Text-to-image Generation   |                     SD-XL                      | [![Hugging Face Repo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Repo-blue)](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)                                                                                               |\n|   Text-to-image Generation   |                      FLUX                      | [![Hugging Face Repo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Repo-blue)](https://huggingface.co/black-forest-labs/FLUX.1-dev)                                                                                               |\n|       Efficient Tuning       |                      LoRA                      | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=LoRA&color=red&logo=arxiv)](https://arxiv.org/abs/2106.09685)                                                                                                                         |\n|       Efficient Tuning       |             Res-Tuning(NeurIPS23)              | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=Res-Tuing&color=red&logo=arxiv)](https://arxiv.org/abs/2310.19859) [![Page link](https://img.shields.io/badge/Page-ResTuning-Gree)](https://res-tuning.github.io/)                    |\n| Controllable Image Synthesis |  [\ud83c\udf1fSCEdit(CVPR24)](docs/en/tasks/scedit.md)   | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=SCEdit&color=red&logo=arxiv)](https://arxiv.org/abs/2312.11392)   [![Page link](https://img.shields.io/badge/Page-SCEdit-Gree)](https://scedit.github.io/)                            |\n|        Image Editing         |      [\ud83c\udf1fLAR-Gen](docs/en/tasks/largen.md)      | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=LARGen&color=red&logo=arxiv)](https://arxiv.org/abs/2403.19534)   [![Page 
link](https://img.shields.io/badge/Page-LARGen-Gree)](https://ali-vilab.github.io/largen-page/)             |\n|        Image Editing         |  [\ud83c\udf1fStyleBooth](docs/en/tasks/stylebooth.md)   | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=StyleBooth&color=red&logo=arxiv)](https://arxiv.org/abs/2404.12154)   [![Page link](https://img.shields.io/badge/Page-StyleBooth-Gree)](https://ali-vilab.github.io/stylebooth-page/) |\n| Image Generation and Editing | [\ud83c\udf1fACE](https://ali-vilab.github.io/ace-page/) | [![Arxiv   link](https://img.shields.io/static/v1?label=arXiv&message=ACE&color=red&logo=arxiv)](https://arxiv.org/abs/2410.00086)   [![Page link](https://img.shields.io/badge/Page-ACE-Gree)](https://ali-vilab.github.io/ace-page/) [![Demo link](https://img.shields.io/badge/Demo-ACE-purple)](https://huggingface.co/spaces/scepter-studio/ACE-Chat) <br> [![ModelScope link](https://img.shields.io/badge/ModelScope-Model-blue)](https://www.modelscope.cn/models/iic/ACE-0.6B-512px)  [![HuggingFace link](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-yellow)](https://huggingface.co/scepter-studio/ACE-0.6B-512px) |\n\n\n## \ud83d\udda5\ufe0f SCEPTER Studio\n\n### Launch\n\nTo fully experience **SCEPTER Studio**, run the following commands:\n\n```shell\npip install scepter\npython -m scepter.tools.webui\n```\nor, after cloning the repository, run:\n```shell\ngit clone https://github.com/modelscope/scepter.git\nPYTHONPATH=. 
python scepter/tools/webui.py --cfg scepter/methods/studio/scepter_ui.yaml\n```\n\nThe startup of **SCEPTER Studio** eliminates the need for manual downloading and organizing of models; it will automatically load the corresponding models and store them in a local directory.\nDepending on the network and hardware situation, the initial startup usually requires 15-60 minutes, primarily involving the download and processing of SDv1.5, SDv2.1, and SDXL models.\nTherefore, subsequent startups will become much faster (about one minute) as downloading is no longer required.\n\n### Usage Demo\n\n|              [Image Editing](https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fimage_editing_20240419.webm)              |                [Training](https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Ftraining_20240419.webm)                 |              [Model Sharing](https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fmodel_sharing_20240419.webm)               |             [Model Inference](https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fmodel_inference_20240419.webm)              |             [Data Management](https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fdata_management_20240419.webm)              
|\n|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------:|\n| <video src=\"https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fimage_editing_20240419.webm\" width=\"240\" controls></video> | <video src=\"https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Ftraining_20240419.webm\" width=\"240\" controls></video> | <video src=\"https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fmodel_sharing_20240419.webm\" width=\"240\" controls></video>  | <video src=\"https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fmodel_inference_20240419.webm\" width=\"240\" controls></video>  | <video src=\"https://www.modelscope.cn/api/v1/models/iic/scepter/repo?Revision=master&FilePath=assets%2Fscepter_studio%2Fdata_management_20240419.webm\" width=\"240\" controls></video>  |\n\n### Modelscope Studio & Huggingface Space\n\nWe deploy a work studio on Modelscope that includes only the inference tab, please refer to [ms_scepter_studio](https://www.modelscope.cn/studios/iic/scepter_studio/summary) and 
[hf_scepter_studio](https://huggingface.co/spaces/modelscope/scepter_studio).\n\n\n\n## \u2699\ufe0f ComfyUI Workflow\n\nWe support the use of all models in the ComfyUI Workflow through the following methods:\n\n1) Automatic installation directly via the ComfyUI Manager by searching for the **ComfyUI-Scepter** node.\n2) Manual installation: copy the custom nodes from Scepter to ComfyUI.\n```shell\ngit clone https://github.com/modelscope/scepter.git\ncd path/to/scepter\npip install -e .\ncp -r path/to/scepter/workflow/ path/to/ComfyUI/custom_nodes/ComfyUI-Scepter\ncd path/to/ComfyUI\npython main.py\n```\n\n**Note**: You can use the nodes by dragging the sample images into ComfyUI. Additionally, our nodes can automatically pull models from ModelScope or HuggingFace by selecting the *model_source* field, or you can place already-downloaded models in a local path.\n\n## \ud83d\udd0d Learn More\n\n- [Alibaba TongYi Vision Intelligence Lab](https://github.com/ali-vilab)\n\n  Discover more open-source projects on image generation, video generation, and editing tasks.\n\n- [ModelScope library](https://github.com/modelscope/modelscope/)\n\n  The ModelScope Library is the model library of the ModelScope project, which contains a large number of popular models.\n\n- [SWIFT library](https://github.com/modelscope/swift/)\n\n  SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning) is an extensible framework designed to facilitate lightweight model fine-tuning and inference.\n\n\n## BibTeX\n\nIf our work is useful for your research, please consider citing:\n```bibtex\n@misc{scepter,\n    title = {SCEPTER, https://github.com/modelscope/scepter},\n    author = {SCEPTER},\n    year = {2023}\n}\n```\n\n\n## License\n\nThis project is licensed under the [Apache License (Version 2.0)](https://github.com/modelscope/modelscope/blob/master/LICENSE).\n\n\n## Acknowledgement\n\nThanks to [Stability-AI](https://github.com/Stability-AI), [SWIFT 
library](https://github.com/modelscope/swift/), [Fooocus](https://github.com/lllyasviel/Fooocus) and [ComfyUI](https://github.com/comfyanonymous/ComfyUI) for their awesome work.\n",
    "bugtrack_url": null,
    "license": null,
    "summary": null,
    "version": "1.3.1",
    "project_urls": null,
    "split_keywords": [
        "compute vision",
        " framework",
        " generation",
        " image edition."
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "161d959d5814e96d5fd74edb44efd1612db1203b6b30ed0ea548946632193eb3",
                "md5": "153354ac7a53c48c8d293c31f6612cdd",
                "sha256": "0539ee974480f02698ae96f5597607a246c36bde4a8b74e1bf0641c062d6228e"
            },
            "downloads": -1,
            "filename": "scepter-1.3.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "153354ac7a53c48c8d293c31f6612cdd",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 988655,
            "upload_time": "2024-12-05T08:10:54",
            "upload_time_iso_8601": "2024-12-05T08:10:54.819421Z",
            "url": "https://files.pythonhosted.org/packages/16/1d/959d5814e96d5fd74edb44efd1612db1203b6b30ed0ea548946632193eb3/scepter-1.3.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-12-05 08:10:54",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "scepter"
}
        