mmagic

Name: mmagic
Version: 1.2.0
Home page: https://github.com/open-mmlab/mmagic
Summary: OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox
Upload time: 2023-12-18 13:53:24
Maintainer: MMagic Contributors
License: Apache License 2.0
Keywords: computer vision, super resolution, video interpolation, inpainting, matting, SISR, RefSR, VSR, GAN, VFI
Requirements: none recorded.
            <div id="top" align="center">
  <img src="docs/en/_static/image/mmagic-logo.png" width="500px"/>
  <div>&nbsp;</div>
  <div align="center">
    <font size="10"><b>M</b>ultimodal <b>A</b>dvanced, <b>G</b>enerative, and <b>I</b>ntelligent <b>C</b>reation (MMagic [em'mædʒɪk])</font>
  </div>
  <div>&nbsp;</div>
  <div align="center">
    <b><font size="5">OpenMMLab website</font></b>
    <sup>
      <a href="https://openmmlab.com">
        <i><font size="4">HOT</font></i>
      </a>
    </sup>
    &nbsp;&nbsp;&nbsp;&nbsp;
    <b><font size="5">OpenMMLab platform</font></b>
    <sup>
      <a href="https://platform.openmmlab.com">
        <i><font size="4">TRY IT OUT</font></i>
      </a>
    </sup>
  </div>
  <div>&nbsp;</div>

[![PyPI](https://badge.fury.io/py/mmagic.svg)](https://pypi.org/project/mmagic/)
[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmagic.readthedocs.io/en/latest/)
[![badge](https://github.com/open-mmlab/mmagic/workflows/build/badge.svg)](https://github.com/open-mmlab/mmagic/actions)
[![codecov](https://codecov.io/gh/open-mmlab/mmagic/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmagic)
[![license](https://img.shields.io/github/license/open-mmlab/mmagic.svg)](https://github.com/open-mmlab/mmagic/blob/main/LICENSE)
[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmagic.svg)](https://github.com/open-mmlab/mmagic/issues)
[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmagic.svg)](https://github.com/open-mmlab/mmagic/issues)
[![Open in OpenXLab](https://cdn-static.openxlab.org.cn/app-center/openxlab_demo.svg)](https://openxlab.org.cn/apps?search=mmagic)

[📘Documentation](https://mmagic.readthedocs.io/en/latest/) |
[🛠️Installation](https://mmagic.readthedocs.io/en/latest/get_started/install.html) |
[📊Model Zoo](https://mmagic.readthedocs.io/en/latest/model_zoo/overview.html) |
[🆕Update News](https://mmagic.readthedocs.io/en/latest/changelog.html) |
[🚀Ongoing Projects](https://github.com/open-mmlab/mmagic/projects) |
[🤔Reporting Issues](https://github.com/open-mmlab/mmagic/issues)

English | [简体中文](README_zh-CN.md)

</div>

<div align="center">
  <a href="https://openmmlab.medium.com/" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/218352562-cdded397-b0f3-4ca1-b8dd-a60df8dca75b.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://discord.gg/raweFPmdzG" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://twitter.com/OpenMMLab" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/218346637-d30c8a0f-3eba-4699-8131-512fb06d46db.png" width="3%" alt="" /></a>
  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
  <a href="https://www.youtube.com/openmmlab" style="text-decoration:none;">
    <img src="https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png" width="3%" alt="" /></a>
</div>

## 🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>

### New release [**MMagic v1.2.0**](https://github.com/open-mmlab/mmagic/releases/tag/v1.2.0) \[18/12/2023\]:

- An advanced and powerful inpainting algorithm named PowerPaint has been released in our repository. [Click to View](https://github.com/open-mmlab/mmagic/tree/main/projects/powerpaint)

We are excited to announce the release of MMagic v1.0.0, which inherits from [MMEditing](https://github.com/open-mmlab/mmediting) and [MMGeneration](https://github.com/open-mmlab/mmgeneration).

After iterative updates with the OpenMMLab 2.0 framework and the merge with MMGeneration, MMEditing has become a powerful tool that supports low-level algorithms based on both GANs and CNNs. Today, MMEditing embraces generative AI and transforms into a more advanced and comprehensive AIGC toolkit: **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation). MMagic provides more agile and flexible experimental support for researchers and AIGC enthusiasts, and helps you on your AIGC exploration journey.

We highlight the following new features.

**1. New Models**

We support 11 new models across 4 new tasks; a snippet for listing the models available for inference follows this overview.

- Text2Image / Diffusion
  - ControlNet
  - DreamBooth
  - Stable Diffusion
  - Disco Diffusion
  - GLIDE
  - Guided Diffusion
- 3D-aware Generation
  - EG3D
- Image Restoration
  - NAFNet
  - Restormer
  - SwinIR
- Image Colorization
  - InstColorization
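
As a quick check of what is available in your installed copy, the snippet below prints the model names the inference API recognizes. It is a minimal sketch assuming the `get_inference_supported_models` helper described in the inference docs exists in your version; if it does not, consult the model zoo instead.

```python
# Hedged sketch: list the models exposed through the high-level inference API.
# `get_inference_supported_models` follows the MMagic inference docs; verify
# that it exists in your installed version.
from mmagic.apis import MMagicInferencer

print(MMagicInferencer.get_inference_supported_models())
```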

**2. Magic Diffusion Model**

For diffusion models, we provide the following "magic" (a config sketch follows the list):

- Support image generation based on Stable Diffusion and Disco Diffusion.
- Support fine-tuning methods such as DreamBooth and DreamBooth LoRA.
- Support controllability in text-to-image generation using ControlNet.
- Support acceleration and optimization strategies based on xFormers to improve training and inference efficiency.
- Support video generation based on MultiFrame Render.
- Support calling basic models and sampling strategies through DiffuserWrapper.
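
To make the DiffuserWrapper point above concrete, here is an illustrative config sketch of how an MMagic config can wire HuggingFace diffusers components together declaratively. The field names are modeled on the Stable Diffusion configs in this repo, but treat them as assumptions and verify against the files under `configs/stable_diffusion` before use.

```python
# Illustrative only: an MMagic-style model config that references diffusers
# components (UNet, VAE, scheduler) by type and pretrained source, so they can
# be swapped without touching code. Verify field names against
# configs/stable_diffusion before relying on this.
pretrained = 'runwayml/stable-diffusion-v1-5'  # assumption: any SD 1.5 weights

model = dict(
    type='StableDiffusion',
    unet=dict(type='UNet2DConditionModel', subfolder='unet',
              from_pretrained=pretrained),
    vae=dict(type='AutoencoderKL', subfolder='vae',
             from_pretrained=pretrained),
    scheduler=dict(type='DDIMScheduler', subfolder='scheduler',
                   from_pretrained=pretrained),
)
```

Because each component is a registry entry, switching the sampler is a one-line change to the `scheduler` dict.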

**3. Upgraded Framework**

By using MMEngine and MMCV from the OpenMMLab 2.0 framework, MMagic has been upgraded with the following new features:

- Refactor DataSample to support the combination and splitting of batch dimensions.
- Refactor DataPreprocessor and unify the data format for various tasks during training and inference.
- Refactor MultiValLoop and MultiTestLoop, supporting the evaluation of both generation-type metrics (e.g. FID) and reconstruction-type metrics (e.g. SSIM), and supporting the evaluation of multiple datasets at once.
- Support visualization to local files or via tensorboard and wandb (see the config sketch below).
- Support 33+ algorithms accelerated by PyTorch 2.0.
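
The visualization bullet above maps to the standard MMEngine backend configuration. The sketch below uses MMEngine's built-in `LocalVisBackend`, `TensorboardVisBackend`, and `WandbVisBackend`; the generic `Visualizer` type is an assumption here, since MMagic configs may register a task-specific visualizer instead.

```python
# Hedged sketch of enabling local, tensorboard, and wandb visualization with
# standard MMEngine backends; drop any backend you do not need.
vis_backends = [
    dict(type='LocalVisBackend'),        # write images/scalars to local files
    dict(type='TensorboardVisBackend'),  # mirror them to tensorboard
    dict(type='WandbVisBackend'),        # and to Weights & Biases
]
visualizer = dict(type='Visualizer', vis_backends=vis_backends)
```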

**MMagic** supports all the tasks, models, metrics, and losses in [MMEditing](https://github.com/open-mmlab/mmediting) and [MMGeneration](https://github.com/open-mmlab/mmgeneration) and unifies the interfaces of all components based on [MMEngine](https://github.com/open-mmlab/mmengine) 😍.

Please refer to [changelog.md](docs/en/changelog.md) for details and release history.

Please refer to the [migration documents](docs/en/migration/overview.md) to migrate from the [old version](https://github.com/open-mmlab/mmagic/tree/0.x), MMEditing 0.x, to the new version, MMagic 1.x.

<div id="table" align="center"></div>

## 📄 Table of Contents

- [📖 Introduction](#-introduction)
- [🙌 Contributing](#-contributing)
- [🛠️ Installation](#️-installation)
- [📊 Model Zoo](#-model-zoo)
- [🤝 Acknowledgement](#-acknowledgement)
- [🖊️ Citation](#️-citation)
- [🎫 License](#-license)
- [🏗️ ️OpenMMLab Family](#️-️openmmlab-family)

## 📖 Introduction

MMagic (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation) is an advanced and comprehensive AIGC toolkit that inherits from [MMEditing](https://github.com/open-mmlab/mmediting) and [MMGeneration](https://github.com/open-mmlab/mmgeneration). It is an open-source image and video editing and generation toolbox based on PyTorch, and it is part of the [OpenMMLab](https://openmmlab.com/) project.

Currently, MMagic supports multiple image and video generation/editing tasks.

https://user-images.githubusercontent.com/49083766/233564593-7d3d48ed-e843-4432-b610-35e3d257765c.mp4

### ✨ Major features

- **State of the Art Models**

  MMagic provides state-of-the-art generative models to process, edit and synthesize images and videos.

- **Powerful and Popular Applications**

  MMagic supports popular and contemporary image restoration, text-to-image, 3D-aware generation, inpainting, matting, super-resolution, and generation applications. Specifically, MMagic supports fine-tuning for Stable Diffusion and many exciting diffusion applications such as ControlNet Animation with SAM. MMagic also supports GAN interpolation, GAN projection, GAN manipulation, and many other popular GAN applications. It's time to begin your AIGC exploration journey!

- **Efficient Framework**

  By using MMEngine and MMCV from the OpenMMLab 2.0 framework, MMagic decomposes the editing framework into different modules, and one can easily construct a customized editor framework by combining them. Training can be defined just like playing with Legos, with rich components and strategies provided. In MMagic, you have complete control over the training process through different levels of APIs. With the support of [MMSeparateDistributedDataParallel](https://github.com/open-mmlab/mmengine/blob/main/mmengine/model/wrappers/seperate_distributed.py), distributed training for dynamic architectures can be easily implemented (a minimal registry sketch follows).
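
As a minimal sketch of that modularity, the usual OpenMMLab 2.0 pattern is to load a config file and build the model through the registry. The config path below is hypothetical; substitute any file under `configs/`.

```python
# Hedged sketch: config-driven model construction via the registry.
from mmengine.config import Config
from mmagic.registry import MODELS

cfg = Config.fromfile('configs/esrgan/my_esrgan_config.py')  # hypothetical path
model = MODELS.build(cfg.model)  # swap modules by editing the config, not code
```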

### ✨ Best Practice

- The best practice on our main branch works with **Python 3.9+** and **PyTorch 2.0+**.

<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>

## 🙌 Contributing

More and more community contributors are joining us to make our repo better. Some recent community-contributed projects include:

- [SDXL](configs/stable_diffusion_xl/README.md) is contributed by @okotaku.
- [AnimateDiff](configs/animatediff/README.md) is contributed by @ElliotQi.
- [ViCo](configs/vico/README.md) is contributed by @FerryHuang.
- [DragGan](configs/draggan/README.md) is contributed by @qsun1.
- [FastComposer](configs/fastcomposer/README.md) is contributed by @xiaomile.

[Projects](projects/README.md) is open to make it easier for everyone to add projects to MMagic.

We appreciate all contributions to improve MMagic. Please refer to [CONTRIBUTING.md](https://github.com/open-mmlab/mmcv/blob/main/CONTRIBUTING.md) in MMCV and [CONTRIBUTING.md](https://github.com/open-mmlab/mmengine/blob/main/CONTRIBUTING.md) in MMEngine for more details about the contributing guidelines.

<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>

## 🛠️ Installation

MMagic depends on [PyTorch](https://pytorch.org/), [MMEngine](https://github.com/open-mmlab/mmengine) and [MMCV](https://github.com/open-mmlab/mmcv).
Below are quick steps for installation.

**Step 1.**
Install PyTorch following [official instructions](https://pytorch.org/get-started/locally/).

**Step 2.**
Install MMCV, MMEngine and MMagic with [MIM](https://github.com/open-mmlab/mim).

```shell
pip3 install openmim
mim install 'mmcv>=2.0.0'
mim install mmengine
mim install mmagic
```

**Step 3.**
Verify MMagic has been successfully installed.

```shell
cd ~
python -c "import mmagic; print(mmagic.__version__)"
# Example output: 1.2.0
```

**Getting Started**

After installing MMagic successfully, you are ready to play with MMagic! Generating an image from text takes only a few lines of code:

```python
from mmagic.apis import MMagicInferencer
sd_inferencer = MMagicInferencer(model_name='stable_diffusion')
text_prompts = 'A panda is having dinner at KFC'
result_out_dir = 'output/sd_res.png'
sd_inferencer.infer(text=text_prompts, result_out_dir=result_out_dir)
```
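
The same entry point covers non-diffusion tasks as well. Below is a hedged super-resolution example: the `'esrgan'` model name and the `img=` keyword follow the inference docs, but confirm both against your installed version; the file paths are placeholders.

```python
# Hedged sketch: image super-resolution through the same MMagicInferencer API.
from mmagic.apis import MMagicInferencer

sr_inferencer = MMagicInferencer(model_name='esrgan')  # assumed model name
sr_inferencer.infer(img='input/low_res.png',           # placeholder input path
                    result_out_dir='output/high_res.png')
```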

Please see [quick run](docs/en/get_started/quick_run.md) and [inference](docs/en/user_guides/inference.md) for the basic usage of MMagic.

**Install MMagic from source**

You can also experiment with the latest development version rather than the stable release by installing MMagic from source with the following commands:

```shell
git clone https://github.com/open-mmlab/mmagic.git
cd mmagic
pip3 install -e .
```

Please refer to [installation](docs/en/get_started/install.md) for more detailed instructions.

<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>

## 📊 Model Zoo

<div align="center">
  <b>Supported algorithms</b>
</div>
<table align="center">
  <tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Conditional GANs</b>
      </td>
      <td>
        <b>Unconditional GANs</b>
      </td>
      <td>
        <b>Image Restoration</b>
      </td>
      <td>
        <b>Image Super-Resolution</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
        <ul>
            <li><a href="configs/sngan_proj/README.md">SNGAN/Projection GAN (ICLR'2018)</a></li>
            <li><a href="configs/sagan/README.md">SAGAN (ICML'2019)</a></li>
            <li><a href="configs/biggan/README.md">BIGGAN/BIGGAN-DEEP (ICLR'2018)</a></li>
      </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/dcgan/README.md">DCGAN (ICLR'2016)</a></li>
          <li><a href="configs/wgan-gp/README.md">WGAN-GP (NeurIPS'2017)</a></li>
          <li><a href="configs/lsgan/README.md">LSGAN (ICCV'2017)</a></li>
          <li><a href="configs/ggan/README.md">GGAN (ArXiv'2017)</a></li>
          <li><a href="configs/pggan/README.md">PGGAN (ICLR'2018)</a></li>
          <li><a href="configs/singan/README.md">SinGAN (ICCV'2019)</a></li>
          <li><a href="configs/styleganv1/README.md">StyleGANV1 (CVPR'2019)</a></li>
          <li><a href="configs/styleganv2/README.md">StyleGANV2 (CVPR'2019)</a></li>
          <li><a href="configs/styleganv3/README.md">StyleGANV3 (NeurIPS'2021)</a></li>
          <li><a href="configs/draggan/README.md">DragGan (2023)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/swinir/README.md">SwinIR (ICCVW'2021)</a></li>
          <li><a href="configs/nafnet/README.md">NAFNet (ECCV'2022)</a></li>
          <li><a href="configs/restormer/README.md">Restormer (CVPR'2022)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/srcnn/README.md">SRCNN (TPAMI'2015)</a></li>
          <li><a href="configs/srgan_resnet/README.md">SRResNet&SRGAN (CVPR'2016)</a></li>
          <li><a href="configs/edsr/README.md">EDSR (CVPR'2017)</a></li>
          <li><a href="configs/esrgan/README.md">ESRGAN (ECCV'2018)</a></li>
          <li><a href="configs/rdn/README.md">RDN (CVPR'2018)</a></li>
          <li><a href="configs/dic/README.md">DIC (CVPR'2020)</a></li>
          <li><a href="configs/ttsr/README.md">TTSR (CVPR'2020)</a></li>
          <li><a href="configs/glean/README.md">GLEAN (CVPR'2021)</a></li>
          <li><a href="configs/liif/README.md">LIIF (CVPR'2021)</a></li>
          <li><a href="configs/real_esrgan/README.md">Real-ESRGAN (ICCVW'2021)</a></li>
        </ul>
      </td>
    </tr>
  </tbody>
<tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Video Super-Resolution</b>
      </td>
      <td>
        <b>Video Interpolation</b>
      </td>
      <td>
        <b>Image Colorization</b>
      </td>
      <td>
        <b>Image Translation</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
        <ul>
            <li><a href="configs/edvr/README.md">EDVR (CVPR'2018)</a></li>
            <li><a href="configs/tof/README.md">TOF (IJCV'2019)</a></li>
            <li><a href="configs/tdan/README.md">TDAN (CVPR'2020)</a></li>
            <li><a href="configs/basicvsr/README.md">BasicVSR (CVPR'2021)</a></li>
            <li><a href="configs/iconvsr/README.md">IconVSR (CVPR'2021)</a></li>
            <li><a href="configs/basicvsr_pp/README.md">BasicVSR++ (CVPR'2022)</a></li>
            <li><a href="configs/real_basicvsr/README.md">RealBasicVSR (CVPR'2022)</a></li>
      </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/tof/README.md">TOFlow (IJCV'2019)</a></li>
          <li><a href="configs/cain/README.md">CAIN (AAAI'2020)</a></li>
          <li><a href="configs/flavr/README.md">FLAVR (CVPR'2021)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/inst_colorization/README.md">InstColorization (CVPR'2020)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/pix2pix/README.md">Pix2Pix (CVPR'2017)</a></li>
          <li><a href="configs/cyclegan/README.md">CycleGAN (ICCV'2017)</a></li>
        </ul>
      </td>
    </tr>
  </tbody>
<tbody>
    <tr align="center" valign="bottom">
      <td>
        <b>Inpainting</b>
      </td>
      <td>
        <b>Matting</b>
      </td>
      <td>
        <b>Text-to-Image(Video)</b>
      </td>
      <td>
        <b>3D-aware Generation</b>
      </td>
    </tr>
    <tr valign="top">
      <td>
        <ul>
          <li><a href="configs/global_local/README.md">Global&Local (ToG'2017)</a></li>
          <li><a href="configs/deepfillv1/README.md">DeepFillv1 (CVPR'2018)</a></li>
          <li><a href="configs/partial_conv/README.md">PConv (ECCV'2018)</a></li>
          <li><a href="configs/deepfillv2/README.md">DeepFillv2 (CVPR'2019)</a></li>
          <li><a href="configs/aot_gan/README.md">AOT-GAN (TVCG'2019)</a></li>
          <li><a href="configs/stable_diffusion/README.md">Stable Diffusion Inpainting (CVPR'2022)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/dim/README.md">DIM (CVPR'2017)</a></li>
          <li><a href="configs/indexnet/README.md">IndexNet (ICCV'2019)</a></li>
          <li><a href="configs/gca/README.md">GCA (AAAI'2020)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="projects/glide/configs/README.md">GLIDE (NeurIPS'2021)</a></li>
          <li><a href="configs/guided_diffusion/README.md">Guided Diffusion (NeurIPS'2021)</a></li>
          <li><a href="configs/disco_diffusion/README.md">Disco-Diffusion (2022)</a></li>
          <li><a href="configs/stable_diffusion/README.md">Stable-Diffusion (2022)</a></li>
          <li><a href="configs/dreambooth/README.md">DreamBooth (2022)</a></li>
          <li><a href="configs/textual_inversion/README.md">Textual Inversion (2022)</a></li>
          <li><a href="projects/prompt_to_prompt/README.md">Prompt-to-Prompt (2022)</a></li>
          <li><a href="projects/prompt_to_prompt/README.md">Null-text Inversion (2022)</a></li>
          <li><a href="configs/controlnet/README.md">ControlNet (2023)</a></li>
          <li><a href="configs/controlnet_animation/README.md">ControlNet Animation (2023)</a></li>
          <li><a href="configs/stable_diffusion_xl/README.md">Stable Diffusion XL (2023)</a></li>
          <li><a href="configs/animatediff/README.md">AnimateDiff (2023)</a></li>
          <li><a href="configs/vico/README.md">ViCo (2023)</a></li>
          <li><a href="configs/fastcomposer/README.md">FastComposer (2023)</a></li>
          <li><a href="projects/powerpaint/README.md">PowerPaint (2023)</a></li>
        </ul>
      </td>
      <td>
        <ul>
          <li><a href="configs/eg3d/README.md">EG3D (CVPR'2022)</a></li>
        </ul>
      </td>
    </tr>
  </tbody>
</table>

Please refer to [model_zoo](https://mmagic.readthedocs.io/en/latest/model_zoo/overview.html) for more details.

<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>

## 🤝 Acknowledgement

MMagic is an open-source project contributed to by researchers and engineers from various colleges and companies. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit for reimplementing existing methods and developing new ones.

We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. Thank you all!

<a href="https://github.com/open-mmlab/mmagic/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=open-mmlab/mmagic" />
</a>

<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>

## 🖊️ Citation

If MMagic is helpful to your research, please cite it as below.

```bibtex
@misc{mmagic2023,
    title = {{MMagic}: {OpenMMLab} Multimodal Advanced, Generative, and Intelligent Creation Toolbox},
    author = {{MMagic Contributors}},
    howpublished = {\url{https://github.com/open-mmlab/mmagic}},
    year = {2023}
}
```

```bibtex
@misc{mmediting2022,
    title = {{MMEditing}: {OpenMMLab} Image and Video Editing Toolbox},
    author = {{MMEditing Contributors}},
    howpublished = {\url{https://github.com/open-mmlab/mmediting}},
    year = {2022}
}
```

<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>

## 🎫 License

This project is released under the [Apache 2.0 license](LICENSE).
Please check the [LICENSE](LICENSE) file carefully if you are using our code for commercial purposes.

<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>

## 🏗️ ️OpenMMLab Family

- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.
- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
- [MMPreTrain](https://github.com/open-mmlab/mmpretrain): OpenMMLab Pre-training Toolbox and Benchmark.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMagic](https://github.com/open-mmlab/mmagic): OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.

<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>



            
