<div align="center">
<img src="docs/en/_static/image/mmediting-logo.png" width="500px"/>
<div> </div>
<div align="center">
<b><font size="5">OpenMMLab website</font></b>
<sup>
<a href="https://openmmlab.com">
<i><font size="4">HOT</font></i>
</a>
</sup>
<b><font size="5">OpenMMLab platform</font></b>
<sup>
<a href="https://platform.openmmlab.com">
<i><font size="4">TRY IT OUT</font></i>
</a>
</sup>
</div>
<div> </div>
[![PyPI](https://badge.fury.io/py/mmedit.svg)](https://pypi.org/project/mmedit/)
[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmediting.readthedocs.io/en/latest/)
[![badge](https://github.com/open-mmlab/mmediting/workflows/build/badge.svg)](https://github.com/open-mmlab/mmediting/actions)
[![codecov](https://codecov.io/gh/open-mmlab/mmediting/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmediting)
[![license](https://img.shields.io/github/license/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/blob/master/LICENSE)
[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/issues)
[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/issues)
[📘Documentation](https://mmediting.readthedocs.io/en/latest/) |
[🛠️Installation](https://mmediting.readthedocs.io/en/latest/install.html) |
[👀Model Zoo](https://mmediting.readthedocs.io/en/latest/_tmp/modelzoo.html) |
[🆕Update News](https://github.com/open-mmlab/mmediting/blob/master/docs/en/changelog.md) |
[🚀Ongoing Projects](https://github.com/open-mmlab/mmediting/projects) |
[🤔Reporting Issues](https://github.com/open-mmlab/mmediting/issues)
</div>
<div align="center">
English | [简体中文](README_zh-CN.md)
</div>
## Introduction
MMEditing is an open-source image and video editing toolbox based on PyTorch. It is a part of the [OpenMMLab](https://openmmlab.com/) project. Currently, MMEditing supports:
<div align="center">
<img src="https://user-images.githubusercontent.com/12756472/158984079-c4754015-c1f6-48c5-ac46-62e79448c372.jpg"/>
</div>
The master branch works with **PyTorch 1.5+**.
Some demos:
https://user-images.githubusercontent.com/12756472/175944645-cabe8c2b-9f25-440b-91cc-cdac4e752c5a.mp4
https://user-images.githubusercontent.com/12756472/158972813-d8d0f19c-f49c-4618-9967-52652726ef19.mp4
<details open>
<summary>Major features</summary>
- **Modular design**
We decompose the editing framework into different components, so one can easily construct a customized editing framework by combining different modules.
- **Support of multiple tasks in editing**
The toolbox directly supports popular and contemporary *inpainting*, *matting*, *super-resolution* and *generation* tasks.
- **State of the art**
The toolbox provides state-of-the-art methods in inpainting/matting/super-resolution/generation.
Note that **MMSR** has been merged into this repo as part of MMEditing.
With the elaborate design of the new framework and careful implementations,
we hope MMEditing provides a better experience.
## What's New
MMEditing maintains both master and 1.x branches. See more details in [Branch Maintenance Plan](README.md#branch-maintenance-plan).
### 💎 Stable version
**0.16.1** was released on 24/02/2023:
- Support FID and KID metrics.
- Support the `groups` parameter in `ResidualBlockNoBN`.
- Fix the RealESRGAN test dataset.
- Fix dynamic ONNX export of `pixel-unshuffle`.
Please refer to [changelog.md](docs/en/changelog.md) for details and release history.
### 🌟 Preview of 1.x version
A brand new version of [**MMEditing v1.0.0rc6**](https://github.com/open-mmlab/mmediting/releases/tag/v1.0.0rc6) was released on 24/02/2023:
- Support all the tasks, models, metrics, and losses in [MMGeneration](https://github.com/open-mmlab/mmgeneration) 😍.
- Unify the interfaces of all components based on [MMEngine](https://github.com/open-mmlab/mmengine).
- Refactored and more flexible [architecture](https://mmediting.readthedocs.io/en/1.x/1_overview.html).
- Support the well-known text-to-image method [Stable Diffusion](https://github.com/open-mmlab/mmediting/tree/1.x/configs/stable_diffusion/README.md)!
- Support the new text-to-image algorithm [GLIDE](https://github.com/open-mmlab/mmediting/tree/1.x/projects/glide/configs/README.md)!
- Support the text-to-image task with [Disco-Diffusion](https://github.com/open-mmlab/mmediting/tree/1.x/configs/disco_diffusion/README.md)!
- Support the 3D-aware generation task with [EG3D](https://github.com/open-mmlab/mmediting/tree/1.x/configs/eg3d/README.md)!
- Support the efficient image restoration algorithm [Restormer](https://github.com/open-mmlab/mmediting/tree/1.x/configs/restormer/README.md)!
- Support the Swin-based image restoration algorithm [SwinIR](https://github.com/open-mmlab/mmediting/tree/1.x/configs/swinir/README.md)!
- Support [Image Colorization](https://github.com/open-mmlab/mmediting/tree/1.x/configs/inst_colorization/README.md).
- [Projects](https://github.com/open-mmlab/mmediting/tree/1.x/projects/README.md) is now open for the community to add their own projects to MMEditing.
- Support high-level APIs and an inferencer.
- Support a Gradio GUI for inpainting inference.
- Support patch-based and slider-based image and video comparison viewers.
Find more new features in [1.x branch](https://github.com/open-mmlab/mmediting/tree/1.x). Issues and PRs are welcome!
## Installation
MMEditing depends on [PyTorch](https://pytorch.org/) and [MMCV](https://github.com/open-mmlab/mmcv).
Below are quick steps for installation.
**Step 1.**
Install PyTorch following [official instructions](https://pytorch.org/get-started/locally/).
**Step 2.**
Install MMCV with [MIM](https://github.com/open-mmlab/mim).
```shell
pip3 install openmim
mim install mmcv-full
```
**Step 3.**
Install MMEditing from source.
```shell
git clone https://github.com/open-mmlab/mmediting.git
cd mmediting
pip3 install -e .
```
Please refer to [install.md](docs/en/install.md) for more detailed instructions.
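After the steps above, a quick sanity check confirms the editable install is importable (a minimal sketch; the printed version will vary with your environment):

```python
# Sanity-check the install: look up the mmedit module without assuming
# it is present, so this snippet never raises on a broken environment.
import importlib.util

spec = importlib.util.find_spec("mmedit")
if spec is None:
    print("mmedit is not importable; re-check the installation steps above.")
else:
    import mmedit
    print("mmedit version:", mmedit.__version__)
```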
## Getting Started
Please see [getting_started.md](docs/en/getting_started.md) and [demo.md](docs/en/demo.md) for the basic usage of MMEditing.
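As a taste of the basic usage covered there, the 0.x release ships task-specific inference helpers in `mmedit.apis`. The sketch below shows the restoration (super-resolution) flow; the config, checkpoint, and image paths are placeholders to be replaced with files from the model zoo, and the import is guarded so the snippet is safe to paste before MMEditing is installed:

```python
# A minimal single-image restoration sketch using the 0.x high-level API.
# All paths passed to upscale() are placeholders, not real files.
try:
    from mmedit.apis import init_model, restoration_inference
except ImportError:  # mmedit not installed; see the Installation section
    init_model = restoration_inference = None

def upscale(config_path, checkpoint_path, image_path, device="cuda:0"):
    """Build a restoration model from a config + checkpoint, run it on one image."""
    model = init_model(config_path, checkpoint_path, device=device)
    return restoration_inference(model, image_path)
```

See `demo/restoration_demo.py` in the repository for the full command-line version of this flow.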
## Model Zoo
Supported algorithms:
<details open>
<summary>Inpainting</summary>
- [x] [Global&Local](configs/inpainting/global_local/README.md) (ToG'2017)
- [x] [DeepFillv1](configs/inpainting/deepfillv1/README.md) (CVPR'2018)
- [x] [PConv](configs/inpainting/partial_conv/README.md) (ECCV'2018)
- [x] [DeepFillv2](configs/inpainting/deepfillv2/README.md) (CVPR'2019)
- [x] [AOT-GAN](configs/inpainting/AOT-GAN/README.md) (TVCG'2021)
</details>
<details open>
<summary>Matting</summary>
- [x] [DIM](configs/mattors/dim/README.md) (CVPR'2017)
- [x] [IndexNet](configs/mattors/indexnet/README.md) (ICCV'2019)
- [x] [GCA](configs/mattors/gca/README.md) (AAAI'2020)
</details>
<details open>
<summary>Image-Super-Resolution</summary>
- [x] [SRCNN](configs/restorers/srcnn/README.md) (TPAMI'2015)
- [x] [SRResNet&SRGAN](configs/restorers/srresnet_srgan/README.md) (CVPR'2016)
- [x] [EDSR](configs/restorers/edsr/README.md) (CVPR'2017)
- [x] [ESRGAN](configs/restorers/esrgan/README.md) (ECCV'2018)
- [x] [RDN](configs/restorers/rdn/README.md) (CVPR'2018)
- [x] [DIC](configs/restorers/dic/README.md) (CVPR'2020)
- [x] [TTSR](configs/restorers/ttsr/README.md) (CVPR'2020)
- [x] [GLEAN](configs/restorers/glean/README.md) (CVPR'2021)
- [x] [LIIF](configs/restorers/liif/README.md) (CVPR'2021)
</details>
<details open>
<summary>Video-Super-Resolution</summary>
- [x] [EDVR](configs/restorers/edvr/README.md) (CVPR'2019)
- [x] [TOF](configs/restorers/tof/README.md) (IJCV'2019)
- [x] [TDAN](configs/restorers/tdan/README.md) (CVPR'2020)
- [x] [BasicVSR](configs/restorers/basicvsr/README.md) (CVPR'2021)
- [x] [IconVSR](configs/restorers/iconvsr/README.md) (CVPR'2021)
- [x] [BasicVSR++](configs/restorers/basicvsr_plusplus/README.md) (CVPR'2022)
- [x] [RealBasicVSR](configs/restorers/real_basicvsr/README.md) (CVPR'2022)
</details>
<details open>
<summary>Generation</summary>
- [x] [CycleGAN](configs/synthesizers/cyclegan/README.md) (ICCV'2017)
- [x] [pix2pix](configs/synthesizers/pix2pix/README.md) (CVPR'2017)
</details>
<details open>
<summary>Video Interpolation</summary>
- [x] [TOFlow](configs/video_interpolators/tof/README.md) (IJCV'2019)
- [x] [CAIN](configs/video_interpolators/cain/README.md) (AAAI'2020)
- [x] [FLAVR](configs/video_interpolators/flavr/README.md) (CVPR'2021)
</details>
Please refer to [model_zoo](https://mmediting.readthedocs.io/en/latest/_tmp/modelzoo.html) for more details.
## Contributing
We appreciate all contributions to improve MMEditing. Please refer to our [contributing guidelines](https://github.com/open-mmlab/mmediting/wiki/A.-Contribution-Guidelines).
## Acknowledgement
MMEditing is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new ones.
## Branch Maintenance Plan
MMEditing currently has two branches, the master and 1.x branches, which go through the following three phases.
| Phase | Time | Branch | Description |
| -------------------- | --------------------- | ----------------------------------------------------------------------------- | ---------------------------------------------------------------------------------- |
| RC Period | 2022/9/1 - 2022/12/31 | Release candidate code (1.x version) is released on the 1.x branch; the default master branch remains the 0.x version | Master and 1.x branches iterate normally |
| Compatibility Period | 2023/1/1 - 2023/12/31 | **The default master branch is switched to the 1.x branch**, and the 0.x branch corresponds to the 0.x version | The old 0.x version is still maintained and user needs are addressed, but changes that break compatibility are avoided; the master branch iterates normally |
| Maintenance Period | From 2024/1/1 | The default master branch corresponds to the 1.x version and the 0.x branch to the 0.x version | The 0.x branch is in its maintenance phase with no new feature support; the master branch iterates normally |
## Citation
If MMEditing is helpful to your research, please cite it as follows.
```bibtex
@misc{mmediting2022,
title = {{MMEditing}: {OpenMMLab} Image and Video Editing Toolbox},
author = {{MMEditing Contributors}},
howpublished = {\url{https://github.com/open-mmlab/mmediting}},
year = {2022}
}
```
## License
This project is released under the [Apache 2.0 license](LICENSE).
## Projects in OpenMMLab
- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.
- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MMEval](https://github.com/open-mmlab/mmeval): A unified evaluation library for multiple machine learning libraries.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.