ray-mmedit

Name: ray-mmedit
Version: 0.16.0
Home page: https://github.com/open-mmlab/mmediting
Summary: OpenMMLab Image and Video Editing Toolbox and Benchmark
Upload time: 2023-09-07 03:29:22
Maintainer: MMEditing Contributors (openmmlab@gmail.com)
License: Apache License 2.0
Keywords: computer vision, super resolution, video interpolation, inpainting, matting, SISR, RefSR, VSR, GAN, VFI
Download: https://files.pythonhosted.org/packages/9e/51/110fded09bb2d92f2cf7a2007805ffdbde5d71608bf71b05a296053e36a4/ray-mmedit-0.16.0.tar.gz
Requirements: none recorded
<div align="center">
  <img src="docs/en/_static/image/mmediting-logo.png" width="500px"/>
  <div>&nbsp;</div>
  <div align="center">
    <b><font size="5">OpenMMLab website</font></b>
    <sup>
      <a href="https://openmmlab.com">
        <i><font size="4">HOT</font></i>
      </a>
    </sup>
    &nbsp;&nbsp;&nbsp;&nbsp;
    <b><font size="5">OpenMMLab platform</font></b>
    <sup>
      <a href="https://platform.openmmlab.com">
        <i><font size="4">TRY IT OUT</font></i>
      </a>
    </sup>
  </div>
  <div>&nbsp;</div>

[![PyPI](https://badge.fury.io/py/mmedit.svg)](https://pypi.org/project/mmedit/)
[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmediting.readthedocs.io/en/latest/)
[![badge](https://github.com/open-mmlab/mmediting/workflows/build/badge.svg)](https://github.com/open-mmlab/mmediting/actions)
[![codecov](https://codecov.io/gh/open-mmlab/mmediting/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmediting)
[![license](https://img.shields.io/github/license/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/blob/master/LICENSE)
[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/issues)
[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/issues)

[📘Documentation](https://mmediting.readthedocs.io/en/latest/) |
[🛠️Installation](https://mmediting.readthedocs.io/en/latest/install.html) |
[👀Model Zoo](https://mmediting.readthedocs.io/en/latest/_tmp/modelzoo.html) |
[🆕Update News](https://github.com/open-mmlab/mmediting/blob/master/docs/en/changelog.md) |
[🚀Ongoing Projects](https://github.com/open-mmlab/mmediting/projects) |
[🤔Reporting Issues](https://github.com/open-mmlab/mmediting/issues)

</div>

English | [简体中文](/README_zh-CN.md)

## Introduction

MMEditing is an open-source image and video editing toolbox based on PyTorch. It is a part of the [OpenMMLab](https://openmmlab.com/) project.

Currently, MMEditing supports the following tasks:

<div align="center">
  <img src="https://user-images.githubusercontent.com/12756472/158984079-c4754015-c1f6-48c5-ac46-62e79448c372.jpg"/>
</div>

The master branch works with **PyTorch 1.5+**.

Some Demos:

https://user-images.githubusercontent.com/12756472/175944645-cabe8c2b-9f25-440b-91cc-cdac4e752c5a.mp4

https://user-images.githubusercontent.com/12756472/158972813-d8d0f19c-f49c-4618-9967-52652726ef19.mp4

<details open>
<summary>Major features</summary>

- **Modular design**

  We decompose the editing framework into different components, so one can easily construct a customized editing framework by combining different modules (see the build sketch below).

- **Support of multiple tasks in editing**

  The toolbox directly supports popular and contemporary *inpainting*, *matting*, *super-resolution* and *generation* tasks.

- **State of the art**

  The toolbox provides state-of-the-art methods in inpainting/matting/super-resolution/generation.

Note that **MMSR** has been merged into this repo as a part of MMEditing.
With the elaborate design of the new framework and careful implementation,
we hope MMEditing provides a better experience.

</details>
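
Below is a minimal sketch of the modular, config-driven style this enables. It assumes the 0.x registry builders (`build_model` from `mmedit.models`) and borrows the SRCNN generator settings from the model zoo configs as an illustration; exact field names may differ in your version.

```python
# Hedged sketch: compose a restorer from interchangeable parts via the 0.x registry.
from mmedit.models import build_model

model_cfg = dict(
    type='BasicRestorer',  # swap the components below without touching model code
    generator=dict(type='SRCNN', channels=(3, 64, 32, 3),
                   kernel_sizes=(9, 1, 5), upscale_factor=4),
    pixel_loss=dict(type='L1Loss', loss_weight=1.0, reduction='mean'),
)
model = build_model(model_cfg, train_cfg=None,
                    test_cfg=dict(metrics=['PSNR'], crop_border=4))
print(type(model).__name__)  # BasicRestorer
```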

## What's New

### 💎 Stable version

**0.16.0** was released on 31/10/2022:

- `VisualizationHook` is deprecated. Users should use `MMEditVisualizationHook` instead (see the config sketch below).
- Fix FLAVR registration.
- Fix the number of channels in RDB.
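
For the first item, the change is typically just the hook's registered type string in an existing config. A hedged sketch, assuming a 0.x-style `visual_config` entry (field names such as `output_dir`, `interval`, and `res_name_list` follow typical inpainting configs and may differ in yours):

```python
# Before: type='VisualizationHook' (deprecated); after: 'MMEditVisualizationHook'.
visual_config = dict(
    type='MMEditVisualizationHook',
    output_dir='visual',
    interval=1000,
    res_name_list=['gt_img', 'masked_img', 'fake_res', 'fake_img'],
)
```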

Please refer to [changelog.md](docs/en/changelog.md) for details and release history.

### 🌟 Preview of 1.x version

A brand new version of [**MMEditing v1.0.0rc1**](https://github.com/open-mmlab/mmediting/releases/tag/v1.0.0rc1) was released on 24/09/2022:

- Supports all the tasks, models, metrics, and losses in [MMGeneration](https://github.com/open-mmlab/mmgeneration) 😍.
- Unifies interfaces of all components based on [MMEngine](https://github.com/open-mmlab/mmengine).
- Provides a refactored and more flexible [architecture](https://mmediting.readthedocs.io/en/1.x/1_overview.html).

Find more new features in [1.x branch](https://github.com/open-mmlab/mmediting/tree/1.x). Issues and PRs are welcome!

## Installation

MMEditing depends on [PyTorch](https://pytorch.org/) and [MMCV](https://github.com/open-mmlab/mmcv).
Below are quick steps for installation.

**Step 1.**
Install PyTorch following [official instructions](https://pytorch.org/get-started/locally/).

**Step 2.**
Install MMCV with [MIM](https://github.com/open-mmlab/mim).

```shell
pip3 install openmim
mim install mmcv-full
```

**Step 3.**
Install MMEditing from source.

```shell
git clone https://github.com/open-mmlab/mmediting.git
cd mmediting
pip3 install -e .
```

Please refer to [install.md](docs/en/install.md) for more detailed instructions.
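
To sanity-check the installation, a quick version check (assuming the package imports as `mmedit`):

```python
# Verify that PyTorch, MMCV, and MMEditing import and report their versions.
import torch
import mmcv
import mmedit

print('PyTorch:  ', torch.__version__)
print('MMCV:     ', mmcv.__version__)
print('MMEditing:', mmedit.__version__)
```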

## Getting Started

Please see [getting_started.md](docs/en/getting_started.md) and [demo.md](docs/en/demo.md) for the basic usage of MMEditing.
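
As a taste of what [demo.md](docs/en/demo.md) covers, here is a minimal sketch of single-image restoration with the 0.x high-level API. It assumes `init_model` and `restoration_inference` are exposed by `mmedit.apis` and `tensor2img` by `mmedit.core`, as in the 0.x demo scripts; the config, checkpoint, and image paths are placeholders.

```python
import mmcv
from mmedit.apis import init_model, restoration_inference
from mmedit.core import tensor2img

# Placeholder paths: pick any restorer config/checkpoint pair from the model zoo.
config = 'configs/restorers/esrgan/esrgan_x4c64b23g32_g1_400k_div2k.py'
checkpoint = 'esrgan_checkpoint.pth'

model = init_model(config, checkpoint, device='cuda:0')
output = restoration_inference(model, 'input_lq.png')  # low-resolution input image
mmcv.imwrite(tensor2img(output), 'restored.png')       # save the super-resolved result
```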

## Model Zoo

Supported algorithms:

<details open>
<summary>Inpainting</summary>

- [x] [Global&Local](configs/inpainting/global_local/README.md) (ToG'2017)
- [x] [DeepFillv1](configs/inpainting/deepfillv1/README.md) (CVPR'2018)
- [x] [PConv](configs/inpainting/partial_conv/README.md) (ECCV'2018)
- [x] [DeepFillv2](configs/inpainting/deepfillv2/README.md) (CVPR'2019)
- [x] [AOT-GAN](configs/inpainting/AOT-GAN/README.md) (TVCG'2021)

</details>

<details open>
<summary>Matting</summary>

- [x] [DIM](configs/mattors/dim/README.md) (CVPR'2017)
- [x] [IndexNet](configs/mattors/indexnet/README.md) (ICCV'2019)
- [x] [GCA](configs/mattors/gca/README.md) (AAAI'2020)

</details>

<details open>
<summary>Image-Super-Resolution</summary>

- [x] [SRCNN](configs/restorers/srcnn/README.md) (TPAMI'2015)
- [x] [SRResNet&SRGAN](configs/restorers/srresnet_srgan/README.md) (CVPR'2016)
- [x] [EDSR](configs/restorers/edsr/README.md) (CVPR'2017)
- [x] [ESRGAN](configs/restorers/esrgan/README.md) (ECCV'2018)
- [x] [RDN](configs/restorers/rdn/README.md) (CVPR'2018)
- [x] [DIC](configs/restorers/dic/README.md) (CVPR'2020)
- [x] [TTSR](configs/restorers/ttsr/README.md) (CVPR'2020)
- [x] [GLEAN](configs/restorers/glean/README.md) (CVPR'2021)
- [x] [LIIF](configs/restorers/liif/README.md) (CVPR'2021)

</details>

<details open>
<summary>Video-Super-Resolution</summary>

- [x] [EDVR](configs/restorers/edvr/README.md) (CVPR'2019)
- [x] [TOF](configs/restorers/tof/README.md) (IJCV'2019)
- [x] [TDAN](configs/restorers/tdan/README.md) (CVPR'2020)
- [x] [BasicVSR](configs/restorers/basicvsr/README.md) (CVPR'2021)
- [x] [IconVSR](configs/restorers/iconvsr/README.md) (CVPR'2021)
- [x] [BasicVSR++](configs/restorers/basicvsr_plusplus/README.md) (CVPR'2022)
- [x] [RealBasicVSR](configs/restorers/real_basicvsr/README.md) (CVPR'2022)

</details>

<details open>
<summary>Generation</summary>

- [x] [CycleGAN](configs/synthesizers/cyclegan/README.md) (ICCV'2017)
- [x] [pix2pix](configs/synthesizers/pix2pix/README.md) (CVPR'2017)

</details>

<details open>
<summary>Video Interpolation</summary>

- [x] [TOFlow](configs/video_interpolators/tof/README.md) (IJCV'2019)
- [x] [CAIN](configs/video_interpolators/cain/README.md) (AAAI'2020)
- [x] [FLAVR](configs/video_interpolators/flavr/README.md) (CVPR'2021)

</details>

Please refer to [model_zoo](https://mmediting.readthedocs.io/en/latest/_tmp/modelzoo.html) for more details.

## Contributing

We appreciate all contributions to improve MMEditing. Please refer to our [contributing guidelines](https://github.com/open-mmlab/mmediting/wiki/A.-Contribution-Guidelines).

## Acknowledgement

MMEditing is an open-source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope the toolbox and benchmark serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new methods.

## Citation

If MMEditing is helpful to your research, please cite it as follows.

```bibtex
@misc{mmediting2022,
    title = {{MMEditing}: {OpenMMLab} Image and Video Editing Toolbox},
    author = {{MMEditing Contributors}},
    howpublished = {\url{https://github.com/open-mmlab/mmediting}},
    year = {2022}
}
```

## License

This project is released under the [Apache 2.0 license](LICENSE).

## Projects in OpenMMLab

- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.

            
