| Field | Value |
| --- | --- |
| Name | ccvfi |
| Version | 0.0.2 |
| Home page | https://github.com/TensoRaws/ccvfi |
| Summary | an inference lib for video frame interpolation with VapourSynth support |
| Upload time | 2025-02-11 00:49:46 |
| Maintainer | None |
| Docs URL | None |
| Author | Tohrusky |
| Requires Python | <4.0,>=3.9 |
| License | MIT |
| Keywords | |
| VCS | |
| Bugtrack URL | |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
# ccvfi
[codecov](https://codecov.io/gh/TensoRaws/ccvfi)
[CI-test](https://github.com/TensoRaws/ccvfi/actions/workflows/CI-test.yml)
[Release](https://github.com/TensoRaws/ccvfi/actions/workflows/Release.yml)
[PyPI](https://badge.fury.io/py/ccvfi)

An inference library for video frame interpolation, with VapourSynth support.
### Install
Make sure you have Python >= 3.9 and PyTorch >= 1.13 installed, then install ccvfi from PyPI:
```bash
pip install ccvfi
```
- Install [VapourSynth](https://www.vapoursynth.com/) (optional, only needed for the VapourSynth workflow below)
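
Before installing, it can help to confirm the environment meets the requirements above. A minimal sketch in plain Python (not part of ccvfi itself):

```python
# Quick environment check for the requirements above; not part of ccvfi itself.
import sys

import torch

assert sys.version_info >= (3, 9), "ccvfi requires Python >= 3.9"
print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)          # should be >= 1.13
print("CUDA   :", torch.cuda.is_available())  # whether PyTorch can see a CUDA device
```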
### Start
#### cv2
A simple example that uses the RIFE (ECCV2022-RIFE) model to interpolate a frame between two images:
```python
import cv2
import numpy as np

from ccvfi import AutoModel, ConfigType, VFIBaseModel

# Load the pretrained RIFE weights (IFNet v4.26, heavy variant)
model: VFIBaseModel = AutoModel.from_pretrained(
    pretrained_model_name=ConfigType.RIFE_IFNet_v426_heavy,
)

# Decode the two neighboring frames (np.fromfile + imdecode also works with non-ASCII paths)
img0 = cv2.imdecode(np.fromfile("01.jpg", dtype=np.uint8), cv2.IMREAD_COLOR)
img1 = cv2.imdecode(np.fromfile("02.jpg", dtype=np.uint8), cv2.IMREAD_COLOR)

# Interpolate one intermediate frame between img0 and img1 and save it
out = model.inference_image_list(img_list=[img0, img1])[0]
cv2.imwrite("test_out.jpg", out)
```
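
The two-frame call above extends naturally to a whole sequence by interpolating between each pair of consecutive frames. A minimal sketch, assuming numbered JPEG frames in a `frames/` directory and an existing `out/` directory (the file layout is an assumption for illustration, not part of ccvfi):

```python
import glob

import cv2
import numpy as np

from ccvfi import AutoModel, ConfigType, VFIBaseModel

model: VFIBaseModel = AutoModel.from_pretrained(
    pretrained_model_name=ConfigType.RIFE_IFNet_v426_heavy,
)

# Insert one interpolated frame between every pair of consecutive source frames,
# roughly doubling the frame count of the sequence in frames/.
paths = sorted(glob.glob("frames/*.jpg"))
out_idx = 0
for a, b in zip(paths, paths[1:]):
    img0 = cv2.imdecode(np.fromfile(a, dtype=np.uint8), cv2.IMREAD_COLOR)
    img1 = cv2.imdecode(np.fromfile(b, dtype=np.uint8), cv2.IMREAD_COLOR)
    mid = model.inference_image_list(img_list=[img0, img1])[0]

    cv2.imwrite(f"out/{out_idx:05d}.jpg", img0)
    cv2.imwrite(f"out/{out_idx + 1:05d}.jpg", mid)
    out_idx += 2

# Keep the last source frame so the sequence still ends on an original frame
last = cv2.imdecode(np.fromfile(paths[-1], dtype=np.uint8), cv2.IMREAD_COLOR)
cv2.imwrite(f"out/{out_idx:05d}.jpg", last)
```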
#### VapourSynth
A simple VFI (video frame interpolation) example that uses the DRBA model to process a video in VapourSynth:
```python
import vapoursynth as vs
from vapoursynth import core

from ccvfi import AutoModel, BaseModelInterface, ConfigType

# Load the pretrained DRBA (IFNet) weights
model: BaseModelInterface = AutoModel.from_pretrained(
    pretrained_model_name=ConfigType.DRBA_IFNet,
)

# Open the source with the BestSource plugin and convert to half-precision RGB for inference
clip = core.bs.VideoSource(source="s.mp4")
clip = core.resize.Bicubic(clip=clip, matrix_in_s="709", format=vs.RGBH)

# Interpolate to the target frame rate, then convert back to YUV for output
clip = model.inference_video(clip, tar_fps=60)
clip = core.resize.Bicubic(clip=clip, matrix_s="709", format=vs.YUV420P16)
clip.set_output()
```
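
In normal use this script is fed to an encoder through vspipe rather than executed directly, but because VapourSynth builds the filter graph lazily, clip metadata can be inspected without rendering any frames. A minimal sketch (reusing the source path and target fps from the example above) to confirm that interpolation changed the frame rate:

```python
import vapoursynth as vs
from vapoursynth import core

from ccvfi import AutoModel, BaseModelInterface, ConfigType

model: BaseModelInterface = AutoModel.from_pretrained(
    pretrained_model_name=ConfigType.DRBA_IFNet,
)

src = core.bs.VideoSource(source="s.mp4")
rgb = core.resize.Bicubic(clip=src, matrix_in_s="709", format=vs.RGBH)
out = model.inference_video(rgb, tar_fps=60)

# Only graph metadata is read here; no frames are actually rendered.
print(f"source: {src.num_frames} frames @ {src.fps}")
print(f"output: {out.num_frames} frames @ {out.fps}")
```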
See more examples in the [example](./example) directory. ccvfi can also register custom configurations and models to extend its functionality.
### Current Support
ccvfi is still under development. The currently supported architectures, models, and weights (configs) are listed in the source files below (the sketch after this list shows how to enumerate the available configs at runtime):
- [Architecture](./ccvfi/type/arch.py)
- [Model](./ccvfi/type/model.py)
- [Weight(Config)](./ccvfi/type/config.py)
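
Assuming `ConfigType` is a standard Python enum (its attribute-style usage in the examples above suggests it is, though this is an assumption rather than documented behavior), the available pretrained weight names can be listed at runtime:

```python
from ccvfi import ConfigType

# Print every pretrained weight name ccvfi currently ships a config for.
# Assumes ConfigType is a standard Python Enum, as its usage above suggests.
for cfg in ConfigType:
    print(cfg.name)
```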
### Reference
- [PyTorch](https://github.com/pytorch/pytorch)
- [BasicSR](https://github.com/XPixelGroup/BasicSR)
- [mmcv](https://github.com/open-mmlab/mmcv)
- [huggingface transformers](https://github.com/huggingface/transformers)
- [VapourSynth](https://www.vapoursynth.com/)
- [HolyWu's functions](https://github.com/HolyWu)
### License
This project is licensed under the MIT License; see the [LICENSE file](https://github.com/TensoRaws/ccvfi/blob/main/LICENSE) for details.
### Raw data

```json
{
"_id": null,
"home_page": "https://github.com/TensoRaws/ccvfi",
"name": "ccvfi",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>=3.9",
"maintainer_email": null,
"keywords": null,
"author": "Tohrusky",
"author_email": null,
"download_url": null,
"platform": null,
"description": "# ccvfi\n\n[](https://codecov.io/gh/TensoRaws/ccvfi)\n[](https://github.com/TensoRaws/ccvfi/actions/workflows/CI-test.yml)\n[](https://github.com/TensoRaws/ccvfi/actions/workflows/Release.yml)\n[](https://badge.fury.io/py/ccvfi)\n\n\nan inference lib for video frame interpolation with VapourSynth support\n\n### Install\n\nMake sure you have Python >= 3.9 and PyTorch >= 1.13 installed\n\n```bash\npip install ccvfi\n```\n\n- Install VapourSynth (optional)\n\n### Start\n\n#### cv2\n\na simple example to use the RIFE (ECCV2022-RIFE) model to process an image sequence.\n\n```python\nimport cv2\nimport numpy as np\n\nfrom ccvfi import AutoModel, ConfigType, VFIBaseModel\n\nmodel: VFIBaseModel = AutoModel.from_pretrained(\n pretrained_model_name=ConfigType.RIFE_IFNet_v426_heavy,\n)\n\nimg0 = cv2.imdecode(np.fromfile(\"01.jpg\", dtype=np.uint8), cv2.IMREAD_COLOR)\nimg1 = cv2.imdecode(np.fromfile(\"02.jpg\", dtype=np.uint8), cv2.IMREAD_COLOR)\nout = model.inference_image_list(img_list=[img0, img1])[0]\ncv2.imwrite(\"test_out.jpg\", out)\n```\n\n#### VapourSynth\n\na simple example to use the VFI (Video Frame-Interpolation) model to process a video (DRBA)\n\n```python\nimport vapoursynth as vs\nfrom vapoursynth import core\n\nfrom ccvfi import AutoModel, BaseModelInterface, ConfigType\n\nmodel: BaseModelInterface = AutoModel.from_pretrained(\n pretrained_model_name=ConfigType.DRBA_IFNet,\n)\n\nclip = core.bs.VideoSource(source=\"s.mp4\")\nclip = core.resize.Bicubic(clip=clip, matrix_in_s=\"709\", format=vs.RGBH)\nclip = model.inference_video(clip, tar_fps=60)\nclip = core.resize.Bicubic(clip=clip, matrix_s=\"709\", format=vs.YUV420P16)\nclip.set_output()\n```\n\nSee more examples in the [example](./example) directory, ccvfi can register custom configurations and models to extend the functionality\n\n### Current Support\n\nIt still in development, the following models are supported:\n\n- [Architecture](./ccvfi/type/arch.py)\n\n- [Model](./ccvfi/type/model.py)\n\n- [Weight(Config)](./ccvfi/type/config.py)\n\n### Reference\n\n- [PyTorch](https://github.com/pytorch/pytorch)\n- [BasicSR](https://github.com/XPixelGroup/BasicSR)\n- [mmcv](https://github.com/open-mmlab/mmcv)\n- [huggingface transformers](https://github.com/huggingface/transformers)\n- [VapourSynth](https://www.vapoursynth.com/)\n- [HolyWu's functions](https://github.com/HolyWu)\n\n### License\n\nThis project is licensed under the MIT - see\nthe [LICENSE file](https://github.com/TensoRaws/ccvfi/blob/main/LICENSE) for details.\n\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "an inference lib for video frame interpolation with VapourSynth support",
"version": "0.0.2",
"project_urls": {
"Homepage": "https://github.com/TensoRaws/ccvfi",
"Repository": "https://github.com/TensoRaws/ccvfi"
},
"split_keywords": [],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "fec79b6702ace42afdef4181086613a207f5b8edf7945ce1a34e19e8dbd0c0f5",
"md5": "adc6ec3372305b1c888dece94e905683",
"sha256": "4b895d87e36ef0bd93f3ef5ad4f43cf61c34a7c743dea6531a4089912c2ef6a8"
},
"downloads": -1,
"filename": "ccvfi-0.0.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "adc6ec3372305b1c888dece94e905683",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": "<4.0,>=3.9",
"size": 34287,
"upload_time": "2025-02-11T00:49:46",
"upload_time_iso_8601": "2025-02-11T00:49:46.593708Z",
"url": "https://files.pythonhosted.org/packages/fe/c7/9b6702ace42afdef4181086613a207f5b8edf7945ce1a34e19e8dbd0c0f5/ccvfi-0.0.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-02-11 00:49:46",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "TensoRaws",
"github_project": "ccvfi",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "ccvfi"
}
```