carvekit-colab


Name: carvekit-colab
Version: 4.1.2
Home page: https://github.com/OPHoperHPO/image-background-remove-tool
Summary: Open-Source background removal framework
Author: Nikita Selin (Anodev)
Requires Python: >=3.6
License: Apache License v2.0
Keywords: ml, carvekit, background removal, neural networks, machine learning, remove bg
Upload time: 2024-04-08 13:48:14
Requirements: none recorded
# <p align="center"> ✂️ CarveKit ✂️ </p>

<p align="center"> <img src="docs/imgs/logo.png"> </p>

<p align="center">
<img src="https://github.com/OPHoperHPO/image-background-remove-tool/actions/workflows/master_docker.yaml/badge.svg">
<img src="https://github.com/OPHoperHPO/image-background-remove-tool/actions/workflows/master.yml/badge.svg">
<a href="https://colab.research.google.com/github/OPHoperHPO/image-background-remove-tool/blob/master/docs/other/carvekit_try.ipynb">
<img src="https://camo.githubusercontent.com/52feade06f2fecbf006889a904d221e6a730c194/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667"></a>

</p>


**********************************************************************
<p align="center"> <img align="center" width="512" height="288" src="docs/imgs/compare/readme.jpg"> </p>


> Higher-resolution versions of the images above can be found in the `docs/imgs/compare/` and `docs/imgs/input/` folders.

#### 📙 README Language
[Russian](docs/readme/ru.md)
[English](README.md)

## 📄 Description:  
Automated high-quality background removal framework for an image using neural networks.

## 🎆 Features:  
- High Quality
- Batch Processing
- NVIDIA CUDA and CPU processing
- FP16 inference: Fast inference with low memory usage
- Easy inference
- 100% remove.bg compatible FastAPI HTTP API 
- Removes background from hairs
- Easy integration with your code

## ⛱ Try yourself on [Google Colab](https://colab.research.google.com/github/OPHoperHPO/image-background-remove-tool/blob/master/docs/other/carvekit_try.ipynb) 
## ⛓️ How does it work?
The process can be briefly described as follows:
1. The user selects a picture or a folder of pictures for processing.
2. The photo is preprocessed to ensure the best quality of the output image.
3. The background of the image is removed using machine learning.
4. The image is post-processed to improve the quality of the result.
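The four steps above can be illustrated with a toy, NumPy-only sketch (a brightness threshold stands in for the segmentation network, and the mask is applied as an alpha channel in place of real matting; this is not CarveKit's actual implementation):

``` python
import numpy as np

def toy_remove_background(image: np.ndarray) -> np.ndarray:
    """Illustrative only: mirrors the 4-step pipeline shape, not the real models."""
    # 1. Input: an RGB uint8 array (the user's selected picture).
    # 2. Preprocess: normalize to float in [0, 1].
    x = image.astype(np.float32) / 255.0
    # 3. "Segment": treat bright pixels as foreground (stand-in for a neural network).
    mask = x.mean(axis=-1) > 0.5
    # 4. Postprocess: attach the mask as an alpha channel (stand-in for FBA matting).
    alpha = (mask * 255).astype(np.uint8)
    return np.dstack([image, alpha])
```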
## 🎓 Implemented Neural Networks:
|        Networks         |                   Target                    |             Accuracy             |
|:-----------------------:|:-------------------------------------------:|:--------------------------------:|
| **Tracer-B7** (default) |     **General** (objects, animals, etc)     | **90%** (mean F1-Score, DUTS-TE) |
|         U^2-net         | **Hairs** (hairs, people, animals, objects) |  80.4% (mean F1-Score, DUTS-TE)  |
|         BASNet          |        **General** (people, objects)        |  80.3% (mean F1-Score, DUTS-TE)  |
|        DeepLabV3        |         People, Animals, Cars, etc          |  67.4% (mean IoU, COCO val2017)  |

### Recommended parameters for different models
|  Networks   | Segmentation mask  size | Trimap parameters (dilation, erosion) |
|:-----------:|:-----------------------:|:-------------------------------------:|
| `tracer_b7` |           640           |                (30, 5)                |
|   `u2net`   |           320           |                (30, 5)                |
|  `basnet`   |           320           |                (30, 5)                |
| `deeplabv3` |          1024           |               (40, 20)                |

> ### Notes: 
> 1. The final quality may depend on the resolution of your image and the type of scene or object.
> 2. Use **U2-Net for hairs** and **Tracer-B7 for general images**, with the recommended parameters. \
> This is very important for the final quality! The example images were produced using U2-Net and FBA post-processing.
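To illustrate what the dilation and erosion parameters in the table above control, here is a simplified NumPy sketch of trimap generation (the helper names are hypothetical; CarveKit's actual `TrimapGenerator` also applies a probability threshold):

``` python
import numpy as np

def binary_dilate(mask: np.ndarray, r: int) -> np.ndarray:
    """Dilate a boolean mask with a (2r+1) x (2r+1) square element."""
    padded = np.pad(mask, r)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def binary_erode(mask: np.ndarray, r: int) -> np.ndarray:
    """Erosion is dilation of the complement, complemented."""
    return ~binary_dilate(~mask, r)

def make_trimap(mask: np.ndarray, dilation: int = 30, erosion: int = 5) -> np.ndarray:
    """0 = background, 128 = unknown band, 255 = certain foreground."""
    fg = binary_erode(mask, erosion)          # shrink the mask: certain foreground
    maybe_fg = binary_dilate(mask, dilation)  # grow the mask: foreground + unknown band
    trimap = np.zeros(mask.shape, dtype=np.uint8)
    trimap[maybe_fg] = 128
    trimap[fg] = 255
    return trimap
```

The matting network then only has to resolve the 128-valued unknown band, which is why wider dilation/erosion settings help when object edges (e.g. hair) are uncertain.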
## 🖼️ Image pre-processing and post-processing methods:
### 🔍 Preprocessing methods:
* `none` - No preprocessing methods used.
> More preprocessing methods will be added in the future.
### ✂ Post-processing methods:
* `none` - No post-processing methods used.
* `fba` (default) - Improves image borders when removing the background from images with hair, etc., using the FBA Matting neural network. This method gives the best results in combination with `u2net` and no preprocessing.

## 🏷 Setup for CPU processing:
1. `pip install carvekit --extra-index-url https://download.pytorch.org/whl/cpu`
> The project has been tested on Python versions ranging from 3.9 to 3.11.7.
## 🏷 Setup for GPU processing:  
1. Make sure you have an NVIDIA GPU with at least 8 GB of VRAM.
2. Install the CUDA Toolkit 12.1 and a video driver for your GPU.
3. `pip install carvekit --extra-index-url https://download.pytorch.org/whl/cu121`
> The project has been tested on Python versions ranging from 3.9 to 3.11.7.
## 🧰 Interact via code:  
### If you don't need deep configuration or don't want to deal with it
``` python
import torch
from carvekit.api.high import HiInterface

# Check doc strings for more information
interface = HiInterface(object_type="hairs-like",  # Can be "object" or "hairs-like".
                        batch_size_seg=5,
                        batch_size_matting=1,
                        device='cuda' if torch.cuda.is_available() else 'cpu',
                        seg_mask_size=640,  # Use 640 for Tracer B7 and 320 for U2Net
                        matting_mask_size=2048,
                        trimap_prob_threshold=231,
                        trimap_dilation=30,
                        trimap_erosion_iters=5,
                        fp16=False)
images_without_background = interface(['./tests/data/cat.jpg'])
cat_wo_bg = images_without_background[0]
cat_wo_bg.save('2.png')
```

### If you want to control everything
``` python
import PIL.Image

from carvekit.api.interface import Interface
from carvekit.ml.wrap.fba_matting import FBAMatting
from carvekit.ml.wrap.tracer_b7 import TracerUniversalB7
from carvekit.pipelines.postprocessing import MattingMethod
from carvekit.pipelines.preprocessing import PreprocessingStub
from carvekit.trimap.generator import TrimapGenerator

# Check doc strings for more information
seg_net = TracerUniversalB7(device='cpu',
                            batch_size=1)

fba = FBAMatting(device='cpu',
                 input_tensor_size=2048,
                 batch_size=1)

trimap = TrimapGenerator()

preprocessing = PreprocessingStub()

postprocessing = MattingMethod(matting_module=fba,
                               trimap_generator=trimap,
                               device='cpu')

interface = Interface(pre_pipe=preprocessing,
                      post_pipe=postprocessing,
                      seg_pipe=seg_net)

image = PIL.Image.open('tests/data/cat.jpg')
cat_wo_bg = interface([image])[0]
cat_wo_bg.save('2.png')
```


## 🧰 Running the CLI interface:  
 * ```python3 -m carvekit  -i <input_path> -o <output_path> --device <device>```  
 
### Explanation of args:  
````
Usage: carvekit [OPTIONS]

  Performs background removal on specified photos using console interface.

Options:
  -i ./2.jpg                   Path to input file or dir  [required]
  -o ./2.png                   Path to output file or dir
  --pre none                   Preprocessing method
  --post fba                   Postprocessing method.
  --net tracer_b7              Segmentation Network. Check README for more info.
  --recursive                  Enables recursive search for images in a folder
  --batch_size 10              Batch Size for list of images to be loaded to
                               RAM

  --batch_size_seg 5           Batch size for list of images to be processed
                               by segmentation network

  --batch_size_mat 1           Batch size for list of images to be processed
                               by matting network

  --seg_mask_size 640          The size of the input image for the
                               segmentation neural network. Use 640 for Tracer B7 and 320 for U2Net

  --matting_mask_size 2048     The size of the input image for the matting
                               neural network.
  --trimap_dilation 30         The size of the offset radius from the
                               object mask in pixels when forming an
                               unknown area

  --trimap_erosion 5           The number of iterations of erosion that the
                               object's mask will be subjected to before
                               forming an unknown area

  --trimap_prob_threshold 231  Probability threshold at which the
                               prob_filter and prob_as_unknown_area
                               operations will be applied

  --device cpu                 Processing Device.
  --fp16                       Enables mixed precision processing. Use only with CUDA. CPU support is experimental!
  --help                       Show this message and exit.


````
## 📦 Running the Framework / FastAPI HTTP API server via Docker:
Using the API via Docker is a **fast** and simple way to get a working API.
> **Our docker images are available on [Docker Hub](https://hub.docker.com/r/anodev/carvekit).** \
> Version tags are the same as the releases of the project with suffixes `-cpu` and `-cuda` for CPU and CUDA versions respectively.


<p align="center"> 
<img src="docs/imgs/screenshot/frontend.png"> 
<img src="docs/imgs/screenshot/docs_fastapi.png"> 
</p>

>### Important Notes:
>1. The Docker image serves a default front-end at the `/` URL and the FastAPI backend with docs at the `/docs` URL.
>2. Authentication is **enabled** by default. \
> **Token keys are reset** on every container restart if ENV variables are not set. \
> See `docker-compose.<device>.yml` for more information. \
> **You can see your access keys in the docker container logs.**
> 
>3. There are examples of interaction with the API.\
> See `docs/code_examples/python` for more details
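As a sketch of such an interaction (the host, port, and endpoint path below are assumptions, and the `X-Api-Key` header follows the remove.bg API that the server claims compatibility with; verify against the `/docs` page of your running container and `docs/code_examples/python`):

``` python
def build_removebg_request(api_key: str, size: str = "auto"):
    """Headers and form fields for a remove.bg-style request.

    Take the token from the container logs (see the notes above)."""
    return {"X-Api-Key": api_key}, {"size": size}

if __name__ == "__main__":
    import requests  # assumed to be installed

    headers, data = build_removebg_request("YOUR_TOKEN")
    # Host/port and endpoint path are assumptions; adjust to your docker-compose setup.
    with open("cat.jpg", "rb") as f:
        resp = requests.post("http://localhost:5000/api/removebg",
                             files={"image_file": f}, headers=headers, data=data)
    resp.raise_for_status()
    with open("cat_no_bg.png", "wb") as out:
        out.write(resp.content)
```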
### 🔨 Creating and running a container:
1. Install `docker-compose`
2. Run `docker-compose -f docker-compose.cpu.yml up -d`  # For CPU Processing
3. Run `docker-compose -f docker-compose.cuda.yml up -d`  # For GPU Processing
> You can also mount folders from your host machine into the Docker container
> and use the CLI interface inside the container to process
> files in those folders.

> Building a docker image on Windows is not officially supported. You can try using WSL2 or "Linux Containers Mode" but I haven't tested this.

## ☑️ Testing

### ☑️ Testing with local environment
1. `pip install -r requirements_test.txt`
2. `pytest`

### ☑️ Testing with Docker
1. Run `docker-compose -f docker-compose.cpu.yml run carvekit_api pytest`  # For testing on CPU
2. Run `docker-compose -f docker-compose.cuda.yml run carvekit_api pytest`  # For testing on GPU

## 👪 Credits: [More info](docs/CREDITS.md)

## 💵 Support
  You can thank me for developing this project and buy me a small cup of coffee ☕

| Blockchain |           Cryptocurrency            |          Network          |                                             Wallet                                              |
|:----------:|:-----------------------------------:|:-------------------------:|:-----------------------------------------------------------------------------------------------:|
|  Ethereum  | ETH / USDT / USDC / BNB / Dogecoin  |          Mainnet          |                           0x7Ab1B8015020242D2a9bC48F09b2F34b994bc2F8                            |
|  Ethereum  | ETH / USDT / USDC / BNB / Dogecoin  | BSC (Binance Smart Chain) |                           0x7Ab1B8015020242D2a9bC48F09b2F34b994bc2F8                            |
|  Bitcoin   |                 BTC                 |             -             |                           bc1qmf4qedujhhvcsg8kxpg5zzc2s3jvqssmu7mmhq                            |
|   ZCash    |                 ZEC                 |             -             |                               t1d7b9WxdboGFrcVVHG2ZuwWBgWEKhNUbtm                               |
|    Tron    |                 TRX                 |             -             |                               TH12CADSqSTcNZPvG77GVmYKAe4nrrJB5X                                |
|   Monero   |                 XMR                 |          Mainnet          | 48w2pDYgPtPenwqgnNneEUC9Qt1EE6eD5MucLvU3FGpY3SABudDa4ce5bT1t32oBwchysRCUimCkZVsD1HQRBbxVLF9GTh3 |
|    TON     |                 TON                 |             -             |                        EQCznqTdfOKI3L06QX-3Q802tBL0ecSWIKfkSjU-qsoy0CWE                         |
## 📧 __Feedback__
I will be glad to receive feedback on the project and suggestions for integration.

For all questions, write to: [farvard34@gmail.com](mailto:farvard34@gmail.com)

            
