# clipcrop
- Extract sections from your image using OpenAI's CLIP and YOLOS-small, implemented with Hugging Face Transformers
- New: segmentation support using CLIP and DETR segmentation models
# Installation
```shell
pip install clipcrop
```
## Clip Crop
Extract sections from your image using OpenAI's CLIP and YOLOS-small, implemented with Hugging Face Transformers.
### Extraction
```python
from clipcrop import clipcrop
cc = clipcrop.ClipCrop("/content/sample.jpg")
DFE, DM, CLIPM, CLIPP = cc.load_models()
result = cc.extract_image(DFE, DM, CLIPM, CLIPP, "text content", num=2)
```
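The README does not show what `extract_image` returns. As a self-contained sketch, assuming it yields a list of PIL images or dicts holding one under an `"image"` key (simulated below with dummy images so the snippet runs on its own — verify the actual return shape against your installed version), the crops can be saved like this:

```python
from PIL import Image

# Stand-in for the value returned by cc.extract_image(...); the
# dict-with-"image"-key shape is an assumption, not a documented API.
result = [{"image": Image.new("RGB", (64, 64), "red")},
          {"image": Image.new("RGB", (64, 64), "blue")}]

saved = []
for i, item in enumerate(result):
    # Accept either a bare PIL image or a dict wrapping one.
    crop = item["image"] if isinstance(item, dict) else item
    path = f"crop_{i}.jpg"
    crop.save(path)
    saved.append(path)
```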
<!--
### Result
<p style="font-style: italic;">clipcrop = ClipCrop("/content/nm.jpg", "woman in white frock")</p>
<p float="left">
<img src="/nm.jpg" width="600" height="350">
<img src="/clipcrop.jpeg" width="150" height="300">
</p>
<br>
<p style="font-style: italic;">cc = ClipCrop('/content/rd.jpg', 'woman walking', 2)</p>
<p float="left">
<img src="/rd.jpg" width="600" height="350">
<img src="/rmc.jpeg" width="150" height="300">
</p> -->
### Captcha
Solve captcha images using CLIP and object-detection models. Ensure Tesseract is installed and available on your PATH.
```python
from clipcrop import clipcrop
cc = clipcrop.ClipCrop(image_path)
DFE, DM, CLIPM, CLIPP = cc.load_models()
result = cc.auto_captcha(CLIPM, CLIPP, 4)
```
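Since `auto_captcha` needs the Tesseract binary on your PATH, a quick pre-flight check can save a confusing failure later. A minimal sketch using only the standard library:

```python
import shutil

# Look up the Tesseract OCR binary on PATH. If it is missing, install it
# with your system package manager (e.g. `apt-get install tesseract-ocr`
# on Debian/Ubuntu, `brew install tesseract` on macOS).
tesseract_path = shutil.which("tesseract")
if tesseract_path is None:
    print("Tesseract not found on PATH; captcha solving will fail.")
else:
    print(f"Using Tesseract at {tesseract_path}")
```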
## Clip Segmentation
Segment images using the DETR panoptic segmentation pipeline, then use CLIP to pick the segment that best matches your query.
### Extraction
```python
from clipcrop import clipcrop
clipseg = clipcrop.ClipSeg("/content/input.png", "black colored car")
segmentor, clipmodel, clipprocessor = clipseg.load_models()
result = clipseg.segment_image(segmentor, clipmodel, clipprocessor)
```
### Remove Background
```python
from clipcrop import clipcrop
clipseg = clipcrop.ClipSeg("/content/input.png", "black colored car")
result = clipseg.remove_background()
```
### Other projects
- [SnapCode: Extract code blocks from images mixed with normal text](https://github.com/Vishnunkumar/snapcode)
- [HuggingFaceInference: Inference for different use cases of fine-tuned models](https://github.com/Vishnunkumar/huggingfaceinference)
### Contact
- Feel free to contact me at nkumarvishnu25@gmail.com