clipq

Name: clipq
Version: 0.0.7
Home page: https://github.com/kyegomez/clipq
Summary: Paper - Pytorch
Upload time: 2023-10-08 14:49:36
Author: Kye Gomez
Requires Python: >=3.6,<4.0
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Requirements: no requirements were recorded
# ClipQ (WIP)

An easy-to-use interface for experimenting with OpenAI's CLIP model by encoding image quadrants. By splitting images into quadrants and encoding each with CLIP, we can explore how the model perceives various parts of an image.
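
To make the idea concrete, here is a minimal, self-contained sketch of quadrant encoding using the Hugging Face `transformers` CLIP wrapper. It illustrates the concept rather than clipq's actual internals, and the image path is a placeholder:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder path
w, h = image.size

# Crop the four quadrants: top-left, top-right, bottom-left, bottom-right
quadrants = [
    image.crop((0, 0, w // 2, h // 2)),
    image.crop((w // 2, 0, w, h // 2)),
    image.crop((0, h // 2, w // 2, h)),
    image.crop((w // 2, h // 2, w, h)),
]

# One CLIP embedding per quadrant
with torch.no_grad():
    inputs = processor(images=quadrants, return_tensors="pt")
    embeddings = model.get_image_features(**inputs)

print(embeddings.shape)  # (4, 512) for the base-patch32 checkpoint
```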

## Appreciation

- [Christopher in LAION for the idea](https://discord.com/channels/823813159592001537/824374369182416994/1158057178582753342)
- Thanks to OpenAI for the CLIP model.
- Inspiration drawn from various CLIP-related projects in the community.



## Table of Contents

- [Installation](#installation)
- [Quickstart](#quickstart)
- [Documentation](#documentation)
- [Contributing](#contributing)
- [License](#license)
- [Todo](#todo)

## Installation

Install the package via pip:

```bash
pip install clipq
```

## Quickstart

Here's a brief example to get you started:

```python
from clipq.main import CLIPQ

# Initialize with a text query
test = CLIPQ(query_text="A photo of a cat")

# Fetch an image from a URL, split it into a 3x3 grid, and embed each cell
vectors = test.run_from_url(url="https://picsum.photos/800", h_splits=3, v_splits=3)

print(vectors)
```
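
The return type of `run_from_url` isn't documented in this README. Assuming it yields one embedding vector per grid cell (an assumption about clipq's output, not a documented guarantee), you could compare cells pairwise with cosine similarity to see which parts of the image CLIP considers most alike:

```python
import numpy as np

# Assumption: `vectors` from the quickstart is a sequence of per-cell
# embedding vectors (e.g., a list of 1-D arrays), one per grid cell
v = np.asarray(vectors, dtype=np.float32)
v /= np.linalg.norm(v, axis=1, keepdims=True)  # L2-normalize each cell

similarity = v @ v.T  # pairwise cosine similarity between grid cells
print(np.round(similarity, 2))
```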

## Documentation

- [Full documentation is in the docs folder](docs/README.md)


## Contributing

1. Fork the repository on GitHub.
2. Clone the forked repository to your machine.
3. Create a new branch with an appropriate name.
4. Make your changes and commit with a meaningful commit message.
5. Push your changes to your forked repository.
6. Create a Pull Request against the original repository.
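
In command form, the workflow above might look like this (`<your-username>` and the branch name are placeholders):

```bash
git clone https://github.com/<your-username>/clipq.git
cd clipq
git checkout -b my-feature
# ... make your changes ...
git add -A
git commit -m "Describe your change"
git push origin my-feature
# Then open a Pull Request against kyegomez/clipq on GitHub
```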

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Todo
- [x] Output captions of all 4 quadrants
- [ ] Generate captions with any of the following: OpenCLIP G, SigLIP L, or EVA G
- [ ] Image Division: Ability to split an image into quadrants (2x2). Extended ability to split an image into 9 equal parts (3x3).
- [ ] Vector Representation: Generation of a CLIP vector for the entire image and individual CLIP vectors for each split part or quadrant.
- [ ] Sub-clip Concerns: Identification of hard chunking issues with standard quadrant splitting.
- [ ] Noise Reduction: Introduction of non-standard shapes (possibly polygons) for image parts to reduce noise. Aim to tackle interlacing issues during upscaling.
- [ ] Upscaling: Address potential tiling issues during the upscaling process.
- [ ] Flexibility in Sub-clipping: Configurable options to choose between 2x2 or 3x3 image division (see the splitter sketch after this list).
- [ ] Prior Training: Training mechanism to use the data of quadrant CLIP vectors.
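
For the configurable 2x2 / 3x3 division above, a minimal grid splitter built on Pillow could look like the following. This is a sketch of the idea, not clipq's actual implementation, and `split_grid` is a hypothetical helper:

```python
from PIL import Image

def split_grid(image, h_splits, v_splits):
    """Split an image into h_splits x v_splits equal crops, row by row."""
    w, h = image.size
    cell_w, cell_h = w // h_splits, h // v_splits
    return [
        image.crop((col * cell_w, row * cell_h,
                    (col + 1) * cell_w, (row + 1) * cell_h))
        for row in range(v_splits)
        for col in range(h_splits)
    ]

# 2x2 quadrants or a 3x3 grid, depending on the arguments
quadrants = split_grid(Image.open("photo.jpg"), 2, 2)  # placeholder path
nine_parts = split_grid(Image.open("photo.jpg"), 3, 3)
```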
            
