<p align="center">
  <img src="https://github.com/RobinDong/tiny_multimodal/blob/1fbdfb6320b50c23a2bbb899db5e56b415d9fbbb/assets/tiny_multimodal.png?raw=true"/>
</p>
<p align="center">
<a href="https://en.wikipedia.org/wiki/MIT_License">
<img src="https://img.shields.io/badge/license-MIT-blue"/>
</a>
<a href="https://github.com/psf/black">
<img src="https://img.shields.io/badge/code%20style-black-000000.svg"/>
</a>
<a href="https://github.com/pylint-dev/pylint">
<img src="https://img.shields.io/badge/linting-pylint-yellowgreen"/>
</a>
</p>
# Tiny Multimodal
A simple and "tiny" implementation of several multimodal models, with support for training, fine-tuning, and deploying these tiny-sized models.
Unlike the popular "large" models, every model in this repo is kept small enough to train on a single RTX 3080 Ti, so the implementations do not follow the original papers exactly.
## quick start
### create environment
```
conda create -n tinym python=3.12
conda activate tinym
git clone git@github.com:RobinDong/tiny_multimodal.git
cd tiny_multimodal
python -m pip install -r requirements.txt
```
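Since the repo targets a single consumer GPU, it is worth confirming that PyTorch can see your card before training. Assuming `requirements.txt` pulls in PyTorch (the package lists PyTorch among its keywords), a quick check could look like this (a hypothetical helper, not part of the repo):
```
# check_gpu.py -- sanity check that PyTorch can see a CUDA device
import torch

if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
else:
    print("No CUDA device found; training would fall back to CPU.")
```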
### prepare dataset for training
Download [conceptual-12m](https://github.com/google-research-datasets/conceptual-12m) from [Huggingface](https://huggingface.co/datasets/pixparse/cc12m-wds) to directory `cc12m-wds`.
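One possible way to fetch the WebDataset shards is with the `huggingface_hub` library; the sketch below uses `snapshot_download` and is only illustrative (any download method that places the shards in `cc12m-wds` works):
```
# download_cc12m.py -- one possible way to fetch the cc12m-wds shards (illustrative, not part of the repo)
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="pixparse/cc12m-wds",
    repo_type="dataset",
    local_dir="cc12m-wds",  # matches the directory name used in the next step
)
```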
Use `utils/extract_tars.py` to convert CC12M to ready-to-use format:
```
python utils/extract_tars.py --input_path=<YOUR_DIR>/cc12m-wds/ --output_path=<YOUR_OUTPUT_PATH> --jobs=<YOUR_CPU_CORES>
```
### train
```
python train.py --provider CLIP
```
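For context, the CLIP objective that `--provider CLIP` refers to is a symmetric contrastive loss over image and text embeddings. Below is a minimal, self-contained sketch of that loss; the function name, shapes, and temperature value are illustrative and not this repo's API:
```
# clip_loss.py -- minimal sketch of a CLIP-style symmetric contrastive loss (illustrative only)
import torch
import torch.nn.functional as F


def clip_loss(image_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """image_emb, text_emb: (batch, dim) outputs of the image and text encoders."""
    # Normalize so dot products become cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (batch, batch) similarity matrix; matching image-text pairs sit on the diagonal.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions (image->text and text->image), averaged.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```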
## acknowledgements
This repo is still under development; more multimodal models are on the way.
Issues and pull requests are welcome.