# HI-ML Multimodal Toolbox
This toolbox provides models for multimodal health data.
The code is available on [GitHub][1] and [Hugging Face 🤗][6].
## Getting started
The best way to get started is by running the [phrase grounding notebook][2] and the [examples](#examples).
All dependencies are installed at execution time, so Python 3.9 and [Jupyter][3] are the only prerequisites.
The notebook can also be run on [Binder][4], without the need to download any code or install any libraries:
[![Binder](https://mybinder.org/badge_logo.svg)][4]
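Conceptually, phrase grounding scores each image region against a text phrase in a shared embedding space and highlights the best-matching regions. The sketch below illustrates that idea with plain NumPy on toy embeddings; the function name and shapes are illustrative only, not the toolbox's API:

```python
import numpy as np


def phrase_grounding_heatmap(patch_embeddings: np.ndarray, phrase_embedding: np.ndarray) -> np.ndarray:
    """Cosine similarity between one text-phrase embedding and a grid of image-patch embeddings.

    patch_embeddings: (H, W, D) grid of image-patch embeddings.
    phrase_embedding: (D,) embedding of the text phrase.
    Returns an (H, W) heatmap; high values mark regions matching the phrase.
    """
    # Normalise so the dot product becomes a cosine similarity.
    patches = patch_embeddings / np.linalg.norm(patch_embeddings, axis=-1, keepdims=True)
    phrase = phrase_embedding / np.linalg.norm(phrase_embedding)
    return patches @ phrase
```

The notebook produces heatmaps of this kind with the actual BioViL image and text encoders.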
## Installation
The latest version can be installed using `pip`:
```console
pip install --upgrade hi-ml-multimodal
```
### Development
For development, it is recommended to clone the repository and set up the environment using [`conda`][5]:
```console
git clone https://github.com/microsoft/hi-ml.git
cd hi-ml/hi-ml-multimodal
make env
```
This will create a `conda` environment named `multimodal` and install all the dependencies to run and test the package.
You can visit the [API documentation][9] for a deeper understanding of our tools.
## Examples
For zero-shot classification of images using text prompts, please refer to the [example script](./test_multimodal/vlp/test_zero_shot_classification.py), which utilises a small subset of the [Open-Indiana CXR dataset][10] for pneumonia detection in chest X-ray images.
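In the zero-shot setup, the image and one text prompt per candidate label are embedded into a shared space, and the label whose prompt is most similar to the image wins. A minimal NumPy illustration of that scoring step (names and shapes are hypothetical, not the toolbox's API):

```python
import numpy as np


def zero_shot_classify(image_embedding, prompt_embeddings, labels):
    """Pick the label whose text-prompt embedding is closest (by cosine similarity) to the image.

    image_embedding: (D,) image embedding.
    prompt_embeddings: (N, D) array, one embedding per text prompt.
    labels: list of N label names, aligned with prompt_embeddings.
    """
    image = image_embedding / np.linalg.norm(image_embedding)
    prompts = prompt_embeddings / np.linalg.norm(prompt_embeddings, axis=1, keepdims=True)
    scores = prompts @ image  # cosine similarity of the image to each prompt
    return labels[int(np.argmax(scores))], scores
```

The example script performs the same comparison using BioViL embeddings of chest X-rays and prompts such as descriptions of pneumonia findings.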
Please note that the examples and models are not intended for deployed use cases (commercial or otherwise); such use is currently out of scope.
## Hugging Face 🤗
While the [GitHub repository][1] provides examples and pipelines to use our models,
the weights and model cards are hosted on [Hugging Face 🤗][6].
## Credit
If you use our code or models in your research, please cite our recent ECCV and CVPR papers:
> Boecking, B., Usuyama, N. et al. (2022). [Making the Most of Text Semantics to Improve Biomedical Vision–Language Processing][7]. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13696. Springer, Cham. [https://doi.org/10.1007/978-3-031-20059-5_1][7]
> Bannur, S., Hyland, S., et al. (2023). [Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing][8]. In: CVPR 2023.
### BibTeX
```bibtex
@InProceedings{10.1007/978-3-031-20059-5_1,
author="Boecking, Benedikt and Usuyama, Naoto and Bannur, Shruthi and Castro, Daniel C. and Schwaighofer, Anton and Hyland, Stephanie and Wetscherek, Maria and Naumann, Tristan and Nori, Aditya and Alvarez-Valle, Javier and Poon, Hoifung and Oktay, Ozan",
editor="Avidan, Shai and Brostow, Gabriel and Ciss{\'e}, Moustapha and Farinella, Giovanni Maria and Hassner, Tal",
title="Making the Most of Text Semantics to Improve Biomedical Vision--Language Processing",
booktitle="Computer Vision -- ECCV 2022",
year="2022",
publisher="Springer Nature Switzerland",
address="Cham",
pages="1--21",
isbn="978-3-031-20059-5"
}
@inproceedings{bannur2023learning,
title={Learning to Exploit Temporal Structure for Biomedical Vision{\textendash}Language Processing},
author={Shruthi Bannur and Stephanie Hyland and Qianchu Liu and Fernando P\'{e}rez-Garc\'{i}a and Maximilian Ilse and Daniel C. Castro and Benedikt Boecking and Harshita Sharma and Kenza Bouzid and Anja Thieme and Anton Schwaighofer and Maria Wetscherek and Matthew P. Lungren and Aditya Nori and Javier Alvarez-Valle and Ozan Oktay},
booktitle={Conference on Computer Vision and Pattern Recognition 2023},
year={2023},
url={https://openreview.net/forum?id=5jScn5xsbo}
}
```
[1]: https://github.com/microsoft/hi-ml/tree/main/hi-ml-multimodal
[2]: https://github.com/microsoft/hi-ml/tree/main/hi-ml-multimodal/notebooks/phrase_grounding.ipynb
[3]: https://jupyter.org/
[4]: https://mybinder.org/v2/gh/microsoft/hi-ml/HEAD?labpath=hi-ml-multimodal%2Fnotebooks%2Fphrase_grounding.ipynb
[5]: https://docs.conda.io/en/latest/miniconda.html
[6]: https://aka.ms/biovil-models
[7]: https://doi.org/10.1007/978-3-031-20059-5_1
[8]: https://arxiv.org/abs/2301.04558
[9]: https://hi-ml.readthedocs.io/en/latest/api/multimodal.html
[10]: https://openi.nlm.nih.gov/faq