# Welcome to ROICaT
<div>
<img src="docs/media/logo1.png" alt="ROICaT" width="200" align="right" style="margin-left: 20px"/>
</div>
[![build](https://github.com/RichieHakim/ROICaT/actions/workflows/build.yml/badge.svg)](https://github.com/RichieHakim/ROICaT/actions/workflows/build.yml)
[![PyPI version](https://badge.fury.io/py/roicat.svg)](https://badge.fury.io/py/roicat)
[![Downloads](https://pepy.tech/badge/roicat)](https://pepy.tech/project/roicat)
- **Documentation: [https://roicat.readthedocs.io/en/latest/](https://roicat.readthedocs.io/en/latest/)**
- Discussion forum: [https://groups.google.com/g/roicat_support](https://groups.google.com/g/roicat_support)
- Technical support: [Github Issues](https://github.com/RichieHakim/ROICaT/issues)
## **R**egion **O**f **I**nterest **C**lassification **a**nd **T**racking ᗢ
A simple-to-use Python package for automatically classifying images of cells and tracking them across imaging sessions/planes.
<div>
<img src="docs/media/tracking_FOV_clusters_rich.gif" alt="tracking_FOV_clusters_rich" width="400" align="right" style="margin-left: 20px"/>
</div>
**Why use ROICaT?**
- **It's easy to use. You don't need to know how to code. You can use the
interactive notebooks to run the pipelines with just a few clicks.**
- ROICaT was designed to outperform existing tools: it classifies and tracks
  neuron ROIs at accuracies approaching human performance, and several labs
  currently use it for automatic tracking and classification with no post-hoc
  curation required.
- Great effort was taken to optimize performance. Computational requirements are
minimal and run times are fast.
With ROICaT, you can:
- **Classify ROIs** into different categories (e.g. neurons, dendrites, glia,
etc.).
- **Track ROIs** across imaging sessions/planes (e.g. ROI #1 in session 1 is the
same as ROI #7 in session 2).
**What data types can ROICaT process?**
- ROICaT can accept any imaging data format, including Suite2p, CaImAn/CNMF,
  NWB, and raw/custom ROI data. See below for details on how to use any data
  type with ROICaT.
**What are the minimum computing needs?**
- We recommend the following as a starting point:
  - 4 GB of RAM (more for large datasets, e.g., ~32 GB for 100K neurons)
  - A GPU is not required, but it will increase run speeds by ~5-50x
<br>
<br>
# How to use ROICaT
<div>
<img src="docs/media/umap_with_labels.png" alt="ROICaT" width="300" align="right" style="margin-left: 20px"/>
</div>
Listed below is a suite of easy-to-run notebooks for the ROICaT pipelines.
#### First-time users:
Try it out using our Google CoLab notebooks below, which can be run fully
remotely without installing anything on your computer.
#### Normal usage:
We recommend using our Jupyter notebooks which can be run locally on any
computer.
### TRACKING:
- [Interactive
notebook](https://github.com/RichieHakim/ROICaT/blob/main/notebooks/jupyter/tracking/1_tracking_interactive_notebook.ipynb)
- [Google
CoLab](https://githubtocolab.com/RichieHakim/ROICaT/blob/main/notebooks/colab/tracking/1_tracking_interactive_notebook.ipynb)
- [Command line interface script](https://github.com/RichieHakim/ROICaT/blob/main/scripts/run_tracking.sh):
```shell
roicat --pipeline tracking --path_params /path/to/params.yaml --dir_data /folder/with/data/ --dir_save /folder/save/ --prefix_name_save expName --verbose
```
### CLASSIFICATION:
- [Interactive notebook -
Drawing](https://github.com/RichieHakim/ROICaT/blob/main/notebooks/jupyter/classification/A1_classify_by_drawingSelection.ipynb)
- [Google CoLab -
Drawing](https://githubtocolab.com/RichieHakim/ROICaT/blob/main/notebooks/colab/classification/A1_classify_by_drawingSelection_colab.ipynb)
- [Interactive notebook -
Labeling](https://github.com/RichieHakim/ROICaT/blob/main/notebooks/jupyter/classification/B1_labeling_interactive.ipynb)
- [Interactive notebook - Train
classifier](https://github.com/RichieHakim/ROICaT/blob/main/notebooks/jupyter/classification/B2_classifier_train_interactive.ipynb)
- [Interactive notebook - Inference with
classifier](https://github.com/RichieHakim/ROICaT/blob/main/notebooks/jupyter/classification/B3_classifier_inference_interactive.ipynb)
**OTHER:**
- [Custom data importing
notebook](https://github.com/RichieHakim/ROICaT/blob/main/notebooks/jupyter/other/demo_data_importing.ipynb)
- Use the API to integrate ROICaT functions into your own code:
[Documentation](https://roicat.readthedocs.io/en/latest/roicat.html).
- Run the full tracking pipeline using `roicat.pipelines.pipeline_tracking` with
  default parameters generated from `roicat.util.get_default_parameters()` (a
  minimal sketch is shown below).
<!-- - Train a new ROInet model using the provided Jupyter Notebook [TODO: link]. -->
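A minimal sketch of this API route is shown below. The function names are taken from this README, but the exact argument names and return values may differ between versions, so check the [documentation](https://roicat.readthedocs.io/en/latest/roicat.html) for the real signatures:

```python
# Sketch only: run the tracking pipeline from Python with default parameters.
import roicat

# The 'pipeline' keyword is an assumption; see roicat.util.get_default_parameters
# in the docs for the actual signature.
params = roicat.util.get_default_parameters(pipeline='tracking')

# Point the pipeline at your data and output folders. The keys below are
# illustrative placeholders, not the exact parameter schema.
# params['data_loading']['dir_outer'] = '/folder/with/data/'
# params['results_saving']['dir_save'] = '/folder/save/'

outputs = roicat.pipelines.pipeline_tracking(params)
```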
# General workflow:
- **Pass ROIs through ROInet:** Images of the ROIs are passed through a neural
network which outputs a feature vector for each image describing what the ROI
looks like.
- **Classification:** The feature vectors can then be used to classify ROIs (a
  toy example follows this list):
- A simple regression-like classifier can be trained using user-supplied
labeled data (e.g. an array of images of ROIs and a corresponding array of
labels for each ROI).
- Alternatively, classification can be done by projecting the feature vectors
into a lower-dimensional space using UMAP and then simply circling the
region of space to classify the ROIs.
- **Tracking**: The feature vectors can be combined with information about the
position of the ROIs to track the ROIs across imaging sessions/planes.
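
As a toy illustration of the classification step, the sketch below trains a simple logistic-regression classifier on placeholder feature vectors using scikit-learn. The arrays `features` and `labels` stand in for ROInet outputs and your own annotations; this is not ROICaT's built-in classifier, just a demonstration of the idea.

```python
# Illustrative only: train a simple classifier on ROInet-style feature vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 128))   # placeholder (n_ROIs x n_features) feature vectors
labels = rng.integers(0, 2, size=500)    # placeholder labels (e.g., neuron vs. not-neuron)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```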
# Installation
ROICaT works on Windows, MacOS, and Linux. If you have any issues during the
installation process, please open a [GitHub
issue](https://github.com/RichieHakim/ROICaT/issues) that includes the error message.
### 0. Requirements
- [Anaconda](https://www.anaconda.com/distribution/) or
[Miniconda](https://docs.conda.io/en/latest/miniconda.html).
- If using Windows: [Microsoft C++ Build
Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
- The below commands should be run in the terminal (Mac/Linux) or Anaconda
Prompt (Windows).
### 1. (Recommended) Create a new conda environment
```
conda create -n roicat python=3.12
conda activate roicat
```
You will need to activate the environment with `conda activate roicat` each time
you want to use ROICaT.
### 2. Install ROICaT
```
pip install roicat[all]
pip install git+https://github.com/RichieHakim/roiextractors
```
**Note on zsh:** if you are using a zsh terminal, change the command to `pip3
install --user 'roicat[all]'` (the quotes stop zsh from interpreting the square
brackets). For installing GPU support on Windows, see
[Troubleshooting](#troubleshooting-gpu-support) below.
<br>
**Note on opencv:** The headless version of opencv is installed by default. If
the regular version is already installed, you will need to uninstall it first.
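For example, a minimal sequence assuming the conflicting package is the standard `opencv-python` (or `opencv-contrib-python`) build:
```
pip uninstall opencv-python opencv-contrib-python
pip install roicat[all]
```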
### 3. Clone the repo to get the notebooks
```
git clone https://github.com/RichieHakim/ROICaT
```
Then, navigate to the `ROICaT/notebooks/jupyter` directory to run the notebooks.
# Upgrading versions
There are two parts to upgrading ROICaT: the **Python package** and the
**repository files**, which contain the notebooks and scripts.\
Activate your environment first, then...\
To upgrade the Python package, run:
```
pip install --upgrade roicat[all]
```
To upgrade the repository files, navigate your terminal to the `ROICaT` folder and run:
```
git pull
```
# Troubleshooting Installation
### Troubleshooting package installation issues
If you have issues importing packages like `roicat` or any of its dependencies, try reinstalling `roicat` with the following commands within the environment:
```
pip uninstall roicat
pip install --upgrade --force --no-cache-dir roicat[all]
```
### Troubleshooting HDBSCAN installation issues
If you are using **Windows** and receive the error `ERROR: Could not build wheels
for hdbscan, which is required to install pyproject.toml-based projects`, make
sure that you have installed Microsoft C++ Build Tools. If not, download them from
[here](https://visualstudio.microsoft.com/visual-cpp-build-tools/) and run the
commands:
```
cd path/to/folder/containing/vs_buildtools.exe
vs_buildtools.exe --norestart --passive --downloadThenInstall --includeRecommended --add Microsoft.VisualStudio.Workload.NativeDesktop --add Microsoft.VisualStudio.Workload.VCTools --add Microsoft.VisualStudio.Workload.MSBuildTools
```
Then, try proceeding with the installation by rerunning the pip install commands
above.
([reference](https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst))
### Troubleshooting (GPU support)
GPU support is not required. On Windows, you will often need to manually install a
CUDA-enabled version of PyTorch (see below). If you have NVIDIA drivers installed,
you can check the driver version with the shell command `nvidia-smi`.
Use the following command to check your PyTorch version and if it is GPU
enabled:
```
python -c "import torch, torchvision; print(f'Using versions: torch=={torch.__version__}, torchvision=={torchvision.__version__}'); print(f'torch.cuda.is_available() = {torch.cuda.is_available()}')"
```
**Outcome 1:** Output expected if GPU is enabled:
```
Using versions: torch==X.X.X+cuXXX, torchvision==X.X.X+cuXXX
torch.cuda.is_available() = True
```
This is the ideal outcome. You are using a <u>CUDA</u> version of PyTorch and
your GPU is enabled.
**Outcome 2:** Output expected if <u>non-CUDA</u> version of PyTorch is
installed:
```
Using versions: torch==X.X.X, torchvision==X.X.X
OR
Using versions: torch==X.X.X+cpu, torchvision==X.X.X+cpu
torch.cuda.is_available() = False
```
If a <u>non-CUDA</u> version of PyTorch is installed, please follow the
instructions at https://pytorch.org/get-started/locally/ to install a CUDA
version. If you are using a GPU, make sure you have a [CUDA-compatible NVIDIA
GPU](https://developer.nvidia.com/cuda-gpus) and
[drivers](https://developer.nvidia.com/cuda-toolkit-archive) that match the
PyTorch CUDA version you choose. All CUDA 11.x versions are intercompatible, so
if you have CUDA 11.8 drivers, you can install `torch==2.0.1+cu117`.
**Solution:**<br>
If you are sure you have a compatible GPU and correct drivers, you can force-install
the GPU version of PyTorch; see the PyTorch installation instructions.
Links for the [latest version](https://pytorch.org/get-started/locally/) or
[older versions](https://pytorch.org/get-started/previous-versions/). Example:
```
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```
**Outcome 3:** Output expected if CUDA version of PyTorch is installed but GPU
is not available:
```
Using versions: torch==X.X.X+cuXXX, torchvision==X.X.X+cuXXX
torch.cuda.is_available() = False
```
If a CUDA version of PyTorch is installed but the GPU is not available, make sure
you have a [CUDA-compatible NVIDIA GPU](https://developer.nvidia.com/cuda-gpus)
and [drivers](https://developer.nvidia.com/cuda-toolkit-archive) that match the
PyTorch CUDA version you choose. All CUDA 11.x versions are intercompatible, so
if you have CUDA 11.8 drivers, you can install `torch==2.0.1+cu117`.
# TODO:
#### algorithmic improvements:
- [ ] Add in method to use more similarity metrics for tracking
- [ ] Coordinate descent on each similarity metric
- [ ] Add F and Fneu to data_roicat, dFoF and trace quality metric functions
- [ ] Add in notebook for demonstrating using temporal similarity metrics (SWT on dFoF)
- [ ] Make a standard classifier
- [ ] Try other clustering methods
- [ ] Make image aligner based on image similarity + RANSAC of centroids or s_SF
- [ ] Better post-hoc curation metrics and visualizations
#### code improvements:
- [ ] Update automatic regression module (make new repo for it)
- [ ] Switch to ONNX for ROINet
- [ ] Some more integration tests
- [ ] Add more documentation / tutorials
- [ ] Make a GUI
- [ ] Finish ROIextractors integration
- [ ] Make a Docker container
- [ ] Make colab demo notebook not require user data
- [ ] Make a better CLI
#### other:
- [ ] Write the paper
- [ ] Make tweet about it
- [ ] Make a video or two on how to use it
- [ ] Maybe use light-the-torch for torch installation
- [ ] Better Readme
- [ ] More documentation
- [ ] Make a regression model for in-plane-ness