face-rhythm

Name: face-rhythm
Version: 0.2.5
Summary: A pipeline for analysis of facial behavior using optical flow
Home page: https://github.com/RichieHakim/face-rhythm
Author: Rich Hakim
License: LICENSE
Keywords: neuroscience, neuroimaging, machine learning
Upload time: 2024-04-25 05:04:43

# Face-Rhythm

# Installation

### 0. Requirements <br>
- Operating system:
  - Ubuntu >= 18.04 (other Linux distributions are usually fine but not actively maintained)
  - Windows >= 10
  - macOS >= 12
- [Anaconda](https://www.anaconda.com/distribution/) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html).
- If using Linux/Unix: GCC >= 5.4.0, ideally == 9.2.0. See your operating system's documentation for how to install or update GCC. Check your version with: `gcc --version`.
- **Optional:** [CUDA compatible NVIDIA GPU](https://developer.nvidia.com/cuda-gpus) and [drivers](https://developer.nvidia.com/cuda-toolkit-archive). Using a GPU can speed up the TCA step, but is not necessary (see the quick check after this list).
- The below commands should be run in the terminal (Mac/Linux) or Anaconda Prompt (Windows).
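
If you plan to use a GPU, the snippet below is a quick, optional sanity check (run it after creating the environment in step 2) that PyTorch can detect it. This is a minimal sketch assuming PyTorch is installed in your environment, which the TCA step can use:

```python
# Optional sanity check: does PyTorch detect a CUDA-capable GPU?
# Assumes PyTorch is installed (e.g., via the face-rhythm environment).
import torch

print(torch.cuda.is_available())  # True if a usable GPU and drivers are found
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first detected GPU
```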
<br>

### 1. Clone this repo <br>
This will create a folder called **face-rhythm** in your current directory. This repository folder contains the source code AND the interactive notebooks needed to run the pipeline. <br>
**`git clone https://github.com/RichieHakim/face-rhythm/`**<br>
**`cd face-rhythm`**<br>

### 2. Create a conda environment
This will also install the **face-rhythm** package and all of its dependencies into the environment. <br>
**`conda env create --file environment.yml`**<br>

Activate the environment: <br>
**`conda activate face_rhythm`** <br>
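
To confirm the installation (a quick, optional check; the importable module name is `face_rhythm`): <br>
**`python -c "import face_rhythm"`**<br>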

### Optional: Direct installation <br>
You can also directly install the **face-rhythm** package from PyPI into the environment of your choice. Note that you will still need to download/clone the repository for the notebooks. <br>
##### Option 1: Install from PyPI <br>
**`pip install face-rhythm[all]`**<br>
##### Option 2: Install from source (run from within the cloned repository folder) <br>
**`pip install -e .[all]`**<br>
Note: some shells (e.g. zsh) require quoting the bracketed extras: `pip install "face-rhythm[all]"`. <br>

<br>
<br>

# Usage

#### Notebooks
The easiest way to use **face-rhythm** is through the interactive notebooks. They are found in the following directory: `face-rhythm/notebooks/`. <br>
- The `interactive_pipeline_basic.ipynb` notebook contains the main pipeline and instructions on how to use it. <br>
- The `interactive_set_ROIs_only.ipynb` notebook is useful when you want to run a batch job over many videos/sessions and need to set the ROIs for each one ahead of time. <br>
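
To open a notebook, launch Jupyter from within the activated environment (this assumes Jupyter is available there; if it is not, install it with `pip install jupyter`): <br>
**`jupyter notebook notebooks/interactive_pipeline_basic.ipynb`**<br>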

#### Command line
The basic pipeline in the interactive notebook is also provided as a function within the `face_rhythm/pipelines.py` module. In the `scripts` folder, you'll find a script called `run_pipeline_basic.py` that can be used to run the pipeline from the command line. An example `params.json` file is also in that folder to use as a template for your runs. <br>
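
For scripted runs, you can also call the pipeline directly from Python. The sketch below is illustrative only: the entry-point name `pipeline_basic` and the params format are assumptions, so verify them against `face_rhythm/pipelines.py` and `scripts/run_pipeline_basic.py` before relying on it.

```python
# Illustrative sketch only: the function name `pipeline_basic` and the params
# structure are assumptions -- check face_rhythm/pipelines.py for the real API.
import json

from face_rhythm import pipelines

with open("scripts/params.json", "r") as f:  # example template shipped in the repo
    params = json.load(f)

pipelines.pipeline_basic(params)  # run the basic end-to-end pipeline
```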



<br>
<br>

# Repository Organization
    face-rhythm
    ├── notebooks  <- Jupyter notebooks containing the main pipeline and some demos.
    │   ├── interactive_pipeline_basic.ipynb  <- Main pipeline notebook.
    │   └── interactive_set_ROIs_only.ipynb   <- Notebook for setting ROIs only.
    │
    ├── face_rhythm  <- Source code for use in this project.
    │   ├── project.py           <- Contains methods for project directory organization and preparation
    │   ├── data_importing.py    <- Contains classes for importing data (like videos)
    │   ├── rois.py              <- Contains classes for defining regions of interest (ROIs) to analyze
    │   ├── point_tracking.py    <- Contains classes for tracking points in videos
    │   ├── spectral_analysis.py <- Contains classes for spectral decomposition
    │   ├── decomposition.py     <- Contains classes for TCA decomposition
    │   ├── utils.py             <- Contains utility functions for face-rhythm
    │   ├── visualization.py     <- Contains classes for visualizing data
    │   ├── helpers.py           <- Contains general helper functions (non-face-rhythm specific)
    │   ├── h5_handling.py       <- Contains classes for handling h5 files
    │   └── __init__.py          <- Makes face_rhythm a Python module
    │
    ├── setup.py   <- Makes the project pip installable (pip install -e .) so face_rhythm can be imported
    ├── LICENSE    <- License file
    ├── Makefile   <- Makefile with commands like `make data` or `make train`
    ├── README.md  <- The top-level README for developers using this project.
    ├── docs       <- A default Sphinx project; see sphinx-doc.org for details
    └── tox.ini    <- tox file with settings for running tox; see tox.readthedocs.io

<br>
<br>

# Project Directory Organization

    Project Directory
    ├── config.yaml           <- Configuration parameters to run each module in the pipeline. Dictionary.
    ├── run_info.json         <- Output information from each module. Dictionary.
    │
    ├── run_data              <- Output data from each module.
    │   ├── Dataset_videos.h5 <- Output data from the Dataset_videos class. Contains metadata about the videos.
    │   ├── ROIs.h5           <- Output data from the ROIs class. Contains ROI masks.
    │   ├── PointTracker.h5   <- Output data from the PointTracker class. Contains point tracking data.
    │   ├── VQT_Analyzer.h5   <- Output data from the VQT_Analyzer class. Contains spectral decomposition data.
    │   └── TCA.h5            <- Output data from the TCA class. Contains TCA decomposition data.
    │
    └── visualizations        <- Output visualizations.
        ├── factors_rearranged_[frequency].png  <- Example of a rearranged factor plot.
        └── point_tracking_demo.avi             <- Example video.
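
The files in `run_data` are standard HDF5 files and can be inspected with generic tools. Below is a minimal sketch using `h5py`; `my_project` is a placeholder path, and the group/dataset names inside each file depend on your run:

```python
# Minimal sketch: list the contents of one run_data HDF5 output.
# "my_project" is a placeholder; dataset names vary by run.
import h5py

with h5py.File("my_project/run_data/PointTracker.h5", "r") as f:
    f.visititems(lambda name, obj: print(name, obj))  # walk all groups and datasets
```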

    

            
