blechpy

- Name: blechpy
- Version: 2.2.13
- Home page: https://github.com/nubs01/blechpy
- Summary: Package for extracting, processing and analyzing Intan and OpenEphys data
- Upload time: 2023-12-11 21:13:39
- Docs URL: None
- Authors: Roshan Nanu, Daniel Svedberg
- Requires Python: >=3.6
- Keywords: blech, katz_lab, intan, electrophysiology, neuroscience
See the <a href='https://nubs01.github.io/blechpy'>full documentation</a> here.

- [blechpy](#blechpy)
- [Requirements](#requirements)
- [Installation + maintenance](#installation--maintenance)
- [Blechpy Overview](#blechpy-overview)
- [Dataset Processing](#dataset-processing-start-here-if-you-have-a-raw-recording)
  * [Basic dataset processing pipeline](#basic-dataset-processing-pipeline)
  * [Explainers and useful substitutions](#explainers-and-useful-substitutions)
  * [Other useful functions](#other-useful-functions)
  * [Import processed dataset into dataset framework](#import-processed-dataset-into-dataset-framework-in-development)
- [Experiments](#experiments)
  * [Creating an experiment](#creating-an-experiment)
  * [Editing recordings](#editing-recordings)
  * [Held unit detection](#held-unit-detection)
- [Analysis](#analysis)

<small><i><a href='http://ecotrust-canada.github.io/markdown-toc/'>Table of contents generated with markdown-toc</a></i></small>

# blechpy
This is a package to extract, process and analyze electrophysiology data recorded with Intan or OpenEphys recording systems. This package is customized to store experiment and analysis metadata for the BLECh Lab (Katz lab) @ Brandeis University, but can readily be used and customized for other labs.

# Requirements
### Operating system:
Currently, blechpy is only developed and validated to work properly on Linux operating systems. It is possible to use blechpy on Mac, but some GUI features may not work properly. It is not possible to use blechpy on Windows. 

### Virtual environments
Because blechpy depends on a very specific mix of package versions, it is required to install blechpy in a virtual environment. *We highly recommend using miniconda to handle your virtual environments. You can download miniconda here: https://docs.conda.io/en/latest/miniconda.html*

### Hardware
We recommend using a computer with at least 32 GB of RAM and a multi-core processor. The more cores and memory, the better. Memory usage scales with core usage: the more cores you use, the more memory you need to run without overflow errors. It is possible to run memory-intensive functions with fewer cores to avoid overflow errors, but this will increase processing time. It is also possible to re-run a memory-intensive function after an overflow error, and it will pick up where it left off.

### Data
Right now this pipeline is only compatible with recordings made with Intan's 'one file per channel' or 'one file per signal type' recording settings.
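
If you are unsure which save format a recording used, a quick look at the directory usually tells you. Here is a minimal, purely optional sketch assuming Intan's standard file naming (`amp-*.dat` files for 'one file per channel', a single `amplifier.dat` for 'one file per signal type'); it is not part of blechpy itself:
```python
from pathlib import Path

def guess_intan_save_format(rec_dir):
    """Rough guess at the Intan save format used for a recording directory (sketch)."""
    rec = Path(rec_dir)
    if list(rec.glob('amp-*.dat')):
        return "one file per channel"        # supported by blechpy
    if (rec / 'amplifier.dat').exists():
        return "one file per signal type"    # supported by blechpy
    return "unknown / unsupported format"

print(guess_intan_save_format('/path/to/recording/directory'))
```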


# Installation + maintenance
### Installation

Create a miniconda environment with: 
```bash
conda create -n blechpy python==3.7.13
conda activate blechpy
```
Now you can install the package with pip:
```bash
pip install blechpy
```

### Activation

Once you have installed blechpy, you will need to perform a few steps to "activate" blechpy whenever you want to use it.

1) Open a bash terminal (Ctrl+Alt+T on Ubuntu)
2) Activate your miniconda environment with the following command:
```bash
conda activate blechpy
```
3) Start an ipython console by typing into the terminal:
```bash 
ipython
```
4) You will now be in an ipython console. Import blechpy by typing:
```python
import blechpy
```
Now, you can use blechpy functions in your ipython console.

### Updating
To update blechpy, open up a bash terminal and type:
```bash
conda activate blechpy #activate your blechpy virtual environment
pip install blechpy -U #install updated version of blechpy
```

### Troubleshooting Segmentation Fault: Only applies if you are using Ubuntu version 20.XX LTS
If your operating system is Ubuntu version 20.XX LTS, "import blechpy" may throw a "segmentation fault" error. This happens because the numba version 0.48 build available via pip is corrupted. You can fix this issue by reinstalling numba via conda; enter the following command in your bash terminal:

```bash
conda install numba=0.48.0
```

# Blechpy Overview
blechpy handles experimental metadata using data_objects, which are tied to a directory encompassing some level of data. Existing types of data_objects are listed below (a short sketch after the list shows how each object maps to a directory level):
* dataset
    * object for a single recording session
    * to create a dataset, you will need to have recording files from a single recording in its own "dataset folder". The path to this folder is the "recording directory"
    * The dataset processing pipeline creates two critical files that will live alongside your recording files in the dataset folder: the .h5 file and the .p file. The .h5 file contains the actual processed data, along with some metadata. The .p file contains additional critical metadata.
    * code lives in blechpy/datastructures/dataset.py
* experiment
    * object encompassing an ordered set of recordings from a single animal
    * individual recordings must first be processed as datasets
    * to create an experiment, you will need to have all the dataset folders from a single animal in its own "experiment folder". The path to this folder is the "experiment directory"
    * code lives in blechpy/datastructures/experiment.py
* project
    * object that can encompass multiple experiments & data groups and allows analysis of group differences
    * to create a project, you will need to have all the experiment folders from a single project in its own "project folder". The path to this folder is the "project directory"
    * code lives in blechpy/datastructures/project.py
* HMMHandler
  * object that can be used to set up and run hidden Markov model analysis on a dataset
  * HMMHandler objects are created on the level of the dataset. You will need to have a fully processed dataset to create an HMMHandler object from it.
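
As a quick sketch of how these objects map onto a directory tree (the paths below are placeholders, and the `blechpy.project` constructor name is assumed from the file layout above; check the full documentation for the exact call):
```python
import blechpy

# Placeholder directory tree:
#   /data/my_project/                -> project directory
#   /data/my_project/animal1/        -> experiment directory (one animal)
#   /data/my_project/animal1/day1/   -> recording directory (one session)

dat = blechpy.load_dataset('/data/my_project/animal1/day1')  # single recording session
exp = blechpy.experiment('/data/my_project/animal1')         # ordered recordings from one animal
proj = blechpy.project('/data/my_project')                    # assumed constructor: multiple experiments
```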

# Dataset Processing (start here if you have a raw recording)

### Basic dataset processing pipeline:

With a brand new *shiny* dataset, the most basic recommended data extraction workflow would be:
```python
dat = blechpy.dataset('/path/to/data/dir/') #create dataset object. Path to data dir should be your recording directory
# IMPORTANT: only run blechpy.dataset ONCE on a dataset, unless you want to overwrite the existing dataset and your preprocessing
# to load an existing dataset, use dat = blechpy.load_dataset('/path/to/data/dir/') instead
dat.initParams(data_quality='hp') # follow GUI prompts. 
dat.extract_data()          # Extracts raw data into HDF5 store
dat.create_trial_list()     # Creates table of digital input triggers
dat.mark_dead_channels()    # View traces and label electrodes as dead, or just pass list of dead channels
dat.common_average_reference() # Use common average referencing on data. Replaces raw with referenced data in HDF5 store
dat.detect_spikes()        # Detect spikes in data. Replaces raw data with spike data in HDF5 store
dat.blech_clust_run(umap=True)       # Cluster data using GMM
dat.sort_spikes(electrode_number) # Split, merge and label clusters as units. Follow GUI prompts. Perform this for every electrode
dat.post_sorting() #run this after you finish sorting all electrodes
dat.make_PSTH_plots() #optional: make PSTH plots for all units 
dat.make_raster_plots() #optional: make raster plots for all units
```

### troubleshooting common error:
It is common to get the following error after running the functions `dat.detect_spikes()` or `dat.blech_clust_run()`:
```python
TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker.
```
**If you encounter this error, simply re-run the function that caused the error, and it will pick up where it left off.** Occasionally, you may have to do this several times before the function completes.

The reason for this error is that these functions multi-process across channels, but underlying libraries like scipy also parallelize their own operations. This makes it impossible for the program to know in advance how much memory will be used, so it cannot automatically constrain the number of worker processes.
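
If you would rather not babysit the re-runs, a small bounded retry loop does the same thing. This is just a convenience sketch (it catches exceptions broadly, since the worker error is raised from an internal joblib/loky module), not part of blechpy:
```python
def run_with_retries(step, max_attempts=5):
    """Re-run a memory-hungry processing step until it completes (sketch).

    `step` is a zero-argument callable, e.g. dat.detect_spikes or
    lambda: dat.blech_clust_run(umap=True). Re-running is safe because
    each attempt picks up where the previous one left off.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            step()
            return
        except Exception as err:  # includes TerminatedWorkerError
            print(f'Attempt {attempt} failed: {err}')
    raise RuntimeError(f'Step still failing after {max_attempts} attempts')

# Usage:
# run_with_retries(dat.detect_spikes)
# run_with_retries(lambda: dat.blech_clust_run(umap=True))
```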

## Explainers and useful substitutions: 
### blechpy.dataset(): make a NEW dataset
blechpy.dataset() makes a NEW dataset or OVERWRITES an existing one. DO NOT use it on an existing dataset unless you want to overwrite that dataset and lose your preprocessing progress with it.
```python
dat = blechpy.dataset('path/to/recording/directory') # replace quoted text with the filepath to the folder where your recording files are
# or
dat = blechpy.dataset()  # for user interface to select directory
```
This will create a new dataset object and set up basic file paths. You should only do this when starting data processing for the first time. If you use it on a processed dataset, it will get overwritten.

### blechpy.load_dataset(): LOAD an existing dataset
If you already have a dataset and want to pick up where you left off, use blechpy.load_dataset() instead of blechpy.dataset(). 
```python
dat = blechpy.load_dataset('/path/to/recording/directory')  # load existing dataset object
# or
dat = blechpy.load_dataset()  # for user interface to select directory
# or
dat = blechpy.load_dataset('path/to/dataset/save/file.p')
```

### initParams(): initialize parameters
```python
dat.initParams() 
```
Initializes all analysis parameters with a series of prompts.
See the prompts for optional keyword params.
Primarily sets up parameters for:
* Flattening Port & Channel in Electrode designations
* Common average referencing
* Labelling areas of electrodes
* Labelling digital inputs & outputs
* Labelling dead electrodes
* Clustering parameters
* Spike array creation
* PSTH creation
* Palatability/Identity Responsiveness calculations

Initial parameters are pulled from default json files in the dio subpackage.
Parameters for a dataset are written to json files in a *parameters* folder in the recording directory. 
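
Because these are plain JSON files, you can inspect (or carefully hand-edit) them outside of blechpy. A minimal sketch, assuming the *parameters* folder sits directly inside the recording directory; the individual file names vary, so list the folder first:
```python
import json
from pathlib import Path

param_dir = Path('/path/to/recording/directory') / 'parameters'
for f in sorted(param_dir.glob('*.json')):
    print(f.name)                        # which parameter files were written
    with open(f) as fh:
        params = json.load(fh)
    print(json.dumps(params, indent=2))  # pretty-print the stored parameters
```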

#### useful presets:
```python
dat.initParams(data_quality='noisy') # alternative: less strict clustering parameters
dat.initParams(car_keyword='2site_OE64') # automatically map channels to hirose-connector 64ch OEPS EIB in 2-site implantation
dat.initParams(car_keyword='bilateral64') # automatically map channels to omnetics-connector 64ch EIB in 2-site implantation
dat.initParams(shell=True) # alternative: bypass GUI interface in favor of shell interface, useful if working over SSH or GUI is broken
#remember that you can chain any combination of valid keyword arguments together, eg.:
dat.initParams(data_quality='hp', car_keyword='bilateral64', shell=True)
```

### mark_dead_channels(): mark dead channels for exclusion from common average referencing and clustering
```python
dat.mark_dead_channels() # opens GUI to view traces and label dead channels
```
Marking dead channels is critical for good common average referencing, since dead channels typically carry a signal that differs substantially from the "true" average voltage at the electrode tips.
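
To see why this matters, here is an illustrative numpy sketch of common average referencing with dead channels excluded from the average; blechpy's own common_average_reference() handles this for you, this is only to show the idea:
```python
import numpy as np

def car_excluding_dead(traces, dead_channels=()):
    """Subtract the mean of the good channels from every channel (sketch).

    traces: array of shape (n_channels, n_samples)
    dead_channels: indices excluded from the average; a dead channel's
    flat or noisy signal would otherwise drag the reference away from
    the true mean voltage at the electrode tips.
    """
    good = np.setdiff1d(np.arange(traces.shape[0]), list(dead_channels))
    reference = traces[good].mean(axis=0)
    return traces - reference  # broadcasts the reference across channels

# Example: 4 channels, channel 3 is "dead" (stuck at a large offset)
traces = np.vstack([np.random.randn(3, 1000), np.full((1, 1000), 500.0)])
referenced = car_excluding_dead(traces, dead_channels=[3])
```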

#### HIGHLY RECOMMENDED preset: 
If you already know your dead channels a priori, you can pass them to mark_dead_channels() as a list of integers:
```python
dat.mark_dead_channels([1, 2, 3]) # replace with your own dead channel indices
```

### blech_clust_run(): run clustering
blech_clust_run's keywords can change the clustering algorithm and/or parameters 
```python
dat.blech_clust_run(data_quality='noisy') # alternative: re-run clustering with less strict parameters
dat.blech_clust_run(umap=True) # alternative: cluster on UMAP-embedded features instead of PCA, which improves clustering quality
dat.blech_clust_run() # default uses PCA instead of UMAP, which is faster but gives lower-quality clustering
```

## Other useful functions:
### dat._change_root() for moving a dataset:
If you want to move a dataset folder, it is critical that you perform the following steps:
1) move the dataset folder to the desired location
2) copy the path to the new dataset folder (right click on the folder, select copy)
3) in the ipython console, run the following commands:
```Python
new_directory = 'path/to/new/dataset/folder' # You can paste the directory by right clicking and selecting 'paste filename' 
dat = blechpy.load_dataset(new_directory) # load the dataset
dat._change_root(new_directory) # change the root directory of the dataset to the new directory
dat.save() # save the new directory to the dataset file
```
### Checking processing progress:
```python
dat.processing_status
```
Provides an overview of the basic data extraction and processing steps that still need to be taken.

### Viewing a Dataset
A dataset can be easily viewed with: `print(dat)`
A summary can also be exported to a text file with: `dat.export_to_text()`


## Import processed dataset into dataset framework (in development)
```python
dat = blechpy.port_in_dataset()
# or
dat = blechpy.port_in_dataset('/path/to/recording/directory')
```

# Experiments
## Creating an experiment
```python
exp = blechpy.experiment('/path/to/dir/encasing/recordings')
# or
exp = blechpy.experiment()
```
This will initialize an experiment with all recording folders within the chosen directory.

## Editing recordings
```python
exp.add_recording('/path/to/new/recording/dir/')    # Add recording
exp.remove_recording('rec_label')                   # remove a recording dir 
```
Recordings are assigned labels when added to the experiment; these labels can be used to easily reference individual recordings.

## Held unit detection
```python
exp.detect_held_units()
```
Uses raw waveforms from sorted units to determine whether units can be confidently classified as "held" across recordings. Results are stored in exp.held_units as a pandas DataFrame.
This also creates plots and exports data to a created directory:
/path/to/experiment/experiment-name_analysis
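
Since exp.held_units is an ordinary pandas DataFrame, you can inspect and filter it like any other table. The column names are not documented here, so check them first; the filter below is only a hypothetical example:
```python
held = exp.held_units              # DataFrame produced by detect_held_units()
print(held.columns.tolist())       # see which columns were actually written
print(held.head())

# Hypothetical filter -- adjust the column name to what .columns shows:
# held_only = held[held['held'] == True]
```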

# Analysis
The `blechpy.analysis` module has a lot of useful tools for analyzing your data.
Most notable is the `blechpy.analysis.poissonHMM` module, which allows fitting hidden Markov models (HMMs) to your data. See the tutorials.
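
As a rough, unverified sketch of how an HMM analysis hangs together (the `HmmHandler` class, the `add_params`/`run` methods, and every key in the parameter dictionary are assumptions to be checked against the tutorials):
```python
from blechpy.analysis import poissonHMM as phmm

handler = phmm.HmmHandler(dat)       # assumed: handler built from a fully processed dataset
params = {'n_states': 3,             # hypothetical parameter set -- see the tutorials
          'dt': 0.001,
          'max_iter': 200}
handler.add_params(params)           # assumed: queue up one or more parameter sets
handler.run()                        # assumed: fit the models and store the results
```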



            
