=========================
SynaptiConn
=========================
|ProjectStatus| |Version| |BuildStatus| |Docs| |License| |PythonVersions|

.. |ProjectStatus| image:: http://www.repostatus.org/badges/latest/active.svg
   :target: https://www.repostatus.org/#active
   :alt: project status

.. |Version| image:: https://img.shields.io/pypi/v/synapticonn.svg
   :target: https://pypi.python.org/pypi/synapticonn/
   :alt: version

.. |BuildStatus| image:: https://github.com/mzabolocki/SynaptiConn/actions/workflows/build.yml/badge.svg
   :target: https://github.com/mzabolocki/SynaptiConn/actions/workflows/build.yml
   :alt: build status

.. |Docs| image:: https://github.com/mzabolocki/SynaptiConn/actions/workflows/docs.yml/badge.svg
   :target: https://github.com/mzabolocki/SynaptiConn/actions/workflows/docs.yml
   :alt: docs status

.. |License| image:: https://img.shields.io/pypi/l/synapticonn.svg
   :target: https://opensource.org/licenses/Apache-2.0
   :alt: license

.. |PythonVersions| image:: https://img.shields.io/pypi/pyversions/synapticonn.svg
   :target: https://pypi.python.org/pypi/synapticonn/
   :alt: python versions

.. image:: https://github.com/mzabolocki/SynaptiConn/raw/main/docs/img/synapti_conn_logo_v2.png
   :alt: SynaptiConn
   :width: 40%
   :align: center

Overview
---------
SynaptiConn is a Python package for inferring monosynaptic connections from single-unit spike-train data.

It provides a set of tools for analyzing spike trains, including spike-train cross-correlation analysis, and for inferring monosynaptic connections with a model-based approach. The package is designed to be user-friendly and flexible, and can be applied to spike-train data from a variety of experimental paradigms.

Monosynaptic connections, both excitatory and inhibitory, are determined with a model-based approach that fits a set of connection features to the observed spike-train cross-correlation, identifying the most likely set of connections underlying it. The package also provides tools for visualizing the data and model fits, and for exporting the connection features.
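The cross-correlogram that such a model is fit to can be sketched in a few lines of NumPy. This is a generic illustration with toy data, not SynaptiConn's own implementation; the function name and parameters are hypothetical:

```python
import numpy as np

def cross_correlogram(pre, post, bin_size=1.0, max_lag=100.0):
    """Histogram of spike-time differences (post - pre) within +/- max_lag.

    pre, post : 1-D arrays of spike times in the same unit (e.g. ms).
    Returns (counts, bin_edges).
    """
    diffs = []
    for t in pre:
        # postsynaptic spikes within +/- max_lag of this presynaptic spike
        near = post[(post >= t - max_lag) & (post <= t + max_lag)]
        diffs.append(near - t)
    diffs = np.concatenate(diffs) if diffs else np.array([])
    edges = np.arange(-max_lag, max_lag + bin_size, bin_size)
    counts, _ = np.histogram(diffs, bins=edges)
    return counts, edges

# toy data: unit B tends to fire ~2 ms after unit A
rng = np.random.default_rng(0)
a = np.sort(rng.uniform(0, 10_000, 500))             # spike times in ms
b = np.sort(a + 2.0 + rng.normal(0.0, 0.3, a.size))  # jittered echoes of A
counts, edges = cross_correlogram(a, b)
peak_lag = edges[np.argmax(counts)]                  # peak expected near +2 ms
```

A putative excitatory connection shows up as such a short-latency peak just after zero lag; an inhibitory connection as a short-latency trough.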
In future versions, the package will include additional tools for analyzing spike-train data and for inferring connections from other data types, using a variety of models.
**Please Star the project to support us and Watch to always stay up-to-date!**
Installation
------------
SynaptiConn (stable version)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To install the stable version of SynaptiConn, use pip:

.. code-block:: bash

   pip install synapticonn
Development version
~~~~~~~~~~~~~~~~~~~

The development version of SynaptiConn can be installed by cloning the repository and installing with pip. First, clone this repository:

.. code-block:: bash

   git clone https://github.com/mzabolocki/SynaptiConn
To install this cloned copy, move into the directory you just cloned, and run:

.. code-block:: bash

   pip install .
Editable version
~~~~~~~~~~~~~~~~

To install an editable version, download the development version as above, and run:

.. code-block:: bash

   pip install -e .
Documentation
-------------

The `synapticonn` package includes a full set of code documentation. Documentation for the candidate release is available `here <https://mzabolocki.github.io/SynaptiConn/>`_.
Dependencies
------------

`synapticonn` is written in Python, and requires Python >= 3.8 to run.

It has the following dependencies:

- `numpy <https://github.com/numpy/numpy>`_
- `scipy <https://github.com/scipy/scipy>`_ >= 0.19
- `matplotlib <https://github.com/matplotlib/matplotlib>`_, needed to visualize data and model fits
- `pandas <https://github.com/pandas-dev/pandas>`_, needed for exporting connection features to dataframes
- `joblib <https://github.com/joblib/joblib>`_, needed for parallel processing
- `openpyxl <https://github.com/theorchard/openpyxl>`_, needed for exporting connection features to Excel files

We recommend using the `Anaconda <https://www.anaconda.com/distribution/>`_ distribution to manage these requirements.
Quick start
-----------

The module is object-oriented; the main class is `SynaptiConn`, which is used to analyze spike-train data and infer monosynaptic connections.

An example of how to use the package is shown below:
.. code-block:: python

   # import the model object
   from synapticonn import SynaptiConn

   # initialize the model object
   snc = SynaptiConn(spike_times,
                     method="cross-correlation",
                     time_unit="ms",
                     srate=30_000,
                     recording_length_t=600*1000,
                     bin_size_t=1,
                     max_lag_t=100)

   # set the spike unit ids to be used for the analysis
   spike_pairs = [(0, 6), (0, 7), (0, 8), (0, 9)]

   # fit the model and report the monosynaptic connection results
   snc.report(spike_pairs)
**Define the settings**

The `SynaptiConn` object is initialized with the following settings:

- `spike_times` : dict
  A dictionary of spike times for each neuron, where the keys are the neuron IDs and the values are arrays of spike times.
- `method` : str
  The method used to infer connections. Currently, only 'cross-correlation' is supported; this will be expanded in future versions.
- `time_unit` : str
  The time unit of the spike times. Currently, only 'ms' is supported; this will be expanded in future versions.
- `srate` : float
  The sampling rate of the spike times, in Hz.
- `recording_length_t` : float
  The length of the recording, in the same time unit as the spike times.
- `bin_size_t` : float
  The bin size used for the cross-correlation analysis, in the same time unit as the spike times.
- `max_lag_t` : float
  The maximum lag used for the cross-correlation analysis, in the same time unit as the spike times.
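For instance, a valid `spike_times` input is simply a dictionary mapping unit IDs to arrays of spike times. The snippet below builds one from synthetic data; the unit IDs and spike counts are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)
recording_length_ms = 600 * 1000  # a 10-minute recording, in ms

# keys are neuron (unit) IDs, values are sorted 1-D arrays of spike times in ms
spike_times = {
    unit_id: np.sort(rng.uniform(0, recording_length_ms, size=n_spikes))
    for unit_id, n_spikes in [(0, 3000), (6, 2500), (7, 1800), (8, 2200), (9, 1500)]
}
```

The unit IDs here match the `spike_pairs` used in the quick-start example, which reference units 0 and 6–9.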
**Note that a full set of examples and tutorials is provided in the documentation. These give a more detailed overview of how to use the package and how to interpret the results.**

Documentation will be maintained and updated regularly, and we welcome feedback and suggestions for improvement.
Spike-train data
----------------

SynaptiConn is designed to work with spike-train data, provided as a dictionary of spike times: the keys are the neuron IDs, and the values are arrays of spike times.

We recommend using the `SpikeInterface <https://spikeinterface.readthedocs.io/en/latest/modules/sorters.html>`_ package to process, load, and organize spike-train data.

All spike units should be subject to appropriate spike-sorting procedures before being analyzed with SynaptiConn. This includes removing noise and artifacts, and ensuring that the spike times are accurate. For further information, please see the quality control metric outline in the `Allen Brain documentation <https://allensdk.readthedocs.io/en/latest/_static/examples/nb/ecephys_quality_metrics.html#d-prime>`_.

If unsure of the data quality, SynaptiConn has simple quality control checks built in, which can be used to filter out poor-quality data.
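One common check of this kind, screening units for refractory-period (ISI) violations, can be sketched as follows. This is a generic NumPy illustration under a hypothetical 1.5 ms refractory period, not SynaptiConn's own implementation:

```python
import numpy as np

def isi_violation_rate(spike_times_ms, refractory_ms=1.5):
    """Fraction of inter-spike intervals shorter than the refractory period.

    A high rate suggests a contaminated unit that should be excluded.
    """
    isis = np.diff(np.sort(np.asarray(spike_times_ms)))
    if isis.size == 0:
        return 0.0
    return float(np.mean(isis < refractory_ms))

# screen a (hypothetical) spike_times dict, keeping only clean units
spike_times = {0: np.array([10.0, 12.0, 30.0, 60.0]),   # well-separated spikes
               1: np.array([5.0, 5.5, 6.0, 6.4])}       # sub-ms ISIs: suspect
clean = {uid: st for uid, st in spike_times.items()
         if isi_violation_rate(st) < 0.05}
```

A real neuron cannot fire twice within its refractory period, so short ISIs indicate spikes from other units contaminating the cluster.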
*In future versions, we plan to support additional spike-time data formats, such as NWB files. We also plan to add spike-time data loaders, to make loading and organizing spike-time data easier, along with additional quality control checks.*