improv

Name: improv
Version: 0.2.2
Summary: Platform for adaptive neuroscience experiments
Upload time: 2024-01-15 20:39:25
Requires Python: >=3.6
License: MIT License, Copyright (c) 2019 Pearson Lab at Duke University (full text in the raw data below)
Keywords: neuroscience, adaptive, closed loop
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
# improv
[![PyPI](https://img.shields.io/pypi/v/improv?style=flat-square)](https://pypi.org/project/improv)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/improv?style=flat-square)](https://pypi.org/project/improv)
[![docs](https://github.com/project-improv/improv/actions/workflows/docs.yaml/badge.svg?style=flat-square)](https://project-improv.github.io/)
[![tests](https://github.com/project-improv/improv/actions/workflows/CI.yaml/badge.svg?style=flat-square)](https://project-improv.github.io/)
[![Coverage Status](https://coveralls.io/repos/github/project-improv/improv/badge.svg?branch=main)](https://coveralls.io/github/project-improv/improv?branch=main)
[![PyPI - License](https://img.shields.io/pypi/l/improv?style=flat-square)](https://opensource.org/licenses/MIT)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg?style=flat-square)](https://github.com/psf/black)

A flexible software platform for real-time and adaptive neuroscience experiments.

_improv_ is a streaming software platform designed to enable adaptive experiments. By analyzing data, such as 2-photon calcium images, as it comes in, we can obtain information about the current brain state in real time and use it to adaptively modify an experiment as data collection is ongoing. 

![](https://dibs-web01.vm.duke.edu/pearson/assets/improv/improvGif.gif)

This video shows raw 2-photon calcium imaging data in zebrafish, with cells detected in real time by [CaImAn](https://github.com/flatironinstitute/CaImAn), and directional tuning curves (shown as colored neurons) and functional connectivity (lines) estimated online during a live experiment. Here only a few minutes of data have been acquired, and neurons are colored by their strongest response to the visual stimuli shown so far.
We also provide up-to-the-moment estimates of the functional connectivity by fitting linear-nonlinear-Poisson models online, as each new piece of data is acquired. Simple visualizations offer real-time insights, allowing for adaptive experiments that change in response to the current state of the brain.
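For readers unfamiliar with the model, a linear-nonlinear-Poisson (LNP) fit maps each incoming activity or stimulus vector through a linear filter and a nonlinearity to a Poisson firing rate, and the filter can be updated with a single gradient step per new sample, which is what makes online estimation feasible. The sketch below is a minimal illustration of that idea under our own simplifying assumptions (exponential nonlinearity, one stochastic-gradient step per bin); it is not improv's or CaImAn's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal online LNP sketch (illustrative only): rate = exp(w @ x),
# observed spike count ~ Poisson(rate * dt). Each new sample triggers one
# stochastic-gradient ascent step on the Poisson log-likelihood.
def lnp_update(w, x, spike_count, dt=0.1, lr=0.01):
    rate = np.exp(w @ x)                  # linear filter + exponential nonlinearity
    grad = (spike_count - rate * dt) * x  # gradient of the Poisson log-likelihood
    return w + lr * grad

w = np.zeros(5)                           # coupling filter onto 5 hypothetical neurons
for _ in range(5000):                     # stream of incoming data bins
    x = rng.normal(size=5)                # activity/stimulus vector for this bin
    y = rng.poisson(np.exp(0.5 * x[0] - 0.3 * x[1]) * 0.1)  # simulated spike count
    w = lnp_update(w, x, y)
print(np.round(w, 2))                     # w[0] and w[1] drift toward 0.5 and -0.3
```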


### How improv works

<img src="https://dibs-web01.vm.duke.edu/pearson/assets/improv/improv_design.png" width=85%>

improv allows users to flexibly specify and manage adaptive experiments that integrate data collection, preprocessing, visualization, and user-defined analytics. All kinds of behavioral, neural, or modeling data can be incorporated, and input and output data streams are managed independently and asynchronously. With this design, streaming analyses and real-time interventions can be easily integrated into various experimental setups. improv manages the backend engineering of data flow and task execution for all steps in an experimental pipeline in real time, without requiring user oversight. Users need only define their particular processing pipeline in simple text files, and they are free to define their own streaming analyses via Python classes, allowing for rapid prototyping of adaptive experiments.
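To make the "streaming analyses via Python classes" point concrete, here is the general shape such a user-defined stage could take: configure state once, then process one piece of data per step as it arrives. The class, method names, and the running-mean computation are purely illustrative and are not improv's documented Actor interface; see the docs linked under Installation for the real API.

```python
import numpy as np

# Hypothetical user-defined streaming stage (illustrative only, not improv's API):
# set up state once, then update an online summary each time a new frame arrives.
class RunningMeanFrame:
    def setup(self, shape=(512, 512)):
        self.count = 0
        self.mean = np.zeros(shape, dtype=np.float64)

    def run_step(self, frame: np.ndarray) -> np.ndarray:
        # Incremental update of the mean image over all frames seen so far.
        self.count += 1
        self.mean += (frame - self.mean) / self.count
        return self.mean

# Simulated use: feed a stream of random "frames" through the stage.
stage = RunningMeanFrame()
stage.setup(shape=(4, 4))
for _ in range(100):
    current_mean = stage.run_step(np.random.rand(4, 4))
print(current_mean.mean())  # approaches 0.5 as frames accumulate
```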
  <br />
  <br />
  
<img src="https://dibs-web01.vm.duke.edu/pearson/assets/improv/actor_model.png" width=60%>

_improv_'s design is based on a streamlined version of the actor model for concurrent computation. Each component of the experimental pipeline is an 'actor' with a unique role, and actors interact via message passing, without the need for a central broker. Actors are implemented as user-defined classes that inherit from _improv_'s `Actor` class, which supplies the queues for message passing and orchestrates process execution and error handling. Messages between actors consist of keys that correspond to items in a shared, in-memory data store, which minimizes both communication overhead and data copying between processes.
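The key-passing idea can be illustrated with a small, generic sketch: two processes exchange only store keys over a queue, while the bulk data sits in a shared store that both can reach. The sketch below uses Python's standard multiprocessing primitives, with a Manager dict standing in for the shared in-memory store; it demonstrates the pattern described above, not improv's actual implementation.

```python
import multiprocessing as mp
import random

# Generic actor-model sketch (not improv code): actors pass small messages (store
# keys) over a queue; the bulk data lives in a shared store both actors can access.

def producer(store, q_out):
    for i in range(3):
        key = f"frame_{i}"
        store[key] = [random.random() for _ in range(4)]  # put bulk data in the store
        q_out.put(key)                                    # send only the key onward
    q_out.put(None)                                       # sentinel: stream finished

def consumer(store, q_in):
    while True:
        key = q_in.get()
        if key is None:                                   # sentinel received, stop
            break
        frame = store[key]                                # fetch the data by key
        print(key, "mean =", sum(frame) / len(frame))

if __name__ == "__main__":
    with mp.Manager() as manager:
        store = manager.dict()        # stand-in for the shared in-memory data store
        queue = manager.Queue()       # message queue between the two actors
        p = mp.Process(target=producer, args=(store, queue))
        c = mp.Process(target=consumer, args=(store, queue))
        p.start(); c.start()
        p.join(); c.join()
```

Passing keys instead of the arrays themselves keeps the messages tiny, which is the point made above about minimizing communication overhead and copying between processes.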



## Installation

For installation instructions, please consult the [docs](https://project-improv.github.io/improv/installation.html) on our GitHub.

### Contact
To get in touch, feel free to reach out on Twitter <a href="http://twitter.com/annedraelos" target="_blank">@annedraelos</a> or <a href="http://twitter.com/jmxpearson" target="_blank">@jmxpearson</a>. 

            

Raw data

            {
    "_id": null,
    "home_page": "",
    "name": "improv",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.6",
    "maintainer_email": "",
    "keywords": "neuroscience,adaptive,closed loop",
    "author": "",
    "author_email": "Anne Draelos <adraelos@umich.edu>, John Pearson <john.pearson@duke.edu>",
    "download_url": "https://files.pythonhosted.org/packages/77/1c/6c60019985ff7a7afbfafdcb02a32141a459bea6e02052b2a9e3026aaebc/improv-0.2.2.tar.gz",
    "platform": null,
    "description": "# improv\n[![PyPI](https://img.shields.io/pypi/v/improv?style=flat-square?style=flat-square)](https://pypi.org/project/improv)\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/improv?style=flat-square)](https://pypi.org/project/improv)\n[![docs](https://github.com/project-improv/improv/actions/workflows/docs.yaml/badge.svg?style=flat-square)](https://project-improv.github.io/)\n[![tests](https://github.com/project-improv/improv/actions/workflows/CI.yaml/badge.svg?style=flat-square)](https://project-improv.github.io/)\n[![Coverage Status](https://coveralls.io/repos/github/project-improv/improv/badge.svg?branch=main)](https://coveralls.io/github/project-improv/improv?branch=main)\n[![PyPI - License](https://img.shields.io/pypi/l/improv?style=flat-square)](https://opensource.org/licenses/MIT)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg?style=flat-square)](https://github.com/psf/black)\n\nA flexible software platform for real-time and adaptive neuroscience experiments.\n\n_improv_ is a streaming software platform designed to enable adaptive experiments. By analyzing data, such as 2-photon calcium images, as it comes in, we can obtain information about the current brain state in real time and use it to adaptively modify an experiment as data collection is ongoing. \n\n![](https://dibs-web01.vm.duke.edu/pearson/assets/improv/improvGif.gif)\n\nThis video shows raw 2-photon calcium imaging data in zebrafish, with cells detected in real time by [CaImAn](https://github.com/flatironinstitute/CaImAn), and directional tuning curves (shown as colored neurons) and functional connectivity (lines) estimated online, during a live experiment. Here only a few minutes of data have been acquired, and neurons are colored by their strongest response to visual simuli shown so far.\nWe also provide up-to-the-moment estimates of the functional connectivity by fitting linear-nonlinear-Poisson models online, as each new piece of data is acquired. Simple visualizations offer real-time insights, allowing for adaptive experiments that change in response to the current state of the brain.\n\n\n### How improv works\n\n<img src=\"https://dibs-web01.vm.duke.edu/pearson/assets/improv/improv_design.png\" width=85%>\n\nimprov allows users to flexibly specify and manage adaptive experiments to integrate data collection, preprocessing, visualization, and user-defined analytics. All kinds of behavioral, neural, or modeling data can be incorporated, and input and output data streams are managed independently and asynchronously. With this design, streaming analyses and real-time interventions can be easily integrated into various experimental setups. improv manages the backend engineering of data flow and task execution for all steps in an experimental pipeline in real time, without requiring user oversight. Users need only define their particular processing pipeline with simple text files and are free to define their own streaming analyses via Python classes, allowing for rapid prototyping of adaptive experiments.  \n  <br />\n  <br />\n  \n<img src=\"https://dibs-web01.vm.duke.edu/pearson/assets/improv/actor_model.png\" width=60%>\n\n_improv_'s design is based on a streamlined version of the actor model for concurrent computation. Each component of the system (experimental pipeline) is considered an 'actor' and has a unique role. They interact via message passing, without the need for a central broker. 
Actors are implemented as user-defined classes that inherit from _improv_'s `Actor` class, which supplies all queues for message passing and orchestrates process execution and error handling. Messages between actors are composed of keys that correspond to items in a shared, in-memory data store. This both minimizes communication overhead and data copying between processes. \n\n\n\n## Installation\n\nFor installation instructions, please consult the [docs](https://project-improv.github.io/improv/installation.html) on our github.\n\n### Contact\nTo get in touch, feel free to reach out on Twitter <a href=\"http://twitter.com/annedraelos\" target=\"_blank\">@annedraelos</a> or <a href=\"http://twitter.com/jmxpearson\" target=\"_blank\">@jmxpearson</a>. \n",
    "bugtrack_url": null,
    "license": "MIT License  Copyright (c) 2019 Pearson Lab at Duke University  Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:  The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.  THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
    "summary": "Platform for adaptive neuroscience experiments",
    "version": "0.2.2",
    "project_urls": null,
    "split_keywords": [
        "neuroscience",
        "adaptive",
        "closed loop"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "94b72d0eb6266ee1acfabf54479526c946cb9bfbd2d39d5cfe469e5a20f7fbdc",
                "md5": "1ceab72969a8201ece4d3f90a116d0d2",
                "sha256": "8924cb08e11a3cb018de59f3dc8291e7d5b68c6641a1681d41fc30cd40b204d0"
            },
            "downloads": -1,
            "filename": "improv-0.2.2-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "1ceab72969a8201ece4d3f90a116d0d2",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.6",
            "size": 40820,
            "upload_time": "2024-01-15T20:39:23",
            "upload_time_iso_8601": "2024-01-15T20:39:23.530298Z",
            "url": "https://files.pythonhosted.org/packages/94/b7/2d0eb6266ee1acfabf54479526c946cb9bfbd2d39d5cfe469e5a20f7fbdc/improv-0.2.2-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "771c6c60019985ff7a7afbfafdcb02a32141a459bea6e02052b2a9e3026aaebc",
                "md5": "789f3f408bf93d6a466eaadad0fbdc0b",
                "sha256": "76b485a95c3dd52e3b764222f2aa64c382c000e7d10acb3536041d20446d6c1b"
            },
            "downloads": -1,
            "filename": "improv-0.2.2.tar.gz",
            "has_sig": false,
            "md5_digest": "789f3f408bf93d6a466eaadad0fbdc0b",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.6",
            "size": 638913,
            "upload_time": "2024-01-15T20:39:25",
            "upload_time_iso_8601": "2024-01-15T20:39:25.390894Z",
            "url": "https://files.pythonhosted.org/packages/77/1c/6c60019985ff7a7afbfafdcb02a32141a459bea6e02052b2a9e3026aaebc/improv-0.2.2.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-01-15 20:39:25",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "improv"
}
        