dataprob

Name: dataprob
Version: 0.9.4 (PyPI)
Summary: Do likelihood-based parameter estimation using maximum likelihood and Bayesian methods
Upload time: 2024-09-18 03:53:21
Requires Python: >=3.10
Keywords: likelihood, maximum likelihood, ML, Bayesian, MCMC, monte carlo, regression, estimator
Repository: https://github.com/harmslab/dataprob
License: Copyright (c) 2020 Michael J. Harms. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

========
dataprob
========

.. image:: docs/badges/tests-badge.svg

.. image:: docs/badges/coverage-badge.svg

dataprob was designed to allow scientists to easily fit user-defined models to 
experimental data. It supports maximum likelihood, bootstrap, and Bayesian
analyses through a simple and consistent interface. 

Design principles
=================

+ **ease of use:** Users write a Python function that describes their model, 
  then load their experimental data as a dataframe. 
+ **dataframe centric:** Uses a pandas dataframe to specify parameter bounds,
  guesses, fixedness, and priors. Observed data can be passed in as a
  dataframe or numpy vector. All outputs are pandas dataframes. 
+ **consistent experience:** Users can run maximum-likelihood, bootstrap 
  resampling, or Bayesian MCMC analyses with an identical interface and nearly
  identical diagnostic outputs (see the sketch after this list). 
+ **interpretable:** Provides diagnostic plots and runs tests to validate
  fit results. 

Simple example
==============

The following code generates noisy linear data and uses dataprob to find 
the maximum likelihood estimate of its slope and intercept. 
`Run on Google Colab <https://githubtocolab.com/harmslab/dataprob/blob/main/examples/simple-example.ipynb>`_.

.. code-block:: python
    
    import dataprob
    import numpy as np

    # Generate "experimental" linear data (slope = 5, intercept = 5.7) that has
    # random noise on each point. 
    x_array = np.linspace(0,10,25)
    noise = np.random.normal(loc=0,scale=0.5,size=x_array.shape)
    y_obs = 5*x_array + 5.7 + noise

    # 1. Define a linear model
    def linear_model(m=1,b=1,x=[]):
        return m*x + b

    # 2. Set up the analysis. 'method' can be "ml", "mcmc", or "bootstrap"
    f = dataprob.setup(linear_model,
                       method="ml",
                       non_fit_kwargs={"x":x_array})

    # 3. Fit the parameters of linear_model to y_obs, assuming an uncertainty
    #    of 0.5 on each observed point. 
    f.fit(y_obs=y_obs,
          y_std=0.5)

    # 4. Access results
    fig = dataprob.plot_summary(f)
    fig = dataprob.plot_corner(f)
    print(f.fit_df)
    print(f.fit_quality)

The resulting plots will look like this:

.. image:: docs/source/_static/simple-example_plot-summary.svg
    :align: center
    :alt: data.plot_summary result
    :width: 75%

.. image:: docs/source/_static/simple-example_plot-corner.svg
    :align: center
    :alt: data.plot_corner result
    :width: 75%


The ``f.fit_df`` dataframe will look something like:

+-------+-------+----------+-------+--------+---------+-------+-----------+
| index | name  | estimate | std   | low_95 | high_95 | ...   | prior_std |
+=======+=======+==========+=======+========+=========+=======+===========+
| ``m`` | ``m`` | 5.009    | 0.045 | 4.817  | 5.202   | ...   | ``NaN``   |  
+-------+-------+----------+-------+--------+---------+-------+-----------+
| ``b`` | ``b`` | 5.644    | 0.274 |  4.465 | 6.822   | ...   | ``NaN``   |
+-------+-------+----------+-------+--------+---------+-------+-----------+
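
Because ``fit_df`` is a pandas dataframe indexed by parameter name, estimates
and confidence bounds can be pulled out directly. A small sketch, assuming the
columns shown above:

.. code-block:: python

    m_est = f.fit_df.loc["m", "estimate"]
    m_low, m_high = f.fit_df.loc["m", ["low_95", "high_95"]]
    print(f"slope = {m_est:.3f} (95% interval: {m_low:.3f} to {m_high:.3f})")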

The ``f.fit_quality`` dataframe will look something like:

+---------------+---------------------------------------------+---------+---------+
| name          | description                                 | is_good | value   |
+===============+=============================================+=========+=========+
| num_obs       | number of observations                      | True    | 25.000  |
+---------------+---------------------------------------------+---------+---------+
| num_param     | number of fit parameters                    | True    | 2.000   |
+---------------+---------------------------------------------+---------+---------+
| lnL           | log likelihood                              | True    | -18.761 |
+---------------+---------------------------------------------+---------+---------+
| chi2          | chi^2 goodness-of-fit                       | True    | 0.241   |
+---------------+---------------------------------------------+---------+---------+
| reduced_chi2  | reduced chi^2                               | True    | 1.192   |
+---------------+---------------------------------------------+---------+---------+
| mean0_resid   | t-test for residual mean != 0               | True    | 1.000   |
+---------------+---------------------------------------------+---------+---------+
| durbin-watson | Durbin-Watson test for correlated residuals | True    | 2.265   |
+---------------+---------------------------------------------+---------+---------+
| ljung-box     | Ljung-Box test for correlated residuals     | True    | 0.943   |
+---------------+---------------------------------------------+---------+---------+
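
Since ``fit_quality`` is also a dataframe, the ``is_good`` column can be
checked programmatically. A sketch, assuming the columns shown above:

.. code-block:: python

    # Pull out any checks that flagged the fit as questionable.
    bad = f.fit_quality[~f.fit_quality["is_good"]]
    if len(bad) == 0:
        print("All fit-quality checks passed.")
    else:
        print(bad[["description", "value"]])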



Installation
============

We recommend installing dataprob with pip:

.. code-block:: shell

    pip install dataprob

To install from source and run tests:

.. code-block:: shell

    git clone https://github.com/harmslab/dataprob.git
    cd dataprob
    pip install .

    # to run test-suite
    pytest --runslow

Examples
========

A good way to learn how to use the library is by working through examples. The
following notebooks are included in the ``dataprob/examples/`` directory. They are
self-contained demonstrations in which dataprob is used to analyze various
classes of experimental data. The links below launch each notebook in Google
Colab:

+ `api-example.ipynb <https://githubtocolab.com/harmslab/dataprob/blob/main/examples/api-example.ipynb>`_: shows various features of the API when analyzing a linear model
+ `linear.ipynb <https://githubtocolab.com/harmslab/dataprob/blob/main/examples/linear.ipynb>`_: fit a linear model to noisy data (2 parameter, linear)
+ `binding.ipynb <https://githubtocolab.com/harmslab/dataprob/blob/main/examples/binding.ipynb>`_: a single-site binding interaction (2 parameter, sigmoidal curve)
+ `michaelis-menten.ipynb <https://githubtocolab.com/harmslab/dataprob/blob/main/examples/michaelis-menten.ipynb>`_: Michaelis-Menten model of enzyme kinetics (2 parameter, sigmoidal curve)
+ `lagged-exponential.ipynb <https://githubtocolab.com/harmslab/dataprob/blob/main/examples/lagged-exponential.ipynb>`_: bacterial growth curve with initial lag phase (3 parameter, exponential)
+ `multi-gaussian.ipynb <https://githubtocolab.com/harmslab/dataprob/blob/main/examples/multi-gaussian.ipynb>`_: two overlapping normal distributions (6 parameter, Gaussian)
+ `periodic.ipynb <https://githubtocolab.com/harmslab/dataprob/blob/main/examples/periodic.ipynb>`_: periodic data (3 parameter, sine) 
+ `polynomial.ipynb <https://githubtocolab.com/harmslab/dataprob/blob/main/examples/polynomial.ipynb>`_: nonlinear data with no obvious form (5 parameter, polynomial)
+ `linear-extrapolation-folding.ipynb <https://githubtocolab.com/harmslab/dataprob/blob/main/examples/linear-extrapolation-folding.ipynb>`_: protein equilibrium unfolding data (6 parameter, linear embedded in sigmoidal)


Documentation
=============

Full documentation is on `readthedocs <https://dataprob.readthedocs.io>`_.

            
