Name: tinyda
Version: 0.9.20
Summary: Delayed Acceptance MCMC Sampler
Home page: https://github.com/mikkelbue/tinyda
Author: Mikkel Bue Lykkegaard
License: MIT
Requires Python: <4.0.0,>=3.10
Upload time: 2024-09-03 15:46:34
![](https://github.com/mikkelbue/tinyDA/blob/main/misc/tinyDA.png)

# tinyDA
Multilevel Delayed Acceptance MCMC sampler with finite-length subchain sampling and adaptive error modelling. This is intended as a simple, lightweight implementation, with minimal dependencies, i.e. nothing beyond the SciPy stack and ArviZ. It is fully imperative and easy to use!

For instructions, have a look at the [documentation](https://tinyda.readthedocs.io/en/latest/), the [examples](https://github.com/mikkelbue/tinyDA/tree/main/examples) or the [usage section below](#usage).

## Installation
tinyDA can be installed from PyPI:
```
pip install tinyda
```

## Dependencies
* NumPy
* SciPy
* ArviZ
* tqdm
* [Ray](https://docs.ray.io/en/master/) (multiprocessing, optional)

## Features

### Samplers
* Metropolis-Hastings
* Delayed Acceptance (Christen & Fox, 2005)
* Multilevel Delayed Acceptance (Lykkegaard et al. 2022)

### Proposals
* Random Walk Metropolis-Hastings (RWMH) - Metropolis et al. (1953), Hastings (1970)
* preconditioned Crank-Nicolson (pCN) - Cotter et al. (2013)
* Adaptive Metropolis (AM) - Haario et al. (2001)
* Operator-weighted pCN - Law (2014)
* Metropolis Adjusted Langevin Algorithm (MALA) - Roberts & Tweedie (1996)
* DREAM(Z) - Vrugt (2016)
* Multiple-Try Metropolis (MTM) - Liu et al. (2000)

### Adaptive Error Models
* State independent - Cui et al. (2018)
* State dependent - Cui et al. (2018)

### Diagnostics
* Convert a tinyDA chain to an ArviZ InferenceData object for near-unlimited diagnostics!

## Usage
Documentation is available at [Read the Docs](https://tinyda.readthedocs.io/en/latest/). A few illustrative examples are available as Jupyter notebooks in the examples directory. Below is a short summary of the core features.

### Distributions
The prior and likelihood can be defined using standard `scipy.stats` classes:
```python
import numpy as np
import tinyDA as tda

from scipy.stats import multivariate_normal

# n_dim, sigma, and data are assumed to be defined for the problem at hand.

# set the prior mean and covariance.
mean_prior = np.zeros(n_dim)
cov_prior = np.eye(n_dim)

# set the covariance of the likelihood.
cov_likelihood = sigma**2*np.eye(data.shape[0])

# initialise the prior distribution and likelihood.
my_prior = multivariate_normal(mean_prior, cov_prior)
my_loglike = tda.GaussianLogLike(data, cov_likelihood)
```
If using a Gaussian likelihood, we recommend the `tinyDA` implementation, since it is unnormalised and works well with the `tda.AdaptiveLogLike` used for the Adaptive Error Model. Home-brew distributions can easily be defined; they must have a `.rvs()` method for drawing random samples and a `.logpdf(x)` method for computing the log-density, as per the `SciPy` implementation.
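
For example, a home-brew uniform prior could look like this (a minimal sketch; the class name, bounds, and dimension are illustrative, not part of tinyDA):

```python
import numpy as np

class UniformPrior:
    """Hypothetical home-brew prior: uniform on the box [lo, hi]^n_dim."""

    def __init__(self, lo, hi, n_dim):
        self.lo = lo
        self.hi = hi
        self.n_dim = n_dim

    def rvs(self):
        # draw a random sample, as per the SciPy convention.
        return np.random.uniform(self.lo, self.hi, size=self.n_dim)

    def logpdf(self, x):
        # log-density: constant inside the box, -inf outside.
        if np.all((x >= self.lo) & (x <= self.hi)):
            return -self.n_dim * np.log(self.hi - self.lo)
        return -np.inf

my_prior = UniformPrior(lo=-5.0, hi=5.0, n_dim=2)
```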

### tinyDA.Posterior
The heart of the tinyDA sampler is the `tinyDA.Posterior`, which is responsible for:
1. Calling the model with some parameters (a proposal) and collecting the model output.
2. Evaluating the prior density of the parameters, and the likelihood of the data, given the parameters.
3. Constructing `tda.Link` instances that hold information for each sample.

![](https://github.com/mikkelbue/tinyDA/blob/main/misc/flowchart.png)

The `tinyDA.Posterior` takes as input the prior, the likelihood, and a forward model. Therefore, a forward model must be defined. This model can be either a function `model_output = my_function(parameters)` or a class instance with a `.__call__(self, parameters)` method. The function or `__call__` method must return either just the model output or a tuple of `(model_output, qoi)`. In this example, we define a class that performs simple linear regression on whatever inputs `x` we have.

```python
class MyLinearModel:
    def __init__(self, x):

        self.x = x
        
    def __call__(self, parameters):
        
        # the model output is a simple linear regression
        model_output = parameters[0] + parameters[1]*self.x
        
        # no quantity of interest beyond the parameters.
        qoi = None
        
        # return both.
        return model_output, qoi

my_model = MyLinearModel(x)
my_posterior = tda.Posterior(my_prior, my_loglike, my_model)
```
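
Since the forward model can also be a plain function, an equivalent definition is (a minimal sketch, assuming `x` is already in scope):

```python
def my_function(parameters):
    # simple linear regression; returning only the model output (no qoi) is also allowed.
    return parameters[0] + parameters[1]*x

my_posterior = tda.Posterior(my_prior, my_loglike, my_function)
```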

### Proposals
A proposal is simply initialised with its parameters:
```python
# set the covariance of the proposal distribution.
am_cov = np.eye(n_dim)

# set the number of iterations before starting adaptation.
am_t0 = 1000

# set some adaptive metropolis tuning parameters.
am_sd = 1
am_epsilon = 1e-6

# initialise the proposal.
my_proposal = tda.AdaptiveMetropolis(C0=am_cov, t0=am_t0, sd=am_sd, epsilon=am_epsilon)
```

### Sampling
After defining a proposal, a coarse posterior `my_posterior_coarse`, and a fine posterior `my_posterior_fine`, the Delayed Acceptance sampler can be run using `tinyDA.sample()`:
```python
my_chains = tda.sample([my_posterior_coarse, my_posterior_fine], 
                       my_proposal, 
                       iterations=12000, 
                       n_chains=2, 
                       subsampling_rate=10)
```

If using a hierarchy with more than two models, a Multilevel Delayed Acceptance sampler can be run by supplying a list of posteriors in ascending order and a corresponding list of subsampling rates:
```python
my_chains = tda.sample([my_posterior_level0, 
                        my_posterior_level1, 
                        my_posterior_level2, 
                        my_posterior_level3], 
                       my_proposal, 
                       iterations=12000, 
                       n_chains=2, 
                       subsampling_rate=[10, 5, 5])
```

### Postprocessing
The entire sampling history is now stored in `my_chains`, a dictionary containing `tda.Link` instances. You can convert the output of `tinyDA.sample()` to an ArviZ InferenceData object with
```python
idata = tda.to_inference_data(my_chains, burnin=2000)
```
If you want to have a look at the coarse samples, you can pass an additional argument:
```python
idata = tda.to_inference_data(my_chains, level='coarse', burnin=20000)
```

The `idata` object can then be used with the ArviZ diagnostics suite to, e.g., compute MCMC statistics, plot traces, and so on.
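
For example (a minimal sketch using two standard ArviZ calls):

```python
import arviz as az

# tabulate summary statistics (mean, sd, ESS, r_hat) for each parameter.
print(az.summary(idata))

# plot traces and marginal densities for a visual convergence check.
az.plot_trace(idata)
```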

## Contributing
If you feel that tinyDA is missing some features, or that something could be improved, please do not hesitate to create a fork and submit a PR! If you want to help improve the package, please have a look at the [issues](https://github.com/mikkelbue/tinyDA/issues) and consider if something seems doable to you.

If you would like to contribute, please consider the following:
* It's called tinyDA because it's small. The list of dependencies should be kept **short**. Great things can be achieved using NumPy!
* tinyDA has loads of nice features, but it's somewhat lacking in terms of CI. Any kind of CI, tests and improvements to the software infrastructure would be greatly appreciated!

The development of tinyDA is sponsored by [digiLab](https://www.digilab.co.uk/).

## TODO
* ~~Parallel multi-chain sampling~~
* ~~More user-friendly diagnostics~~
* ~~Multilevel Delayed Acceptance~~
* ~~MALA proposal~~
* ~~Tests~~
* Variance Reduction
* Wrapper for framework-agnostic adaptive coarse model
* Embedded spaces for hierarchical models

            
