bayesflow

Name: bayesflow
Version: 1.1.4
Home page: https://github.com/stefanradev93/bayesflow
Summary: Amortizing Bayesian Inference With Neural Networks
Upload time: 2023-09-12 16:17:42
Maintainer: Stefan T. Radev
Author: The BayesFlow Developers
Requires Python: >=3.9
License: MIT
Keywords: amortized Bayesian inference, invertible neural networks, simulation-based inference, approximate Bayesian computation, model comparison
            # BayesFlow <img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/bayesflow_hex.png?raw=true" align="right" width=20% height=20% />

[![Actions Status](https://github.com/stefanradev93/bayesflow/workflows/Tests/badge.svg)](https://github.com/stefanradev93/bayesflow/actions)
[![Licence](https://img.shields.io/github/license/stefanradev93/BayesFlow)](https://img.shields.io/github/license/stefanradev93/BayesFlow)

Welcome to our BayesFlow library for efficient simulation-based Bayesian workflows! Our library enables users to create specialized neural networks for *amortized Bayesian inference*, which repay users with rapid statistical inference after a potentially longer simulation-based training phase.

For starters, check out some of our walk-through notebooks:

1. [Quickstart amortized posterior estimation](examples/Intro_Amortized_Posterior_Estimation.ipynb)
2. [Tackling strange bimodal distributions](examples/TwoMoons_Bimodal_Posterior.ipynb)
3. [Detecting model misspecification in posterior inference](examples/Model_Misspecification.ipynb)
4. [Principled Bayesian workflow for cognitive models](examples/LCA_Model_Posterior_Estimation.ipynb)
5. [Posterior estimation for ODEs](examples/Linear_ODE_system.ipynb)
6. [Posterior estimation for SIR-like models](examples/Covid19_Initial_Posterior_Estimation.ipynb)
7. [Model comparison for cognitive models](examples/Model_Comparison_MPT.ipynb)
8. [Hierarchical model comparison for cognitive models](examples/Hierarchical_Model_Comparison_MPT.ipynb)

## Project Documentation

The project documentation is available at <https://bayesflow.org>

## Installation

See [INSTALL.rst](INSTALL.rst) for installation instructions.

## Conceptual Overview

A cornerstone idea of amortized Bayesian inference is to employ generative
neural networks for parameter estimation, model comparison, and model validation
when working with intractable simulators whose behavior as a whole is too
complex to be described analytically. The figure below presents a high-level
overview of neurally bootstrapped Bayesian inference.

<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/high_level_framework.png?raw=true" width=80% height=80%>

## Getting Started: Parameter Estimation

The core functionality of BayesFlow is amortized Bayesian posterior estimation, as described in our paper:

Radev, S. T., Mertens, U. K., Voss, A., Ardizzone, L., & Köthe, U. (2020).
BayesFlow: Learning complex stochastic models with invertible neural networks.
<em>IEEE Transactions on Neural Networks and Learning Systems</em>, available
for free at: https://arxiv.org/abs/2003.06281.

Since then, however, we have substantially extended the BayesFlow library, so
it is now considerably more general and streamlined than what we describe in the above paper.

### Minimal Example

```python
import numpy as np
import bayesflow as bf
```

To introduce you to the basic workflow of the library, let's consider
a simple 2D Gaussian model for which we want to perform
posterior inference. We assume a Gaussian simulator (likelihood)
and a Gaussian prior for the means of the two components,
which are our only model parameters in this example:

```python
def simulator(theta, n_obs=50, scale=1.0):
    return np.random.default_rng().normal(loc=theta, scale=scale, size=(n_obs, theta.shape[0]))

def prior(D=2, mu=0., sigma=1.0):
    return np.random.default_rng().normal(loc=mu, scale=sigma, size=D)
```
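
If you want to convince yourself of the data shapes involved, a single prior draw and one pass through the simulator look like this (a quick sanity check, not part of the original walkthrough):

```python
# Quick sanity check: one prior draw and one simulated data set,
# with the shapes the networks will later expect
theta = prior()        # parameter vector of shape (2,)
x = simulator(theta)   # data set of shape (50, 2): 50 exchangeable 2D observations
print(theta.shape, x.shape)
```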

Then, we connect the `prior` with the `simulator` using a `GenerativeModel` wrapper:

```python
generative_model = bf.simulation.GenerativeModel(prior, simulator)
```

Next, we create our BayesFlow setup consisting of a summary and an inference network:

```python
summary_net = bf.networks.DeepSet()
inference_net = bf.networks.InvertibleNetwork(num_params=2)
amortized_posterior = bf.amortizers.AmortizedPosterior(inference_net, summary_net)
```

Finally, we connect the networks with the generative model via a `Trainer` instance:

```python
trainer = bf.trainers.Trainer(amortizer=amortized_posterior, generative_model=generative_model, memory=True)
```

We are now ready to train an amortized posterior approximator. For instance,
to run online training, we simply call:

```python
losses = trainer.train_online(epochs=10, iterations_per_epoch=500, batch_size=32)
```
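
Online training simulates fresh data on the fly. If you would rather pre-simulate a fixed training set, a rough sketch of offline training could look as follows; the exact `train_offline` signature is an assumption here, so consult the documentation for details:

```python
# Sketch only: pre-simulate a fixed set of training data and train offline.
# The train_offline signature is an assumption; see the BayesFlow docs.
offline_data = generative_model(5000)  # dictionary of 5000 simulated data sets
losses = trainer.train_offline(offline_data, epochs=10, batch_size=32)
```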

Before inference, we can use simulation-based calibration (SBC,
https://arxiv.org/abs/1804.06788) to check the computational faithfulness of
the model-amortizer combination:

```python
fig = trainer.diagnose_sbc_histograms()
```

<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/showcase_sbc.png?raw=true" width=65% height=65%>

The histograms are roughly uniform and lie within the expected range for
well-calibrated inference algorithms as indicated by the shaded gray areas.
Accordingly, our amortizer seems to have converged to the intended target.

Amortized inference on new (real or simulated) data is then easy and fast.
For example, we can simulate 200 new data sets and generate 500 posterior draws
per data set:

```python
new_sims = trainer.configurator(generative_model(200))
posterior_draws = amortized_posterior.sample(new_sims, n_samples=500)
```

We can then quickly inspect how well the model recovers its parameters
across the simulated data sets:

```python
fig = bf.diagnostics.plot_recovery(posterior_draws, new_sims['parameters'])
```

<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/showcase_recovery.png?raw=true" width=65% height=65%>

For any individual data set, we can also compare the parameters' posteriors with
their corresponding priors:

```python
fig = bf.diagnostics.plot_posterior_2d(posterior_draws[0], prior=generative_model.prior)
```

<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/showcase_posterior.png?raw=true" width=45% height=45%>

We see clearly how the posterior shrinks relative to the prior for both
model parameters as a result of conditioning on the data.

### References and Further Reading

- Radev, S. T., Mertens, U. K., Voss, A., Ardizzone, L., & Köthe, U. (2020).
BayesFlow: Learning complex stochastic models with invertible neural networks.
<em>IEEE Transactions on Neural Networks and Learning Systems, 33(4)</em>, 1452-1466.

- Radev, S. T., Graw, F., Chen, S., Mutters, N. T., Eichel, V. M., Bärnighausen, T., & Köthe, U. (2021).
OutbreakFlow: Model-based Bayesian inference of disease outbreak dynamics with invertible neural networks and its application to the COVID-19 pandemics in Germany. <em>PLoS computational biology, 17(10)</em>, e1009472.

- Bieringer, S., Butter, A., Heimel, T., Höche, S., Köthe, U., Plehn, T., & Radev, S. T. (2021).
Measuring QCD splittings with invertible networks. <em>SciPost Physics, 10(6)</em>, 126.

- von Krause, M., Radev, S. T., & Voss, A. (2022).
Mental speed is high until age 60 as revealed by analysis of over a million participants.
<em>Nature Human Behaviour, 6(5)</em>, 700-708.

## Model Misspecification

What if we are dealing with misspecified models? That is, how faithful is our
amortized inference if the generative model is a poor representation of reality?
A modified loss function drives the learned summary statistics towards a unit
Gaussian and enables reliable detection of model misspecification at inference time.


<img src="https://github.com/stefanradev93/BayesFlow/blob/master/examples/img/model_misspecification_amortized_sbi.png" width=100% height=100%>

To use this method, you only need to provide the `summary_loss_fun` argument
to the `AmortizedPosterior` instance:

```python
amortized_posterior = bf.amortizers.AmortizedPosterior(inference_net, summary_net, summary_loss_fun='MMD')
```

The amortizer knows how to combine its losses and you can inspect the summary space for outliers during inference.
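
As a rough illustration of such an inspection, the sketch below pushes configured simulations through a trained summary network and flags data sets whose learned summaries fall far from the unit-Gaussian target. Calling `amortized_posterior.summary_net(...)` directly and the `summary_conditions` key of the configured dictionary are assumptions about the interface, and the cutoff is purely illustrative:

```python
import numpy as np

# Sketch only: the "summary_conditions" key and the direct call to the summary
# network are assumptions about the configured data format / interface.
checks = trainer.configurator(generative_model(500))
summaries = np.asarray(amortized_posterior.summary_net(checks["summary_conditions"]))

# With the MMD summary loss, well-specified data should yield roughly
# unit-Gaussian summaries; unusually large squared norms hint at outliers.
sq_norms = np.sum(summaries**2, axis=-1)
cutoff = np.quantile(sq_norms, 0.99)  # arbitrary illustrative threshold
print("Potential outliers:", np.where(sq_norms > cutoff)[0])
```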

### References and Further Reading

- Schmitt, M., Bürkner, P. C., Köthe, U., & Radev, S. T. (2022). Detecting Model
Misspecification in Amortized Bayesian Inference with Neural Networks. <em>ArXiv
preprint</em>, available for free at: https://arxiv.org/abs/2112.08866

## Model Comparison

BayesFlow can be used not only for parameter estimation, but also for approximate Bayesian model comparison via posterior model probabilities or Bayes factors.
Let's extend the minimal example from before with a second model $M_2$ that we want to compare with our original model $M_1$:

```python
def simulator(theta, n_obs=50, scale=1.0):
    return np.random.default_rng().normal(loc=theta, scale=scale, size=(n_obs, theta.shape[0]))

def prior_m1(D=2, mu=0., sigma=1.0):
    return np.random.default_rng().normal(loc=mu, scale=sigma, size=D)

def prior_m2(D=2, mu=2., sigma=1.0):
    return np.random.default_rng().normal(loc=mu, scale=sigma, size=D)
```

For the purpose of this illustration, the two toy models only differ with respect to their prior specification ($M_1: \mu = 0, M_2: \mu = 2$). We create both models as before and use a `MultiGenerativeModel` wrapper to combine them in a `meta_model`:

```python
model_m1 = bf.simulation.GenerativeModel(prior_m1, simulator, simulator_is_batched=False)
model_m2 = bf.simulation.GenerativeModel(prior_m2, simulator, simulator_is_batched=False)
meta_model = bf.simulation.MultiGenerativeModel([model_m1, model_m2])
```

Next, we construct our neural network with a `PMPNetwork` for approximating posterior model probabilities:

```python
summary_net = bf.networks.DeepSet()
probability_net = bf.networks.PMPNetwork(num_models=2)
amortized_bmc = bf.amortizers.AmortizedModelComparison(probability_net, summary_net)
```

We combine all previous steps with a `Trainer` instance and train the neural approximator:

```python
trainer = bf.trainers.Trainer(amortizer=amortized_bmc, generative_model=meta_model)
losses = trainer.train_online(epochs=3, iterations_per_epoch=100, batch_size=32)
```

Let's simulate data sets from our models to check our networks' performance:

```python
sims = trainer.configurator(meta_model(5000))
```

When feeding the data to our trained network, we almost immediately obtain posterior model probabilities for each of the 5000 data sets:

```python
model_probs = amortized_bmc.posterior_probs(sims)
```

How good are these predicted probabilities in the closed world, that is, when the true data-generating model is among the candidates? We can have a look at the calibration:

```python
cal_curves = bf.diagnostics.plot_calibration_curves(sims["model_indices"], model_probs)
```

<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/showcase_calibration_curves.png?raw=true" width=65% height=65%>

Our approximator shows excellent calibration: the calibration curve is closely aligned with the diagonal, the expected calibration error (ECE) is near 0, and most predicted probabilities are highly certain about the model underlying a data set. We can further assess patterns of misclassification with a confusion matrix:

```python
conf_matrix = bf.diagnostics.plot_confusion_matrix(sims["model_indices"], model_probs)
```

<img src="https://github.com/stefanradev93/BayesFlow/blob/master/img/showcase_confusion_matrix.png?raw=true" width=44% height=44%>

For the vast majority of simulated data sets, the "true" data-generating model is correctly identified. With these diagnostic results backing us up, we can proceed to apply our trained network to empirical data.
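
To make the last step concrete, a minimal sketch of feeding observed data to the trained comparison network might look like this. The `summary_conditions` key mirrors the format produced by the configurator for simulated data and is an assumption about the exact input dictionary; in practice, follow the data format used during training:

```python
import numpy as np

# Hypothetical observed data set: 50 two-dimensional observations (batch size 1)
observed_data = np.random.default_rng().normal(size=(1, 50, 2)).astype(np.float32)

# Assumption: the trained network accepts the same dictionary format
# that the configurator produces for simulated data
empirical_probs = amortized_bmc.posterior_probs({"summary_conditions": observed_data})
print(empirical_probs)  # approximate posterior probabilities for M_1 and M_2
```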

BayesFlow is also able to conduct model comparison for hierarchical models. See this [tutorial notebook](examples/Hierarchical_Model_Comparison_MPT.ipynb) for an introduction to the associated workflow.

### References and Further Reading

- Radev, S. T., D’Alessandro, M., Mertens, U. K., Voss, A., Köthe, U., & Bürkner, P. C. (2021).
Amortized Bayesian Model Comparison with Evidential Deep Learning.
<em>IEEE Transactions on Neural Networks and Learning Systems</em>.
doi:10.1109/TNNLS.2021.3124052, available for free at: https://arxiv.org/abs/2004.10629

- Schmitt, M., Radev, S. T., & Bürkner, P. C. (2022). Meta-Uncertainty in
Bayesian Model Comparison. In <em>International Conference on Artificial Intelligence
and Statistics</em>, 11-29, PMLR, available for free at: https://arxiv.org/abs/2210.07278

- Elsemüller, L., Schnuerch, M., Bürkner, P. C., & Radev, S. T. (2023). A Deep
Learning Method for Comparing Bayesian Hierarchical Models. <em>ArXiv preprint</em>,
available for free at: https://arxiv.org/abs/2301.11873

## Likelihood Emulation

In order to learn the exchangeable (i.e., permutation invariant) likelihood from the minimal example instead of the posterior, you may use the `AmortizedLikelihood` wrapper:

```python
likelihood_net = bf.networks.InvertibleNetwork(num_params=2)
amortized_likelihood = bf.amortizers.AmortizedLikelihood(likelihood_net)
```

This wrapper can interact with a `Trainer` instance in the same way as the `AmortizedPosterior`. Finally, you can also learn the likelihood and the posterior *simultaneously* by using the `AmortizedPosteriorLikelihood` wrapper and choosing your preferred training scheme:

```python
joint_amortizer = bf.amortizers.AmortizedPosteriorLikelihood(amortized_posterior, amortized_likelihood)
```

Learning both densities enables us to approximate marginal likelihoods or perform approximate leave-one-out cross-validation (LOO-CV) for prior or posterior predictive model comparison, respectively.
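
Training the joint amortizer follows the same pattern as before. A minimal sketch, reusing the generative model and (illustrative) training settings from the minimal example:

```python
# Sketch only: the joint amortizer interacts with a Trainer just like the
# posterior-only amortizer above; the training settings are illustrative
joint_trainer = bf.trainers.Trainer(amortizer=joint_amortizer, generative_model=generative_model)
joint_losses = joint_trainer.train_online(epochs=10, iterations_per_epoch=500, batch_size=32)
```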

### References and Further Reading

Radev, S. T., Schmitt, M., Pratz, V., Picchini, U., Köthe, U., & Bürkner, P.-C. (2023).
JANA: Jointly amortized neural approximation of complex Bayesian models.
*Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, 216*, 1695-1706.
([arXiv](https://arxiv.org/abs/2302.09125))([PMLR](https://proceedings.mlr.press/v216/radev23a.html))

## Support

This work is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy, EXC-2181 - 390900948 (the Heidelberg Cluster of Excellence STRUCTURES) and EXC-2075 - 390740016 (the Stuttgart Cluster of Excellence SimTech), the Informatics for Life initiative funded by the Klaus Tschira Foundation, and Google Cloud through the Academic Research Grants program.

## Citing BayesFlow

You can cite BayesFlow along the lines of:

- We approximated the posterior with neural posterior estimation and learned summary statistics (NPE; Radev et al., 2020), as implemented in the BayesFlow software for amortized Bayesian workflows (Radev et al., 2023b).
- We approximated the likelihood with neural likelihood estimation (NLE; Papamakarios et al., 2019) without hand-crafted summary statistics, as implemented in the BayesFlow software for amortized Bayesian workflows (Radev et al., 2023b).
- We performed simultaneous posterior and likelihood estimation with jointly amortized neural approximation (JANA; Radev et al., 2023a), as implemented in the BayesFlow software for amortized Bayesian workflows (Radev et al., 2023b).

1. Radev, S. T., Schmitt, M., Schumacher, L., Elsemüller, L., Pratz, V., Schälte, Y., Köthe, U., & Bürkner, P.-C. (2023). BayesFlow: Amortized Bayesian workflows with neural networks. *arXiv:2306.16015*. ([arXiv](https://arxiv.org/abs/2306.16015))
2. Radev, S. T., Mertens, U. K., Voss, A., Ardizzone, L., Köthe, U. (2020). BayesFlow: Learning complex stochastic models with invertible neural networks. *IEEE Transactions on Neural Networks and Learning Systems, 33(4)*, 1452-1466. ([arXiv](https://arxiv.org/abs/2003.06281))([IEEE TNNLS](https://ieeexplore.ieee.org/document/9298920))
3. Radev, S. T., Schmitt, M., Pratz, V., Picchini, U., Köthe, U., & Bürkner, P.-C. (2023). JANA: Jointly amortized neural approximation of complex Bayesian models. *Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, 216*, 1695-1706. ([arXiv](https://arxiv.org/abs/2302.09125))([PMLR](https://proceedings.mlr.press/v216/radev23a.html))

**BibTeX:**

```
@misc{radev2023bayesflow,
 title = {{BayesFlow}: Amortized Bayesian workflows with neural networks},
 author = {Stefan T Radev and Marvin Schmitt and Lukas Schumacher and Lasse Elsem\"{u}ller and Valentin Pratz and Yannik Sch\"{a}lte and Ullrich K\"{o}the and Paul-Christian B\"{u}rkner},
 year = {2023},
 publisher= {arXiv},
 url={https://arxiv.org/abs/2306.16015}
}

@article{radev2020bayesflow,
  title={{BayesFlow}: Learning complex stochastic models with invertible neural networks},
  author={Radev, Stefan T. and Mertens, Ulf K. and Voss, Andreas and Ardizzone, Lynton and K{\"o}the, Ullrich},
  journal={IEEE transactions on neural networks and learning systems},
  volume={33},
  number={4},
  pages={1452--1466},
  year={2020},
  publisher={IEEE}
}

@inproceedings{pmlr-v216-radev23a,
  title = 	 {{JANA}: Jointly amortized neural approximation of complex {B}ayesian models},
  author =       {Radev, Stefan T. and Schmitt, Marvin and Pratz, Valentin and Picchini, Umberto and K\"othe, Ullrich and B\"urkner, Paul-Christian},
  booktitle = 	 {Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence},
  pages = 	 {1695--1706},
  year = 	 {2023},
  volume = 	 {216},
  series = 	 {Proceedings of Machine Learning Research},
  publisher =    {PMLR}
}
```

            
