[![Python](https://img.shields.io/pypi/pyversions/jaxns.svg)](https://badge.fury.io/py/jaxns)
[![PyPI](https://badge.fury.io/py/jaxns.svg)](https://badge.fury.io/py/jaxns)
[![Documentation Status](https://readthedocs.org/projects/jaxns/badge/?version=latest)](https://jaxns.readthedocs.io/en/latest/?badge=latest)
Main
Status: ![Workflow name](https://github.com/JoshuaAlbert/jaxns/actions/workflows/unittests.yml/badge.svg?branch=main)
Develop
Status: ![Workflow name](https://github.com/JoshuaAlbert/jaxns/actions/workflows/unittests.yml/badge.svg?branch=develop)
![JAXNS](https://github.com/JoshuaAlbert/jaxns/raw/main/jaxns_logo.png)
## Mission: _To make nested sampling **faster, easier, and more powerful**_
# What is it?
JAXNS is:
1) a simple and powerful probabilistic programming framework using nested sampling as the engine;
2) coded in JAX in a manner that allows lowering the entire inference algorithm to XLA primitives, which are
JIT-compiled for high performance;
3) continuously improving on its mission of making nested sampling faster, easier, and more powerful; and
4) citable; cite the [(old) pre-print here](https://arxiv.org/abs/2012.15286).
What can you do with JAXNS?
1) Compute the Bayesian evidence of a model or hypothesis (the ultimate scientific method);
2) Produce high-quality samples from the posterior distribution;
3) Easily handle degenerate, difficult multi-modal posteriors;
4) Model both discrete and continuous priors and likelihoods;
5) Encode complex constraints on the prior space;
6) Easily embed neural networks or any other ML model in the likelihood/prior.
## JAXNS Probabilistic Programming Framework
JAXNS provides a powerful JAX-based probabilistic programming framework, which allows you to define probabilistic
models easily, and use them for advanced purposes. Probabilistic models can have both Bayesian and parameterised
variables.
Bayesian variables are random variables, and are sampled from a prior distribution.
Parameterised variables are point-wise representations of a prior distribution, and are thus not random.
Associated with them is the log-probability of the prior distribution at that point.
Let's break apart an example of a simple probabilistic model. Note, this example can also be followed
in [docs/examples/intro_example.ipynb](docs/examples/intro_example.ipynb).
### Defining a probabilistic model
Prior models are functions that produce generators of `Prior` objects.
The function must eventually return the inputs to the likelihood function.
The returned value of a yielded `Prior` is a simple JAX array, i.e. you can do anything you want with it using JAX ops.
The rules of static programming apply, i.e. you cannot dynamically allocate arrays.
JAXNS makes use of the TensorFlow Probability library for defining prior distributions, thus you can use __almost__
any of the TFP distributions. You can also use any of the TFP bijectors to define transformed distributions.
Distributions do have some requirements to be valid for use in JAXNS.
1. They must have a quantile function, i.e. `dist.quantile(dist.cdf(x)) == x`.
2. They must have a `log_prob` method that returns the log-probability of the distribution at a given value.
Most of the TFP distributions satisfy these requirements.
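For example, a quick way to check that a particular TFP distribution meets both requirements is to round-trip a value through `cdf` and `quantile` and evaluate `log_prob` (a minimal illustrative sketch; the `check_distribution` helper is not part of JAXNS):

```python
import jax.numpy as jnp
import tensorflow_probability.substrates.jax as tfp

tfpd = tfp.distributions


def check_distribution(dist, x):
    # Round-trip through the CDF and quantile function, and evaluate the log-probability.
    x_roundtrip = dist.quantile(dist.cdf(x))
    return jnp.allclose(x, x_roundtrip), dist.log_prob(x)


ok, log_prob = check_distribution(tfpd.Normal(loc=0., scale=1.), jnp.asarray(0.5))
assert ok
```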
JAXNS also has some special priors that can't be constructed from TFP; see `jaxns.framework.special_priors`. You can
always request more if you need them.
Prior variables __may__ be named but don't have to be. If they are named then they can be collected later via a
transformation, otherwise they are deemed hidden variables.
The output values of prior models are the inputs to the likelihood function. They can be PyTree's,
e.g. `typing.NamedTuple`'s.
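For instance, the prior model can bundle related variables into a `typing.NamedTuple` and return it alongside other values (a minimal sketch following the same `Prior`/`tfpd` pattern as the full example below; `Location` is just an illustrative container, not a JAXNS type):

```python
from typing import NamedTuple

import jax
import tensorflow_probability.substrates.jax as tfp

from jaxns.framework.prior import Prior

tfpd = tfp.distributions


class Location(NamedTuple):
    # Illustrative grouping of prior variables passed to the likelihood.
    mu: jax.Array
    x: jax.Array


def prior_model():
    mu = yield Prior(tfpd.Normal(loc=0., scale=1.), name='mu')
    x = yield Prior(tfpd.Cauchy(loc=mu, scale=1.), name='x')
    uncert = yield Prior(tfpd.Exponential(rate=1.), name='uncert')
    # The likelihood receives (Location(...), uncert) as its inputs.
    return Location(mu=mu, x=x), uncert


def log_likelihood(loc: Location, uncert):
    return tfpd.Normal(loc=loc.mu, scale=uncert).log_prob(loc.x)
```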
Finally, priors can become point-wise estimates of the prior distribution, by calling `parametrised()`. This turns a
Bayesian variable into a parameterised variable, e.g. one which can be used in optimisation.
```python
import jax
import tensorflow_probability.substrates.jax as tfp
tfpd = tfp.distributions
from jaxns.framework.model import Model
from jaxns.framework.prior import Prior
def prior_model():
mu = yield Prior(tfpd.Normal(loc=0., scale=1.))
# Let's make sigma a parameterised variable
sigma = yield Prior(tfpd.Exponential(rate=1.), name='sigma').parametrised()
x = yield Prior(tfpd.Cauchy(loc=mu, scale=sigma), name='x')
uncert = yield Prior(tfpd.Exponential(rate=1.), name='uncert')
return x, uncert
def log_likelihood(x, uncert):
return tfpd.Normal(loc=0., scale=uncert).log_prob(x)
model = Model(prior_model=prior_model, log_likelihood=log_likelihood)
# You can sanity check the model (always a good idea when exploring)
model.sanity_check(key=jax.random.PRNGKey(0), S=100)
# The size of the Bayesian part of the prior space is `model.U_ndims`.
```
### Sampling and transforming variables
There are two spaces of samples:
1. U-space: samples in the base measure space; these are dimensionless, or rather have units of probability.
2. X-space: samples in the space of the model, with the units of the prior variables.
```python
# Sample the prior in U-space (base measure)
U = model.sample_U(key=jax.random.PRNGKey(0))
# Transform to X-space
X = model.transform(U=U)
# Only named Bayesian prior variables are returned, the rest are treated as hidden variables.
assert set(X.keys()) == {'x', 'uncert'}
# Get the return value of the prior model, i.e. the input to the likelihood
x_sample, uncert_sample = model.prepare_input(U=U)
```
### Computing log-probabilities
All computations are based on the U-space variables.
```python
# Evaluate different parts of the model
log_prob_prior = model.log_prob_prior(U)
log_prob_likelihood = model.log_prob_likelihood(U, allow_nan=False)
log_prob_joint = model.log_prob_joint(U, allow_nan=False)
```
### Computing gradients of the joint probability w.r.t. parameters
```python
init_params = model.params
def log_prob_joint_fn(params, U):
# Calling model with params returns a new model with the params set
return model(params).log_prob_joint(U, allow_nan=False)
value, grad = jax.value_and_grad(log_prob_joint_fn)(init_params, U)
```
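Since `init_params` is an ordinary pytree, these gradients can be fed to any JAX optimiser. Below is a single illustrative optax update step (optax is already a JAXNS dependency); this is only a sketch of how the gradients can be consumed, not the recommended way to train parameters (see evidence maximisation below):

```python
import optax


# Maximise the joint log-probability by minimising its negative.
def loss_fn(params, U):
    return -log_prob_joint_fn(params, U)


optimiser = optax.sgd(learning_rate=1e-3)
opt_state = optimiser.init(init_params)

loss, grad = jax.value_and_grad(loss_fn)(init_params, U)
updates, opt_state = optimiser.update(grad, opt_state)
new_params = optax.apply_updates(init_params, updates)
```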
## Nested Sampling Engine
Given a probabilistic model, JAXNS can perform nested sampling on it. This allows computing the Bayesian evidence and
posterior samples.
```python
from jaxns import NestedSampler
ns = NestedSampler(model=model, max_samples=1e5)
# Run the sampler
termination_reason, state = ns(jax.random.PRNGKey(42))
# Get the results
results = ns.to_results(termination_reason=termination_reason, state=state)
```
#### To AOT or JIT-compile the sampler
```python
# Ahead of time compilation (sometimes useful)
ns_aot = jax.jit(ns).lower(jax.random.PRNGKey(42)).compile()
# Just-in-time compilation (usually useful)
ns_jit = jax.jit(ns)
```
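Either compiled version is then called exactly like the original sampler (a brief usage sketch):

```python
termination_reason, state = ns_jit(jax.random.PRNGKey(42))
results = ns.to_results(termination_reason=termination_reason, state=state)
```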
You can inspect the results, and plot them.
```python
from jaxns import summary, plot_diagnostics, plot_cornerplot, save_results, load_results
# Optionally save the results to file
save_results(results, 'results.json')
# To load the results back use this
results = load_results('results.json')
summary(results)
plot_diagnostics(results)
plot_cornerplot(results)
```
Output:
```
--------
Termination Conditions:
Small remaining evidence
--------
likelihood evals: 149918
samples: 3780
phantom samples: 1710
likelihood evals / sample: 39.7
phantom fraction (%): 45.2%
--------
logZ=-1.65 +- 0.15
H=-1.13
ESS=132
--------
uncert: mean +- std.dev. | 10%ile / 50%ile / 90%ile | MAP est. | max(L) est.
uncert: 0.68 +- 0.58 | 0.13 / 0.48 / 1.37 | 0.0 | 0.0
--------
x: mean +- std.dev. | 10%ile / 50%ile / 90%ile | MAP est. | max(L) est.
x: 0.07 +- 0.62 | -0.57 / 0.06 / 0.73 | 0.0 | 0.0
--------
```
![](docs/examples/intro_diagnostics.png)
![](docs/examples/intro_cornerplot.png)
### Using the posterior samples
Nested sampling produces weighted posterior samples. For most use cases you can simply resample (with
replacement) to obtain equally weighted samples.
```python
from jaxns import resample
samples = resample(
key=jax.random.PRNGKey(0),
samples=results.samples,
log_weights=results.log_dp_mean,
S=1000,
replace=True
)
```
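With equally weighted samples in hand, posterior expectations are just plain averages (a small sketch, assuming `samples` is a dict of arrays keyed by the named variables, as in the summary above):

```python
import jax.numpy as jnp

# Posterior mean and standard deviation of the named variable 'x'
# computed from the equally weighted (resampled) samples.
x_mean = jnp.mean(samples['x'])
x_std = jnp.std(samples['x'])
```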
### Maximising the evidence
The Bayesian evidence is the ultimate model selection criterion, and choosing the model that maximises the evidence is
the best way to select a model. We can use the evidence maximisation algorithm to optimise the parametrised variables
of the model in the manner that maximises the evidence. Below, `EvidenceMaximisation` does this for the model we defined
above, where the parametrised variables are automatically constrained to the right range, and numerical stability is
ensured with proper scaling.
We see that the evidence maximisation chooses a `sigma` that is very small.
```python
from jaxns.experimental import EvidenceMaximisation
# Let's train the sigma parameter to maximise the evidence
em = EvidenceMaximisation(model)
results, params = em.train(num_steps=5)
summary(results, with_parametrised=True)
```
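The returned `params` can be plugged back into the model with the same `model(params)` mechanism used in the gradient example above (a brief usage sketch):

```python
# Apply the optimised parameters to obtain the trained model.
trained_model = model(params)
```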
Output:
```
--------
Termination Conditions:
Small remaining evidence
--------
likelihood evals: 72466
samples: 1440
phantom samples: 0
likelihood evals / sample: 50.3
phantom fraction (%): 0.0%
--------
logZ=-1.119 +- 0.098
H=-0.93
ESS=241
--------
sigma: mean +- std.dev. | 10%ile / 50%ile / 90%ile | MAP est. | max(L) est.
sigma: 5.40077599e-05 +- 3.6e-12 | 5.40077563e-05 / 5.40077563e-05 / 5.40077563e-05 | 5.40077563e-05 | 5.40077563e-05
--------
uncert: mean +- std.dev. | 10%ile / 50%ile / 90%ile | MAP est. | max(L) est.
uncert: 0.6 +- 0.54 | 0.05 / 0.45 / 1.37 | 0.0 | 0.0
--------
x: mean +- std.dev. | 10%ile / 50%ile / 90%ile | MAP est. | max(L) est.
x: 0.01 +- 0.56 | -0.6 / -0.0 / 0.69 | 0.0 | -0.0
--------
```
# Documentation
You can read the documentation [here](https://jaxns.readthedocs.io/en/latest/#). In addition, JAXNS is partially
described in the
[original paper](https://arxiv.org/abs/2012.15286), as well as the [Phantom-Powered Nested
Sampling paper](https://arxiv.org/abs/2312.11330).
# Install
**Notes:**
1. JAXNS requires Python >= 3.9. It is always highly recommended to use the latest version of Python.
2. It is always highly recommended to use a unique virtual environment for each project.
To use **miniconda**, ensure it is installed on your system, then run the following commands:
```bash
# To create a new env, if necessary
conda create -n jaxns_py python=3.12
conda activate jaxns_py
```
## For end users
Install directly from PyPI:
```bash
pip install jaxns
```
## For development
Clone the repo with `git clone https://www.github.com/JoshuaAlbert/jaxns.git`, then install:
```bash
cd jaxns
pip install -r requirements.txt
pip install -r requirements-tests.txt
pip install -r requirements-examples.txt
pip install .
```
# Getting help and contributing examples
Do you have a neat Bayesian problem, and want to solve it with JAXNS?
I really encourage anyone in either the scientific community or industry to get involved and join the discussion
forum.
Please use the [github discussion forum](https://github.com/JoshuaAlbert/jaxns/discussions) for getting help, or
contributing examples/neat use cases.
# Quick start
Check out the examples [here](https://jaxns.readthedocs.io/en/latest/#).
## Caveats
The caveat is that you need to be able to define your likelihood function with JAX. UPDATE: now you can just
use the `@jaxify_likelihood` decorator to run with arbitrary pythonic likelihoods.
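A minimal sketch of what that looks like, assuming `jaxify_likelihood` can be imported from the top-level `jaxns` package and applied directly to an ordinary Python function returning a scalar log-likelihood (check the documentation for the exact signature and options):

```python
import numpy as np
from jaxns import jaxify_likelihood


@jaxify_likelihood
def log_likelihood(x, uncert):
    # Arbitrary pythonic code: NumPy, or a simulator with Python bindings.
    return -0.5 * (x / uncert) ** 2 - np.log(uncert) - 0.5 * np.log(2 * np.pi)
```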
# Speed test comparison with other nested sampling packages
JAXNS is really fast because the entire sampler is JIT-compiled with JAX.
JAXNS is much faster than PolyChord, MultiNEST, and dynesty, typically achieving two to three orders of magnitude
improvement in run time for models with cheap likelihood evaluations.
This is shown in [the original paper](https://arxiv.org/abs/2012.15286).
More recently, JAXNS has implemented Phantom-Powered Nested Sampling, which helps with parameter inference. This is shown
in [the Phantom-Powered Nested Sampling paper](https://arxiv.org/abs/2312.11330).
# Note on performance with parallelisation and GPUs
To use parallel computing, simply pass `devices` to the `NestedSampler` constructor. This will distribute
sampling over the devices. To use GPUs, pass `jax.devices('gpu')` to the `devices` argument. You can also use all
your CPUs by setting `os.environ["XLA_FLAGS"] = f"--xla_force_host_platform_device_count={os.cpu_count()}"`
before importing JAXNS.
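A minimal sketch of both options; note that the `XLA_FLAGS` environment variable must be set before JAX (and therefore JAXNS) is imported:

```python
import os

# Expose all CPU cores as separate XLA devices (must happen before importing JAX/JAXNS).
os.environ["XLA_FLAGS"] = f"--xla_force_host_platform_device_count={os.cpu_count()}"

import jax
from jaxns import NestedSampler

# Distribute sampling over all available devices; use jax.devices('gpu') for GPUs.
ns = NestedSampler(model=model, max_samples=1e5, devices=jax.devices())
```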
# Change Log
13 Nov, 2024 -- JAXNS 2.6.6 released. Minor improvements to plotting.
9 Nov, 2024 -- JAXNS 2.6.5 released. Added gradient guided nested sampling. Removed `num_parallel_workers` in favour of
`devices`.
4 Nov, 2024 -- JAXNS 2.6.4 released. Resolved bias when using phantom points.
1 Oct, 2024 -- JAXNS 2.6.3 released. Enable pytrees in context.
25 Sep, 2024 -- JAXNS 2.6.2 released. Fixed some important (not so edge) cases. Made faster. Handle no seed scenarios.
24 Sep, 2024 -- JAXNS 2.6.1 released. Sharded parallel JAXNS. Rewrite of internals to support sharded parallelisation.
20 Aug, 2024 -- JAXNS 2.6.0 released. Removed haiku dependency. Implemented our own
context. `jaxns.framework.context.convert_external_params` enables interfacing with any external NN library.
24 Jul, 2024 -- JAXNS 2.5.3 released. Replacing framework U-space with W-space. Maintained external API in U space.
23 Jul, 2024 -- JAXNS 2.5.2 released. Added explicit density prior. Sped up parametrisation. Scan associative
implemented.
27 May, 2024 -- JAXNS 2.5.1 released. Fixed minor accuracy degradation introduced in 2.4.13.
15 May, 2024 -- JAXNS 2.5.0 released. Added ability to handle non-JAX likelihoods, e.g. if you have a simulation
framework with python bindings you can now use it for likelihoods in JAXNS. Small performance improvements.
22 Apr, 2024 -- JAXNS 2.4.13 released. Fixes bug where slice sampling not invariant to monotonic transforms of
likelihood.
20 Mar, 2024 -- JAXNS 2.4.12 released. Minor bug fixes, and readability improvements. Added Empirical special prior.
5 Mar, 2024 -- JAXNS 2.4.11/b released. Add `random_init` to parametrised variables. Enable special priors to be
parametrised.
23 Feb, 2024 -- JAXNS 2.4.10 released. Hotfix for import error.
21 Feb, 2024 -- JAXNS 2.4.9 released. Minor improvements to some priors, and bug fixes.
31 Jan, 2024 -- JAXNS 2.4.8 released. Improved global optimisation performance using gradient slicing.
Improved evidence maximisation.
25 Jan, 2024 -- JAXNS 2.4.6/7 released. Added logging. Use L-BFGS for Evidence Maximisation M-step. Fix bug in finetune.
24 Jan, 2024 -- JAXNS 2.4.5 released. Gradient based finetuning global optimisation using L-BFGS. Added ability to
simulate prior models without building the model (for data generation).
15 Jan, 2024 -- JAXNS 2.4.4 released. Fix performance issue for larger `max_samples`. Fixed bug in termination
conditions. Improved parallel performance.
10 Jan, 2024 -- JAXNS 2.4.2/3 released. Another performance boost, and experimental global optimiser.
9 Jan, 2024 -- JAXNS 2.4.1 released. Improve performance slightly for larger `max_samples`, still a performance issue.
8 Jan, 2024 -- JAXNS 2.4.0 released. Python 3.9+ now supported. Migrate parametrised models to stable.
All models are now parametrisable by default, so you can use hk.Parameter anywhere in the model.
21 Dec, 2023 -- JAXNS 2.3.4 released. Correction for ESS and logZ uncert. `parameter_estimation` mode.
20 Dec, 2023 -- JAXNS 2.3.2/3 released. Improved default parameters. `difficult_model` mode. Improve plotting.
18 Dec, 2023 -- JAXNS 2.3.1 released. Paper open science release. Default parameters from paper.
11 Dec, 2023 -- JAXNS 2.3.0 released. Release of Phantom-Powered Nested Sampling algorithm.
5 Oct, 2023 -- JAXNS 2.2.6 released. Minor update to evidence maximisation.
3 Oct, 2023 -- JAXNS 2.2.5 released. Parametrised priors, and evidence maximisation added.
24 Sept, 2023 -- JAXNS 2.2.4 released. Add marginalising from saved U samples.
28 July, 2023 -- JAXNS 2.2.3 released. Bug fix for singular priors.
26 June, 2023 -- JAXNS 2.2.1 released. Multi-ellipsoidal sampler added back in. Adaptive refinement disabled, as a bias
has been detected in it.
15 June, 2023 -- JAXNS 2.2.0 released. Added support to allow TFP bijectors to define transformed distributions. Other
minor improvements.
15 April, 2023 -- JAXNS 2.1.0 released. pmap used on outer-most loops allowing efficient device-device communication
during parallel runs.
8 March, 2023 -- JAXNS 2.0.1 released. Changed how we're doing annotations to support python 3.8 again.
3 January, 2023 -- JAXNS 2.0 released. Complete overhaul of components. New way to build models.
5 August, 2022 -- JAXNS 1.1.1 released. Pytree shaped priors.
2 June, 2022 -- JAXNS 1.1.0 released. Dynamic sampling takes advantage of adaptive refinement. Parallelisation. Bayesian
opt and global opt modules.
30 May, 2022 -- JAXNS 1.0.1 released. Improvements to speed, parallelisation, and structure of code.
9 April, 2022 -- JAXNS 1.0.0 released. Parallel sampling, dynamic search, and adaptive refinement. Global optimiser
released.
2 Jun, 2021 -- JAXNS 0.0.7 released.
13 May, 2021 -- JAXNS 0.0.6 released.
8 Mar, 2021 -- JAXNS 0.0.5 released.
8 Mar, 2021 -- JAXNS 0.0.4 released.
7 Mar, 2021 -- JAXNS 0.0.3 released.
28 Feb, 2021 -- JAXNS 0.0.2 released.
28 Feb, 2021 -- JAXNS 0.0.1 released.
1 January, 2021 -- Paper submitted
## Star History
<a href="https://star-history.com/#joshuaalbert/jaxns&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=joshuaalbert/jaxns&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=joshuaalbert/jaxns&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=joshuaalbert/jaxns&type=Date" />
</picture>
</a>