# PARIS Monte Carlo Sampler
**An efficient adaptive importance sampler for high-dimensional multi-modal Bayesian inference.**
PARIS (**Parallel Adaptive Reweighting Importance Sampling**) combines global exploration with local adaptation to tackle complex posteriors. The workflow is simple:
1. **Global Initialization**: Start with a space-filling design (e.g. Latin Hypercube Sampling) to seed promising regions.
2. **Adaptive Proposals**: Each seed runs its own importance sampling process, where the proposal is a Gaussian mixture centered on past weighted samples with covariance estimated from the local sample set.
3. **Dynamic Reweighting**: All samples are reweighted against the evolving proposal mixture, ensuring unbiased estimates and self-correcting any early over-weighting (a sketch of this reweighting follows below).
4. **Mode Clustering**: Parallel processes that converge to the same region are merged to avoid redundancy, while distinct modes are preserved.
5. **Posterior & Evidence**: The collected weighted samples directly reconstruct the posterior and yield accurate Bayesian evidence estimates.
This adaptive–parallel design allows PARIS to efficiently discover, refine, and integrate over complex multi-modal landscapes with minimal tuning and far fewer likelihood calls than conventional approaches.
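Steps 2–3 amount to self-normalized importance sampling against a Gaussian-mixture proposal. Here is a minimal sketch of that core idea, with a fixed two-component mixture standing in for PARIS's adaptive one (illustrative only; the library's internals may differ):

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Illustrative two-component Gaussian mixture proposal in 2D
means = [np.zeros(2), np.array([3.0, 3.0])]
covs = [0.5 * np.eye(2), 0.5 * np.eye(2)]

def log_mixture(x):
    """Log-density of the equal-weight mixture proposal."""
    parts = [multivariate_normal.logpdf(x, m, c) for m, c in zip(means, covs)]
    return logsumexp(np.stack(parts), axis=0) - np.log(len(means))

def log_target(x):
    """Unnormalized log-target: a standard 2D Gaussian."""
    return -0.5 * np.sum(x**2, axis=1)

# Draw from the mixture, then reweight every sample against it
idx = rng.integers(len(means), size=1000)
x = np.stack([rng.multivariate_normal(means[i], covs[i]) for i in idx])
log_w = log_target(x) - log_mixture(x)   # importance log-weights
w = np.exp(log_w - logsumexp(log_w))     # self-normalized weights
```

Because every sample is weighted by target over proposal, regions the proposal over-visited are automatically down-weighted, which is exactly the self-correction described in step 3.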
## Features
* **Adaptive Proposals per Seed** – Each process maintains its own proposal, evolving a local Gaussian mixture that adapts to past samples.
* **Auto-balanced Exploration** – High-weight discoveries automatically attract more samples, while over-weighted regions self-correct over time.
* **Accurate Evidence Estimation** – Bayesian evidence is computed directly from the importance weights, with no extra machinery needed (see the sketch after this list).
* **Parallel Mode Discovery** – Multiple seeds explore independently, merging only when they converge to the same mode.
* **Intuitive Hyperparameters** – Settings like number of seeds, initial covariance, and merge thresholds map directly to prior knowledge.
* **Efficiency at Scale** – Handles high-dimensional, multi-modal targets with substantially fewer likelihood calls.
* **Boundary-safe** – Automatically respects [0,1]^d priors.
* **Multiprocessing Ready** – Runs smoothly across CPU cores for large inference tasks.
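The evidence claim in particular needs nothing beyond the weights themselves: for draws x ~ q, the sample mean of p(x)/q(x) is an unbiased estimate of the evidence Z. A generic sketch in log space (the function name is ours, not the library's):

```python
import numpy as np
from scipy.special import logsumexp

def log_evidence_estimate(log_target_vals, log_proposal_vals):
    """Monte Carlo evidence estimate from importance sampling:
    Z ~= mean of target(x)/proposal(x) over draws x ~ proposal.
    A generic sketch; PARIS computes this from its stored weights."""
    log_w = log_target_vals - log_proposal_vals   # unnormalized log-weights
    return logsumexp(log_w) - np.log(len(log_w))  # log of the sample mean
```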
## Installation
### From PyPI
```bash
pip install parismc
```
### From Source
```bash
git clone https://github.com/mx-Liu123/parismc.git
cd parismc
pip install -e .
```
### Development Installation
```bash
git clone https://github.com/mx-Liu123/parismc.git
cd parismc
pip install -e .[dev]
```
## Quick Start
```python
import numpy as np
from parismc import Sampler, SamplerConfig

# Define your log-likelihood function
def log_likelihood(x):
    """Example: multivariate Gaussian log-likelihood"""
    return -0.5 * np.sum(x**2, axis=1)

# Create sampler configuration
config = SamplerConfig(
    alpha=1000,
    latest_prob_index=1000,
    boundary_limiting=True,
    use_pool=False  # Set to True for multiprocessing
)

# Initialize sampler
ndim = 2
n_walkers = 5
init_cov_list = [np.eye(ndim) * 0.1] * n_walkers

sampler = Sampler(
    ndim=ndim,
    n_seed=n_walkers,
    log_reward_func=log_likelihood,
    init_cov_list=init_cov_list,
    config=config
)

# Prepare initial samples
sampler.prepare_lhs_samples(lhs_num=1000, batch_size=100)

# Run sampling
sampler.run_sampling(num_iterations=500, savepath='./results')

# Get results
samples, weights = sampler.get_samples_with_weights(flatten=True)
```
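With `samples` and `weights` in hand, posterior summaries are just weighted averages. A short post-processing sketch (it assumes the returned `weights` may be unnormalized, so it normalizes first):

```python
import numpy as np

# Normalize the weights returned above
w = weights / np.sum(weights)

# Weighted posterior mean and covariance
mean = np.sum(w[:, None] * samples, axis=0)
diff = samples - mean
cov = (w[:, None] * diff).T @ diff

# Effective sample size: how many i.i.d. draws the weighted set is worth
ess = 1.0 / np.sum(w**2)
print(f"posterior mean = {mean}, ESS = {ess:.0f}")
```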
## Advanced Usage
### Custom Prior Transform
```python
def uniform_to_normal(x):
    """Transform from [0,1]^d to unbounded space"""
    from scipy.stats import norm
    return norm.ppf(x)

sampler = Sampler(
    ndim=ndim,
    n_seed=n_walkers,
    log_reward_func=log_likelihood,
    init_cov_list=init_cov_list,
    prior_transform=uniform_to_normal
)
```
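The same pattern handles any uniform box prior: map the unit cube affinely onto the box. A small illustrative example (the bounds below are made up):

```python
import numpy as np

lo = np.array([-10.0, -10.0])   # illustrative lower bounds
hi = np.array([10.0, 10.0])     # illustrative upper bounds

def uniform_to_box(x):
    """Affine map from [0,1]^d to the box [lo, hi]."""
    return lo + (hi - lo) * x
```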
### Configuration Options
```python
config = SamplerConfig(
    proc_merge_prob=0.9,      # Probability threshold for merging clusters
    alpha=1000,               # Importance sampling parameter
    latest_prob_index=1000,   # Number of recent samples for weighting
    trail_size=1000,          # Maximum trial samples per iteration
    boundary_limiting=True,   # Enable boundary constraint handling
    use_beta=True,            # Use beta correction for boundaries
    integral_num=100000,      # Monte Carlo samples for beta estimation
    gamma=100,                # Covariance update frequency
    use_pool=True,            # Enable multiprocessing
    n_pool=4                  # Number of processes
)
```
## API Reference
### Main Classes
- `Sampler`: Main sampling class
- `SamplerConfig`: Configuration dataclass
### Key Methods
- `prepare_lhs_samples()`: Initialize with Latin Hypercube Sampling
- `run_sampling()`: Execute the sampling process
- `get_samples_with_weights()`: Retrieve samples and importance weights
- `save_state()` / `load_state()`: State persistence (see the sketch below)
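A plausible checkpoint/resume pattern is sketched below; note that the path arguments to `save_state()` / `load_state()` are an assumption on our part, so verify the actual signatures in the source first.

```python
# Checkpoint midway through a long run (path argument assumed, not verified)
sampler.run_sampling(num_iterations=250, savepath='./results')
sampler.save_state('./results/checkpoint.pkl')

# Later: restore the sampler's state and continue where it left off
sampler.load_state('./results/checkpoint.pkl')
sampler.run_sampling(num_iterations=250, savepath='./results')
```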
### Utility Functions
- `find_sigma_level()`: Compute confidence level thresholds (see the sketch after this list)
- `oracle_approximating_shrinkage()`: Covariance regularization
- Various weighting and clustering utilities
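For intuition on the sigma-level utility: the standard computation maps an n-sigma label to the chi-square threshold enclosing the same probability mass in d dimensions. A generic version (ours, not necessarily what `find_sigma_level()` implements):

```python
from scipy.stats import chi2, norm

def sigma_level_threshold(n_sigma, ndim):
    """Squared-Mahalanobis cutoff enclosing the same probability mass
    as +/- n_sigma of a 1D Gaussian, generalized to ndim dimensions."""
    prob = 2.0 * norm.cdf(n_sigma) - 1.0   # e.g. ~0.683 for 1 sigma
    return chi2.ppf(prob, df=ndim)

# 2-sigma contour threshold in 2D: chi2.ppf(0.954, df=2) ~ 6.18
print(sigma_level_threshold(2, 2))
```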
## Requirements
- Python >= 3.8
- NumPy >= 1.20.0
- SciPy >= 1.7.0
- scikit-learn >= 1.0.0
- smt >= 2.0.0
- tqdm >= 4.62.0
## License
MIT License - see LICENSE file for details.
## Contributing
Contributions are welcome! Please feel free to submit pull requests or open issues.
## Citation
If you use this software in your research, please cite:
```bibtex
@software{parismc,
  title={Parallel Adaptive Reweighting Importance Sampling (PARIS)},
  author={Liu, Miaoxin and Chua, Alvin J. K.},
  year={2025},
  url={https://github.com/mx-Liu123/parismc}
}
```