Name | nuovoLIRA JSON |
Version |
0.6.0
JSON |
| download |
home_page | |
Summary | A Bayesian procedure to delineate the boundary of an extended astronomical object |
upload_time | 2023-06-16 12:14:31 |
maintainer | |
docs_url | None |
author | Brendan Martin |
requires_python | |
license | MIT |
keywords |
|
VCS |
|
bugtrack_url |
|
requirements |
No requirements were recorded.
|
Travis-CI |
No Travis.
|
coveralls test coverage |
No coveralls.
|
nuovoLIRA
==============================
# What is it?
nuovoLIRA implements the Bayesian boundary-delineation model described at [https://nuovolira.tiiny.site/](https://nuovolira.tiiny.site/).
# Installation
```
pip install --upgrade pip
pip install nuovoLIRA
```
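After installing, a quick way to confirm the package is importable is to load the sampler class used later in this README (the module path below is the one shown in the Example Usage section):
```
# Minimal import check; Sample_Z is the sampler class used in the Example Usage section
from nuovoLIRA.models.deconvolver import Sample_Z
print("nuovoLIRA import OK:", Sample_Z.__name__)
```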
# Main Features
- Algorithms that sample from the conditional distributions of the nuovoLIRA model
# Source Code
The source code is currently hosted on GitHub at [https://github.com/bmartin9/nuovolira-pypi](https://github.com/bmartin9/nuovolira-pypi).
# Dependencies
The external packages needed to use nuovoLIRA are listed in the requirements.txt file at [https://github.com/bmartin9/nuovolira-pypi](https://github.com/bmartin9/nuovolira-pypi).
To create a Python virtual environment and install these requirements, download requirements.txt from [https://github.com/bmartin9/nuovolira-pypi](https://github.com/bmartin9/nuovolira-pypi) and run, for example:
```
python -m venv /path/to/myenv
source /path/to/myenv/bin/activate   # on Windows: \path\to\myenv\Scripts\activate
pip install -r /path/to/requirements.txt
```
# Example Usage
To sample from the conditional distribution of $Z$ (equation (33) in [https://nuovolira.tiiny.site/](https://nuovolira.tiiny.site/)) using the Swendsen-Wang algorithm, run:
```
from nuovoLIRA.models.deconvolver import *
import numpy as np
from numpy.random import default_rng

SEED = 123  # any integer seed for reproducibility
random_state = default_rng(seed=SEED)

# A 10x10 toy example: a random binary field and integer count data
Z_init = np.random.choice([0, 1], size=(10, 10), p=[1./3, 2./3])
data = np.random.randint(0, 40, size=(10, 10))

Z_sampler = Sample_Z(random_state=random_state,
                     initial_Z=Z_init,
                     beta=2,
                     lam_b=1,
                     lam_e=20,
                     y=data)

Z_new = Z_sampler.Z_update(Z_init)
```
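A single `Z_update` call returns one draw from the conditional distribution. To build a chain of samples you would typically iterate it. The sketch below continues from the example above and assumes only that `Z_update` accepts the current `Z` array and returns an updated array of the same shape; the chain length `n_samples` is an arbitrary choice for illustration.
```
import numpy as np

n_samples = 500                      # arbitrary chain length for illustration
Z_current = Z_init
Z_chain = []

for _ in range(n_samples):
    # Each call produces a new binary field Z given the data and parameters above
    Z_current = Z_sampler.Z_update(Z_current)
    Z_chain.append(Z_current)

# Pixel-wise average of Z over the chain (discard burn-in iterations if desired)
Z_mean = np.mean(np.asarray(Z_chain), axis=0)
print(Z_mean.shape)                  # (10, 10) for the toy example above
```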
# Contact
For issues or discussions, please email b.martin22@imperial.ac.uk.
Project Organization
------------

    ├── LICENSE
    ├── Makefile           <- Makefile with commands like `make data` or `make train`
    ├── README.md          <- The top-level README for developers using this project.
    ├── data
    │   ├── external       <- Data from third party sources.
    │   ├── interim        <- Intermediate data that has been transformed.
    │   ├── processed      <- The final, canonical data sets for modeling.
    │   └── raw            <- The original, immutable data dump.
    │
    ├── docs               <- A default Sphinx project; see sphinx-doc.org for details
    │
    ├── models             <- Trained and serialized models, model predictions, or model summaries
    │
    ├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
    │                         the creator's initials, and a short `-` delimited description, e.g.
    │                         `1.0-jqp-initial-data-exploration`.
    │
    ├── references         <- Data dictionaries, manuals, and all other explanatory materials.
    │
    ├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
    │   └── figures        <- Generated graphics and figures to be used in reporting
    │
    ├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
    │                         generated with `pip freeze > requirements.txt`
    │
    ├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
    ├── src                <- Source code for use in this project.
    │   ├── __init__.py    <- Makes src a Python module
    │   │
    │   ├── data           <- Scripts to download or generate data
    │   │   └── make_dataset.py
    │   │
    │   ├── features       <- Scripts to turn raw data into features for modeling
    │   │   └── build_features.py
    │   │
    │   ├── models         <- Scripts to train models and then use trained models to make
    │   │   │                 predictions
    │   │   ├── predict_model.py
    │   │   └── train_model.py
    │   │
    │   └── visualization  <- Scripts to create exploratory and results oriented visualizations
    │       └── visualize.py
    │
    └── tox.ini            <- tox file with settings for running tox; see tox.readthedocs.io
--------
<p><small>Project based on the <a target="_blank" href="https://drivendata.github.io/cookiecutter-data-science/">cookiecutter data science project template</a>. #cookiecutterdatascience</small></p>
Raw data

    {
        "_id": null,
        "home_page": "",
        "name": "nuovoLIRA",
        "maintainer": "",
        "docs_url": null,
        "requires_python": "",
        "maintainer_email": "",
        "keywords": "",
        "author": "Brendan Martin",
        "author_email": "",
        "download_url": "https://files.pythonhosted.org/packages/1f/b2/3278fafd5aa72818c3b4b91676b22e99a2fbd62305e723f149e9521ff8f4/nuovoLIRA-0.6.0.tar.gz",
        "platform": null,
"description": "nuovoLIRA\n==============================\n\n# What is it? \nA method to implement the Bayesian model described here: [https://nuovolira.tiiny.site/](https://nuovolira.tiiny.site/).\n\n# Installation \n```\npip install --upgrade pip \npip install nuovoLIRA \n``` \n\n# Main Features \n- Algorithms that sample from the conditional distributions of the NuovoLIRA model \n\n# Source Code\nThe source code is currently hosted on GitHub at [https://github.com/bmartin9/nuovolira-pypi](https://github.com/bmartin9/nuovolira-pypi).\n\n# Dependencies \nThe external packages needed to use nuovoLIRA are listed in the requirements.txt file at [https://github.com/bmartin9/nuovolira-pypi](https://github.com/bmartin9/nuovolira-pypi). \n\nTo create a python virtual environment and install these requirements, download requirements.txt from [https://github.com/bmartin9/nuovolira-pypi](https://github.com/bmartin9/nuovolira-pypi) and do for example \n\n``` \npython -m venv \\path\\to\\myenv\npip install -r /path/to/requirements.txt\n```\n\n# Example Usage \nTo sample from the conditional distribution of $Z$ (equation (33) in [https://nuovolira.tiiny.site/](https://nuovolira.tiiny.site/)) using the Swendsen Wang algorithm do\n\n```\nfrom nuovoLIRA.models.deconvolver import * \nfrom numpy.random import default_rng\n\nrandom_state = default_rng(seed=SEED) \nZ_init = np.random.choice([0, 1], size=(10,10), p=[1./3, 2./3])\ndata = np.random.randint(0,40,size=(10,10))\n\nZ_sampler = Sample_Z(random_state=random_state,\n initial_Z = Z_init,\n beta = 2,\n lam_b = 1,\n lam_e = 20,\n y = data\n)\n\nZ_new = Z_sampler.Z_update(Z_init) \n```\n\n# Contact \nFor issues/discussions please email b.martin22@imperial.ac.uk\n\nProject Organization\n------------\n\n \u251c\u2500\u2500 LICENSE\n \u251c\u2500\u2500 Makefile <- Makefile with commands like `make data` or `make train`\n \u251c\u2500\u2500 README.md <- The top-level README for developers using this project.\n \u251c\u2500\u2500 data\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 external <- Data from third party sources.\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 interim <- Intermediate data that has been transformed.\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 processed <- The final, canonical data sets for modeling.\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 raw <- The original, immutable data dump.\n \u2502\n \u251c\u2500\u2500 docs <- A default Sphinx project; see sphinx-doc.org for details\n \u2502\n \u251c\u2500\u2500 models <- Trained and serialized models, model predictions, or model summaries\n \u2502\n \u251c\u2500\u2500 notebooks <- Jupyter notebooks. Naming convention is a number (for ordering),\n \u2502 the creator's initials, and a short `-` delimited description, e.g.\n \u2502 `1.0-jqp-initial-data-exploration`.\n \u2502\n \u251c\u2500\u2500 references <- Data dictionaries, manuals, and all other explanatory materials.\n \u2502\n \u251c\u2500\u2500 reports <- Generated analysis as HTML, PDF, LaTeX, etc.\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 figures <- Generated graphics and figures to be used in reporting\n \u2502\n \u251c\u2500\u2500 requirements.txt <- The requirements file for reproducing the analysis environment, e.g.\n \u2502 generated with `pip freeze > requirements.txt`\n \u2502\n \u251c\u2500\u2500 setup.py <- makes project pip installable (pip install -e .) 
so src can be imported\n \u251c\u2500\u2500 src <- Source code for use in this project.\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 __init__.py <- Makes src a Python module\n \u2502 \u2502\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 data <- Scripts to download or generate data\n \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 make_dataset.py\n \u2502 \u2502\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 features <- Scripts to turn raw data into features for modeling\n \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 build_features.py\n \u2502 \u2502\n \u2502\u00a0\u00a0 \u251c\u2500\u2500 models <- Scripts to train models and then use trained models to make\n \u2502 \u2502 \u2502 predictions\n \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u251c\u2500\u2500 predict_model.py\n \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 train_model.py\n \u2502 \u2502\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 visualization <- Scripts to create exploratory and results oriented visualizations\n \u2502\u00a0\u00a0 \u2514\u2500\u2500 visualize.py\n \u2502\n \u2514\u2500\u2500 tox.ini <- tox file with settings for running tox; see tox.readthedocs.io\n\n\n--------\n\n<p><small>Project based on the <a target=\"_blank\" href=\"https://drivendata.github.io/cookiecutter-data-science/\">cookiecutter data science project template</a>. #cookiecutterdatascience</small></p>\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "A Bayesian procedure to delineate the boundary of an extended astronomical object",
"version": "0.6.0",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "1fb23278fafd5aa72818c3b4b91676b22e99a2fbd62305e723f149e9521ff8f4",
"md5": "30e2d3d039ba57efccc53eb7a522507f",
"sha256": "58749ce8247d029bdd7dd4f2407777c4529f118996589cdea02a1353fcb28255"
},
"downloads": -1,
"filename": "nuovoLIRA-0.6.0.tar.gz",
"has_sig": false,
"md5_digest": "30e2d3d039ba57efccc53eb7a522507f",
"packagetype": "sdist",
"python_version": "source",
"requires_python": null,
"size": 13639,
"upload_time": "2023-06-16T12:14:31",
"upload_time_iso_8601": "2023-06-16T12:14:31.539388Z",
"url": "https://files.pythonhosted.org/packages/1f/b2/3278fafd5aa72818c3b4b91676b22e99a2fbd62305e723f149e9521ff8f4/nuovoLIRA-0.6.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-06-16 12:14:31",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "nuovolira"
}