lumin


- Name: lumin
- Version: 0.9.1
- Home page: https://mode-collaboration.github.io/
- Summary: LUMIN Unifies Many Improvements for Networks: A PyTorch wrapper to make deep learning more accessible to scientists.
- Upload time: 2024-10-29 08:05:07
- Author: Giles Strong
- Requires Python: <4.0,>=3.10
- License: Apache Software License 2.0
- Keywords: deep learning, differential programming, physics, science, statistics
[![pypi lumin version](https://img.shields.io/pypi/v/lumin.svg)](https://pypi.python.org/pypi/lumin)
[![lumin python compatibility](https://img.shields.io/pypi/pyversions/lumin.svg)](https://pypi.python.org/pypi/lumin) [![lumin license](https://img.shields.io/pypi/l/lumin.svg)](https://pypi.python.org/pypi/lumin) [![Documentation Status](https://readthedocs.org/projects/lumin/badge/?version=stable)](https://lumin.readthedocs.io/en/stable/?badge=stable) [![DOI](https://zenodo.org/badge/163840693.svg)](https://zenodo.org/badge/latestdoi/163840693)

# <img src="./docs/source/_static/img/Lumin-logo-tall-text-sans-large.png" height="256"/>

# LUMIN: Lumin Unifies Many Improvements for Networks

LUMIN is a deep-learning and data-analysis ecosystem for High-Energy Physics. Similar to [Keras](https://keras.io/) and [fastai](https://github.com/fastai/fastai), it is a wrapper framework for a graph computation library (PyTorch), but it includes many useful functions to handle domain-specific requirements and problems. It also intends to provide easy access to state-of-the-art methods, while remaining flexible enough for users to inherit from base classes and override methods to meet their own demands.

Online documentation may be found at https://lumin.readthedocs.io/en/stable

For an introduction and motivation for LUMIN, check out this talk from IML-2019 at CERN: [video](https://cds.cern.ch/record/2672119), [slides](https://indico.cern.ch/event/766872/timetable/?view=standard#29-lumin-a-deep-learning-and-d).
And for a live tutorial, check out my talk at PyHEP 2021: https://www.youtube.com/watch?v=keDWQKHCa2o (tutorial repo here: https://github.com/GilesStrong/talk_pyhep21_lumin)

## Distinguishing Characteristics

### Data objects

- Use with large datasets: HEP data can become quite large, making training difficult:
    - The `FoldYielder` class provides on-demand access to data stored in HDF5 format, only loading into memory what is required (a minimal sketch of this access pattern follows the list).
    - Conversion from ROOT and CSV to HDF5 is easy to achieve using the provided conversion functions (see examples)
    - `FoldYielder` provides conversion methods to Pandas `DataFrame` for use with other internal methods and external packages
- Non-network-specific methods expect Pandas `DataFrame`, allowing their use without having to convert to `FoldYielder`.
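
The on-demand access pattern can be illustrated with plain `h5py` and Pandas. This is only a minimal sketch of the idea, not `FoldYielder`'s actual API: the per-fold group layout and the helper name `get_fold_df` are assumptions made for the example.

```python
import h5py
import pandas as pd


def get_fold_df(fold_file: str, fold_idx: int) -> pd.DataFrame:
    """Read a single fold from an HDF5 fold file into a DataFrame.

    Only the requested fold is pulled from disk, so memory usage stays
    proportional to one fold rather than the whole dataset.
    """
    with h5py.File(fold_file, "r") as h5:
        grp = h5[f"fold_{fold_idx}"]  # hypothetical layout: one HDF5 group per fold
        data = {name: grp[name][:] for name in grp.keys()}  # one dataset per column
    return pd.DataFrame(data)


# df = get_fold_df("data/train.hdf5", fold_idx=0)
```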

### Deep learning

- PyTorch > 1.0
- Inclusion of recent deep learning techniques and practices, including:
    - Dynamic learning rate, momentum, and beta_1 schedules (a scheduler sketch follows this list):
        - Cyclical, [Smith, 2015](https://arxiv.org/abs/1506.01186)
        - Cosine annealed, [Loshchilov & Hutter, 2016](https://arxiv.org/abs/1608.03983)
        - 1-cycle, [Smith, 2018](https://arxiv.org/abs/1803.09820)
    - HEP-specific data augmentation during training and inference
    - Advanced ensembling methods:
        - Snapshot ensembles [Huang et al., 2017](https://arxiv.org/abs/1704.00109)
        - Fast geometric ensembles [Garipov et al., 2018](https://arxiv.org/abs/1802.10026)
        - Stochastic Weight Averaging [Izmailov et al., 2018](https://arxiv.org/abs/1803.05407)
    - Learning Rate Finders, [Smith, 2015](https://arxiv.org/abs/1506.01186)
    - Entity embedding of categorical features, [Guo & Berkhahn, 2016](https://arxiv.org/abs/1604.06737)
    - Label smoothing [Szegedy et al., 2015](https://arxiv.org/abs/1512.00567)
    - Running batchnorm [fastai 2019](https://course19.fast.ai/videos/?lesson=10)
- Flexible architecture construction:
    - `ModelBuilder` takes parameters and modules to yield networks on-demand
    - Networks constructed from modular blocks:
        - Head - Takes input features
        - Body - Contains most of the hidden layers
        - Tail - Scales down the body to the desired number of outputs
        - Endcap - Optional layer for use post-training to provide further computation on model outputs; useful when training on a proxy objective
    - Easy loading and saving of pre-trained embedding weights
    - Modern architectures like:
        - Residual and dense(-like) networks ([He et al. 2015](https://arxiv.org/abs/1512.03385) & [Huang et al. 2016](https://arxiv.org/abs/1608.06993))
        - Graph nets for physics objects, e.g. [Battaglia, Pascanu, Lai, Rezende, Kavukcuoglu, 2016](https://arxiv.org/abs/1612.00222), [Moreno et al., 2019](https://arxiv.org/abs/1908.05318), and [Qasim, Kieseler, Iiyama, & Pierini, 2019](https://link.springer.com/article/10.1140/epjc/s10052-019-7113-9), with optional self-attention [Vaswani et al., 2017](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf).
        - Recurrent layers for series of objects
        - 1D convolutional networks for series of objects
        - Squeeze-excitation blocks [Hu, Shen, Albanie, Sun, & Wu, 2017](https://arxiv.org/abs/1709.01507)
        - HEP-specific architectures, e.g. LorentzBoostNetworks [Erdmann, Geiser, Rath, Rieger, 2018](https://arxiv.org/abs/1812.09722)
- Configurable initialisations, including LSUV [Mishkin, Matas, 2016](https://arxiv.org/abs/1511.06422)
- HEP-specific losses, e.g. Asimov loss [Elwood & Krücker, 2018](https://arxiv.org/abs/1806.00322)
- Exotic training schemes, e.g. Learning to Pivot with Adversarial Networks [Louppe, Kagan, & Cranmer, 2016](https://papers.nips.cc/paper/2017/hash/48ab2f9b45957ab574cf005eb8a76760-Abstract.html)
- Easy training and inference of ensembles of models:
    - Default training method `fold_train_ensemble` trains a specified number of models, or just a single model
    - `Ensemble` class handles the (metric-weighted) construction of an ensemble, its inference, saving and loading, and interpretation
- Easy exporting of models to other libraries via ONNX
- Use with CPU and NVIDIA GPU
- Evaluation on domain-specific metrics such as Approximate Median Significance via `EvalMetric` class
- fastai-style callbacks and stateful model-fitting, allowing training, models, losses, and data to be accessible and adjustable at any point
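
As an illustration of the dynamic-schedule idea referenced above, the sketch below uses plain PyTorch's `OneCycleLR`, which anneals both learning rate and momentum over a single cycle. It is a generic example with a toy model and random data, not LUMIN's callback-based implementation.

```python
import torch
from torch import nn

# Toy model, optimiser, and data purely for illustration
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

n_epochs, batches_per_epoch = 5, 100
sched = torch.optim.lr_scheduler.OneCycleLR(
    opt, max_lr=1e-2, total_steps=n_epochs * batches_per_epoch
)

for _ in range(n_epochs):
    for _ in range(batches_per_epoch):
        x = torch.randn(64, 10)
        y = torch.randint(0, 2, (64, 1)).float()
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        sched.step()  # the 1-cycle schedule is advanced once per batch
```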

### Feature selection methods

- Dendrograms of feature-pair monotonicity
- Feature importance via auto-optimised scikit-learn random forests (a minimal sketch follows this list)
- Mutual dependence (via RFPImp)
- Automatic filtering and selection of features
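
A minimal sketch of the random-forest importance idea using plain scikit-learn rather than LUMIN's wrappers; the toy data and the choice of permutation importance on a hold-out set are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data: 5 features, of which only the first two carry signal
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_trn, X_val, y_trn, y_val = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_trn, y_trn)

# Permutation importance on held-out data is less biased than impurity-based importance
imp = permutation_importance(rf, X_val, y_val, n_repeats=10, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"feature {i}: {imp.importances_mean[i]:.3f} +/- {imp.importances_std[i]:.3f}")
```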

### Interpretation

- Feature importance for models and ensembles
- Embedding visualisation
- 1D & 2D partial dependence plots (via PDPbox; a generic sketch of the idea follows below)
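
LUMIN's partial dependence plots are produced via PDPbox; purely to illustrate the concept, the sketch below uses scikit-learn's `PartialDependenceDisplay` on a toy classifier instead.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import PartialDependenceDisplay

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# 1D partial dependence for features 0 and 1, plus a 2D interaction plot
PartialDependenceDisplay.from_estimator(clf, X, features=[0, 1, (0, 1)])
plt.show()
```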

### Plotting

- Variety of domain-specific plotting functions
- Unified appearance via the `PlotSettings` class - a class accepted by every plot function, providing control of plot appearance, titles, colour schemes, et cetera

### Universal handling of sample weights

- HEP events are normally accompanied by a weight characterising the acceptance and production cross-section of that particular event, or used to flatten some distribution.
- Relevant methods and classes can take account of these weights.
- This includes training, interpretation, and plotting.
- Expansion of PyTorch losses to better handle weights (a minimal sketch follows this list)
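
A minimal sketch of per-event loss weighting in plain PyTorch (not LUMIN's exact loss classes); the helper `weighted_bce` and the toy tensors are assumptions made for the example.

```python
import torch
from torch import nn

def weighted_bce(preds: torch.Tensor, targets: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy in which each event contributes according to its weight."""
    per_event = nn.functional.binary_cross_entropy_with_logits(preds, targets, reduction="none")
    return (weights * per_event).sum() / weights.sum()  # weighted mean over events

preds = torch.randn(8, 1)                      # model outputs (logits)
targets = torch.randint(0, 2, (8, 1)).float()  # true labels
weights = torch.rand(8, 1)                     # per-event weights from the sample
loss = weighted_bce(preds, targets, weights)
```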

### Parameter optimisation

- Optimal learning rate via cross-validated range tests [Smith, 2015](https://arxiv.org/abs/1506.01186)
- Quick, rough optimisation of random forest hyper-parameters
- Generalisable Cut & Count thresholds (a threshold-scan sketch follows this list)
- 1D discriminant binning with respect to bin-fill uncertainty
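
As a generic illustration of a Cut & Count threshold scan (not LUMIN's optimisation API), the sketch below scans classifier-score thresholds and keeps the one maximising the regularised Approximate Median Significance; the regularisation term `b_reg` and the helper names are assumptions made for the example.

```python
import numpy as np

def ams(s: float, b: float, b_reg: float = 10.0) -> float:
    """Regularised Approximate Median Significance for weighted signal s and background b."""
    return float(np.sqrt(2 * ((s + b + b_reg) * np.log(1 + s / (b + b_reg)) - s)))

def scan_cut(scores: np.ndarray, targets: np.ndarray, weights: np.ndarray, n_points: int = 101):
    """Scan thresholds on the classifier score and return the cut maximising the AMS."""
    best_cut, best_ams = 0.0, -np.inf
    for cut in np.linspace(0.0, 1.0, n_points):
        passed = scores >= cut
        s = weights[passed & (targets == 1)].sum()  # weighted signal passing the cut
        b = weights[passed & (targets == 0)].sum()  # weighted background passing the cut
        if (val := ams(s, b)) > best_ams:
            best_cut, best_ams = cut, val
    return best_cut, best_ams
```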

### Statistics and uncertainties

- Integral to experimental science
- Quantitative results are accompanied by uncertainties
- Use of bootstrapping to improve the precision of statistics estimated from small samples (a minimal sketch follows this list)
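
A minimal NumPy sketch of the bootstrap idea (not LUMIN's statistics utilities): resample the data with replacement and use the spread of the resampled statistic as its uncertainty.

```python
import numpy as np

def bootstrap_mean(x: np.ndarray, n_boot: int = 1000, seed: int = 0) -> tuple[float, float]:
    """Return the sample mean and its bootstrap uncertainty."""
    rng = np.random.default_rng(seed)
    boot_means = np.array([rng.choice(x, size=len(x), replace=True).mean() for _ in range(n_boot)])
    return float(x.mean()), float(boot_means.std())

sample = np.random.default_rng(1).normal(loc=1.0, scale=0.5, size=20)  # small toy sample
mu, err = bootstrap_mean(sample)
print(f"mean = {mu:.3f} +/- {err:.3f}")
```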

### Look and feel

- LUMIN aims to feel fast to use - liberal use of progress bars means you always know when tasks will finish, and you get live updates during training
- Guaranteed to spark joy (in its current beta state, LUMIN may instead ignite rage, despair, and frustration - *dev.*)

## Examples

Several examples are present in the form of Jupyter Notebooks in the `examples` folder. These can also be run on Google Colab, allowing you to quickly try out the package.

1. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/GilesStrong/lumin/blob/v0.9.1/examples/Simple_Binary_Classification_of_earnings.ipynb) `examples/Simple_Binary_Classification_of_earnings.ipynb`: Very basic binary-classification example
1. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/GilesStrong/lumin/blob/v0.9.1/examples/Binary_Classification_Signal_versus_Background.ipynb) `examples/Binary_Classification_Signal_versus_Background.ipynb`: Binary-classification example in a high-energy physics context
1. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/GilesStrong/lumin/blob/v0.9.1/examples/Multiclass_Classification_Signal_versus_Backgrounds.ipynb) `examples/Multiclass_Classification_Signal_versus_Backgrounds.ipynb`: Multiclass-classification example in a high-energy physics context
1. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/GilesStrong/lumin/blob/v0.9.1/examples/Single_Target_Regression_Di-Higgs_mass_prediction.ipynb) `examples/Single_Target_Regression_Di-Higgs_mass_prediction.ipynb`: Single-target regression example in a high-energy physics context
1. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/GilesStrong/lumin/blob/v0.9.1/examples/Multi_Target_Regression_Di-tau_momenta.ipynb) `examples/Multi_Target_Regression_Di-tau_momenta.ipynb`: Multi-target regression example in a high-energy physics context
1. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/GilesStrong/lumin/blob/v0.9.1/examples/Feature_Selection.ipynb) `examples/Feature_Selection.ipynb`: In-depth walkthrough for automated feature-selection
1. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/GilesStrong/lumin/blob/v0.9.1/examples/Advanced_Model_Building.ipynb) `examples/Advanced_Model_Building.ipynb`: In-depth look at building more complicated models and a few advanced interpretation techniques
1. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/GilesStrong/lumin/blob/v0.9.1/examples/Model_Exporting.ipynb) `examples/Model_Exporting.ipynb`: Walkthrough for exporting a trained model to ONNX and TensorFlow
1. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/GilesStrong/lumin/blob/v0.9.1/examples/RNNs_CNNs_and_GNNs_for_matrix_data.ipynb) `examples/RNNs_CNNs_and_GNNs_for_matrix_data.ipynb`: Various examples of applying RNNs, CNNs, and GNNs to matrix data (top-tagging on jet constituents)
1. [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/GilesStrong/lumin/blob/v0.9.1/examples/Learning_To_Pivot.ipynb) `examples/Learning_To_Pivot.ipynb`: Example of adversarial training for parameter invariance

## Installation

### From PyPI

The main package can be installed via:
`pip install lumin`

Full functionality requires additional packages, as described below.

### For development

Check out the repo locally:

```bash
git clone git@github.com:GilesStrong/lumin.git
cd lumin
```

For development usage, we use [`poetry`](https://python-poetry.org/docs/#installing-with-the-official-installer) to handle dependency installation.
Poetry can be installed via, e.g.

```bash
curl -sSL https://install.python-poetry.org | python3 -
poetry self update
```

Afterwards, ensure that `poetry` is available in your `$PATH`.

Lumin requires `python >= 3.10`. This can be installed via e.g. [`pyenv`](https://github.com/pyenv/pyenv):

```bash
curl https://pyenv.run | bash
pyenv update
pyenv install 3.10
pyenv local 3.10
```

Install the dependencies:

```bash
poetry install
poetry self add poetry-plugin-export
poetry config warnings.export false
poetry run pre-commit install
```

### Optional requirements

- sparse: enables loading of COO sparse-format tensors; install via e.g. `pip install sparse`
- pdpbox: enables partial dependence plots; install via e.g. `pip install pdpbox`
  - **Note**: `pdpbox` includes docs dependencies in its build environment, which can result in conflicts. A fork of `pdpbox` which removes these dependencies can be installed from <https://github.com/GilesStrong/PDPbox>

## Notes

### Why use LUMIN

TMVA, contained in CERN's ROOT system, has been the default choice for BDT training for analysis and reconstruction algorithms, due to never having to leave the ROOT format. With the gradual move to DNN approaches, more scientists are looking to move their data out of ROOT to use the wider selection of tools which are available. Keras appears to be the first stop due to its ease of use; however, implementing recent methods in Keras can be difficult, and sometimes requires dropping back to the tensor library that it aims to abstract. Indeed, the prequel to LUMIN was a similar wrapper for Keras ([HEPML_Tools](https://github.com/GilesStrong/hepml_tools)) which involved some pretty ugly hacks.
The fastai framework provides access to these recent methods; however, it doesn't yet support sample weights to the extent that HEP requires.
LUMIN aims to provide the best of both: Keras-style sample weighting and fastai training methods, while focussing on columnar data and providing domain-specific metrics, plotting, and statistical treatment of results and uncertainties.

### Data types

LUMIN is primarily designed for use on columnar data, and from version 0.5 onwards this also includes *matrix data*: ordered series and un-ordered groups of objects. With some extra work it can be used on other data formats, but at the moment it has nothing special to offer them. Whilst recent work in HEP has made use of jet images and GANs, these normally hijack existing ideas and models. Perhaps, once domain-specific approaches become established which necessitate the use of a specialised framework, LUMIN could grow to meet those demands, but for now I'd recommend checking out the fastai library, especially for image data.

With just one main developer, I'm simply focussing on the data types and applications I need for my own research and common use cases in HEP. If, however, you would like to use LUMIN's other methods for your own work on other data formats, then you are most welcome to contribute and help to grow LUMIN to better meet the needs of the scientific community.

### Future

The current priority is to improve the documentation, add unit tests, and expand the examples.

The next step will be to try to increase the user base and number of contributors. I'm aiming to achieve this through presentations, tutorials, blog posts, and papers.

Further improvements will be in the direction of implementing new methods and (HEP-specific) architectures, as well as providing helper functions and data exporters to statistical analysis packages like Combine and PYHF.

### Contributing & feedback

Contributions, suggestions, and feedback are most welcome! The issue tracker on this repo is probably the best place to report bugs et cetera.

### Code style

Nope, the majority of the code-base does not conform to PEP8. PEP8 has its uses, but my understanding is that it is primarily written for developers and maintainers of software whose users never need to read the source code. As a maths-heavy research framework with which users are expected to interact, PEP8 isn't the best style. Instead, I'm aiming to follow more [the style of fastai](https://docs.fast.ai/dev/style.html), which emphasises, in particular, reducing vertical space (useful for reading source code in a notebook) and naming and abbreviating variables according to their importance and lifetime (making it easier to recognise which variables have a larger scope and permitting easier writing of mathematical operations). A full list of the abbreviations used may be found in [abbr.md](https://github.com/GilesStrong/lumin/blob/master/abbr.md).

### Why is LUMIN called LUMIN?

Aside from being a recursive acronym (and therefore the best kind of acronym), lumin is short for 'luminosity'. In high-energy physics, the integrated luminosity of the data collected by an experiment is the main driver of the results that analyses obtain. With the paradigm shift towards multivariate analyses, however, improved methods can be seen as providing 'artificial luminosity'; e.g. the gain offered by some DNN could be measured in terms of the amount of extra data that would have to be collected to achieve the same result with a more traditional analysis. Luminosity can also be connected to the fact that LUMIN is built atop PyTorch.

### Who develops LUMIN?

LUMIN is primarily developed by Giles Strong: a British-born doctor in particle physics, a researcher at INFN-Padova (Italy), a member of the CMS collaboration at CERN, and a founding member of the MODE Collaboration (differentiable optimisation for detector design).

As LUMIN has grown, it has welcomed contributions from members of the scientific and software development community. Check out the [contributors page](https://github.com/GilesStrong/lumin/graphs/contributors) for a complete list.

Certainly more developers and contributors are welcome to join and help out!

### Reference

If you have used LUMIN in your analysis work and wish to cite it, the preferred reference is: *Giles C. Strong, LUMIN, Zenodo (Mar. 2019), https://doi.org/10.5281/zenodo.2601857, Note: Please check https://github.com/GilesStrong/lumin/graphs/contributors for the full list of contributors*

```bibtex
@misc{giles_chatham_strong_2019_2601857,  
  author       = {Giles Chatham Strong},  
  title        = {LUMIN},  
  month        = mar,  
  year         = 2019,  
  note         = {{Please check https://github.com/GilesStrong/lumin/graphs/contributors for the full list of contributors}},  
  doi          = {10.5281/zenodo.2601857},  
  url          = {https://doi.org/10.5281/zenodo.2601857}  
}
```

            
