mlrl-boomer

Name: mlrl-boomer
Version: 0.14.0
Summary: A scikit-learn implementation of BOOMER - an algorithm for learning gradient boosted multi-label output rules
Upload time: 2025-08-22 23:11:41
Home page: None
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.12
License: None
Keywords: gradient boosting, machine learning, multi-label classification, multi-output regression, multi-target regression, rule learning, scikit-learn
Requirements: No requirements were recorded.
<p align="center">
  <picture>
    <source media="(prefers-color-scheme: dark)" srcset="https://github.com/mrapp-ke/MLRL-Boomer/raw/main/doc/_static/logo_boomer_dark.svg">
    <source media="(prefers-color-scheme: light)" srcset="https://github.com/mrapp-ke/MLRL-Boomer/raw/main/doc/_static/logo_boomer_light.svg">
    <img alt="BOOMER - Gradient Boosted Multi-Label Classification Rules" src="https://github.com/mrapp-ke/MLRL-Boomer/raw/main/.assets/logo_boomer_light.svg">
  </picture>
</p>

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![PyPI version](https://badge.fury.io/py/mlrl-boomer.svg)](https://badge.fury.io/py/mlrl-boomer) [![Documentation Status](https://readthedocs.org/projects/mlrl-boomer/badge/?version=latest)](https://mlrl-boomer.readthedocs.io/en/latest/?badge=latest)

**🔗 Important links:** [Documentation](https://mlrl-boomer.readthedocs.io/en/latest/) | [Issue Tracker](https://github.com/mrapp-ke/MLRL-Boomer/issues) | [Changelog](https://mlrl-boomer.readthedocs.io/en/latest/misc/CHANGELOG.html) | [License](https://mlrl-boomer.readthedocs.io/en/latest/misc/LICENSE.html)

<!-- documentation-start -->

This software package provides the official implementation of **BOOMER - an algorithm for learning gradient boosted multi-output rules** that uses [gradient boosting](https://en.wikipedia.org/wiki/Gradient_boosting) for learning an ensemble of rules that is built with respect to a specific multivariate loss function. It integrates with the popular [scikit-learn](https://scikit-learn.org) machine learning framework.
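
For example, the learner can be used like any other scikit-learn compatible estimator. The following minimal sketch assumes that the classifier is exposed as `BoomerClassifier` in the `mlrl.boosting` package and that `max_rules` is an accepted keyword argument; please refer to the documentation linked above for the exact API of the installed version.

```python
import numpy as np

from mlrl.boosting import BoomerClassifier  # assumed import path, see the documentation

# A tiny multi-label toy dataset: 6 examples, 3 numerical features, 2 labels
x = np.array([[0.1, 2.0, 3.0],
              [0.4, 1.5, 2.2],
              [2.3, 0.1, 0.5],
              [2.9, 0.3, 0.4],
              [1.0, 1.0, 1.0],
              [0.2, 1.8, 2.9]])
y = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1],
              [1, 1],
              [1, 0]])

clf = BoomerClassifier(max_rules=100)  # 'max_rules' limits the ensemble size (assumed keyword)
clf.fit(x, y)
print(clf.predict(x))  # a binary label matrix of shape (6, 2)
```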

The problem domains addressed by this algorithm include the following:

- **Multi-label classification**: The goal of [multi-label classification](https://en.wikipedia.org/wiki/Multi-label_classification) is the automatic assignment of sets of labels to individual data points, for example, the annotation of text documents with topics.
- **Multi-output regression**: Multivariate [regression](https://en.wikipedia.org/wiki/Regression_analysis) problems require predictions for more than a single numerical output variable.

# The BOOMER Algorithm

To provide a versatile tool for different use cases, great emphasis is put on the *efficiency* of the implementation. Moreover, to ensure its *flexibility*, it is designed in a modular fashion and can therefore easily be adjusted to different requirements. This modular approach enables implementing different kinds of rule learning algorithms (see the packages [mlrl-common](https://pypi.org/project/mlrl-common/) and [mlrl-seco](https://pypi.org/project/mlrl-seco/)).

## 📖 References

The algorithm was first published in the following [paper](https://doi.org/10.1007/978-3-030-67664-3_8). A preprint version is publicly available [here](https://arxiv.org/pdf/2006.13346.pdf).

*Michael Rapp, Eneldo Loza Mencía, Johannes Fürnkranz, Vu-Linh Nguyen and Eyke Hüllermeier. Learning Gradient Boosted Multi-label Classification Rules. In: Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases (ECML-PKDD), 2020, Springer.*

If you use the algorithm in a scientific publication, we would appreciate citations to the mentioned paper.

## 🔧 Functionalities

The algorithm provided by this project currently supports the following core functionalities for learning ensembles of boosted classification or regression rules.

### Deliberate Loss Optimization

- **Decomposable or non-decomposable loss functions** can be optimized in expectation.
- **L1 and L2 regularization** can be used.
- **Shrinkage (a.k.a. the learning rate) can be adjusted** for controlling the impact of individual rules on the overall ensemble (a configuration sketch follows this list).
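
A minimal configuration sketch, assuming the Python estimator accepts keyword arguments that mirror the command line parameters documented for losses, regularization, and shrinkage (the exact names and admissible values may differ between versions):

```python
from mlrl.boosting import BoomerClassifier

# Hypothetical keyword names and values, mirroring the documented command line
# parameters; check the parameter reference for the exact spelling.
clf = BoomerClassifier(
    loss='logistic-non-decomposable',  # a non-decomposable multivariate loss (assumed value)
    l1_regularization_weight=0.0,
    l2_regularization_weight=1.0,
    shrinkage=0.3,                     # a.k.a. the learning rate
)
```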

### Different Prediction Strategies

- **Various strategies for predicting scores, binary labels or probabilities** are available, depending on whether a classification or regression model is used.
- **Isotonic regression models can be used to calibrate marginal and joint probabilities** predicted by a classification model, as illustrated by the sketch after this list.
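
The sketch below shows how predictions and probabilities could be obtained via the scikit-learn API; the calibration-related keyword arguments are assumptions based on the documented parameters and may be named differently in the installed version.

```python
from sklearn.datasets import make_multilabel_classification

from mlrl.boosting import BoomerClassifier

x, y = make_multilabel_classification(n_samples=100, n_features=10, n_classes=3, random_state=0)

# Assumed keyword names for enabling isotonic probability calibration
clf = BoomerClassifier(marginal_probability_calibration='isotonic',
                       joint_probability_calibration='isotonic')
clf.fit(x, y)

print(clf.predict(x)[:3])        # binary labels
print(clf.predict_proba(x)[:3])  # marginal probabilities, calibrated via isotonic regression
```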

### Flexible Handling of Input Data

- **Native support for numerical, ordinal, and nominal features** eliminates the need for pre-processing techniques such as one-hot encoding.
- **Handling of missing feature values**, i.e., occurrences of *NaN* in the feature matrix, is implemented by the algorithm.

### Fine-grained Control over Model Characteristics

- **Rules can be constructed via a greedy search or a beam search.** The latter may help to improve the quality of individual rules.
- **Single-output, partial, or complete heads** can be used by rules, i.e., they can predict for a single output, a subset of the available outputs, or all of them. Predicting for multiple outputs simultaneously makes it possible to model local dependencies between them.
- **Fine-grained control over the specificity/generality of rules** is provided via hyperparameters (see the sketch after this list).
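
For instance, the search strategy and the type of rule heads could be configured as sketched below; the keyword names and values are assumptions derived from the documented parameters and may differ between versions.

```python
from mlrl.boosting import BoomerClassifier

# Hypothetical configuration of the rule induction process; consult the
# parameter overview for the values supported by the installed version.
clf = BoomerClassifier(
    rule_induction='top-down-beam-search',  # beam search instead of a purely greedy search
    head_type='partial-fixed',              # rules predict for a fixed-size subset of the outputs
)
```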

### Support for Post-Optimization and Pruning

- **Incremental reduced error pruning** can be used for removing overly specific conditions from rules and preventing overfitting.
- **Post- and pre-pruning (a.k.a. early stopping)** can be used to determine the optimal number of rules to be included in an ensemble.
- **Sequential post-optimization** may help to improve the predictive performance of a model by reconstructing each rule in the context of the other rules (a configuration sketch follows this list).
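
A configuration sketch that combines these options, again assuming keyword names that mirror the documented command line parameters:

```python
from mlrl.boosting import BoomerClassifier

# Hypothetical keyword names; the documentation describes the available pruning
# and post-optimization options in detail.
clf = BoomerClassifier(
    rule_pruning='irep',                  # incremental reduced error pruning of individual rules
    global_pruning='post-pruning',        # determine the optimal number of rules in the ensemble
    sequential_post_optimization='true',  # revisit each rule in the context of the other rules
)
```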

## ⌚ Runtime and Memory Optimizations

In addition to the features mentioned above, several techniques that may speed up training or reduce the memory footprint are currently implemented.

### Approximation Techniques

- **Unsupervised feature binning** can be used to speed up the evaluation of a rule's potential conditions when dealing with numerical features.
- **Sampling techniques and stratification methods** can be used for learning new rules on a subset of the available training examples, features, or output variables.
- **[Gradient-based label binning (GBLB)](https://arxiv.org/pdf/2106.11690.pdf)** can be used for assigning the labels included in a multi-label classification dataset to a limited number of bins. This may speed up training significantly when minimizing a non-decomposable loss function using rules with partial or complete heads (see the sketch below).
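
These approximations could be enabled as sketched below; the keyword names and values are assumptions that mirror the documented command line parameters and may differ between versions.

```python
from mlrl.boosting import BoomerClassifier

# Hypothetical keyword names and values; exact spellings may differ between versions.
clf = BoomerClassifier(
    feature_binning='equal-width',           # unsupervised binning of numerical feature values
    instance_sampling='with-replacement',    # learn each rule on a sample of the training examples
    feature_sampling='without-replacement',  # consider a random subset of the features per rule
    label_binning='equal-width',             # gradient-based label binning (GBLB)
)
```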

### Sparse Data Structures

- **Sparse feature matrices** can be used for training and prediction. This may speed up training significantly on some datasets.
- **Sparse ground truth matrices** can be used for training. This may reduce the memory footprint in case of large datasets.
- **Sparse prediction matrices** can be used for storing predicted labels. This may reduce the memory footprint in case of large datasets.
- **Sparse matrices for storing gradients and Hessians** can be used if supported by the loss function. This may speed up training significantly on datasets with many output variables. An example of training on sparse data follows this list.
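
A minimal sketch of training on sparse data, assuming that SciPy's CSR format is accepted for both the feature matrix and the ground truth matrix:

```python
import numpy as np
from scipy.sparse import csr_matrix

from mlrl.boosting import BoomerClassifier

# Sparse feature matrix (4 examples, 3 features) and sparse label matrix (2 labels)
x = csr_matrix(np.array([[0.0, 1.0, 0.0],
                         [2.0, 0.0, 0.0],
                         [0.0, 0.0, 3.0],
                         [0.0, 4.0, 0.0]]))
y = csr_matrix(np.array([[1, 0],
                         [0, 1],
                         [0, 1],
                         [1, 0]]))

clf = BoomerClassifier()
clf.fit(x, y)
predictions = clf.predict(x)  # predictions may also be stored sparsely, depending on the configuration
```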

### Parallelization

- **Multi-threading** can be used for parallelizing the evaluation of a rule's potential refinements across several features, updating the gradients and Hessians of individual examples in parallel, or obtaining predictions for several examples in parallel.
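
Multi-threading could be enabled as sketched below; the keyword names are assumptions that mirror the documented command line parameters, and the accepted values may differ between versions.

```python
from mlrl.boosting import BoomerClassifier

# Hypothetical keyword names for enabling multi-threading
clf = BoomerClassifier(
    parallel_rule_refinement='true',   # evaluate candidate conditions across several features in parallel
    parallel_statistic_update='true',  # update gradients and Hessians of individual examples in parallel
    parallel_prediction='true',        # obtain predictions for several examples in parallel
)
```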

<!-- documentation-end -->

## 📚 Documentation

Our documentation provides an extensive [user guide](https://mlrl-boomer.readthedocs.io/en/latest/user_guide/boosting/index.html), as well as [Python](https://mlrl-boomer.readthedocs.io/en/latest/developer_guide/api/python/boosting/mlrl.boosting.html) and [C++](https://mlrl-boomer.readthedocs.io/en/latest/developer_guide/api/cpp/boosting/filelist.html) API references for developers. If you are new to the project, you probably want to read about the following topics:

- Instructions for [installing the software package](https://mlrl-boomer.readthedocs.io/en/latest/quickstart/installation.html) or [building the project from source](https://mlrl-boomer.readthedocs.io/en/latest/developer_guide/compilation.html).
- Examples of how to [use the algorithm](https://mlrl-boomer.readthedocs.io/en/latest/quickstart/usage.html) in your own Python code or how to use the [command line API](https://mlrl-boomer.readthedocs.io/en/latest/quickstart/testbed.html).
- An overview of available [parameters](https://mlrl-boomer.readthedocs.io/en/latest/user_guide/boosting/parameters.html).

A collection of benchmark datasets that are compatible with the algorithm is provided in a separate [repository](https://github.com/mrapp-ke/Boomer-Datasets).

For an overview of changes and new features that have been included in past releases, please refer to the [changelog](https://mlrl-boomer.readthedocs.io/en/latest/misc/CHANGELOG.html).

## 📜 License

This project is open source software licensed under the terms of the [MIT license](https://mlrl-boomer.readthedocs.io/en/latest/misc/LICENSE.html). We welcome contributions to the project to enhance its functionality and make it more accessible to a broader audience. A frequently updated list of contributors is available [here](https://mlrl-boomer.readthedocs.io/en/latest/misc/CONTRIBUTORS.html).

All contributions to the project and discussions on the [issue tracker](https://github.com/mrapp-ke/MLRL-Boomer/issues) are expected to follow the [code of conduct](https://mlrl-boomer.readthedocs.io/en/latest/misc/CODE_OF_CONDUCT.html).

            
