**arm-preprocessing** — PyPI package metadata

- **Name:** arm-preprocessing
- **Version:** 0.2.4
- **Home page:** <https://github.com/firefly-cpp/arm-preprocessing>
- **Summary:** Implementation of several preprocessing techniques for Association Rule Mining (ARM)
- **Upload time:** 2024-10-09 13:58:52
- **Author:** Tadej Lahovnik
- **Requires Python:** <4.00,>=3.9
- **Keywords:** association rule mining, data science, preprocessing
            <p align="center">
  <img alt="logo" width="300" src=".github/images/logo_black.png">
</p>

<h1 align="center">
  arm-preprocessing
</h1>

<p align="center">
  <img alt="PyPI Version" src="https://img.shields.io/pypi/v/arm-preprocessing.svg">
  <img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/arm-preprocessing.svg">
  <img alt="PyPI - Downloads" src="https://img.shields.io/pypi/dm/arm-preprocessing.svg" href="https://pepy.tech/project/arm-preprocessing">
  <a href="https://repology.org/project/python:arm-preprocessing/versions">
    <img alt="Packaging status" src="https://repology.org/badge/tiny-repos/python:arm-preprocessing.svg">
  </a>
  <a href="https://pepy.tech/project/arm-preprocessing">
    <img alt="Downloads" src="https://static.pepy.tech/badge/arm-preprocessing">
  </a>
  <img alt="License" src="https://img.shields.io/github/license/firefly-cpp/arm-preprocessing.svg">
  <a href="https://github.com/firefly-cpp/arm-preprocessing/actions/workflows/test.yml">
    <img alt="arm-preprocessing" src="https://github.com/firefly-cpp/arm-preprocessing/actions/workflows/test.yml/badge.svg">
  </a>
  <a href="https://arm-preprocessing.readthedocs.io/en/latest/?badge=latest">
    <img alt="Documentation Status" src="https://readthedocs.org/projects/arm-preprocessing/badge/?version=latest">
  </a>
</p>

<p align="center">
  <img alt="Repository size" src="https://img.shields.io/github/repo-size/firefly-cpp/arm-preprocessing">
  <img alt="Open issues" src="https://isitmaintained.com/badge/open/firefly-cpp/arm-preprocessing.svg">
  <a href="http://isitmaintained.com/project/firefly-cpp/arm-preprocessing" title="Average time to resolve an issue">
    <img alt="Average time to resolve an issue" src="http://isitmaintained.com/badge/resolution/firefly-cpp/arm-preprocessing.svg">
  </a>
  <img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/w/firefly-cpp/arm-preprocessing.svg">
  <img alt="GitHub contributors" src="https://img.shields.io/github/contributors/firefly-cpp/arm-preprocessing.svg">
</p>

<p align="center">
  <a href="#-why-arm-preprocessing">💡 Why arm-preprocessing?</a> •
  <a href="#-key-features">✨ Key features</a> •
  <a href="#-installation">📦 Installation</a> •
  <a href="#-usage">🚀 Usage</a> •
  <a href="#-related-frameworks">🔗 Related frameworks</a> •
  <a href="#-references">📚 References</a> •
  <a href="#-license">🔑 License</a>
</p>

arm-preprocessing is a lightweight Python library supporting several key steps involving data preparation, manipulation, and discretisation for Association Rule Mining (ARM). 🧠 Embrace its minimalistic design that prioritises simplicity. 💡 The framework is intended to be fully extensible and offers seamless integration with related ARM libraries (e.g., [NiaARM](https://github.com/firefly-cpp/NiaARM)). 🔗

* **Free software:** MIT license
* **Documentation**: [http://arm-preprocessing.readthedocs.io](http://arm-preprocessing.readthedocs.io)
* **Python**: 3.9.x, 3.10.x, 3.11.x, 3.12.x
* **Tested OS:** Windows, Ubuntu, Fedora, Alpine, Arch, macOS. **However, it may well work on other systems too.**

## 💡 Why arm-preprocessing?

While numerous libraries facilitate data mining preprocessing tasks, this library is designed to integrate seamlessly with association rule mining. It harmonises well with the NiaARM library, a robust numerical association rule mining framework. The primary aim is to bridge the gap between preprocessing and rule mining, simplifying the workflow/pipeline. Additionally, its design allows for the effortless incorporation of new preprocessing methods and fast benchmarking.

## ✨ Key features

- Loading various formats of datasets (CSV, JSON, TXT, TCX) 📊
- Converting datasets to different formats 🔄
- Loading different types of datasets (numerical dataset, discrete dataset, time-series data, text, etc.) 📉
- Dataset identification (which type of dataset) 🔍
- Dataset statistics 📈
- Discretisation methods 📏
- Data squashing methods 🤏
- Feature scaling methods ⚖️
- Feature selection methods 🎯

## 📦 Installation

### pip

To install ``arm-preprocessing`` with pip, use:
```bash
pip install arm-preprocessing
```

To install ``arm-preprocessing`` on Alpine Linux, please use:
```sh
$ apk add py3-arm-preprocessing
```

To install ``arm-preprocessing`` on Arch Linux, please use an [AUR helper](https://wiki.archlinux.org/title/AUR_helpers):
```sh
$ yay -Syyu python-arm-preprocessing
```

## 🚀 Usage

### Data loading

The following example demonstrates how to load a dataset from a file (CSV, JSON, TXT). More examples can be found in the [examples/data_loading](./examples/data_loading/) directory:
- [Loading a dataset from a CSV file](./examples/data_loading/load_dataset_csv.py)
- [Loading a dataset from a JSON file](./examples/data_loading/load_dataset_json.py)
- [Loading a dataset from a TCX file](./examples/data_loading/load_dataset_tcx.py)
- [Loading a time-series dataset](./examples/data_loading/load_dataset_timeseries.py)

```python
from arm_preprocessing.dataset import Dataset

# Initialise dataset with filename (without format) and format (csv, json, txt)
dataset = Dataset('path/to/datasets', format='csv')

# Load dataset
dataset.load_data()
df = dataset.data
```

### Missing values

The following example demonstrates how to handle missing values in a dataset using imputation. More examples can be found in the [examples/missing_values](./examples/missing_values) directory:
- [Handling missing values in a dataset using row deletion](./examples/missing_values/missing_values_rows.py)
- [Handling missing values in a dataset using column deletion](./examples/missing_values/missing_values_columns.py)
- [Handling missing values in a dataset using imputation](./examples/missing_values/missing_values_impute.py)

```python
from arm_preprocessing.dataset import Dataset

# Initialise dataset with filename and format
dataset = Dataset('examples/missing_values/data', format='csv')
dataset.load()

# Impute missing data
dataset.missing_values(method='impute')
```
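Under the hood, mean imputation is simple to reason about: each missing entry is replaced by the mean of the observed values in its column. A minimal sketch in plain Python (illustrative only, not the library's actual implementation):

```python
from statistics import mean

def impute_mean(column):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    fill = mean(observed)
    return [fill if v is None else v for v in column]

print(impute_mean([1.0, None, 3.0]))  # [1.0, 2.0, 3.0]
```

The row- and column-deletion variants linked above instead drop any row or column that contains a missing value.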

### Data discretisation

The following example demonstrates how to discretise a dataset using the equal width method. More examples can be found in the [examples/discretisation](./examples/discretisation) directory:
- [Discretising a dataset using the equal width method](./examples/discretisation/equal_width_discretisation.py)
- [Discretising a dataset using the equal frequency method](./examples/discretisation/equal_frequency_discretisation.py)
- [Discretising a dataset using k-means clustering](./examples/discretisation/kmeans_discretisation.py)

```python
from arm_preprocessing.dataset import Dataset

# Initialise dataset with filename (without format) and format (csv, json, txt)
dataset = Dataset('datasets/sportydatagen', format='csv')
dataset.load_data()

# Discretise dataset using equal width discretisation
dataset.discretise(method='equal_width', num_bins=5, columns=['calories'])
```
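Equal-width discretisation splits the observed range of a column into `num_bins` intervals of identical width and maps each value to its interval index. A plain-Python sketch of the idea (not the library's code):

```python
def equal_width(values, num_bins):
    """Assign each value to one of num_bins equally wide intervals."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_bins
    # Clamp so the maximum value falls into the last bin rather than num_bins
    return [min(int((v - lo) / width), num_bins - 1) for v in values]

print(equal_width([0, 2, 5, 9, 10], num_bins=5))  # [0, 1, 2, 4, 4]
```

Equal-frequency discretisation instead chooses bin edges so each bin holds roughly the same number of values, and the k-means variant clusters the values first.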

### Data squashing

The following example demonstrates how to squash a dataset using Euclidean similarity. More examples can be found in the [examples/squashing](./examples/squashing) directory:
- [Squashing a dataset using Euclidean similarity](./examples/squashing/squash_euclidean.py)
- [Squashing a dataset using the cosine similarity](./examples/squashing/squash_cosine.py)

```python
from arm_preprocessing.dataset import Dataset

# Initialise dataset with filename and format
dataset = Dataset('datasets/breast', format='csv')
dataset.load()

# Squash dataset
dataset.squash(threshold=0.75, similarity='euclidean')
```
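Data squashing replaces groups of mutually similar transactions with a single representative to shrink the dataset before mining (see reference [1]). A much-simplified greedy sketch using Euclidean distance — the grouping strategy and threshold semantics here are illustrative assumptions, not the library's algorithm:

```python
import math

def squash(rows, max_dist):
    """Greedily merge each row into the first representative closer than
    max_dist, updating that representative to the running mean."""
    reps, counts = [], []
    for row in rows:
        for i, rep in enumerate(reps):
            if math.dist(row, rep) < max_dist:
                n = counts[i]
                reps[i] = [(r * n + v) / (n + 1) for r, v in zip(rep, row)]
                counts[i] = n + 1
                break
        else:
            reps.append(list(row))
            counts.append(1)
    return reps

print(squash([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]], max_dist=1.0))
```

The library's `similarity='cosine'` option would use cosine similarity in place of Euclidean distance for the same grouping step.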

### Feature scaling

The following example demonstrates how to scale the dataset's features. More examples can be found in the [examples/scaling](./examples/scaling) directory:
- [Scale features using normalisation](./examples/scaling/normalisation.py)
- [Scale features using standardisation](./examples/scaling/standardisation.py)

```python
from arm_preprocessing.dataset import Dataset

# Initialise dataset with filename and format
dataset = Dataset('datasets/Abalone', format='csv')
dataset.load()

# Scale dataset using normalisation
dataset.scale(method='normalisation')
```
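Normalisation here presumably means min-max scaling, which maps each column linearly onto the [0, 1] range; standardisation would instead subtract the mean and divide by the standard deviation. A one-function sketch of the min-max case:

```python
def normalise(values):
    """Min-max scale values onto the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(normalise([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
```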

### Feature selection

The following example demonstrates how to select features from a dataset. More examples can be found in the [examples/feature_selection](./examples/feature_selection) directory:
- [Select features using the Kendall Tau correlation coefficient](./examples/feature_selection/feature_selection.py)

```python
from arm_preprocessing.dataset import Dataset

# Initialise dataset with filename and format
dataset = Dataset('datasets/sportydatagen', format='csv')
dataset.load()

# Feature selection
dataset.feature_selection(
    method='kendall', threshold=0.15, class_column='calories')
```
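The Kendall tau coefficient measures rank agreement between two columns; a feature is presumably kept when the absolute correlation with `class_column` exceeds the threshold. A naive O(n²) tau-a (ties ignored, for illustration only) can be written as:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall tau-a: (concordant - discordant) / total pairs, ignoring ties."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n = len(x)
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
```

Production code would use an O(n log n) implementation with tie correction, such as `scipy.stats.kendalltau`.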

## 🔗 Related frameworks

[1] [NiaARM: A minimalistic framework for Numerical Association Rule Mining](https://github.com/firefly-cpp/NiaARM)

[2] [uARMSolver: universal Association Rule Mining Solver](https://github.com/firefly-cpp/uARMSolver)

## 📚 References

[1] I. Fister, I. Fister Jr., D. Novak and D. Verber, [Data squashing as preprocessing in association rule mining](https://iztok-jr-fister.eu/static/publications/300.pdf), 2022 IEEE Symposium Series on Computational Intelligence (SSCI), Singapore, Singapore, 2022, pp. 1720-1725, doi: 10.1109/SSCI51031.2022.10022240.

[2] I. Fister Jr. and I. Fister, [A brief overview of swarm intelligence-based algorithms for numerical association rule mining](https://arxiv.org/abs/2010.15524). arXiv preprint arXiv:2010.15524 (2020).

## 🔑 License

This package is distributed under the MIT License. This license can be found online
at <http://www.opensource.org/licenses/MIT>.

## Disclaimer

This framework is provided as-is, and there are no guarantees that it fits your purposes or that it is bug-free. Use it at your own risk!

            
