alpbench

- Name: alpbench
- Version: 0.1.1
- Summary: Active Learning Pipelines Benchmark
- Home page: https://github.com/ValentinMargraf/ActiveLearningPipelines
- Author: Valentin Margraf et al.
- License: MIT
- Requires Python: >=3.10.0
- Keywords: python, machine learning, active learning, benchmark, tabular data, classification
- Requirements: py-experimenter==1.4.1, mysql-connector-python==8.4.0, openml==0.14.2, scikit-learn==1.2.2, scikit-activeml==0.4.1, catboost==1.2.5, xgboost==2.0.3
- Upload time: 2024-06-14 07:11:21
            [![License](https://img.shields.io/badge/License-MIT-brightgreen.svg)](https://opensource.org/licenses/MIT)
[![Coverage Status](https://coveralls.io/repos/github/ValentinMargraf/ActiveLearningPipelines/badge.svg)](https://coveralls.io/github/ValentinMargraf/ActiveLearningPipelines)
[![Tests](https://github.com/ValentinMargraf/ActiveLearningPipelines/actions/workflows/unit-tests.yml/badge.svg)](https://github.com/ValentinMargraf/ActiveLearningPipelines/actions/workflows/unit-tests.yml)
[![Read the Docs](https://readthedocs.org/projects/shapiq/badge/?version=latest)](https://activelearningpipelines.readthedocs.io/en/latest/?badge=latest)

[![PyPI Version](https://img.shields.io/pypi/pyversions/alpbench.svg)](https://pypi.org/project/alpbench)
[![PyPI status](https://img.shields.io/pypi/status/alpbench.svg?color=blue)](https://pypi.org/project/alpbench)
[![Code Style](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

# ALPBench: A Benchmark for Active Learning Pipelines on Tabular Data
`ALPBench` is a Python package for the specification, execution, and performance monitoring of **active learning pipelines (ALPs)**, each consisting of a **learning algorithm** and a **query strategy**, on real-world tabular classification tasks. It has built-in measures to ensure evaluations are reproducible, saving the exact dataset splits and hyperparameter settings of the algorithms used. In total, `ALPBench` comprises 86 real-world tabular classification datasets and 5 active learning settings, yielding 430 active learning problems. The benchmark is also easy to extend: you can implement your own learning algorithm and/or query strategy and benchmark it against the existing approaches.
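To make the two pipeline components concrete, here is a generic pool-based active learning loop combining a learning algorithm (random forest) with a query strategy (margin sampling). This is a simplified sketch using scikit-learn only, not ALPBench's own implementation:

```python
# A generic pool-based active learning loop: a learning algorithm (random
# forest) plus a query strategy (margin sampling). Simplified illustration
# using scikit-learn only, not ALPBench's own implementation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, random_state=0)

# seed the labeled set with 5 examples from each class
labeled = [int(i) for i in np.where(y == 0)[0][:5]] + \
          [int(i) for i in np.where(y == 1)[0][:5]]
pool = [i for i in range(len(y)) if i not in labeled]

model = RandomForestClassifier(random_state=0)
for _ in range(5):  # 5 query rounds with a batch size of 10
    model.fit(X[labeled], y[labeled])
    proba = np.sort(model.predict_proba(X[pool]), axis=1)
    # margin sampling: query the points whose two most probable
    # classes are hardest to tell apart
    margins = proba[:, -1] - proba[:, -2]
    for i in sorted(np.argsort(margins)[:10], reverse=True):
        labeled.append(pool.pop(i))

model.fit(X[labeled], y[labeled])
print("labeled instances after 5 rounds:", len(labeled))
```

Swapping in a different model or a different scoring rule for `margins` changes the learner or the query strategy independently, which is exactly the pipeline decomposition the benchmark evaluates.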


# 🛠️ Install
`ALPBench` is intended to work with **Python 3.10 and above**.

```bash
# The base package can be installed via pip:
pip install alpbench

# Alternatively, you can install the full package via pip:
pip install alpbench[full]

# Or you can install the package from source:
git clone https://github.com/ValentinMargraf/ActiveLearningPipelines.git
cd ActiveLearningPipelines
conda create --name alpbench python=3.10
conda activate alpbench

# Install for usage (without TabNet and TabPFN)
pip install -r requirements.txt

# Install for usage (with TabNet and TabPFN)
pip install -r requirements_full.txt
```

Documentation at https://activelearningpipelines.readthedocs.io/en/latest/


# ⭐ Quickstart
You can use `ALPBench` in different ways. A number of learners and query strategies are already built in and can be
run simply by referencing their names, as the minimal example below shows. You can also implement your own (new)
query strategies in the `alpbench.pipeline` module.
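Conceptually, a query strategy maps a fitted model, an unlabeled pool, and a batch size to the pool indices to label next. The sketch below writes least-confidence sampling in that shape; note that the actual base-class interface expected by `alpbench.pipeline` may differ, so treat this purely as an illustration:

```python
# Conceptual sketch of a query strategy: a mapping from (fitted model,
# unlabeled pool, batch size) to the pool indices to label next.
# Note: the real interface expected by alpbench.pipeline may differ.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def least_confidence_query(model, X_pool, num_queries):
    """Pick the instances whose most probable class has the lowest probability."""
    confidence = model.predict_proba(X_pool).max(axis=1)
    return np.argsort(confidence)[:num_queries]

X, y = make_classification(n_samples=100, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X[:50], y[:50])
queried = least_confidence_query(model, X[50:], num_queries=5)
print("pool indices to query next:", queried)
```
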


## 📈 Fit an Active Learning Pipeline

Fit an ALP on the dataset with OpenML ID 31, using a random forest and margin sampling. You can find similar example code snippets in
**examples/**.

```python
from sklearn.metrics import accuracy_score

from alpbench.benchmark.BenchmarkConnector import DataFileBenchmarkConnector
from alpbench.evaluation.experimenter.DefaultSetup import ensure_default_setup
from alpbench.pipeline.ALPEvaluator import ALPEvaluator

# create benchmark connector and establish database connection
benchmark_connector = DataFileBenchmarkConnector()

# load some default settings and algorithm choices
ensure_default_setup(benchmark_connector)

evaluator = ALPEvaluator(benchmark_connector=benchmark_connector,
                         setting_name="small", openml_id=31, sampling_strategy_name="margin", learner_name="rf_gini")
alp = evaluator.fit()

# fit / predict and evaluate predictions
X_test, y_test = evaluator.get_test_data()
y_hat = alp.predict(X=X_test)
print("final test acc", accuracy_score(y_test, y_hat))

# >> final test acc 0.7181818181818181
```


## Changelog 
### v0.1.0 (2024-06-13)
Initial release

- `pipeline` can be used to combine learning algorithms and query strategies into active learning pipelines
- `evaluation` provides tools to evaluate active learning pipelines
- `benchmark` monitors the performance of active learning pipelines over time and stores results in a database

### v0.1.1 (2024-06-14)

- extra code for `tabnet` no longer needs to be included from the repo

            
