lohrasb

Name: lohrasb
Version: 4.2.0
Summary: This versatile tool streamlines hyperparameter optimization in machine learning workflows. It supports a wide range of search methods, from GridSearchCV and RandomizedSearchCV to advanced techniques like OptunaSearchCV, Ray Tune, and Scikit-Learn Tune. Designed to enhance model performance and efficiency, it's suitable for tasks of any scale.
Upload time: 2023-09-09 22:14:20
Author: drhosseinjavedani
License: BSD-3-Clause
Keywords: auto ml, pipeline, machine learning
            ![GitHub Repo stars](https://img.shields.io/github/stars/drhosseinjavedani/lohrasb) 
![GitHub forks](https://img.shields.io/github/forks/drhosseinjavedani/lohrasb) 
![GitHub language count](https://img.shields.io/github/languages/count/drhosseinjavedani/lohrasb) 
![GitHub repo size](https://img.shields.io/github/repo-size/drhosseinjavedani/lohrasb) 
![GitHub](https://img.shields.io/github/license/drhosseinjavedani/lohrasb)
![PyPI - Downloads](https://img.shields.io/pypi/dd/lohrasb) 
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/lohrasb)
![GitHub issues](https://img.shields.io/github/issues/drhosseinjavedani/lohrasb)
![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg)
![GitHub contributors](https://img.shields.io/github/contributors/drhosseinjavedani/lohrasb)
![GitHub last commit](https://img.shields.io/github/last-commit/drhosseinjavedani/lohrasb)

# Lohrasb
Introducing **Lohrasb**, a powerful tool designed to streamline machine learning development by providing scalable hyperparameter tuning solutions. Lohrasb incorporates several robust optimization frameworks including [Optuna](https://optuna.readthedocs.io/en/stable/index.html), [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html), and [Ray Tune Scikit-Learn API](https://docs.ray.io/en/latest/tune/api_docs/sklearn.html). Its compatibility extends to the majority of estimators from Scikit-learn as well as popular machine learning libraries such as [CatBoost](https://catboost.ai/) and [LightGBM](https://lightgbm.readthedocs.io/en/latest/), offering a seamless hyperparameter tuning experience.

Lohrasb is also flexible enough to cater to models conforming to standard Scikit-learn API conventions, such as those implementing `fit` and `predict` methods. This means if you're working with a custom model that adheres to these conventions, or any machine learning model from other libraries that use these methods, Lohrasb can assist you in optimizing the model's hyperparameters.
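
For instance, here is a minimal sketch of a custom model that follows these conventions (the estimator and its `alpha` hyperparameter are illustrative, not part of Lohrasb). Because it exposes `fit`, `predict`, and the parameter handling inherited from `BaseEstimator`, Lohrasb's optimizers can, in principle, tune it like any other estimator:

```python
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin


class ShrunkenMeanRegressor(BaseEstimator, RegressorMixin):
    """Toy regressor: predicts the target mean, shrunk toward zero by `alpha`."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # a tunable hyperparameter

    def fit(self, X, y):
        # Learn a single statistic from the training targets
        self.mean_ = float(np.mean(y)) / (1.0 + self.alpha)
        return self

    def predict(self, X):
        # Predict the shrunken mean for every sample
        return np.full(len(X), self.mean_)
```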

In addition to model flexibility, Lohrasb provides flexibility in optimization metrics as well. It naturally supports standard Scikit-learn metrics like `f1_score` or `r2_score`. Beyond these, it allows the use of custom evaluation metrics for optimization purposes. This could include specially designed metrics like `f1_plus_tn` or any other specific, customized metric that aligns with your project's requirements.
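
As a concrete illustration, a metric such as `f1_plus_tn` could be defined as sketched below (this particular definition is an assumption for illustration purposes) and wrapped with Scikit-learn's `make_scorer` so it can be passed wherever a `scoring` argument is accepted:

```python
from sklearn.metrics import confusion_matrix, f1_score, make_scorer


def f1_plus_tn(y_true, y_pred):
    # Assumed definition: binary F1 score plus the count of true negatives
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()  # binary case only
    return f1_score(y_true, y_pred) + tn


# Higher values are better, so greater_is_better=True
f1_plus_tn_scorer = make_scorer(f1_plus_tn, greater_is_better=True)
```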

Overall, whether you're tuning a Scikit-learn estimator, a CatBoost model, a LightGBM classifier, or even a custom model, Lohrasb is designed to streamline your workflow and make the process of hyperparameter optimization more efficient and effective. Its broad compatibility ensures that you can achieve the best performance possible from your models, guided by optimization metrics that are most aligned with your project's goals.

### Introduction
The `BaseModel` class of the Lohrasb package is designed with versatility and flexibility in mind. It accepts a variety of parameters, ranging from an estimator class and its tuning parameters to a choice of optimization engine, such as GridSearchCV, RandomizedSearchCV, or Optuna, along with that engine's associated parameters. During optimization, the data samples are divided into training and validation sets, providing a robust setup for model validation.

Using these optimization engines, Lohrasb effectively estimates the optimal parameters for your selected estimator. The result is enhanced model performance, optimized specifically for your data and problem space.
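
The calling pattern is the same across engines. The following condensed sketch, distilled from the GridSearchCV example later in this README, shows the general shape: choose an `optimize_by_*` method, pass engine-specific settings through `kwargs`, then use the returned object like an ordinary Scikit-learn estimator.

```python
from lohrasb.best_estimator import BaseModel
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

obj = BaseModel().optimize_by_gridsearchcv(
    kwargs={
        "fit_grid_kwargs": {"sample_weight": None},  # forwarded to fit
        "grid_search_kwargs": {  # forwarded to GridSearchCV
            "estimator": Ridge(),
            "param_grid": {"alpha": [0.1, 0.5, 1.0]},
            "scoring": "r2",
            "cv": KFold(2),
        },
    }
)
# obj.fit(X, y) and obj.predict(X) then behave like a regular estimator.
```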
### Installation

The Lohrasb package is available on PyPI and can be installed with pip:

```sh
pip install lohrasb
```
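
To verify the installation, you can check that the main class imports cleanly:

```sh
python -c "from lohrasb.best_estimator import BaseModel; print('ok')"
```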


### Supported estimators for this package
Lohrasb supports almost any classification or regression estimator that follows the Scikit-learn API, including estimators from libraries such as CatBoost, LightGBM, XGBoost, and Imbalanced-learn.

### Usage

Lohrasb presents an effective solution for tuning the optimal parameters of a machine learning model. It leverages robust optimization engines, namely [Optuna](https://optuna.readthedocs.io/en/stable/index.html), [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html), and [TuneGridSearchCV and TuneSearchCV](https://docs.ray.io/en/latest/tune/api_docs/sklearn.html) from the [Ray](https://docs.ray.io/en/latest/index.html) Tune Scikit-Learn API (`tune.sklearn`).

These capabilities empower Lohrasb users to perform comprehensive hyperparameter tuning on a broad range of machine learning models. Whether you are using a model from Scikit-learn, CatBoost, LightGBM, or even a custom model, Lohrasb's functionality enables you to extract the best performance from your model by optimizing its parameters using the most suitable engine.
### Some examples
In the following section, we will showcase several examples that illustrate how users can leverage various optimization engines incorporated within Lohrasb for effective hyperparameter tuning. This guidance aims to equip users with practical knowledge for harnessing the full potential of Lohrasb's diverse optimization capabilities.

#### Utilizing Ray Tune for Hyperparameter Optimization
Lohrasb's `optimize_by_tune` feature integrates the powerful Tune tool from Ray, streamlining hyperparameter optimization for Scikit-learn-based machine learning models. This feature combines Tune's robust capabilities with a user-friendly interface, reducing the complexity of hyperparameter tuning and making it more accessible. Consequently, `optimize_by_tune` allows developers to concentrate on core model development while it manages hyperparameter optimization, leveraging the full range of Tune's advanced functionality. See the example below for how to use it:

```python
from ray.tune.search.hyperopt import HyperOptSearch

# Datasets
from sklearn.datasets import make_classification, make_regression

# Custom estimator
from lohrasb.best_estimator import BaseModel

# Gradient boosting frameworks
from catboost import CatBoostClassifier
from xgboost import XGBClassifier

# metrics
from sklearn.metrics import r2_score, f1_score

# Imbalanced-learn ensemble
from imblearn.ensemble import BalancedRandomForestClassifier
from ray import air, tune

# others
from sklearn.model_selection import train_test_split

from sklearn.linear_model import (
    Ridge,
    SGDRegressor,
)

# Define hyperparameters for each model
xgb_params = {
    "n_estimators": tune.randint(50, 200),
    "max_depth": tune.randint(6, 15),
    "learning_rate": tune.uniform(0.001, 0.1),
}
cb_params = {
    "iterations": tune.randint(50, 200),
    "depth": tune.randint(4, 8),
    "learning_rate": tune.uniform(0.001, 0.1),
}
brf_params = {
    "n_estimators": tune.randint(50, 200),
    "max_depth": tune.choice([None, 10, 20]),
}

# Put models and hyperparameters into a list of tuples
estimators_params_clfs = [
    (XGBClassifier, xgb_params),
    (CatBoostClassifier, cb_params),
    (BalancedRandomForestClassifier, brf_params),
]

# Define hyperparameters for each model
ridge_params_reg = {"alpha": tune.uniform(0.5, 1.0)}
sgr_params_reg = {
    "loss": tune.choice(["squared_loss", "huber", "epsilon_insensitive"]),
    "penalty": tune.choice(["l2", "l1", "elasticnet"]),
}

# Put models and hyperparameters into a list of tuples
estimators_params_regs = [(Ridge, ridge_params_reg), (SGDRegressor, sgr_params_reg)]


def using_tune_classification(estimator, params):
    search_alg = HyperOptSearch()
    # Create synthetic dataset
    X, y = make_classification(
        n_samples=1000,
        n_features=20,
        n_informative=3,
        n_redundant=10,
        n_classes=3,
        random_state=42,
    )
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Initialize the estimator
    est = estimator()

    # Create keyword arguments for tune
    kwargs = {
        # define kwargs for the base model
        "kwargs": {
            # params for the fit method
            "fit_tune_kwargs": {
                "sample_weight": None,
            },
            # params for TuneCV
            "main_tune_kwargs": {
                "cv": 3,
                "scoring": "f1_macro",
                "estimator": est,
            },
            # kwargs of Tuner
            "tuner_kwargs": {
                "tune_config": tune.TuneConfig(
                    search_alg=search_alg,
                    mode="max",
                    metric="score",
                ),
                "param_space": params,
                "run_config": air.RunConfig(stop={"training_iteration": 20}),
            },
        }
    }

    # Run optimize_by_tune
    obj = BaseModel().optimize_by_tune(**kwargs)
    obj.fit(X_train, y_train)

    # Predict on test data
    y_pred = obj.predict(X_test)
    f1 = f1_score(y_test, y_pred, average="macro")
    print(f"f1_score is {f1}")


def using_tune_regression(estimator, params):
    search_alg = HyperOptSearch()
    # Create synthetic regression dataset
    X, y = make_regression(
        n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1
    )

    # Initialize the estimator
    est = estimator()

    # Create keyword arguments for tune
    kwargs = {
        # define kwargs for the base model
        "kwargs": {
            # params for the fit method
            "fit_tune_kwargs": {
                "sample_weight": None,
            },
            # params for TuneCV
            "main_tune_kwargs": {
                "cv": 3,
                "scoring": "r2",
                "estimator": est,
            },
            # kwargs of Tuner
            "tuner_kwargs": {
                "tune_config": tune.TuneConfig(
                    search_alg=search_alg,
                    mode="max",
                    metric="score",
                ),
                "param_space": params,
                "run_config": air.RunConfig(stop={"training_iteration": 20}),
            },
        }
    }

    # Create obj of the class
    obj = BaseModel().optimize_by_tune(**kwargs)

    # Check if instance created successfully
    assert obj is not None

    # Fit data and predict
    obj.fit(X, y)
    predictions = obj.predict(X)
    r2 = r2_score(y, predictions)
    print(f"r2_score is {r2}")


# Run the regression examples over the (estimator, params) tuples defined above
for est, params in estimators_params_regs:
    using_tune_regression(est, params)

# Run the classification examples
for est, params in estimators_params_clfs:
    using_tune_classification(est, params)

```
#### Embracing GridSearchCV for Hyperparameter Optimization
The `optimize_by_gridsearchcv` function in Lohrasb incorporates GridSearchCV's robust capabilities, making the process of hyperparameter optimization streamlined and efficient, specifically for Scikit-learn-based machine learning models. This function merges GridSearchCV's comprehensive search abilities with a user-friendly interface, thereby simplifying hyperparameter tuning and making it more accessible. 
```python
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import f1_score, r2_score
from lohrasb.best_estimator import BaseModel
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Lasso, LinearRegression

# Define hyperparameters for the classifiers and regressors
rf_params = {"n_estimators": [50, 100, 200], "max_depth": [None, 10, 20]}
lr_params_reg = {"fit_intercept": [True, False]}
lasso_params_reg = {"alpha": [0.1, 0.5, 1.0]}

def using_tune_classification(estimator, params):
    # Create synthetic dataset
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=3, n_redundant=10, n_classes=3, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Initialize the estimator
    est = estimator()

    # Create the model with the chosen hyperparameters
    obj = BaseModel().optimize_by_gridsearchcv(
        kwargs={
            "fit_grid_kwargs": {
                "sample_weight": None,
            },
            "grid_search_kwargs": {
                "estimator": est,
                "param_grid": params,
                "scoring": "f1_micro",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
        }
    )

    # Fit the model and make predictions
    obj.fit(X_train, y_train)
    y_pred = obj.predict(X_test)

    # Evaluate the model performance
    f1 = f1_score(y_test, y_pred, average="macro")
    print(f"f1_score is {f1}")

def using_tune_regression(estimator, params):
    # Create synthetic regression dataset
    X, y = make_regression(n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1)

    # Initialize the estimator
    est = estimator()

    # Create the model with the chosen hyperparameters
    obj = BaseModel().optimize_by_gridsearchcv(
        kwargs={
            "fit_grid_kwargs": {
                "sample_weight": None,
            },
            "grid_search_kwargs": {
                "estimator": est,
                "param_grid": params,
                "scoring": "r2",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
        }
    )

    # Fit the model and make predictions
    obj.fit(X, y)
    predictions = obj.predict(X)

    # Evaluate the model performance
    r2 = r2_score(y, predictions)
    print(f"r2_score is {r2}")

# Regression examples
using_tune_regression(Lasso, lasso_params_reg)
using_tune_regression(LinearRegression, lr_params_reg)

# Classification examples
using_tune_classification(RandomForestClassifier, rf_params)

```
#### Exploring the Use of RandomizedSearchCV Interface
The `optimize_by_randomsearchcv` function in Lohrasb harnesses the robust capabilities of RandomizedSearchCV, thereby simplifying and enhancing the efficiency of hyperparameter optimization, particularly for Scikit-learn-based machine learning models. By merging RandomizedSearchCV's stochastic search capabilities with an intuitive interface, `optimize_by_randomsearchcv` makes the process of hyperparameter tuning more accessible and less complex. 

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import f1_score, r2_score
from lohrasb.best_estimator import BaseModel
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import Ridge

# Define hyperparameters for the classifiers and regressors
adb_params = {"n_estimators": [50, 100, 200], "learning_rate": [0.001, 0.01, 0.1]}
ridge_params_reg = {"fit_intercept": [True, False]}


def using_tune_classification(estimator, params):
    # Create synthetic dataset
    X, y = make_classification(
        n_samples=1000,
        n_features=20,
        n_informative=3,
        n_redundant=10,
        n_classes=3,
        random_state=42,
    )
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Initialize the estimator
    est = estimator()

    # Create the model with the chosen hyperparameters
    obj = BaseModel().optimize_by_randomsearchcv(
        kwargs={
            "fit_random_kwargs": {
                "sample_weight": None,
            },
            "random_search_kwargs": {
                "estimator": est,
                "param_distributions": params,
                "scoring": "f1_micro",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
                "n_iter": 10,
            },
            "main_random_kwargs": {},
        }
    )

    # Fit the model and make predictions
    obj.fit(X_train, y_train)
    y_pred = obj.predict(X_test)

    # Evaluate the model performance
    f1 = f1_score(y_test, y_pred, average="macro")
    print(f"f1_score is {f1}")


def using_tune_regression(estimator, params):
    # Create synthetic regression dataset
    X, y = make_regression(
        n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1
    )

    # Initialize the estimator
    est = estimator()

    # Create the model with the chosen hyperparameters
    obj = BaseModel().optimize_by_randomsearchcv(
        kwargs={
            "fit_random_kwargs": {
                "sample_weight": None,
            },
            "random_search_kwargs": {
                "estimator": est,
                "param_distributions": params,
                "scoring": "r2",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
                "n_iter": 10,
            },
            "main_random_kwargs": {},
        }
    )

    # Fit the model and make predictions
    obj.fit(X, y)
    predictions = obj.predict(X)

    # Evaluate the model performance
    r2 = r2_score(y, predictions)
    print(f"r2_score is {r2}")


# Regression examples
using_tune_regression(Ridge, ridge_params_reg)

# Classification examples
using_tune_classification(AdaBoostClassifier, adb_params)
```
#### Streamlining Optimization with `optimize_by_optunasearchcv`
Lohrasb's `optimize_by_optunasearchcv` utilizes the power and flexibility of OptunaSearchCV, streamlining hyperparameter optimization for Scikit-learn models. This function melds Optuna's robust search abilities with an intuitive interface, simplifying tuning tasks. It allows developers to focus on key model development aspects while managing hyperparameter optimization using OptunaSearchCV's advanced features. 
```python
from sklearn.datasets import make_classification, make_regression
import optuna
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import f1_score, r2_score
from lohrasb.best_estimator import BaseModel
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import Ridge

# Define hyperparameters for the classifiers and regressors
adb_params = {
    "n_estimators": optuna.distributions.IntDistribution(50, 200),
    "learning_rate": optuna.distributions.FloatDistribution(0.001, 0.1),
}
ridge_params_reg = {
    "fit_intercept": optuna.distributions.CategoricalDistribution(choices=[True, False])
}


def using_tune_classification(estimator, params):
    # Create synthetic classification dataset
    X, y = make_classification(
        n_samples=1000,
        n_features=20,
        n_informative=3,
        n_redundant=10,
        n_classes=3,
        random_state=42,
    )
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Initialize the estimator and create a model with the specified hyperparameters
    est = estimator()
    obj = BaseModel().optimize_by_optunasearchcv(
        kwargs={
            "fit_newoptuna_kwargs": {"sample_weight": None},
            "newoptuna_search_kwargs": {
                "estimator": est,
                "param_distributions": params,
                "scoring": "f1_micro",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_newoptuna_kwargs": {},
        }
    )

    # Fit the model and make predictions
    obj.fit(X_train, y_train)
    y_pred = obj.predict(X_test)

    # Evaluate and print the model performance
    f1 = f1_score(y_test, y_pred, average="macro")
    print(f"f1_score is {f1}")


def using_tune_regression(estimator, params):
    # Create synthetic regression dataset
    X, y = make_regression(
        n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1
    )

    # Initialize the estimator and create a model with the specified hyperparameters
    est = estimator()
    obj = BaseModel().optimize_by_optunasearchcv(
        kwargs={
            "fit_newoptuna_kwargs": {"sample_weight": None},
            "newoptuna_search_kwargs": {
                "estimator": est,
                "param_distributions": params,
                "scoring": "r2",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_newoptuna_kwargs": {},
        }
    )

    # Fit the model and make predictions
    obj.fit(X, y)
    predictions = obj.predict(X)

    # Evaluate and print the model performance
    r2 = r2_score(y, predictions)
    print(f"r2_score is {r2}")


# Run regression examples
using_tune_regression(Ridge, ridge_params_reg)

# Run classification examples
using_tune_classification(AdaBoostClassifier, adb_params)
```
#### Enhancing Optimization with `optimize_by_tunegridsearchcv`
TuneGridSearchCV is a highly versatile extension of Tune's capabilities, designed to replace Scikit-learn's GridSearchCV. It leverages Tune's scalability and flexibility to perform efficient hyperparameter searching over a predefined grid, offering precise and comprehensive tuning for diverse machine learning frameworks including Scikit-learn, CatBoost, LightGBM, and Imbalanced-learn.

The `optimize_by_tunegridsearchcv` feature in Lohrasb harnesses this power and versatility. This function simplifies and enhances hyperparameter optimization not only for Scikit-learn models, but also for models developed using CatBoost, LightGBM, and Imbalanced-learn. By leveraging TuneGridSearchCV's systematic and efficient grid-based search capabilities, `optimize_by_tunegridsearchcv` offers a user-friendly interface that makes hyperparameter tuning less complex and more accessible. This enables developers to focus on the core aspects of model development, while the `optimize_by_tunegridsearchcv` function efficiently manages the detailed tuning process. Hence, `optimize_by_tunegridsearchcv` enriches the overall machine learning workflow, utilizing TuneGridSearchCV's advanced features for a robust and efficient grid-based search across multiple frameworks.

```python
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import f1_score, r2_score
from lohrasb.best_estimator import BaseModel
from catboost import CatBoostRegressor
from lightgbm import LGBMClassifier

# Define hyperparameters for the classifiers and regressors
cat_params_reg = {"n_estimators": [50, 100, 200], "learning_rate": [0.001, 0.01, 0.1]}
lgbm_params = {"max_depth": [5, 6, 7, 10], "gamma": [0.01, 0.1, 1, 1.2]}

def using_tune_classification(estimator, params):
    # Create synthetic classification dataset
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=3, n_redundant=10, n_classes=3, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Initialize the estimator and create a model with the specified hyperparameters
    est = estimator()
    obj = BaseModel().optimize_by_tunegridsearchcv(
        kwargs={
            "fit_tunegrid_kwargs": {"sample_weight": None},
            "tunegrid_search_kwargs": {
                "estimator": est,
                "param_grid": params,
                "scoring": "f1_micro",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_tunegrid_kwargs": {},
        }
    )

    # Fit the model and make predictions
    obj.fit(X_train, y_train)
    y_pred = obj.predict(X_test)

    # Evaluate and print the model performance
    f1 = f1_score(y_test, y_pred, average="macro")
    print(f"f1_score is {f1}")

def using_tune_regression(estimator, params):
    # Create synthetic regression dataset
    X, y = make_regression(n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1)

    # Initialize the estimator and create a model with the specified hyperparameters
    est = estimator()
    obj = BaseModel().optimize_by_tunegridsearchcv(
        kwargs={
            "fit_tunegrid_kwargs": {"sample_weight": None},
            "tunegrid_search_kwargs": {
                "estimator": est,
                "param_grid": params,
                "scoring": "r2",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_tunegrid_kwargs": {},
        }
    )

    # Fit the model and make predictions
    obj.fit(X, y)
    predictions = obj.predict(X)

    # Evaluate and print the model performance
    r2 = r2_score(y, predictions)
    print(f"r2_score is {r2}")

# Run regression examples
using_tune_regression(CatBoostRegressor, cat_params_reg)

# Run classification examples
using_tune_classification(LGBMClassifier, lgbm_params)
```

#### Illustrating the Use of `optimize_by_tunesearchcv`
TuneSearchCV is a flexible and powerful tool that combines the strengths of Tune, a project by Ray, with the convenience of Scikit-learn's GridSearchCV and RandomizedSearchCV for hyperparameter tuning. TuneSearchCV provides an optimized and scalable solution for hyperparameter search, capable of handling a large number of hyperparameters and high-dimensional spaces with precision and speed.

The `optimize_by_tunesearchcv` feature within Lohrasb employs this powerhouse to make hyperparameter tuning easier and more efficient. 
```python
# Import necessary libraries
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import f1_score, r2_score
from lohrasb.best_estimator import BaseModel
from sklearn.neural_network import MLPRegressor
from lightgbm import LGBMClassifier

# Define hyperparameters for the MLPRegressor and LGBMClassifier
# These will be the values that the hyperparameter search function will iterate through.
mlp_params_reg = {
    "hidden_layer_sizes": [(5, 5, 5), (5, 10, 5), (10,)],
    "activation": ["tanh", "relu"],
    "solver": ["sgd", "adam"],
    "alpha": [0.0001, 0.05],
    "learning_rate": ["constant", "adaptive"],
}
lgbm_params = {"max_depth": [5, 6, 7, 10]}

# Function for training and evaluating a classification model
def using_tune_classification(estimator, params):
    # Create a synthetic classification dataset with 1000 samples, 20 features, and 3 classes
    X, y = make_classification(
        n_samples=1000,
        n_features=20,
        n_informative=3,
        n_redundant=10,
        n_classes=3,
        random_state=42,
    )
    # Split the dataset into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Initialize the estimator
    est = estimator()

    # Use the hyperparameter search function provided by the BaseModel class to find the best parameters
    obj = BaseModel().optimize_by_tunesearchcv(
        kwargs={
            "fit_tune_kwargs": {"sample_weight": None},
            "tune_search_kwargs": {
                "estimator": est,
                "param_distributions": params,
                "scoring": "f1_micro",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_tune_kwargs": {},
        }
    )

    # Fit the model to the training data
    obj.fit(X_train, y_train)
    # Predict the labels for the test data
    y_pred = obj.predict(X_test)

    # Compute the F1 score of the model
    f1 = f1_score(y_test, y_pred, average="macro")
    print(f"f1_score is {f1}")


# Function for training and evaluating a regression model
def using_tune_regression(estimator, params):
    # Create a synthetic regression dataset with 1000 samples and 10 features
    X, y = make_regression(
        n_samples=1000, n_features=10, n_informative=5, n_targets=1, random_state=1
    )

    # Initialize the estimator
    est = estimator()

    # Use the hyperparameter search function provided by the BaseModel class to find the best parameters
    obj = BaseModel().optimize_by_tunesearchcv(
        kwargs={
            "fit_tune_kwargs": {},
            "tune_search_kwargs": {
                "estimator": est,
                "param_distributions": params,
                "scoring": "r2",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_tune_kwargs": {},
        }
    )

    # Fit the model to the data
    obj.fit(X, y)
    # Predict the targets for the data
    predictions = obj.predict(X)

    # Compute the R2 score of the model
    r2 = r2_score(y, predictions)
    print(f"r2_score is {r2}")


# Run the regression function using the MLPRegressor and the specified parameters
using_tune_regression(MLPRegressor, mlp_params_reg)

# Run the classification function using the LGBMClassifier and the specified parameters
using_tune_classification(LGBMClassifier, lgbm_params)
```
#### Navigating Hyperparameter Tuning with `optimize_by_optuna`
The `optimize_by_optuna` feature in Lohrasb is a versatile function that leverages the extensive capabilities of the Optuna framework, aiming to simplify hyperparameter tuning for a wide range of machine learning models, including CatBoost, XGBoost, LightGBM, and Scikit-learn models. Optuna, known for its flexibility and efficiency in hyperparameter optimization, significantly enhances the model training process.

This function provides a flexible and customizable interface, accommodating a variety of machine learning tasks. Users can manipulate arguments for different Optuna submodules, such as 'study' and 'optimize', to tailor the function to their specific needs. This flexibility empowers developers to create and manage comprehensive optimization tasks with ease, all within their specific context.

In essence, `optimize_by_optuna` simplifies the tuning process by making the robust capabilities of Optuna readily accessible. Developers can focus on the core aspects of model development, with `optimize_by_optuna` managing the complexity of hyperparameter optimization. Thus, `optimize_by_optuna` augments the machine learning workflow, tapping into Optuna's advanced capabilities to deliver efficient, tailor-made hyperparameter optimization solutions. 

```python
# Import necessary libraries
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, r2_score
from lohrasb.best_estimator import BaseModel
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import Ridge
from optuna.samplers import TPESampler
from optuna.pruners import HyperbandPruner

# Define hyperparameters for the AdaBoostClassifier and Ridge regressor
adb_params = {
    'n_estimators': [50, 200],
    'learning_rate': [0.01, 1.0],
    'algorithm': ['SAMME', 'SAMME.R'],
}
ridge_params_reg = {
    'fit_intercept': [True, False],
    'solver': ['auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag', 'saga']
}

# Function for training and evaluating a classification model
def using_tune_classification(estimator, params):
    # Create a synthetic classification dataset
    X, y = make_classification(
        n_samples=1000,
        n_features=20,
        n_informative=3,
        n_redundant=10,
        n_classes=3,
        random_state=42,
    )
    # Split the dataset into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Initialize the estimator
    est = estimator()

    # Use Optuna for hyperparameter optimization
    obj = BaseModel().optimize_by_optuna(
        kwargs={
            "fit_optuna_kwargs": {
            },
            "main_optuna_kwargs": {
                "estimator": est,
                "estimator_params": params,
                "refit": True,
                "measure_of_accuracy": 'f1_score(y_true, y_pred,average="weighted")',
            },
            "train_test_split_kwargs": {
                "test_size": 0.3,
            },
            "study_search_kwargs": {
                "storage": None,
                "sampler": TPESampler(),
                "pruner": HyperbandPruner(),
                "study_name": "example of optuna optimizer",
                "direction": "maximize",
                "load_if_exists": False,
            },
            "optimize_kwargs": {
                "n_trials": 20,
                "timeout": 600,
                "catch": (),
                "callbacks": None,
                "gc_after_trial": False,
                "show_progress_bar": False,
            },
        }
    )

    # Fit the model and make predictions
    obj.fit(X_train, y_train)
    y_pred = obj.predict(X_test)

    # Evaluate and print the model performance
    f1 = f1_score(y_test, y_pred, average="macro")
    print(f"f1_score is {f1}")


# Function for training and evaluating a regression model
def using_tune_regression(estimator, params):
    # Create a synthetic regression dataset
    X, y = make_regression(
        n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1
    )

    # Initialize the estimator
    est = estimator()

    # Use Optuna for hyperparameter optimization
    obj = BaseModel().optimize_by_optuna(
        kwargs={
            "fit_optuna_kwargs": {
            },
            "main_optuna_kwargs": {
                "estimator": est,
                "estimator_params": params,
                "refit": True,
                "measure_of_accuracy": "mean_absolute_error(y_true, y_pred, multioutput='uniform_average')",
            },
            "train_test_split_kwargs": {
                "test_size": 0.3,
            },
            "study_search_kwargs": {
                "storage": None,
                "sampler": TPESampler(),
                "pruner": HyperbandPruner(),
                "study_name": "example of optuna optimizer",
                "direction": "maximize",
                "load_if_exists": False,
            },
            "optimize_kwargs": {
                "n_trials": 20,
                "timeout": 600,
                "catch": (),
                "callbacks": None,
                "gc_after_trial": False,
                "show_progress_bar": False,
            },
        }
    )

    # Fit the model and make predictions
    obj.fit(X, y)
    predictions = obj.predict(X)

    # Evaluate and print the model performance
    r2 = r2_score(y, predictions)
    print(f"r2_score is {r2}")


# Run regression examples
using_tune_regression(Ridge, ridge_params_reg)

# Run classification examples
using_tune_classification(AdaBoostClassifier, adb_params)
```
#### More Real-World Scenarios 

Lohrasb is not limited to the functionality shown above; it offers solutions to a wide variety of machine learning problems. To see how Lohrasb can be applied in real-world scenarios, visit the [examples](https://github.com/TorkamaniLab/lohrasb/tree/main/examples) page, which collects practical applications showing how Lohrasb's modules can be adapted to specific hyperparameter tuning challenges across different machine learning frameworks.

### Summary
Lohrasb offers a range of modules specifically designed to simplify and streamline the process of hyperparameter optimization across multiple machine learning frameworks. It integrates the power of various hyperparameter optimization tools such as Tune, GridSearchCV, RandomizedSearchCV, OptunaSearchCV, TuneGridSearchCV, and TuneSearchCV, and brings them into a single, easy-to-use interface.

The `optimize_by_tune` feature melds the robust abilities of Tune with a user-friendly interface, while `optimize_by_gridsearchcv` and `optimize_by_randomsearchcv` employ the exhaustive and stochastic search capabilities of GridSearchCV and RandomizedSearchCV, respectively. The `optimize_by_optunasearchcv` function leverages the flexibility of OptunaSearchCV, and `optimize_by_tunegridsearchcv` and `optimize_by_tunesearchcv` utilize Tune's scalability for grid and randomized searches. In addition, the `optimize_by_optuna` function harnesses the extensive capabilities of the Optuna framework, providing a customizable interface for various machine learning tasks. Across multiple machine learning frameworks, including Scikit-learn, CatBoost, LightGBM, and Imbalanced-learn, Lohrasb provides accessible and efficient tools for hyperparameter tuning, enabling developers to focus on core model development.

### References

We gratefully acknowledge the following open-source libraries which have been essential for developing Lohrasb:

1. **Scikit-learn** - Pedregosa, F. et al. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12, 2825-2830. [Website](https://scikit-learn.org/stable/)

2. **GridSearchCV & RandomizedSearchCV** - Part of Scikit-learn library. Refer to the above citation.

3. **Tune (Ray)** - Liaw, R., Liang, E., Nishihara, R., Moritz, P., Gonzalez, J.E., and Stoica, I. (2020). Tune: A Research Platform for Distributed Model Selection and Training. arXiv preprint arXiv:2001.04935. [Website](https://docs.ray.io/en/master/tune/)

4. **Optuna** - Akiba, T., Sano, S., Yanase, T., Ohta, T., and Koyama, M. (2019). Optuna: A Next-generation Hyperparameter Optimization Framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '19). Association for Computing Machinery, New York, NY, USA, 2623–2631. [Website](https://optuna.org/)

5. **Feature-engine** - Galli, S. (2020). Feature-engine. [Website](https://feature-engine.readthedocs.io/)

6. **XGBoost** - Chen, T., & Guestrin, C. (2016). Xgboost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16). Association for Computing Machinery, New York, NY, USA, 785–794. [Website](https://xgboost.readthedocs.io/en/latest/)

7. **CatBoost** - Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., & Gulin, A. (2018). CatBoost: unbiased boosting with categorical features. In Advances in Neural Information Processing Systems. [Website](https://catboost.ai/)

8. **LightGBM** - Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., Liu, T.-Y. (2017). LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information Processing Systems. [Website](https://lightgbm.readthedocs.io/en/latest/)


### License
Licensed under the [BSD 3-Clause](https://opensource.org/licenses/BSD-3-Clause) License.

            

Raw data

            {
    "_id": null,
    "home_page": "",
    "name": "lohrasb",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "Auto ML,Pipeline,Machine learning",
    "author": "drhosseinjavedani",
    "author_email": "h.javedani@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/eb/aa/e13b8bf6f42c8d7eac7d2cc65a05a6ba562b20cafb003e1c928136a35f64/lohrasb-4.2.0.tar.gz",
    "platform": null,
    "description": "![GitHub Repo stars](https://img.shields.io/github/stars/drhosseinjavedani/lohrasb) \n![GitHub forks](https://img.shields.io/github/forks/drhosseinjavedani/lohrasb) \n![GitHub language count](https://img.shields.io/github/languages/count/drhosseinjavedani/lohrasb) \n![GitHub repo size](https://img.shields.io/github/repo-size/drhosseinjavedani/lohrasb) \n![GitHub](https://img.shields.io/github/license/drhosseinjavedani/lohrasb)\n![PyPI - Downloads](https://img.shields.io/pypi/dd/lohrasb) \n![PyPI - Python Version](https://img.shields.io/pypi/pyversions/lohrasb)\n![GitHub issues](https://img.shields.io/github/issues/drhosseinjavedani/lohrasb)\n![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg)\n![GitHub contributors](https://img.shields.io/github/contributors/drhosseinjavedani/lohrasb)\n![GitHub last commit](https://img.shields.io/github/last-commit/drhosseinjavedani/lohrasb)\n\n# Lohrasb\nIntroducing **Lohrasb**, a powerful tool designed to streamline machine learning development by providing scalable hyperparameter tuning solutions. Lohrasb incorporates several robust optimization frameworks including [Optuna](https://optuna.readthedocs.io/en/stable/index.html), [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html), and [Ray Tune Scikit-Learn API](https://docs.ray.io/en/latest/tune/api_docs/sklearn.html). Its compatibility extends to the majority of estimators from Scikit-learn as well as popular machine learning libraries such as [CatBoost](https://catboost.ai/) and [LightGBM](https://lightgbm.readthedocs.io/en/latest/), offering a seamless hyperparameter tuning experience.\n\nLohrasb is also flexible enough to cater to models conforming to standard Scikit-learn API conventions, such as those implementing `fit` and `predict` methods. This means if you're working with a custom model that adheres to these conventions, or any machine learning model from other libraries that use these methods, Lohrasb can assist you in optimizing the model's hyperparameters.\n\nIn addition to model flexibility, Lohrasb provides flexibility in optimization metrics as well. It naturally supports standard Scikit-learn metrics like `f1_score` or `r2_score`. Beyond these, it allows the use of custom evaluation metrics for optimization purposes. This could include specially designed metrics like `f1_plus_tn` or any other specific, customized metric that aligns with your project's requirements.\n\nOverall, whether you're tuning a Scikit-learn estimator, a CatBoost model, a LightGBM classifier, or even a custom model, Lohrasb is designed to streamline your workflow and make the process of hyperparameter optimization more efficient and effective. Its broad compatibility ensures that you can achieve the best performance possible from your models, guided by optimization metrics that are most aligned with your project's goals.\n\n### Introduction\nThe BaseModel of the Lohrasb package is designed with versatility and flexibility in mind. It accepts a variety of parameters ranging from an estimator class and its tuning parameters to different optimization engines like GridSearchCV, RandomizedSearchCV, or Optuna, and their associated parameters. 
In this process, the data samples are divided into training and validation sets, providing a robust setup for model validation.\n\nUsing these optimizing engines, Lohrasb effectively estimates the optimal parameters for your selected estimator. This results in an enhanced model performance, optimized specifically for your data and problem space. \n### Installation\n\nLohrasb package is available on PyPI and can be installed with pip:\n\n```sh\npip install lohrasb\n```\n\n\n### Supported estimators for this package\nLohrasb supports almost all machine learning estimators for classification and regression.\n\n### Usage\n\nLohrasb presents an effective solution for tuning the optimal parameters of a machine learning model. It leverages robust optimization engines, namely [Optuna](https://optuna.readthedocs.io/en/stable/index.html), [GridSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html), [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html), along with [TuneGridSearchCV, and TuneSearchCV](https://docs.ray.io/en/latest/tune/api_docs/sklearn.html) from [Ray](https://docs.ray.io/en/latest/index.html) tune Scikit-Learn API (tune.sklearn). \n\nThese capabilities empower Lohrasb users to perform comprehensive hyperparameter tuning on a broad range of machine learning models. Whether you are using a model from Scikit-learn, CatBoost, LightGBM, or even a custom model, Lohrasb's functionality enables you to extract the best performance from your model by optimizing its parameters using the most suitable engine.\n### Some examples\nIn the following section, we will showcase several examples that illustrate how users can leverage various optimization engines incorporated within Lohrasb for effective hyperparameter tuning. This guidance aims to equip users with practical knowledge for harnessing the full potential of Lohrasb's diverse optimization capabilities.\n\n#### Utilizing Ray Tune for Hyperparameter Optimization\nLohrasb's optimize_by_tune feature seamlessly integrates the powerful Tune tool from Ray, thereby streamlining hyperparameter optimization for Scikit-learn-based machine learning models. This feature harmoniously combines Tune's robust capabilities with a user-friendly interface, reducing the complexity of hyperparameter tuning and increasing its accessibility. Consequently, optimize_by_tune allows developers to concentrate on core model development while effectively managing hyperparameter optimization. This process leverages the full range of Tune's advanced functionalities. 
See the example below on how to utilize it:\n\n```\nfrom ray.tune.search.hyperopt import HyperOptSearch\nimport optuna\n\n# Datasets\nfrom sklearn.datasets import make_classification, make_regression\n\n# Custom estimator\nfrom lohrasb.best_estimator import BaseModel\n\n# Gradient boosting frameworks\nfrom catboost import CatBoostClassifier\nfrom xgboost import XGBClassifier\n\n# metrics\nfrom sklearn.metrics import r2_score, f1_score\n\n# Imbalanced-learn ensemble\nfrom imblearn.ensemble import BalancedRandomForestClassifier\nfrom ray import air, tune\n\n# others\nfrom sklearn.model_selection import train_test_split\n\nfrom sklearn.linear_model import (\n    Ridge,\n    SGDRegressor,\n)\n\n# Define hyperparameters for each model\nxgb_params = {\n    \"n_estimators\": tune.randint(50, 200),\n    \"max_depth\": tune.randint(6, 15),\n    \"learning_rate\": tune.uniform(0.001, 0.1),\n}\ncb_params = {\n    \"iterations\": tune.randint(50, 200),\n    \"depth\": tune.randint(4, 8),\n    \"learning_rate\": tune.uniform(0.001, 0.1),\n}\nbrf_params = {\n    \"n_estimators\": tune.randint(50, 200),\n    \"max_depth\": tune.choice([None, 10, 20]),\n}\n\n# Put models and hyperparameters into a list of tuples\nestimators_params_clfs = [\n    (XGBClassifier, xgb_params),\n    (CatBoostClassifier, cb_params),\n    (BalancedRandomForestClassifier, brf_params),\n]\n\n# Define hyperparameters for each model\nridge_params_reg = {\"alpha\": tune.uniform(0.5, 1.0)}\nsgr_params_reg = {\n    \"loss\": tune.choice([\"squared_loss\", \"huber\", \"epsilon_insensitive\"]),\n    \"penalty\": tune.choice([\"l2\", \"l1\", \"elasticnet\"]),\n}\n\n# Put models and hyperparameters into a list of tuples\nestimators_params_regs = [(Ridge, ridge_params_reg), (SGDRegressor, sgr_params_reg)]\n\n\ndef using_tune_classification(estimator, params):\n    # Create synthetic dataset\n    search_alg = HyperOptSearch()\n    X, y = make_classification(\n        n_samples=1000,\n        n_features=20,\n        n_informative=3,\n        n_redundant=10,\n        n_classes=3,\n        random_state=42,\n    )\n    X_train, X_test, y_train, y_test = train_test_split(\n        X, y, test_size=0.2, random_state=42\n    )\n\n    # Initialize the estimator\n    est = estimator()\n\n    # Create keyword arguments for tune\n    kwargs = {\n        # define kwargs for base model\n        \"kwargs\": {  # params for fit method\n            \"fit_tune_kwargs\": {\n                \"sample_weight\": None,\n            },\n            # params for TuneCV\n            \"main_tune_kwargs\": {\n                \"cv\": 3,\n                \"scoring\": \"f1_macro\",\n                \"estimator\": est,\n            },\n            # kwargs of Tuner\n            \"tuner_kwargs\": {\n                \"tune_config\": tune.TuneConfig(\n                    search_alg=search_alg,\n                    mode=\"max\",\n                    metric=\"score\",\n                ),\n                \"param_space\": params,\n                \"run_config\": air.RunConfig(stop={\"training_iteration\": 20}),\n            },\n        }\n    }\n\n    # Run optimize_by_tune\n    obj = BaseModel().optimize_by_tune(**kwargs)\n    obj.fit(X_train, y_train)\n\n    # Predict on test data\n    y_pred = obj.predict(X_test)\n    f1 = f1_score(y_test, y_pred, average=\"macro\")\n    print(f\"f1_score is {f1}\")\n\n\ndef using_tune_regressiom(estimator, params):\n    search_alg = HyperOptSearch()\n    # Create synthetic regression dataset\n    X, y = make_regression(\n        
n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1\n    )\n\n    # Initialize the estimator\n    est = estimator()\n\n    # Create keyword arguments for tune\n    kwargs = {\n        # define kwargs for base model\n        \"kwargs\": {  # params for fit method\n            \"fit_tune_kwargs\": {\n                \"sample_weight\": None,\n            },\n            # params for TuneCV\n            \"main_tune_kwargs\": {\n                \"cv\": 3,\n                \"scoring\": \"r2\",\n                \"estimator\": est,\n            },\n            # kwargs of Tuner\n            \"tuner_kwargs\": {\n                \"tune_config\": tune.TuneConfig(\n                    search_alg=search_alg,\n                    mode=\"max\",\n                    metric=\"score\",\n                ),\n                \"param_space\": params,\n                \"run_config\": air.RunConfig(stop={\"training_iteration\": 20}),\n            },\n        }\n    }\n\n    # Create obj of the class\n    obj = BaseModel().optimize_by_tune(**kwargs)\n\n    # Check if instance created successfully\n    assert obj is not None\n\n    # Fit data and predict\n    obj.fit(X, y)\n    predictions = obj.predict(X)\n    r2 = r2_score(y, predictions)\n    print(f\"r2_score is {r2}\")\n\n    (Ridge, ridge_params_reg),\n    (SGDRegressor, sgr_params_reg)\n\n\n# some regression examples\nusing_tune_regressiom(Ridge, ridge_params_reg)\nusing_tune_regressiom(SGDRegressor, sgr_params_reg)\n# some classification examples\nusing_tune_classification(CatBoostClassifier, cb_params)\nusing_tune_classification(XGBClassifier, xgb_params)\nusing_tune_classification(BalancedRandomForestClassifier, brf_params)\n\n```\n#### Embracing GridSearchCV for Hyperparameter Optimization\nThe `optimize_by_gridsearchcv` function in Lohrasb incorporates GridSearchCV's robust capabilities, making the process of hyperparameter optimization streamlined and efficient, specifically for Scikit-learn-based machine learning models. This function merges GridSearchCV's comprehensive search abilities with a user-friendly interface, thereby simplifying hyperparameter tuning and making it more accessible. 
\n```\nfrom sklearn.datasets import make_classification, make_regression\nfrom sklearn.model_selection import KFold, train_test_split\nfrom sklearn.metrics import f1_score, r2_score\nfrom lohrasb.best_estimator import BaseModel\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import Lasso, LinearRegression\n\n# Define hyperparameters for the classifiers and regressors\nrf_params = {\"n_estimators\": [50, 100, 200], \"max_depth\": [None, 10, 20]}\nlr_params_reg = {\"fit_intercept\": [True, False]}\nlasso_params_reg = {\"alpha\": [0.1, 0.5, 1.0]}\n\ndef using_tune_classification(estimator, params):\n    # Create synthetic dataset\n    X, y = make_classification(n_samples=1000, n_features=20, n_informative=3, n_redundant=10, n_classes=3, random_state=42)\n    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n\n    # Initialize the estimator\n    est = estimator()\n\n    # Create the model with the chosen hyperparameters\n    obj = BaseModel().optimize_by_gridsearchcv(\n        kwargs={\n            \"fit_grid_kwargs\": {\n                \"sample_weight\": None,\n            },\n            \"grid_search_kwargs\": {\n                \"estimator\": est,\n                \"param_grid\": params,\n                \"scoring\": \"f1_micro\",\n                \"verbose\": 3,\n                \"n_jobs\": -1,\n                \"cv\": KFold(2),\n            },\n        }\n    )\n\n    # Fit the model and make predictions\n    obj.fit(X_train, y_train)\n    y_pred = obj.predict(X_test)\n\n    # Evaluate the model performance\n    f1 = f1_score(y_test, y_pred, average=\"macro\")\n    print(f\"f1_score is {f1}\")\n\ndef using_tune_regression(estimator, params):\n    # Create synthetic regression dataset\n    X, y = make_regression(n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1)\n\n    # Initialize the estimator\n    est = estimator()\n\n    # Create the model with the chosen hyperparameters\n    obj = BaseModel().optimize_by_gridsearchcv(\n        kwargs={\n            \"fit_grid_kwargs\": {\n                \"sample_weight\": None,\n            },\n            \"grid_search_kwargs\": {\n                \"estimator\": est,\n                \"param_grid\": params,\n                \"scoring\": \"r2\",\n                \"verbose\": 3,\n                \"n_jobs\": -1,\n                \"cv\": KFold(2),\n            },\n        }\n    )\n\n    # Fit the model and make predictions\n    obj.fit(X, y)\n    predictions = obj.predict(X)\n\n    # Evaluate the model performance\n    r2 = r2_score(y, predictions)\n    print(f\"r2_score is {r2}\")\n\n# Regression examples\nusing_tune_regression(Lasso, lasso_params_reg)\nusing_tune_regression(LinearRegression, lr_params_reg)\n\n# Classification examples\nusing_tune_classification(RandomForestClassifier, rf_params)\n\n```\n#### Exploring the Use of RandomizedSearchCV Interface\nThe `optimize_by_randomsearchcv` function in Lohrasb harnesses the robust capabilities of RandomizedSearchCV, thereby simplifying and enhancing the efficiency of hyperparameter optimization, particularly for Scikit-learn-based machine learning models. By merging RandomizedSearchCV's stochastic search capabilities with an intuitive interface, `optimize_by_randomsearchcv` makes the process of hyperparameter tuning more accessible and less complex. 
\n\n```\nfrom sklearn.datasets import make_classification, make_regression\nfrom sklearn.model_selection import KFold, train_test_split\nfrom sklearn.metrics import f1_score, r2_score\nfrom lohrasb.best_estimator import BaseModel\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.linear_model import Ridge\n\n# Define hyperparameters for the classifiers and regressors\nadb_params = {\"n_estimators\": [50, 100, 200], \"learning_rate\": [0.001, 0.01, 0.1]}\nridge_params_reg = {\"fit_intercept\": [True, False]}\n\n\ndef using_tune_classification(estimator, params):\n    # Create synthetic dataset\n    X, y = make_classification(\n        n_samples=1000,\n        n_features=20,\n        n_informative=3,\n        n_redundant=10,\n        n_classes=3,\n        random_state=42,\n    )\n    X_train, X_test, y_train, y_test = train_test_split(\n        X, y, test_size=0.2, random_state=42\n    )\n\n    # Initialize the estimator\n    est = estimator()\n\n    # Create the model with the chosen hyperparameters\n    obj = BaseModel().optimize_by_randomsearchcv(\n        kwargs={\n            \"fit_random_kwargs\": {\n                \"sample_weight\": None,\n            },\n            \"random_search_kwargs\": {\n                \"estimator\": est,\n                \"param_distributions\": params,\n                \"scoring\": \"f1_micro\",\n                \"verbose\": 3,\n                \"n_jobs\": -1,\n                \"cv\": KFold(2),\n                \"n_iter\": 10,\n            },\n            \"main_random_kwargs\": {},\n        }\n    )\n\n    # Fit the model and make predictions\n    obj.fit(X_train, y_train)\n    y_pred = obj.predict(X_test)\n\n    # Evaluate the model performance\n    f1 = f1_score(y_test, y_pred, average=\"macro\")\n    print(f\"f1_score is {f1}\")\n\n\ndef using_tune_regression(estimator, params):\n    # Create synthetic regression dataset\n    X, y = make_regression(\n        n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1\n    )\n\n    # Initialize the estimator\n    est = estimator()\n\n    # Create the model with the chosen hyperparameters\n    obj = BaseModel().optimize_by_randomsearchcv(\n        kwargs={\n            \"fit_random_kwargs\": {\n                \"sample_weight\": None,\n            },\n            \"random_search_kwargs\": {\n                \"estimator\": est,\n                \"param_distributions\": params,\n                \"scoring\": \"r2\",\n                \"verbose\": 3,\n                \"n_jobs\": -1,\n                \"cv\": KFold(2),\n                \"n_iter\": 10,\n            },\n            \"main_random_kwargs\": {},\n        }\n    )\n\n    # Fit the model and make predictions\n    obj.fit(X, y)\n    predictions = obj.predict(X)\n\n    # Evaluate the model performance\n    r2 = r2_score(y, predictions)\n    print(f\"r2_score is {r2}\")\n\n\n# Regression examples\nusing_tune_regression(Ridge, ridge_params_reg)\n\n# Classification examples\nusing_tune_classification(AdaBoostClassifier, adb_params)\n```\n#### Streamlining Optimization with `optimize_by_optunasearchcv`\nLohrasb's `optimize_by_optunasearchcv` utilizes the power and flexibility of OptunaSearchCV, streamlining hyperparameter optimization for Scikit-learn models. This function melds Optuna's robust search abilities with an intuitive interface, simplifying tuning tasks. It allows developers to focus on key model development aspects while managing hyperparameter optimization using OptunaSearchCV's advanced features. 
#### Streamlining Optimization with `optimize_by_optunasearchcv`
Lohrasb's `optimize_by_optunasearchcv` utilizes the power and flexibility of OptunaSearchCV, streamlining hyperparameter optimization for Scikit-learn models. It melds Optuna's robust search abilities with an intuitive interface, letting developers focus on model development while OptunaSearchCV's advanced features manage the tuning.

```
from sklearn.datasets import make_classification, make_regression
import optuna
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import f1_score, r2_score
from lohrasb.best_estimator import BaseModel
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import Ridge

# Define hyperparameters for the classifiers and regressors
adb_params = {
    "n_estimators": optuna.distributions.IntDistribution(50, 200),
    "learning_rate": optuna.distributions.FloatDistribution(0.001, 0.1),
}
ridge_params_reg = {
    "fit_intercept": optuna.distributions.CategoricalDistribution(choices=[True, False])
}


def using_tune_classification(estimator, params):
    # Create synthetic classification dataset
    X, y = make_classification(
        n_samples=1000,
        n_features=20,
        n_informative=3,
        n_redundant=10,
        n_classes=3,
        random_state=42,
    )
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Initialize the estimator and create a model with the specified hyperparameters
    est = estimator()
    obj = BaseModel().optimize_by_optunasearchcv(
        kwargs={
            "fit_newoptuna_kwargs": {"sample_weight": None},
            "newoptuna_search_kwargs": {
                "estimator": est,
                "param_distributions": params,
                "scoring": "f1_micro",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_newoptuna_kwargs": {},
        }
    )

    # Fit the model and make predictions
    obj.fit(X_train, y_train)
    y_pred = obj.predict(X_test)

    # Evaluate and print the model performance
    f1 = f1_score(y_test, y_pred, average="macro")
    print(f"f1_score is {f1}")


def using_tune_regression(estimator, params):
    # Create synthetic regression dataset
    X, y = make_regression(
        n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1
    )

    # Initialize the estimator and create a model with the specified hyperparameters
    est = estimator()
    obj = BaseModel().optimize_by_optunasearchcv(
        kwargs={
            "fit_newoptuna_kwargs": {"sample_weight": None},
            "newoptuna_search_kwargs": {
                "estimator": est,
                "param_distributions": params,
                "scoring": "r2",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_newoptuna_kwargs": {},
        }
    )

    # Fit the model and make predictions
    obj.fit(X, y)
    predictions = obj.predict(X)

    # Evaluate and print the model performance
    r2 = r2_score(y, predictions)
    print(f"r2_score is {r2}")


# Run regression examples
using_tune_regression(Ridge, ridge_params_reg)

# Run classification examples
using_tune_classification(AdaBoostClassifier, adb_params)
```
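Optuna's distribution objects also support log-scale sampling, which suits hyperparameters that span orders of magnitude, such as learning rates. For example, in place of the linear `FloatDistribution` above:

```
import optuna

# Log-uniform sampling between 1e-4 and 1e-1 (plain Optuna API).
adb_params_log = {
    "n_estimators": optuna.distributions.IntDistribution(50, 200),
    "learning_rate": optuna.distributions.FloatDistribution(1e-4, 1e-1, log=True),
}
```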
#### Enhancing Optimization with `optimize_by_tunegridsearchcv`
TuneGridSearchCV is a drop-in replacement for Scikit-learn's GridSearchCV that leverages Ray Tune's scalability to search a predefined grid efficiently and comprehensively.

The `optimize_by_tunegridsearchcv` feature in Lohrasb harnesses this capability for models built not only with Scikit-learn but also with CatBoost, LightGBM, and Imbalanced-learn. It wraps TuneGridSearchCV's systematic grid-based search in a user-friendly interface, so developers can focus on core model development while the function manages the detailed tuning process.

```
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import f1_score, r2_score
from lohrasb.best_estimator import BaseModel
from catboost import CatBoostRegressor
from lightgbm import LGBMClassifier

# Define hyperparameters for the classifiers and regressors
cat_params_reg = {"n_estimators": [50, 100, 200], "learning_rate": [0.001, 0.01, 0.1]}
# min_split_gain is LightGBM's counterpart of XGBoost's gamma
lgbm_params = {"max_depth": [5, 6, 7, 10], "min_split_gain": [0.01, 0.1, 1, 1.2]}

def using_tune_classification(estimator, params):
    # Create synthetic classification dataset
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=3, n_redundant=10, n_classes=3, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Initialize the estimator and create a model with the specified hyperparameters
    est = estimator()
    obj = BaseModel().optimize_by_tunegridsearchcv(
        kwargs={
            "fit_tunegrid_kwargs": {"sample_weight": None},
            "tunegrid_search_kwargs": {
                "estimator": est,
                "param_grid": params,
                "scoring": "f1_micro",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_tunegrid_kwargs": {},
        }
    )

    # Fit the model and make predictions
    obj.fit(X_train, y_train)
    y_pred = obj.predict(X_test)

    # Evaluate and print the model performance
    f1 = f1_score(y_test, y_pred, average="macro")
    print(f"f1_score is {f1}")

def using_tune_regression(estimator, params):
    # Create synthetic regression dataset
    X, y = make_regression(n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1)

    # Initialize the estimator and create a model with the specified hyperparameters
    est = estimator()
    obj = BaseModel().optimize_by_tunegridsearchcv(
        kwargs={
            "fit_tunegrid_kwargs": {"sample_weight": None},
            "tunegrid_search_kwargs": {
                "estimator": est,
                "param_grid": params,
                "scoring": "r2",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_tunegrid_kwargs": {},
        }
    )

    # Fit the model and make predictions
    obj.fit(X, y)
    predictions = obj.predict(X)

    # Evaluate and print the model performance
    r2 = r2_score(y, predictions)
    print(f"r2_score is {r2}")

# Run regression examples
using_tune_regression(CatBoostRegressor, cat_params_reg)

# Run classification examples
using_tune_classification(LGBMClassifier, lgbm_params)
```
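One property worth keeping in mind: like GridSearchCV, TuneGridSearchCV fits every combination in the grid, so grid size multiplies quickly. A quick check for the classifier grid above:

```
from itertools import product

lgbm_params = {"max_depth": [5, 6, 7, 10], "min_split_gain": [0.01, 0.1, 1, 1.2]}

# Every combination is evaluated: 4 x 4 = 16 candidates,
# and each candidate is fit once per CV fold (cv=KFold(2)) -> 32 fits.
n_candidates = len(list(product(*lgbm_params.values())))
print(n_candidates, n_candidates * 2)  # 16 32
```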
#### Illustrating the Use of `optimize_by_tunesearchcv`
TuneSearchCV combines the strengths of Ray's Tune with the convenience of Scikit-learn's GridSearchCV and RandomizedSearchCV, providing an optimized and scalable solution for hyperparameter search that handles large numbers of hyperparameters and high-dimensional spaces with precision and speed.

The `optimize_by_tunesearchcv` feature within Lohrasb puts this engine behind a simple interface to make hyperparameter tuning easier and more efficient.
```
# Import necessary libraries
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import f1_score, r2_score
from lohrasb.best_estimator import BaseModel
from sklearn.neural_network import MLPRegressor
from lightgbm import LGBMClassifier

# Define hyperparameters for the MLPRegressor and LGBMClassifier
# These are the values the hyperparameter search will iterate through.
mlp_params_reg = {
    "hidden_layer_sizes": [(5, 5, 5), (5, 10, 5), (10,)],
    "activation": ["tanh", "relu"],
    "solver": ["sgd", "adam"],
    "alpha": [0.0001, 0.05],
    "learning_rate": ["constant", "adaptive"],
}
lgbm_params = {"max_depth": [5, 6, 7, 10]}

# Function for training and evaluating a classification model
def using_tune_classification(estimator, params):
    # Create a synthetic classification dataset with 1000 samples, 20 features, and 3 classes
    X, y = make_classification(
        n_samples=1000,
        n_features=20,
        n_informative=3,
        n_redundant=10,
        n_classes=3,
        random_state=42,
    )
    # Split the dataset into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Initialize the estimator
    est = estimator()

    # Use the hyperparameter search function provided by the BaseModel class to find the best parameters
    obj = BaseModel().optimize_by_tunesearchcv(
        kwargs={
            "fit_tune_kwargs": {"sample_weight": None},
            "tune_search_kwargs": {
                "estimator": est,
                "param_distributions": params,
                "scoring": "f1_micro",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_tune_kwargs": {},
        }
    )

    # Fit the model to the training data
    obj.fit(X_train, y_train)
    # Predict the labels for the test data
    y_pred = obj.predict(X_test)

    # Compute the F1 score of the model
    f1 = f1_score(y_test, y_pred, average="macro")
    print(f"f1_score is {f1}")


# Function for training and evaluating a regression model
def using_tune_regression(estimator, params):
    # Create a synthetic regression dataset with 1000 samples and 10 features
    X, y = make_regression(
        n_samples=1000, n_features=10, n_informative=5, n_targets=1, random_state=1
    )

    # Initialize the estimator
    est = estimator()

    # Use the hyperparameter search function provided by the BaseModel class to find the best parameters
    obj = BaseModel().optimize_by_tunesearchcv(
        kwargs={
            "fit_tune_kwargs": {},
            "tune_search_kwargs": {
                "estimator": est,
                "param_distributions": params,
                "scoring": "r2",
                "verbose": 3,
                "n_jobs": -1,
                "cv": KFold(2),
            },
            "main_tune_kwargs": {},
        }
    )

    # Fit the model to the data
    obj.fit(X, y)
    # Predict the targets for the data
    predictions = obj.predict(X)

    # Compute the R2 score of the model
    r2 = r2_score(y, predictions)
    print(f"r2_score is {r2}")


# Run the regression function using the MLPRegressor and the specified parameters
using_tune_regression(MLPRegressor, mlp_params_reg)

# Run the classification function using the LGBMClassifier and the specified parameters
using_tune_classification(LGBMClassifier, lgbm_params)
```
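The example leaves TuneSearchCV's sampling controls at their defaults. Assuming `tune_search_kwargs` pass straight through to `tune_sklearn.TuneSearchCV`, options such as the trial budget and search strategy could be set there as well; a hypothetical sketch:

```
# Hypothetical additions to tune_search_kwargs; both are real TuneSearchCV
# constructor arguments, but the pass-through is an assumption.
extra_tune_search_kwargs = {
    "n_trials": 20,                   # number of parameter settings sampled (default 10)
    "search_optimization": "random",  # the default strategy; "bayesian", "hyperopt",
                                      # etc. require the corresponding extras installed
}
```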
#### Navigating Hyperparameter Tuning with `optimize_by_optuna`
The `optimize_by_optuna` feature in Lohrasb leverages the extensive capabilities of the Optuna framework to simplify hyperparameter tuning for a wide range of models, including CatBoost, XGBoost, LightGBM, and Scikit-learn estimators. Its interface is deliberately flexible: users pass arguments for the different Optuna submodules, such as `study` and `optimize`, tailoring the optimization task to their specific needs.

In essence, `optimize_by_optuna` makes Optuna's robust capabilities readily accessible, so developers can focus on core model development while the function manages the complexity of hyperparameter optimization.

```
# Import necessary libraries
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, r2_score
from lohrasb.best_estimator import BaseModel
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import Ridge
from optuna.samplers import TPESampler
from optuna.pruners import HyperbandPruner

# Define hyperparameters for the AdaBoostClassifier and Ridge regressor
adb_params = {
    'n_estimators': [50, 200],
    'learning_rate': [0.01, 1.0],
    'algorithm': ['SAMME', 'SAMME.R'],
}
ridge_params_reg = {
    'fit_intercept': [True, False],
    'solver': ['auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag', 'saga']
}

# Function for training and evaluating a classification model
def using_tune_classification(estimator, params):
    # Create a synthetic classification dataset
    X, y = make_classification(
        n_samples=1000,
        n_features=20,
        n_informative=3,
        n_redundant=10,
        n_classes=3,
        random_state=42,
    )
    # Split the dataset into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Initialize the estimator
    est = estimator()

    # Use Optuna for hyperparameter optimization
    obj = BaseModel().optimize_by_optuna(
        kwargs={
            "fit_optuna_kwargs": {},
            "main_optuna_kwargs": {
                "estimator": est,
                "estimator_params": params,
                "refit": True,
                "measure_of_accuracy": 'f1_score(y_true, y_pred, average="weighted")',
            },
            "train_test_split_kwargs": {
                "test_size": 0.3,
            },
            "study_search_kwargs": {
                "storage": None,
                "sampler": TPESampler(),
                "pruner": HyperbandPruner(),
                "study_name": "example of optuna optimizer",
                "direction": "maximize",  # f1 is a score, so higher is better
                "load_if_exists": False,
            },
            "optimize_kwargs": {
                "n_trials": 20,
                "timeout": 600,
                "catch": (),
                "callbacks": None,
                "gc_after_trial": False,
                "show_progress_bar": False,
            },
        }
    )

    # Fit the model and make predictions
    obj.fit(X_train, y_train)
    y_pred = obj.predict(X_test)

    # Evaluate and print the model performance
    f1 = f1_score(y_test, y_pred, average="macro")
    print(f"f1_score is {f1}")


# Function for training and evaluating a regression model
def using_tune_regression(estimator, params):
    # Create a synthetic regression dataset
    X, y = make_regression(
        n_samples=100, n_features=10, n_informative=5, n_targets=1, random_state=1
    )

    # Initialize the estimator
    est = estimator()

    # Use Optuna for hyperparameter optimization
    obj = BaseModel().optimize_by_optuna(
        kwargs={
            "fit_optuna_kwargs": {},
            "main_optuna_kwargs": {
                "estimator": est,
                "estimator_params": params,
                "refit": True,
                "measure_of_accuracy": "mean_absolute_error(y_true, y_pred, multioutput='uniform_average')",
            },
            "train_test_split_kwargs": {
                "test_size": 0.3,
            },
            "study_search_kwargs": {
                "storage": None,
                "sampler": TPESampler(),
                "pruner": HyperbandPruner(),
                "study_name": "example of optuna optimizer",
                "direction": "minimize",  # mean absolute error is an error metric and must be minimized
                "load_if_exists": False,
            },
            "optimize_kwargs": {
                "n_trials": 20,
                "timeout": 600,
                "catch": (),
                "callbacks": None,
                "gc_after_trial": False,
                "show_progress_bar": False,
            },
        }
    )

    # Fit the model and make predictions
    obj.fit(X, y)
    predictions = obj.predict(X)

    # Evaluate and print the model performance
    r2 = r2_score(y, predictions)
    print(f"r2_score is {r2}")


# Run regression examples
using_tune_regression(Ridge, ridge_params_reg)

# Run classification examples
using_tune_classification(AdaBoostClassifier, adb_params)
```
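For orientation, the `study_search_kwargs` and `optimize_kwargs` groups above correspond to Optuna's own `create_study` and `Study.optimize` calls. A minimal sketch of the analogous direct usage (plain Optuna, not Lohrasb's internals):

```
import optuna
from optuna.samplers import TPESampler
from optuna.pruners import HyperbandPruner

def objective(trial):
    # Toy objective: maximize -(x - 2)^2, optimum at x = 2.
    x = trial.suggest_float("x", -10, 10)
    return -((x - 2) ** 2)

study = optuna.create_study(            # <- study_search_kwargs
    sampler=TPESampler(),
    pruner=HyperbandPruner(),
    direction="maximize",
)
study.optimize(objective, n_trials=20)  # <- optimize_kwargs
print(study.best_params)
```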
#### More Real-World Scenarios

Lohrasb is not limited to the functionality above; it offers solutions to a wide variety of hyperparameter-tuning problems. To see how Lohrasb can be applied in practice, visit the [examples](https://github.com/TorkamaniLab/lohrasb/tree/main/examples) page, which demonstrates how its modules can be adapted to specific tuning challenges across different machine learning frameworks.

### Summary
Lohrasb offers a range of modules designed to simplify and streamline hyperparameter optimization across multiple machine learning frameworks. It integrates hyperparameter optimization tools such as Tune, GridSearchCV, RandomizedSearchCV, OptunaSearchCV, TuneGridSearchCV, and TuneSearchCV into a single, easy-to-use interface.

The `optimize_by_tune` feature melds the robust abilities of Tune with a user-friendly interface, while `optimize_by_gridsearchcv` and `optimize_by_randomsearchcv` employ the exhaustive and stochastic search capabilities of GridSearchCV and RandomizedSearchCV, respectively. The `optimize_by_optunasearchcv` function leverages the flexibility of OptunaSearchCV, and `optimize_by_tunegridsearchcv` and `optimize_by_tunesearchcv` utilize Tune's scalability for grid and randomized searches. In addition, the `optimize_by_optuna` function harnesses the extensive capabilities of the Optuna framework, providing a customizable interface for various machine learning tasks. Across frameworks including Scikit-learn, CatBoost, LightGBM, and Imbalanced-learn, Lohrasb provides accessible and efficient tools for hyperparameter tuning, enabling developers to focus on core model development.

### References

We gratefully acknowledge the following open-source libraries, which have been essential for developing Lohrasb:

1. **Scikit-learn** - Pedregosa, F. et al. (2011). Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12, 2825-2830. [Website](https://scikit-learn.org/stable/)

2. **GridSearchCV & RandomizedSearchCV** - Part of the Scikit-learn library; see the citation above.

3. **Tune (Ray)** - Liaw, R., Liang, E., Nishihara, R., Moritz, P., Gonzalez, J. E., & Stoica, I. (2018). Tune: A Research Platform for Distributed Model Selection and Training. arXiv preprint arXiv:1807.05118. [Website](https://docs.ray.io/en/master/tune/)

4. **Optuna** - Akiba, T., Sano, S., Yanase, T., Ohta, T., & Koyama, M. (2019). Optuna: A Next-generation Hyperparameter Optimization Framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '19), 2623–2631. [Website](https://optuna.org/)

5. **Feature-engine** - Galli, S. (2020). Feature-engine. [Website](https://feature-engine.readthedocs.io/)

6. **XGBoost** - Chen, T., & Guestrin, C. (2016). XGBoost: A Scalable Tree Boosting System. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), 785–794. [Website](https://xgboost.readthedocs.io/en/latest/)

7. **CatBoost** - Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., & Gulin, A. (2018). CatBoost: Unbiased Boosting with Categorical Features. In Advances in Neural Information Processing Systems. [Website](https://catboost.ai/)

8. **LightGBM** - Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., & Liu, T.-Y. (2017). LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Advances in Neural Information Processing Systems. [Website](https://lightgbm.readthedocs.io/en/latest/)


### License
Licensed under the [BSD 3-Clause](https://opensource.org/licenses/BSD-3-Clause) License.
    "bugtrack_url": null,
    "license": "BSD-3-Clause license",
    "summary": "This versatile tool streamlines hyperparameter optimization in machine learning workflows.It supports a wide range of search methods, from GridSearchCV and RandomizedSearchCVto advanced techniques like OptunaSearchCV, Ray Tune, and Scikit-Learn Tune.Designed to enhance model performance and efficiency, it's suitable for tasks of any scale.",
    "version": "4.2.0",
    "project_urls": null,
    "split_keywords": [
        "auto ml",
        "pipeline",
        "machine learning"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c8a79b3f1a3988cda7b5625925418ce02dc59fa3dcb5c483f1c3a4326cffb189",
                "md5": "4b4e4ceac28e2301b8580e6fb48034e7",
                "sha256": "52f3e46c5122da06cb9ba9bc6579b91a5ae5ab0c5486e341563f42bcf5e1f15e"
            },
            "downloads": -1,
            "filename": "lohrasb-4.2.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "4b4e4ceac28e2301b8580e6fb48034e7",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 44678,
            "upload_time": "2023-09-09T22:14:18",
            "upload_time_iso_8601": "2023-09-09T22:14:18.716837Z",
            "url": "https://files.pythonhosted.org/packages/c8/a7/9b3f1a3988cda7b5625925418ce02dc59fa3dcb5c483f1c3a4326cffb189/lohrasb-4.2.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "ebaae13b8bf6f42c8d7eac7d2cc65a05a6ba562b20cafb003e1c928136a35f64",
                "md5": "aec0b2cb5c7536557c79a88b8af11224",
                "sha256": "893653808e9568f5e939b92a5e1f5dd8450cf68e786ab51ee34741bcd29b15d3"
            },
            "downloads": -1,
            "filename": "lohrasb-4.2.0.tar.gz",
            "has_sig": false,
            "md5_digest": "aec0b2cb5c7536557c79a88b8af11224",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 45435,
            "upload_time": "2023-09-09T22:14:20",
            "upload_time_iso_8601": "2023-09-09T22:14:20.583526Z",
            "url": "https://files.pythonhosted.org/packages/eb/aa/e13b8bf6f42c8d7eac7d2cc65a05a6ba562b20cafb003e1c928136a35f64/lohrasb-4.2.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-09-09 22:14:20",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "lohrasb"
}
        
Elapsed time: 0.12308s