# AutoVF
XGBoost + Optuna: a no-brainer
- auto-train XGBoost directly from CSV files
- auto-tune XGBoost using Optuna
- auto-serve the best XGBoost model using FastAPI
NOTE: PRs are currently not accepted. If you run into a problem, please open an issue.
# Installation
Install using pip:
```bash
pip install autovf
```
# Usage
Training a model using AutoVF is a piece of cake. All you need is some tabular data.
## Parameters
```python
###############################################################################
### required parameters
###############################################################################
# path to training data
train_filename = "data_samples/binary_classification.csv"
# path to output folder to store artifacts
output = "output"
###############################################################################
### optional parameters
###############################################################################
# path to test data. if specified, the model will be evaluated on the test data
# and test_predictions.csv will be saved to the output folder
# if not specified, only OOF predictions will be saved
# test_filename = "test.csv"
test_filename = None
# task: classification or regression
# if not specified, the task will be inferred automatically
# task = "classification"
# task = "regression"
task = None
# name of the id column
# if not specified, the id column will be generated automatically with the name `id`
# idx = "id"
idx = None
# targets is a list of target column names
# if not specified, the target column will be assumed to be named `target`
# and the problem will be treated as one of: binary classification, multiclass classification,
# or single-column regression
# targets = ["target"]
# targets = ["target1", "target2"]
targets = ["income"]
# features is a list of feature column names
# if not specified, all columns except `id`, `targets` & `kfold` columns will be used
# features = ["col1", "col2"]
features = None
# categorical_features is a list of categorical column names
# if not specified, categorical columns will be inferred automatically
# categorical_features = ["col1", "col2"]
categorical_features = None
# use_gpu is a boolean
# if not specified, GPU is not used
# use_gpu = True
# use_gpu = False
use_gpu = True
# number of folds to use for cross-validation
# default is 5
num_folds = 5
# random seed for reproducibility
# default is 42
seed = 42
# number of optuna trials to run
# default is 1000
# num_trials = 1000
num_trials = 100
# time_limit for optuna trials in seconds
# if not specified, timeout is not set and all trials are run
# time_limit = None
time_limit = 360
# if fast is set to True, the hyperparameter tuning will use only one fold
# however, the model will be trained on all folds in the end
# to generate OOF predictions and test predictions
# default is False
# fast = False
fast = False
```
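To make these settings concrete, here is a rough, hand-rolled sketch of the kind of loop AutoVF automates: k-fold cross-validation over the CSV, an Optuna objective that tunes a handful of common XGBoost hyperparameters, and the best parameters reported at the end. This is not AutoVF's internal code; the file path, target name, fold count, seed, trial count, and time limit simply mirror the values above, and the one-hot encoding and log-loss objective are stand-ins for whatever preprocessing and metric the library actually uses.

```python
import optuna
import pandas as pd
import xgboost as xgb
from sklearn.metrics import log_loss
from sklearn.model_selection import StratifiedKFold

df = pd.read_csv("data_samples/binary_classification.csv")
X = pd.get_dummies(df.drop(columns=["income"]))  # naive one-hot; a stand-in for AutoVF's preprocessing
y = df["income"].astype("category").cat.codes    # label-encode the target if it is a string column

folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

def objective(trial):
    # a small, illustrative search space -- not AutoVF's actual one
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "max_depth": trial.suggest_int("max_depth", 2, 10),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    scores = []
    for train_idx, valid_idx in folds.split(X, y):
        model = xgb.XGBClassifier(**params, random_state=42)
        model.fit(X.iloc[train_idx], y.iloc[train_idx])
        preds = model.predict_proba(X.iloc[valid_idx])
        scores.append(log_loss(y.iloc[valid_idx], preds))
    return sum(scores) / len(scores)  # mean CV log loss, minimized by Optuna

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100, timeout=360)  # num_trials / time_limit from above
print(study.best_params)
```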
# Python API
To train a new model, you can run:
```python
from autovf import AutoVF
# required parameters:
train_filename = "data_samples/binary_classification.csv"
output = "output"
# optional parameters
test_filename = None
task = None
idx = None
targets = ["income"]
features = None
categorical_features = None
use_gpu = True
num_folds = 5
seed = 42
num_trials = 100
time_limit = 360
fast = False
# Now it's time to train the model!
avf = AutoVF(
    train_filename=train_filename,
    output=output,
    test_filename=test_filename,
    task=task,
    idx=idx,
    targets=targets,
    features=features,
    categorical_features=categorical_features,
    use_gpu=use_gpu,
    num_folds=num_folds,
    seed=seed,
    num_trials=num_trials,
    time_limit=time_limit,
    fast=fast,
)
avf.train()
```
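Once `avf.train()` finishes, the artifacts are written to the `output` folder. As noted above, `test_predictions.csv` is produced there only when a `test_filename` is supplied (the names of the other artifact files are not documented here); a quick way to inspect it:

```python
import pandas as pd

# Assumes a test_filename was passed to AutoVF, so test_predictions.csv exists
# in the output folder; with test_filename=None only OOF predictions are saved.
preds = pd.read_csv("output/test_predictions.csv")
print(preds.head())
```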
# CLI
Train the model using the `autovf train` command. The parameters are the same as described above.
```bash
autovf train \
--train_filename datasets/30train.csv \
--output outputs/30days \
--test_filename datasets/30test.csv \
--use_gpu
```
You can also serve the trained model using the `autovf serve` command.
```bash
autovf serve --model_path outputs/mll --host 0.0.0.0 --debug
```
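The README does not spell out the server's prediction endpoint or request format. Since the server is built on FastAPI, one way to discover them is the framework's auto-generated OpenAPI schema (normally at `/openapi.json`, with interactive docs at `/docs`), assuming it has not been disabled. A minimal sketch; the port is a placeholder:

```python
import json
from urllib.request import urlopen

# FastAPI apps typically expose their OpenAPI schema at /openapi.json;
# listing its paths shows which endpoints the served model provides.
# The port is a placeholder -- use the one printed when the server starts.
port = 8000
with urlopen(f"http://127.0.0.1:{port}/openapi.json") as resp:
    schema = json.load(resp)
print(list(schema["paths"].keys()))
```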
To learn more about a command, run:
`autovf <command> --help`
```
autovf train --help
usage: autovf <command> [<args>] train [-h] --train_filename TRAIN_FILENAME [--test_filename TEST_FILENAME] --output
OUTPUT [--task {classification,regression}] [--idx IDX] [--targets TARGETS]
[--num_folds NUM_FOLDS] [--features FEATURES] [--use_gpu] [--fast]
[--seed SEED] [--time_limit TIME_LIMIT]
optional arguments:
-h, --help show this help message and exit
--train_filename TRAIN_FILENAME
Path to training file
--test_filename TEST_FILENAME
Path to test file
--output OUTPUT Path to output directory
--task {classification,regression}
User defined task type
--idx IDX ID column
--targets TARGETS Target column(s). If there are multiple targets, separate by ';'
--num_folds NUM_FOLDS
Number of folds to use
--features FEATURES Features to use, separated by ';'
--use_gpu Whether to use GPU for training
--fast Whether to use fast mode for tuning params. Only one fold will be used if fast mode is set
--seed SEED Random seed
--time_limit TIME_LIMIT
Time limit for optimization
```