nyaggle 0.1.6

- Home page: https://github.com/nyanp/nyaggle
- Summary: Code for Kaggle and Offline Competitions.
- Upload time: 2023-07-13 15:01:56
- Author: nyanp
- License: MIT
- Keywords: nyaggle, kaggle

# nyaggle

![GitHub Actions CI Status](https://github.com/nyanp/nyaggle/workflows/Python%20package/badge.svg)
![GitHub Actions CI Status](https://github.com/nyanp/nyaggle/workflows/weekly_test/badge.svg)
![Python Versions](https://img.shields.io/pypi/pyversions/nyaggle.svg?logo=python&logoColor=white)
![Documentation Status](https://readthedocs.org/projects/nyaggle/badge/?version=latest)

[**Documentation**](https://nyaggle.readthedocs.io/en/latest/index.html)
| [**Slide (Japanese)**](https://docs.google.com/presentation/d/1jv3J7DISw8phZT4z9rqjM-azdrQ4L4wWJN5P-gKL6fA/edit?usp=sharing)

**nyaggle** is a utility library for Kaggle and offline competitions.
It is particularly focused on experiment tracking, feature engineering, and validation.

- **nyaggle.ensemble** - Averaging & stacking
- **nyaggle.experiment** - Experiment tracking
- **nyaggle.feature_store** - Lightweight feature storage using feather-format (see the sketch after this list)
- **nyaggle.features** - sklearn-compatible features
- **nyaggle.hyper_parameters** - Collection of GBDT hyper-parameters used in past Kaggle competitions
- **nyaggle.validation** - Adversarial validation & sklearn-compatible CV splitters
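
As a quick taste of `nyaggle.feature_store`, here is a minimal sketch of saving and re-loading a feature as a feather file. The `save_feature`/`load_feature` names and the default `./features/` directory are assumptions based on the library's documentation; check the API reference for the exact signatures.

```python
import pandas as pd
import nyaggle.feature_store as fs

# A toy feature block; in practice this would be an engineered feature.
feature = pd.DataFrame({'user_mean_price': [1.0, 2.5, 3.0]})

# Assumed API: save_feature() dumps the frame to a feather file under
# ./features/, and load_feature() reads it back by name.
fs.save_feature(feature, 'user_mean_price')
restored = fs.load_feature('user_mean_price')
```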

## Installation

You can install nyaggle via pip:

```bash
pip install nyaggle
```

## Examples

### Experiment Tracking

`run_experiment()` is a high-level API for experiments with cross validation.
It outputs parameters, metrics, out-of-fold predictions, test predictions,
feature importance, and `submission.csv` under the specified directory.

To enable mlflow tracking, include the optional `with_mlflow=True` parameter.

```python
from sklearn.model_selection import train_test_split

from nyaggle.experiment import run_experiment
from nyaggle.testing import make_classification_df

X, y = make_classification_df()
X_train, X_test, y_train, y_test = train_test_split(X, y)

params = {
    'n_estimators': 1000,
    'max_depth': 8
}

result = run_experiment(params,
                        X_train,
                        y_train,
                        X_test)

# You can get all the outputs needed in data science competitions with a single API call

print(result.test_prediction)  # Test prediction in numpy array
print(result.oof_prediction)   # Out-of-fold prediction in numpy array
print(result.models)           # Trained models for each fold
print(result.importance)       # Feature importance for each fold
print(result.metrics)          # Evaluation metrics for each fold
print(result.time)             # Elapsed time
print(result.submission_df)    # The output dataframe saved as submission.csv

# ...and all outputs have been saved under the logging directory (default: output/yyyymmdd_HHMMSS).


# You can use it with mlflow and track your experiments through the mlflow UI
result = run_experiment(params,
                        X_train,
                        y_train,
                        X_test,
                        with_mlflow=True)
```

nyaggle also has a low-level API with an interface similar to
[mlflow tracking](https://www.mlflow.org/docs/latest/tracking.html) and [wandb](https://www.wandb.com/).

```python
import numpy as np
import pandas as pd

from nyaggle.experiment import Experiment

# placeholder artifacts so the example is self-contained
predicted = np.zeros(100)
sub = pd.DataFrame({'id': range(100), 'y': predicted})

with Experiment(logging_directory='./output/') as exp:
    # log key-value pair as a parameter
    exp.log_param('lr', 0.01)
    exp.log_param('optimizer', 'adam')

    # log text
    exp.log('blah blah blah')

    # log metric
    exp.log_metric('CV', 0.85)

    # log numpy ndarray, pandas dataframe, and any other artifacts
    exp.log_numpy('predicted', predicted)
    exp.log_dataframe('submission', sub, file_format='csv')
    exp.log_artifact('path-to-your-file')
```

### Feature Engineering

#### Target Encoding with K-Fold
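
Target encoding replaces each category with a statistic of the target variable; computing it out-of-fold, as below, encodes each row using only the other folds and so avoids leaking a row's own target into its feature.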

```python
import pandas as pd

from sklearn.model_selection import KFold
from nyaggle.feature.category_encoder import TargetEncoder


train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
all_df = pd.concat([train, test]).copy()

cat_cols = [c for c in train.columns if train[c].dtype == object]
target_col = 'y'

kf = KFold(5)

# Target encoding with K-fold
te = TargetEncoder(kf.split(train))

# use fit/fit_transform on the train data, then apply transform to the test data
train.loc[:, cat_cols] = te.fit_transform(train[cat_cols], train[target_col])
test.loc[:, cat_cols] = te.transform(test[cat_cols])

# ... or just call fit_transform on the concatenated data
all_df.loc[:, cat_cols] = te.fit_transform(all_df[cat_cols], all_df[target_col])
```

#### Text Vectorization using BERT

You need to install PyTorch in your virtual environment to use `BertSentenceVectorizer`.
MeCab and mecab-python3 are also required if you use the Japanese BERT model.

```python
import pandas as pd
from nyaggle.feature.nlp import BertSentenceVectorizer


train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')

text_cols = ['body']


# extract BERT-based sentence vector
bv = BertSentenceVectorizer(text_columns=text_cols)

text_vector = bv.fit_transform(train)


# BERT + SVD, with cuda
bv = BertSentenceVectorizer(text_columns=text_cols, use_cuda=True, n_components=40)

text_vector_svd = bv.fit_transform(train)

# Japanese BERT
bv = BertSentenceVectorizer(text_columns=text_cols, lang='jp')

japanese_text_vector = bv.fit_transform(train)
```


### Adversarial Validation
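
Adversarial validation trains a classifier to distinguish train rows from test rows: an AUC well above 0.5 indicates a train/test distribution shift, and the returned feature importance highlights the drifting features.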

```python
import pandas as pd
from nyaggle.validation import adversarial_validate

train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')

auc, importance = adversarial_validate(train, test, importance_type='gain')
```

### Validation Splitters

nyaggle provides a set of validation splitters that are compatible with sklearn.

```python
import pandas as pd
from sklearn.model_selection import cross_validate, KFold
from nyaggle.validation import TimeSeriesSplit, Take, Skip, Nth

train = pd.read_csv('train.csv', parse_dates=['dt'])

# time-series split
ts = TimeSeriesSplit(train['dt'])
ts.add_fold(train_interval=('2019-01-01', '2019-01-10'), test_interval=('2019-01-10', '2019-01-20'))
ts.add_fold(train_interval=('2019-01-06', '2019-01-15'), test_interval=('2019-01-15', '2019-01-25'))

cross_validate(..., cv=ts)

# take the first 3 folds out of 10
cross_validate(..., cv=Take(3, KFold(10)))

# skip the first 3 folds, and evaluate the remaining 7 folds
cross_validate(..., cv=Skip(3, KFold(10)))

# evaluate only the 1st fold
cross_validate(..., cv=Nth(1, ts))
```

### Other Awesome Repositories

Here is a list of awesome repositories that provide general utility functions for data science competitions.
Please let me know if you have another one :)

- [jeongyoonlee/Kaggler](https://github.com/jeongyoonlee/Kaggler)
- [mxbi/mlcrate](https://github.com/mxbi/mlcrate)
- [analokmaus/kuma_utils](https://github.com/analokmaus/kuma_utils)
- [Far0n/kaggletils](https://github.com/Far0n/kaggletils)
- [MLWave/Kaggle-Ensemble-Guide](https://github.com/MLWave/Kaggle-Ensemble-Guide)
- [rushter/heamy](https://github.com/rushter/heamy)

            
