pytorch-tabnet

Name: pytorch-tabnet
Version: 4.1.0
Home page: https://github.com/dreamquark-ai/tabnet
Summary: PyTorch implementation of TabNet
Upload time: 2023-07-23 13:26:59
Requires Python: >=3.7
Keywords: tabnet, pytorch, neural-networks

# README

# TabNet : Attentive Interpretable Tabular Learning

This is a PyTorch implementation of TabNet (Arik, S. O., & Pfister, T. (2019). TabNet: Attentive Interpretable Tabular Learning. arXiv preprint arXiv:1908.07442, https://arxiv.org/pdf/1908.07442.pdf). Please note that some design choices have changed over time to improve the library and may differ from the original paper.

<!--- BADGES: START --->
[![CircleCI](https://circleci.com/gh/dreamquark-ai/tabnet.svg?style=svg)](https://circleci.com/gh/dreamquark-ai/tabnet)

[![PyPI version](https://badge.fury.io/py/pytorch-tabnet.svg)](https://badge.fury.io/py/pytorch-tabnet)

![PyPI - Downloads](https://img.shields.io/pypi/dm/pytorch-tabnet)

[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/pytorch-tabnet?logo=pypi&style=flat&color=blue)][#pypi-package]

[![Conda - Platform](https://img.shields.io/conda/pn/conda-forge/pytorch-tabnet?logo=anaconda&style=flat)][#conda-forge-package]

[![Conda (channel only)](https://img.shields.io/conda/vn/conda-forge/pytorch-tabnet?logo=anaconda&style=flat&color=orange)][#conda-forge-package]

[![GitHub - License](https://img.shields.io/github/license/dreamquark-ai/tabnet?logo=github&style=flat&color=green)][#github-license]

[#github-license]: https://github.com/dreamquark-ai/tabnet/blob/main/LICENSE
[#pypi-package]: https://pypi.org/project/pytorch-tabnet/
[#conda-forge-package]: https://anaconda.org/conda-forge/pytorch-tabnet
<!--- BADGES: END --->

Any questions? Want to contribute? Want to talk with us? You can join us on [Slack](https://join.slack.com/t/mltooling/shared_invite/zt-fxaj0qk7-SWy2_~EWyhj4x9SD6gbRvg).

# Installation

## Easy installation

You can install using `pip` or `conda` as follows.

**with pip**

```sh
pip install pytorch-tabnet
```

**with conda**

```sh
conda install -c conda-forge pytorch-tabnet
```

## Source code

If you want to use it locally within a Docker container:

- `git clone git@github.com:dreamquark-ai/tabnet.git`

- `cd tabnet` to get inside the repository

-----------------

#### CPU only

- `make start` to build and get inside the container

#### GPU

- `make start-gpu` to build and get inside the GPU container

-----------------

- `poetry install` to install all the dependencies, including jupyter

- `make notebook` inside the same terminal. You can then follow the link to a Jupyter notebook with TabNet installed.

# What is new?

- From version **> 4.0**, attention is embedding-aware. This aims to maintain a good attention mechanism even with a large number of embeddings. It is also now possible to specify attention groups (using `grouped_features`): attention is then computed at the group level rather than the feature level. This is especially useful when a dataset has many columns coming from a single source of data (for example, a text column transformed using TF-IDF); see the sketch below.
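
A minimal sketch of attention groups (the column indices below are purely illustrative):

```python
from pytorch_tabnet.tab_model import TabNetClassifier

# Hypothetical layout: columns 0-2 come from a TF-IDF transform of one text
# field, columns 5 and 6 from another shared source; attention is shared
# within each group and feature importances are identical inside a group.
clf = TabNetClassifier(
    grouped_features=[[0, 1, 2], [5, 6]],
)
```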

# Contributing

When contributing to the TabNet repository, please make sure to first discuss the change you wish to make via a new or already existing issue.

Our commits follow the rules presented [here](https://www.conventionalcommits.org/en/v1.0.0/).

# What problems does pytorch-tabnet handle?

- TabNetClassifier : binary classification and multi-class classification problems
- TabNetRegressor : simple and multi-task regression problems
- TabNetMultiTaskClassifier : multi-task multi-class classification problems

# How to use it?

TabNet is now scikit-learn compatible, so training a TabNetClassifier or TabNetRegressor is really easy.

```python
from pytorch_tabnet.tab_model import TabNetClassifier, TabNetRegressor

clf = TabNetClassifier()  #TabNetRegressor()
clf.fit(
  X_train, Y_train,
  eval_set=[(X_valid, y_valid)]
)
preds = clf.predict(X_test)
```

or for TabNetMultiTaskClassifier :

```python
from pytorch_tabnet.multitask import TabNetMultiTaskClassifier
clf = TabNetMultiTaskClassifier()
clf.fit(
  X_train, Y_train,
  eval_set=[(X_valid, y_valid)]
)
preds = clf.predict(X_test)
```

The targets in `y_train`/`y_valid` must all share a single type (i.e. they must all be strings or all be integers).

### Default eval_metric

A few classic evaluation metrics are implemented (see further below for custom ones):
- binary classification metrics : 'auc', 'accuracy', 'balanced_accuracy', 'logloss'
- multiclass classification : 'accuracy', 'balanced_accuracy', 'logloss'
- regression: 'mse', 'mae', 'rmse', 'rmsle'


Important note: 'rmsle' automatically clips negative predictions to 0, because the model can predict negative values.
To match the reported scores, use `np.clip(clf.predict(X_predict), a_min=0, a_max=None)` when making predictions (see the sketch below).
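
A minimal sketch, assuming `X_train`, `y_train`, `X_valid`, `y_valid` and `X_test` are already defined:

```python
import numpy as np
from pytorch_tabnet.tab_model import TabNetRegressor

# y_train / y_valid are assumed to already have the shape expected by TabNetRegressor
reg = TabNetRegressor()
reg.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    eval_metric=['rmsle']  # the last metric drives early stopping
)

# 'rmsle' clips negative predictions internally; do the same here to match the reported scores
preds = np.clip(reg.predict(X_test), a_min=0, a_max=None)
```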


### Custom evaluation metrics

You can create a metric for your specific needs. Here is an example for the Gini score (note that you need to specify whether the metric should be maximized or not):

```python
from pytorch_tabnet.tab_model import TabNetClassifier
from pytorch_tabnet.metrics import Metric
from sklearn.metrics import roc_auc_score

class Gini(Metric):
    def __init__(self):
        self._name = "gini"
        self._maximize = True

    def __call__(self, y_true, y_score):
        auc = roc_auc_score(y_true, y_score[:, 1])
        return max(2*auc - 1, 0.)

clf = TabNetClassifier()
clf.fit(
  X_train, Y_train,
  eval_set=[(X_valid, y_valid)],
  eval_metric=[Gini]
)

```

A specific customization example notebook is available here: https://github.com/dreamquark-ai/tabnet/blob/develop/customizing_example.ipynb

# Semi-supervised pre-training

Semi-supervised pre-training, added after TabNet's original paper, is now available via the class `TabNetPretrainer`:

```python
import torch
from pytorch_tabnet.pretraining import TabNetPretrainer
from pytorch_tabnet.tab_model import TabNetClassifier

# TabNetPretrainer
unsupervised_model = TabNetPretrainer(
    optimizer_fn=torch.optim.Adam,
    optimizer_params=dict(lr=2e-2),
    mask_type='entmax' # "sparsemax"
)

unsupervised_model.fit(
    X_train=X_train,
    eval_set=[X_valid],
    pretraining_ratio=0.8,
)

clf = TabNetClassifier(
    optimizer_fn=torch.optim.Adam,
    optimizer_params=dict(lr=2e-2),
    scheduler_params={"step_size":10, # how to use learning rate scheduler
                      "gamma":0.9},
    scheduler_fn=torch.optim.lr_scheduler.StepLR,
    mask_type='sparsemax' # This will be overwritten if using pretrain model
)

clf.fit(
    X_train=X_train, y_train=y_train,
    eval_set=[(X_train, y_train), (X_valid, y_valid)],
    eval_name=['train', 'valid'],
    eval_metric=['auc'],
    from_unsupervised=unsupervised_model
)
```

The loss function has been normalized to be independent of `pretraining_ratio`, `batch_size` and the number of features in the problem.
A self-supervised loss greater than 1 means that your model reconstructs worse than simply predicting the mean of each feature; a loss below 1 means the model is doing better than predicting the mean.

A complete example can be found within the notebook `pretraining_example.ipynb`.

/!\ : the current implementation tries to reconstruct the original inputs, but Batch Normalization applies a random transformation that can't be deduced from a single row, making the reconstruction harder. Lowering the `batch_size` might make the pretraining easier.

# Data augmentation on the fly

It is now possible to apply a custom data augmentation pipeline during training.
Templates for `ClassificationSMOTE` and `RegressionSMOTE` have been added in `pytorch_tabnet/augmentations.py` and can be used as is; see the sketch below.
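
A minimal sketch, assuming the `fit` method accepts the augmentation object via an `augmentations` argument and that `X_train`, `y_train`, `X_valid`, `y_valid` are already defined:

```python
from pytorch_tabnet.tab_model import TabNetClassifier
from pytorch_tabnet.augmentations import ClassificationSMOTE

# SMOTE-like mixing applied on the fly to a fraction of each training batch
aug = ClassificationSMOTE(p=0.2)  # p: probability of augmenting a given row (illustrative value)

clf = TabNetClassifier()
clf.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    augmentations=aug,  # assumed fit() argument for the on-the-fly pipeline
)
```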


# Easy saving and loading

It's really easy to save and re-load a trained model; this makes TabNet production ready.
```python
# save tabnet model
saving_path_name = "./tabnet_model_test_1"
saved_filepath = clf.save_model(saving_path_name)

# define new model with basic parameters and load state dict weights
loaded_clf = TabNetClassifier()
loaded_clf.load_model(saved_filepath)
```

# Useful links

- [explanatory video](https://youtu.be/ysBaZO8YmX8)
- [binary classification examples](https://github.com/dreamquark-ai/tabnet/blob/develop/census_example.ipynb)
- [multi-class classification examples](https://github.com/dreamquark-ai/tabnet/blob/develop/forest_example.ipynb)
- [regression examples](https://github.com/dreamquark-ai/tabnet/blob/develop/regression_example.ipynb)
- [multi-task regression examples](https://github.com/dreamquark-ai/tabnet/blob/develop/multi_regression_example.ipynb)
- [multi-task multi-class classification examples](https://www.kaggle.com/optimo/tabnetmultitaskclassifier)
- [kaggle moa 1st place solution using tabnet](https://www.kaggle.com/c/lish-moa/discussion/201510)

## Model parameters

- `n_d` : int (default=8)

    Width of the decision prediction layer. Bigger values give more capacity to the model, at the risk of overfitting.
    Values typically range from 8 to 64.

- `n_a`: int (default=8)

    Width of the attention embedding for each mask.
    According to the paper, n_d = n_a is usually a good choice.

- `n_steps` : int (default=3)

    Number of steps in the architecture (usually between 3 and 10)  

- `gamma` : float  (default=1.3)

    This is the coefficient for feature reuse in the masks.
    A value close to 1 makes mask selection the least correlated between steps.
    Values range from 1.0 to 2.0.

- `cat_idxs` : list of int (default=[] - Mandatory for embeddings) 

    List of categorical features indices.

- `cat_dims` : list of int (default=[] - Mandatory for embeddings)

    List of the number of modalities for each categorical feature (i.e. the number of unique values of that feature).
    /!\ no new modalities can be predicted at inference time

- `cat_emb_dim` : list of int (optional)

    List of embedding sizes for each categorical feature (default=1). See the combined example after this parameter list.

- `n_independent` : int  (default=2)

    Number of independent Gated Linear Units layers at each step.
    Usual values range from 1 to 5.

- `n_shared` : int (default=2)

    Number of shared Gated Linear Units at each step
    Usual values range from 1 to 5

- `epsilon` : float  (default 1e-15)

    Should be left untouched.

- `seed` : int (default=0)

    Random seed for reproducibility

- `momentum` : float

    Momentum for batch normalization, typically ranges from 0.01 to 0.4 (default=0.02)

- `clip_value` : float (default=None)

    If a float is given, gradients will be clipped at clip_value.
    
- `lambda_sparse` : float (default = 1e-3)

    This is the extra sparsity loss coefficient as proposed in the original paper. The bigger this coefficient is, the sparser your model will be in terms of feature selection. Depending on the difficulty of your problem, reducing this value could help.

- `optimizer_fn` : torch.optim (default=torch.optim.Adam)

    Pytorch optimizer function

- `optimizer_params`: dict (default=dict(lr=2e-2))

    Parameters compatible with `optimizer_fn`, used to initialize the optimizer. Since Adam is the default optimizer, this is mainly used to set the initial learning rate for training. As mentioned in the original paper, a large initial learning rate of `0.02` with decay is a good option.

- `scheduler_fn` : torch.optim.lr_scheduler (default=None)

    Pytorch Scheduler to change learning rates during training.

- `scheduler_params` : dict

    Dictionary of parameters to pass to scheduler_fn. Ex: {"gamma": 0.95, "step_size": 10}

- `model_name` : str (default = 'DreamQuarkTabNet')

    Name of the model used for saving to disk; you can customize this to easily retrieve and reuse your trained models.

- `verbose` : int (default=1)

    Verbosity for notebook plots: set to 1 to print metrics at every epoch, 0 to silence the output.

- `device_name` : str (default='auto')

    'cpu' for CPU training, 'gpu' for GPU training, 'auto' to automatically detect the GPU.

- `mask_type` : str (default='sparsemax')

    Either "sparsemax" or "entmax": the masking function to use for selecting features.

- `grouped_features` : list of list of ints (default=None)

    This allows the model to share its attention across features within the same group.
    This can be especially useful when your preprocessing generates correlated or dependent features, for example when you apply TF-IDF or PCA to a text column.
    Note that feature importance will be exactly the same for all features in a group.
    Please also note that the embeddings generated for a categorical variable always belong to the same group.

- `n_shared_decoder` : int (default=1)

    Number of shared GLU blocks in the decoder; this is only useful for `TabNetPretrainer`.

- `n_indep_decoder` : int (default=1)

    Number of independent GLU blocks in the decoder; this is only useful for `TabNetPretrainer`.
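
As an illustration, here is a hedged sketch combining several of the parameters above (the categorical indices, dimensions and values are purely illustrative, not recommendations):

```python
import torch
from pytorch_tabnet.tab_model import TabNetClassifier

# Hypothetical dataset layout: columns 0 and 3 are categorical, with 10 and 4
# modalities respectively; they get embeddings of size 5 and 3.
clf = TabNetClassifier(
    n_d=16, n_a=16, n_steps=5, gamma=1.5,
    cat_idxs=[0, 3],
    cat_dims=[10, 4],
    cat_emb_dim=[5, 3],
    lambda_sparse=1e-3,
    optimizer_fn=torch.optim.Adam,
    optimizer_params=dict(lr=2e-2),
    scheduler_fn=torch.optim.lr_scheduler.StepLR,
    scheduler_params={"step_size": 10, "gamma": 0.9},
    mask_type='entmax',
    seed=0,
    verbose=1,
)
```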

## Fit parameters

- `X_train` : np.array or scipy.sparse.csr_matrix

    Training features

- `y_train` : np.array

    Training targets

- `eval_set` : list of tuple

    List of eval tuple sets (X, y).
    The last one is used for early stopping.

- `eval_name` : list of str

    List of eval set names.

- `eval_metric` : list of str

    List of evaluation metrics.
    The last metric is used for early stopping.

- `max_epochs` : int (default = 200)

    Maximum number of epochs for training.
    
- `patience` : int (default = 10)

    Number of consecutive epochs without improvement before performing early stopping.

    If patience is set to 0, then no early stopping will be performed.

    Note that if patience is enabled, then best weights from best epoch will automatically be loaded at the end of `fit`.

- `weights` : int or dict (default=0)

    /!\ Only for TabNetClassifier.
    Sampling parameter:
    0 : no sampling
    1 : automated sampling with inverse class occurrences
    dict : keys are classes, values are weights for each class
    (see the combined `fit` example after this list)

- `loss_fn` : torch.loss or list of torch.loss

    Loss function for training (defaults to MSE for regression and cross-entropy for classification).
    When using TabNetMultiTaskClassifier you can pass a list of the same length as the number of tasks;
    each task will then be assigned its own loss function.

- `batch_size` : int (default=1024)

    Number of examples per batch. Large batch sizes are recommended.

- `virtual_batch_size` : int (default=128)

    Size of the mini batches used for "Ghost Batch Normalization".
    /!\ `virtual_batch_size` should divide `batch_size`

- `num_workers` : int (default=0)

    Number of workers used in torch.utils.data.DataLoader

- `drop_last` : bool (default=False)

    Whether to drop the last batch if it is incomplete during training

- `callbacks` : list of callback function

    List of custom callbacks

- `pretraining_ratio` : float

    /!\ TabNetPretrainer only: fraction of input features to mask during pretraining.
    Should be between 0 and 1. The bigger it is, the harder the reconstruction task.

- `warm_start` : bool (default=False)

    In order to match the scikit-learn API, this is set to False by default.
    Setting it to True allows fitting the same model twice and resuming training from the previously learned weights (warm start).

- `compute_importance` : bool (default=True)

    Whether to compute feature importance
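
As an illustration, here is a hedged sketch of a `fit` call combining several of the arguments above (all values are illustrative only, and `clf` is assumed to be a `TabNetClassifier`):

```python
clf.fit(
    X_train, y_train,
    eval_set=[(X_train, y_train), (X_valid, y_valid)],
    eval_name=['train', 'valid'],
    eval_metric=['auc'],          # the last metric drives early stopping
    max_epochs=200,
    patience=20,                  # stop after 20 epochs without improvement
    weights=1,                    # TabNetClassifier only: inverse class-occurrence sampling
    batch_size=1024,
    virtual_batch_size=128,       # must divide batch_size
    num_workers=0,
    drop_last=False,
)
```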

            
