# lofo-importance

* Version: 0.3.4
* Home page: https://github.com/aerdem4/lofo-importance
* Summary: Leave One Feature Out Importance
* Author: Ahmet Erdem
* Uploaded: 2024-01-16 09:19:52
* Keywords: feature, importance, selection, explainable, data-science, machine-learning

![alt text](docs/lofo_logo.png?raw=true "Title")

LOFO (Leave One Feature Out) Importance calculates the importance of a set of features for a model of choice and a metric of choice: it iteratively removes each feature from the set and evaluates how the model's performance changes under a validation scheme of choice.

LOFO first evaluates the performance of the model with all the input features included, then iteratively removes one feature at a time, retrains the model, and evaluates its performance on a validation set. The mean and standard deviation (across the folds) of each feature's importance are then reported.
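The procedure fits in a few lines. Below is a minimal, illustrative sketch, not the library's actual implementation; `lofo_sketch` and its arguments are made-up names for the example:

```python
from sklearn.model_selection import cross_val_score

# Illustrative sketch of the LOFO procedure (not the lofo library's code).
def lofo_sketch(model, df, features, target, cv, scoring):
    # Baseline: cross-validated scores with every feature included.
    base_scores = cross_val_score(model, df[features], df[target], cv=cv, scoring=scoring)
    importances = {}
    for feature in features:
        rest = [f for f in features if f != feature]
        scores = cross_val_score(model, df[rest], df[target], cv=cv, scoring=scoring)
        drops = base_scores - scores  # per-fold performance drop from removing this feature
        importances[feature] = (drops.mean(), drops.std())
    return importances
```

Note that the fold splits must be identical across calls for the per-fold differences to be meaningful, which is why a single deterministic `cv` object is reused.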

If a model is not passed as an argument to LOFO Importance, it uses LightGBM as the default model.

## Install

LOFO Importance can be installed using pip:

```
pip install lofo-importance
```

## Advantages of LOFO Importance

LOFO has several advantages compared to other importance types:

* It does not favor granular features
* It generalises well to unseen test sets
* It is model agnostic
* It gives negative importance to features that hurt performance upon inclusion
* It can group features, which is especially useful for high-dimensional feature sets such as TF-IDF or one-hot encoded features (see the TReNDS example below).
* It can automatically group highly correlated features to avoid underestimating their importance.

## Example on Kaggle's Microsoft Malware Prediction Competition

In this Kaggle competition, Microsoft provides a malware dataset to predict whether or not a machine will soon be hit with malware. One of the features, Census_OSVersion, is very predictive on the training set, since some OS versions are probably more prone to bugs and failures than others. However, upon splitting the data out of time, we obtain validation sets whose OS versions never occur in the training set, so the model cannot have learned the relationship between the target and this seasonal feature. Importance types evaluated only on the training set therefore rank Census_OSVersion highly. LOFO Importance, by contrast, depends on a validation scheme, so it gives this feature not merely low but negative importance.

```python
import pandas as pd
from sklearn.model_selection import KFold
from lofo import LOFOImportance, Dataset, plot_importance
%matplotlib inline

# import data (dtypes: a column-to-dtype mapping defined elsewhere in the kernel)
train_df = pd.read_csv("../input/train.csv", dtype=dtypes)

# extract a sample of the data
sample_df = train_df.sample(frac=0.01, random_state=0)
sample_df.sort_values("AvSigVersion", inplace=True)  # sort by time for time-split validation

# define the validation scheme
cv = KFold(n_splits=4, shuffle=False)  # don't shuffle, to keep the time-split validation

# define the binary target and the features
dataset = Dataset(df=sample_df, target="HasDetections",
                  features=[col for col in train_df.columns if col != "HasDetections"])

# define the validation scheme and scorer. The default model is LightGBM
lofo_imp = LOFOImportance(dataset, cv=cv, scoring="roc_auc")

# get the mean and standard deviation of the importances in pandas format
importance_df = lofo_imp.get_importance()

# plot the means and standard deviations of the importances
plot_importance(importance_df, figsize=(12, 20))
```

![alt text](docs/plot_importance.png?raw=true "Title")
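Since features that hurt performance receive negative importance, a natural follow-up is to drop them before the final fit. A minimal sketch, assuming `get_importance()` returns the means in columns named `feature` and `importance_mean` (as plotted above; treat these names as assumptions):

```python
# Drop every feature whose mean LOFO importance is negative.
# The column names "feature" and "importance_mean" are assumed here.
harmful = importance_df.loc[importance_df["importance_mean"] < 0, "feature"].tolist()
features = [col for col in train_df.columns if col != "HasDetections"]
selected_features = [f for f in features if f not in harmful]
```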

## Another Example: Kaggle's TReNDS Competition

In this Kaggle competition, participants are asked to predict cognitive properties of patients.
Independent component (IC) features from sMRI and very high-dimensional correlation features (FNC) from 3D fMRIs are provided.
LOFO can group the fMRI correlation features into a single feature group.

```python
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
from lofo import LOFOImportance, Dataset, plot_importance

# df, loading_features and fnc_features are defined earlier in the kernel:
# df holds the training data, loading_features the IC columns, fnc_features the FNC columns.
def get_lofo_importance(target):
    cv = KFold(n_splits=7, shuffle=True, random_state=17)

    # keep only rows where the target is not null, and group all FNC features into one
    sub_df = df[df[target].notnull()]
    dataset = Dataset(df=sub_df, target=target, features=loading_features,
                      feature_groups={"fnc": sub_df[fnc_features].values})

    model = Ridge(alpha=0.01)
    lofo_imp = LOFOImportance(dataset, cv=cv, scoring="neg_mean_absolute_error", model=model)

    return lofo_imp.get_importance()

plot_importance(get_lofo_importance(target="domain1_var1"), figsize=(8, 8), kind="box")
```

![alt text](docs/plot_importance_box.png?raw=true "Title")

## FLOFO Importance

If running the LOFO Importance package is too time-costly for you, you can use Fast LOFO (FLOFO). FLOFO takes an already trained model and a validation set as inputs, applies a pseudo-random permutation to each feature's values, one feature at a time, and then uses the trained model to make predictions on the validation set. A feature's FLOFO importance is the mean drop in the model's validation performance across several randomised permutations.
The difference between FLOFO importance and plain permutation importance is that the permutations of a feature's values are done within groups, where the groups are obtained by grouping the validation set by k=2 features. These k features are chosen at random n=10 times, and the mean and standard deviation of the FLOFO importance are calculated over these n runs; a sketch of this grouped permutation follows the list below.
This grouping improves the importance measure because permuting a feature's values is no longer completely random: the permutations happen within groups of similar samples, so they are equivalent to noising the samples. This ensures that:

* The permuted feature values are very unlikely to be replaced by unrealistic values.
* A feature that is predictable from the chosen n*k grouping features will be replaced by very similar values during permutation, so it only slightly affects model performance and receives a small FLOFO importance. This solves the overestimation problem for correlated features.
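As an illustration of the grouped permutation idea (not the lofo library's FLOFO API), here is a sketch under some simplifying assumptions: a binary classifier exposing `predict_proba`, numeric features, ROC AUC as the metric, and ten quantile bins per grouping feature. All names (`flofo_sketch`, `n`, `k`) are made up for the example:

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def flofo_sketch(model, valid_df, features, target, n=10, k=2, seed=0):
    rng = np.random.default_rng(seed)
    base = roc_auc_score(valid_df[target], model.predict_proba(valid_df[features])[:, 1])
    drops = {f: [] for f in features}
    for _ in range(n):
        # Pick k features at random and bin them; rows sharing bins are "similar" samples.
        group_cols = rng.choice(features, size=k, replace=False)
        keys = [pd.qcut(valid_df[c].rank(method="first"), q=10, labels=False)
                for c in group_cols]
        for f in features:
            permuted = valid_df[features].copy()
            # Permute f only within groups of similar rows: a mild, realistic noising.
            permuted[f] = valid_df[f].groupby(keys).transform(
                lambda s: rng.permutation(s.values))
            score = roc_auc_score(valid_df[target],
                                  model.predict_proba(permuted)[:, 1])
            drops[f].append(base - score)
    # Mean and standard deviation of the performance drop over the n runs.
    return {f: (float(np.mean(d)), float(np.std(d))) for f, d in drops.items()}
```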
