bridgescaler

Name: bridgescaler
Version: 0.6.0
Home page: https://github.com/NCAR/bridgescaler
Summary: Tool to automagically save scikit-learn scaler properties to a portable, readable format.
Upload time: 2024-03-29 23:29:40
Author: David John Gagne
Requires Python: >=3.7
License: MIT
Keywords: machine learning
# bridgescaler
Bridge your scikit-learn-style scaler parameters between Python sessions and users.
Bridgescaler allows you to save the properties of a scikit-learn-style scaler object
to a JSON file, and then repopulate a new scaler object with the same properties.


## Dependencies
* scikit-learn
* numpy
* pandas
* xarray
* pytdigest

## Installation
For a stable version of bridgescaler, you can install from PyPI.
```bash
pip install bridgescaler
```

For the latest version of bridgescaler, install from GitHub.
```bash
git clone https://github.com/NCAR/bridgescaler.git
cd bridgescaler
pip install .
```

## Usage
bridgescaler supports all the common scikit-learn scaler classes:
* StandardScaler
* RobustScaler
* MinMaxScaler
* MaxAbsScaler
* QuantileTransformer
* PowerTransformer
* SplineTransformer

First, create some synthetic data to transform.
```python
import numpy as np
import pandas as pd

# specify distribution parameters for each variable
locs = np.array([0, 5, -2, 350.5], dtype=np.float32)
scales = np.array([1.0, 10, 0.1, 5000.0])
names = ["A", "B", "C", "D"]
num_examples = 205
x_data_dict = {}
for i in range(locs.shape[0]):
    # sample from a normal distribution with each variable's parameters
    x_data_dict[names[i]] = np.random.normal(loc=locs[i], scale=scales[i], size=num_examples)
x_data = pd.DataFrame(x_data_dict)
```

Now, let's fit and transform the data with StandardScaler.
```python
from sklearn.preprocessing import StandardScaler
from bridgescaler import save_scaler, load_scaler

scaler = StandardScaler()
x_transformed = scaler.fit_transform(x_data)
filename = "x_standard_scaler.json"
# save to json file
save_scaler(scaler, filename)

# create new StandardScaler from json file information.
new_scaler = load_scaler(filename) # new_scaler is a StandardScaler object
```
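Conceptually, a fitted scaler is fully described by a handful of numpy attributes, which is what makes a JSON round-trip possible. Below is a minimal sketch of that idea using only scikit-learn and the standard-library `json` module; it is not bridgescaler's actual implementation, just an illustration of what gets saved and restored.

```python
import json
import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.random.normal(size=(50, 3))
scaler = StandardScaler().fit(x)

# serialize the fitted parameters to a JSON-compatible dict
params = {"mean_": scaler.mean_.tolist(),
          "scale_": scaler.scale_.tolist(),
          "var_": scaler.var_.tolist(),
          "n_samples_seen_": int(scaler.n_samples_seen_),
          "n_features_in_": int(scaler.n_features_in_)}
blob = json.dumps(params)

# rebuild an equivalent scaler, e.g. in a fresh session
restored = StandardScaler()
for attr, value in json.loads(blob).items():
    setattr(restored, attr, np.asarray(value) if isinstance(value, list) else value)

assert np.allclose(scaler.transform(x), restored.transform(x))
```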
### Distributed Scaler
The distributed scalers allow you to calculate scaling
parameters on different subsets of a dataset and then combine the scaling factors
to get representative scaling values for the full dataset. Distributed
Standard Scalers, MinMax Scalers, and Quantile Transformers have been implemented and work with both tabular
and multi-dimensional patch data in numpy array, pandas DataFrame, and xarray DataArray formats.
By default, the scaler assumes your channel/variable dimension is the last
dimension, but if `channels_last=False` is passed to `__init__`, `transform`,
or `inverse_transform`, then the second dimension is assumed to be the variable
dimension. It is possible to fit data with one ordering and then
transform it with a different one.

For large datasets, it can be expensive to recompute the scalers if you want to use a
subset or different ordering of variables. However, in bridgescaler, the
Distributed Scalers all support arbitrary ordering and subsets of variables for transforms if
the input data are in an xarray DataArray or pandas DataFrame with variable
names that match the original data.
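The name-based matching means the transform looks up each column's parameters by name rather than by position. Here is a toy sketch of that idea using a plain dict of per-column statistics and a hypothetical `transform_by_name` helper; this is not bridgescaler's API, only an illustration of the concept.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
train = pd.DataFrame(rng.normal(size=(100, 3)), columns=["t2m", "u10", "v10"])

# per-column statistics keyed by name, as if saved at fit time
stats = {col: (train[col].mean(), train[col].std()) for col in train.columns}

def transform_by_name(df, stats):
    # scale whatever columns are present, in whatever order, by name
    return pd.DataFrame({c: (df[c] - stats[c][0]) / stats[c][1] for c in df.columns})

# a reordered subset of the original variables still transforms correctly
subset = train[["v10", "t2m"]]
z = transform_by_name(subset, stats)
```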

Example:
```python
from bridgescaler.distributed import DStandardScaler
import numpy as np

x_1 = np.random.normal(0, 2.2, (20, 5, 4, 8))
x_2 = np.random.normal(1, 3.5, (25, 4, 8, 5))

dss_1 = DStandardScaler(channels_last=False)
dss_2 = DStandardScaler(channels_last=True)
dss_1.fit(x_1)
dss_2.fit(x_2)
dss_combined = np.sum([dss_1, dss_2])  # the distributed scalers support addition, so np.sum merges them

dss_combined.transform(x_1, channels_last=False)
```
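The combination step is possible because mean and variance can be merged exactly from per-subset sufficient statistics (count, mean, variance), as in Chan et al.'s parallel update formula. A plain-numpy sketch of that idea (not bridgescaler's internal code):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(0.0, 2.2, 10_000)
b = rng.normal(1.0, 3.5, 15_000)

def combine(n1, m1, v1, n2, m2, v2):
    # pooled count, mean, and (population) variance from two subsets
    n = n1 + n2
    delta = m2 - m1
    m = m1 + delta * n2 / n
    v = (n1 * v1 + n2 * v2 + delta**2 * n1 * n2 / n) / n
    return n, m, v

n, m, v = combine(a.size, a.mean(), a.var(), b.size, b.mean(), b.var())

# merging the statistics matches computing them on the full dataset
full = np.concatenate([a, b])
assert np.isclose(m, full.mean()) and np.isclose(v, full.var())
```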

### Group Scaler
The group scalers use the same scaling parameters for a group of similar
variables rather than scaling each column independently. This is useful for situations where variables are related, 
such as temperatures at different height levels.

Groups are specified as a list of column ids, which can be column names for pandas dataframes or column indices
for numpy arrays.

For example:
```python
from bridgescaler.group import GroupStandardScaler
import pandas as pd
import numpy as np
x_rand = np.random.random(size=(100, 5))
data = pd.DataFrame(data=x_rand, 
                    columns=["a", "b", "c", "d", "e"])
groups = [["a", "b"], ["c", "d"], "e"]
group_scaler = GroupStandardScaler()
x_transformed = group_scaler.fit_transform(data, groups=groups)
```

"a" and "b" are a single group and all values of both will be included when calculating the mean and standard 
deviation for that group.
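Concretely, grouping means the statistics are pooled across every value of every column in the group. A plain numpy/pandas sketch of that pooling (not the GroupStandardScaler implementation):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
data = pd.DataFrame(rng.random((100, 5)), columns=list("abcde"))

scaled = data.copy()
for group in [["a", "b"], ["c", "d"], ["e"]]:
    values = data[group].to_numpy()          # pool every value in the group
    mean, std = values.mean(), values.std()  # one mean/std per group
    scaled[group] = (data[group] - mean) / std
```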

### Deep Scaler
The deep scalers are designed to scale 2- or 3-dimensional fields used as input to a
deep learning model such as a convolutional neural network. The scalers assume
that the last dimension is the channel/variable dimension and scale the values accordingly.
The scalers support 2D or 3D patches with no change in code structure.
DeepStandardScaler and DeepQuantileTransformer are provided.

Example:
```python
from bridgescaler.deep import DeepStandardScaler
import numpy as np
np.random.seed(352680)
n_ex = 5000
n_channels = 4
dim = 32
means = np.array([1, 5, -4, 2.5], dtype=np.float32)
sds = np.array([10, 2, 43.4, 32.], dtype=np.float32)
x = np.zeros((n_ex, dim, dim, n_channels), dtype=np.float32)
for chan in range(n_channels):
    x[..., chan] = np.random.normal(means[chan], sds[chan], (n_ex, dim, dim))
dss = DeepStandardScaler()
dss.fit(x)
x_transformed = dss.transform(x)
```
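The fit reduces over every dimension except the trailing channel axis, so each channel gets a single mean and standard deviation regardless of patch rank. A plain numpy sketch of that reduction and the broadcasted transform (not the DeepStandardScaler code):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(500, 32, 32, 4)).astype(np.float32)

# reduce over all axes except the trailing channel axis
reduce_axes = tuple(range(x.ndim - 1))
channel_mean = x.mean(axis=reduce_axes)  # shape (4,)
channel_sd = x.std(axis=reduce_axes)     # shape (4,)

# broadcasting applies each channel's parameters across the whole field
x_scaled = (x - channel_mean) / channel_sd
```

The same `reduce_axes` expression works unchanged for 2D patches `(n, h, w, c)` or 3D patches `(n, d, h, w, c)`, which is why the patch rank does not matter.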

            
