python-ds-utils

Name: python-ds-utils
Version: 0.0.1
Home page: https://github.com/WittmannF/python_ds_utils
Summary: Data Science Utility Functions
Author: Fernando Wittmann
Requires Python: >=3.7
License: Apache Software License 2.0
Keywords: nbdev, jupyter, notebook, python
Upload time: 2023-10-18 22:25:35
Requirements: no requirements were recorded

# ds_utils

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Install

``` sh
pip install ds_utils
```

## Development notes

This library has been developed using [nbdev](https://nbdev.fast.ai).
Any change or PR must be made directly to the notebooks that generate
each module, and all tests must pass.

## `ml.RegressorCV` Class

### Overview

The
[`RegressorCV`](https://WittmannF.github.io/python_ds_utils/ml.html#regressorcv)
class in the `ml.py` module of the `ds_utils` library trains an
estimator with cross-validation, recording metrics and storing the model
fitted on each fold. The final prediction is the median of the
predictions from the stored models.
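The median-over-folds aggregation described above can be sketched on toy numbers; the prediction lists below are hypothetical stand-ins, not output of the library:

``` python
import statistics

# Hypothetical predictions for the same two samples from three models,
# one trained on each CV fold.
fold_predictions = [
    [10.0, 20.0],  # fold 1 model
    [12.0, 19.0],  # fold 2 model
    [11.0, 25.0],  # fold 3 model
]

# Final prediction per sample: median across the fold models.
final = [statistics.median(col) for col in zip(*fold_predictions)]
print(final)  # [11.0, 20.0]
```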

### Initialization

``` python
RegressorCV(base_reg, cv=5, groups=None, verbose=False, n_bins_stratify=None)
```

#### Parameters

- `base_reg` : Estimator object implementing `fit`. The object to use to
  fit the data.
- `cv` : int or cross-validation generator, default=5. Determines the
  cross-validation splitting strategy.
- `groups` : array-like of shape (n_samples,), default=None. Group
  labels for the samples used while splitting the dataset into
  train/test set.
- `n_bins_stratify` : int, default=None. Number of bins to use for
  stratification.
- `verbose` : bool, default=False. Whether or not to print metrics.

### Attributes

- `cv_results_` : Dictionary containing the results of the
  cross-validation, including the fold number, the regressor, and the
  metrics.
- `oof_train_` : Series containing the out-of-fold predictions on the
  training set.
- `oof_score_` : The R2 score calculated using the out-of-fold
  predictions.
- `oof_mape_` : The Mean Absolute Percentage Error calculated using the
  out-of-fold predictions.
- `oof_rmse_` : The Root Mean Squared Error calculated using the
  out-of-fold predictions.
- `metrics_` : The summary of metrics calculated during the fitting
  process.
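The `oof_score_`, `oof_mape_`, and `oof_rmse_` attributes follow the standard metric definitions. As a sanity check on hypothetical out-of-fold numbers (not library code):

``` python
import math

# Hypothetical out-of-fold predictions vs. true targets.
y_true = [100.0, 200.0, 300.0, 400.0]
y_pred = [110.0, 190.0, 310.0, 390.0]

n = len(y_true)
mean_y = sum(y_true) / n
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
ss_tot = sum((t - mean_y) ** 2 for t in y_true)

r2 = 1 - ss_res / ss_tot                                         # R2 score
mape = sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / n
rmse = math.sqrt(ss_res / n)

print(r2, round(mape, 4), rmse)  # 0.992 0.0521 10.0
```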

### Methods

#### fit(self, X, y, \*\*kwargs)

Trains the base regressor on the provided data using cross-validation
and stores the results.

#### predict(self, X)

Predicts the target variable for the provided features using the median
value of the predictions from the models trained during
cross-validation.

### Example

``` python
from ds_utils.ml import RegressorCV
from sklearn.ensemble import RandomForestRegressor

# Initialize the RegressorCV with a base regressor and cross-validation strategy
reg_cv = RegressorCV(base_reg=RandomForestRegressor(), cv=5, verbose=True)

# Fit the RegressorCV to the training data
reg_cv.fit(X_train, y_train)

# Predict the target variable for new data
predictions = reg_cv.predict(X_new)

# Get the summary of recorded metrics
metrics = reg_cv.metrics_
```

## `ml.RegressorTimeSeriesCV` Class

### Overview

The
[`RegressorTimeSeriesCV`](https://WittmannF.github.io/python_ds_utils/ml.html#regressortimeseriescv)
class in the `ml.py` module of the `ds_utils` library is designed to
train a base regressor using time series cross-validation and record
metrics for each fold.
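Time series cross-validation (scikit-learn's `TimeSeriesSplit` is the usual splitter for this, though that's an assumption here) uses expanding windows: each fold trains on all earlier samples and tests on the next block. A stdlib-only illustration of that scheme:

``` python
def expanding_window_splits(n_samples, n_splits):
    """Expanding-window CV in the style of sklearn's TimeSeriesSplit:
    each fold trains on everything before its test block."""
    test_size = n_samples // (n_splits + 1)
    for i in range(1, n_splits + 1):
        train_end = n_samples - (n_splits - i + 1) * test_size
        yield list(range(train_end)), list(range(train_end, train_end + test_size))

for train, test in expanding_window_splits(10, 4):
    print(train, test)
# [0, 1] [2, 3]
# [0, 1, 2, 3] [4, 5]
# [0, 1, 2, 3, 4, 5] [6, 7]
# [0, 1, 2, 3, 4, 5, 6, 7] [8, 9]
```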

### Initialization

``` python
RegressorTimeSeriesCV(base_reg, cv=5, verbose=False, catboost_use_eval_set=False)
```

#### Parameters

- `base_reg` : Estimator object implementing `fit`. The object to use to
  fit the data.
- `cv` : int, default=5. Determines the cross-validation splitting
  strategy.
- `verbose` : bool, default=False. Whether or not to print metrics.
- `catboost_use_eval_set` : bool, default=False. Whether or not to use
  eval_set in CatBoostRegressor.

### Attributes

- `cv_results_` : List containing the results of the cross-validation,
  including fold number, regressor, train and test indices, and metrics.
- `metrics_` : The summary of metrics calculated during the fitting
  process.
- `y_test_last_fold_` : The true target variable values of the last
  fold.
- `y_pred_last_fold_` : The predicted target variable values of the last
  fold.

### Methods

#### fit(self, X, y, sample_weight=None)

Trains the base regressor on the provided data using time series
cross-validation and stores the results.

#### predict(self, X)

Predicts the target variable for the provided features using the base
regressor trained on the full data.

### Example

``` python
from ds_utils.ml import RegressorTimeSeriesCV
from sklearn.ensemble import RandomForestRegressor

# Initialize the RegressorTimeSeriesCV with a base regressor and cross-validation strategy
reg_tscv = RegressorTimeSeriesCV(base_reg=RandomForestRegressor(), cv=5, verbose=True)

# Fit the RegressorTimeSeriesCV to the training data
reg_tscv.fit(X_train, y_train)

# Predict the target variable for new data
predictions = reg_tscv.predict(X_new)

# Get the summary of recorded metrics
metrics = reg_tscv.metrics_
```

## `ml.KNNRegressor` Class

### Overview

The
[`KNNRegressor`](https://WittmannF.github.io/python_ds_utils/ml.html#knnregressor)
class in the `ml.py` module of the `ds_utils` library is an extension of
the `KNeighborsRegressor` from scikit-learn, with modifications to the
`predict` method to allow different calculations for predictions and to
optionally return the indices of the nearest neighbors.

### Initialization

The initialization parameters are the same as those of the
`KNeighborsRegressor` from scikit-learn. Refer to the [official
documentation](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html)
for details on the parameters.

### Methods

#### predict(self, X, return_match_index=False, pred_calc='mean')

Predicts the target variable for the provided features and allows
different calculations for predictions.

##### Parameters

- `X` : array-like of shape (n_samples, n_features). Test samples.
- `return_match_index` : bool, default=False. Whether to return the
  index of the nearest matched neighbor.
- `pred_calc` : str, default='mean'. The calculation to use for
  predictions. Possible values are 'mean' and 'median'.

##### Returns

- `y_pred` : array of shape (n_samples,) or (n_samples, n_outputs). The
  predicted target variable.
- `nearest_matched_index` : array of shape (n_samples,). The index of
  the nearest matched neighbor. Returned only if
  `return_match_index=True`.
- `neigh_ind` : array of shape (n_samples, n_neighbors). Indices of the
  neighbors in the training set. Returned only if
  `return_match_index=True`.
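The mean/median aggregation and the returned neighbor indices can be illustrated with a toy 1-D k-NN; everything below is a hypothetical stand-in, not the class's implementation:

``` python
import statistics

X_train = [1.0, 2.0, 3.0, 10.0]
y_train = [10.0, 20.0, 30.0, 100.0]

def knn_predict(x, k=3, pred_calc="mean"):
    # Sort training indices by distance to the query point.
    order = sorted(range(len(X_train)), key=lambda i: abs(X_train[i] - x))
    neigh_ind = order[:k]                 # indices of the k nearest neighbors
    targets = [y_train[i] for i in neigh_ind]
    agg = statistics.mean if pred_calc == "mean" else statistics.median
    return agg(targets), neigh_ind[0], neigh_ind

print(knn_predict(2.1, pred_calc="mean"))    # (20.0, 1, [1, 2, 0])
print(knn_predict(2.1, pred_calc="median"))  # (20.0, 1, [1, 2, 0])
```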

### Example

``` python
from ds_utils.ml import KNNRegressor

# Initialize the KNNRegressor with specific parameters
knn_reg = KNNRegressor(n_neighbors=3)

# Fit the KNNRegressor to the training data
knn_reg.fit(X_train, y_train)

# Predict the target variable for new data and return the index of the nearest matched neighbor
predictions, nearest_matched_index, neigh_ind = knn_reg.predict(X_new, return_match_index=True, pred_calc='median')
```

## `ml.AutoRegressor` Class

### Overview

The
[`AutoRegressor`](https://WittmannF.github.io/python_ds_utils/ml.html#autoregressor)
class is designed for performing automated regression tasks, including
preprocessing and model fitting. It supports several regression
algorithms and allows for easy comparison of their performance on a
given dataset. The class provides various methods for model evaluation,
feature importance, and visualization.

### Initialization

``` python
ar = AutoRegressor(
    num_cols,
    cat_cols,
    target_col,
    data=None,
    train=None,
    test=None,
    random_st=42,
    log_target=False,
    estimator='catboost',
    imputer_strategy='simple',
    use_catboost_native_cat_features=False,
    ohe_min_freq=0.05,
    scale_numeric_data=False,
    scale_categoric_data=False,
    scale_target=False
)
```

#### Parameters

- **num_cols**: list
  - List of numerical columns in the dataset.
- **cat_cols**: list
  - List of categorical columns in the dataset.
- **target_col**: str
  - Target column name in the dataset.
- **data**: pd.DataFrame, optional (default=None)
  - Input dataset containing both features and target column.
- **train**: pd.DataFrame, optional (default=None)
  - Training dataset containing both features and target column. Used if
    `data` is not provided.
- **test**: pd.DataFrame, optional (default=None)
  - Testing dataset containing both features and target column. Used if
    `data` is not provided.
- **random_st**: int, optional (default=42)
  - Random state for reproducibility.
- **log_target**: bool, optional (default=False)
  - Whether to use the logarithm of the target variable.
- **estimator**: str or estimator object, optional (default='catboost')
  - The regression algorithm to use. String options are 'catboost',
    'random_forest', and 'linear'.
- **imputer_strategy**: str, optional (default='simple')
  - Imputation strategy for missing values. Options are 'simple' and
    'knn'.
- **use_catboost_native_cat_features**: bool, optional (default=False)
  - Whether to use CatBoost's native categorical feature handling.
- **ohe_min_freq**: float, optional (default=0.05)
  - Minimum frequency for OneHotEncoder to consider a category in
    categorical columns.
- **scale_numeric_data**: bool, optional (default=False)
  - Whether to scale numeric data using StandardScaler.
- **scale_categoric_data**: bool, optional (default=False)
  - Whether to scale categorical data (after one-hot encoding) using
    StandardScaler.
- **scale_target**: bool, optional (default=False)
  - Whether to scale the target variable using StandardScaler.
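`ohe_min_freq` presumably maps to OneHotEncoder's `min_frequency` option, under which categories rarer than the threshold are collapsed into a single infrequent bucket before encoding. A stdlib sketch of that grouping rule (hypothetical, not the library's code):

``` python
from collections import Counter

def group_infrequent(values, min_freq=0.05):
    """Collapse categories whose relative frequency is below min_freq
    into one 'infrequent' bucket."""
    counts = Counter(values)
    n = len(values)
    keep = {c for c, k in counts.items() if k / n >= min_freq}
    return [v if v in keep else "infrequent" for v in values]

colors = ["red"] * 10 + ["blue"] * 9 + ["green"]  # green: 1/20 = 0.05
print(group_infrequent(colors, min_freq=0.1)[-1])  # infrequent
```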

### Methods

#### fit_report(self)

Fits the model to the training data and prints the R2 Score, RMSE, and
MAPE on the test data.

#### test_binary_column(self, binary_column)

Tests the significance of a binary column on the target variable and
returns the p-value of a Mann–Whitney U test.
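The Mann–Whitney U statistic itself just counts pairs where one group beats the other, with ties counting half; computing the p-value requires `scipy.stats.mannwhitneyu`. A stdlib sketch on toy groups split by a hypothetical binary column:

``` python
def mann_whitney_u(a, b):
    """U statistic of sample a vs. b: pairs where a wins, ties count 0.5."""
    return sum(
        1.0 if x > y else 0.5 if x == y else 0.0
        for x in a for y in b
    )

# Target values split by a hypothetical binary column.
group0 = [1, 2, 3]
group1 = [4, 5, 6]
print(mann_whitney_u(group0, group1))  # 0.0 -- group0 never exceeds group1
print(mann_whitney_u(group1, group0))  # 9.0 -- group1 always wins
```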

#### get_coefficients(self)

Returns the coefficients of the model if the estimator is linear,
otherwise returns feature importances.

#### get_feature_importances(self)

Returns the feature importances of the model.

#### get_shap(self, return_shap_values=False)

Generates and plots SHAP values for the model and returns SHAP values if
`return_shap_values` is True.

#### plot_importance(self, feat_imp, graph_title="Model feature importance")

Plots the feature importances provided in `feat_imp` with the specified
`graph_title`.

### Example Usage

``` python
# Initialize AutoRegressor
ar = AutoRegressor(num_cols, cat_cols, target_col, data)

# Fit the model and print the report
ar.fit_report()

# Get and plot feature importances
feat_imp = ar.get_feature_importances()
ar.plot_importance(feat_imp)
```

            
