| Field | Value |
| --- | --- |
| Name | xbooster |
| Version | 0.2.2 |
| Summary | Explainable Boosted Scoring |
| Author | xRiskLab |
| License | MIT |
| Requires Python | >=3.9, <3.11 |
| Upload time | 2024-05-08 12:06:49 |
# xbooster 🚀
A scorecard-style classification framework for logistic regression with XGBoost.
xbooster converts an XGBoost logistic regression into a logarithmic (points-based) scoring system.
In addition, it provides a suite of interpretability tools to understand the model's behavior,
which can be instrumental for model testing and expert validation.
The interpretability suite includes:
- Granular boosted tree statistics, including metrics such as Weight of Evidence (WOE) and Information Value (IV) for splits 🌳 (see the formulas below)
- Tree visualization with customizations 🎨
- Global and local feature importance 📊
xbooster also supports scorecard deployment via SQL 📦.
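For reference, WOE and IV follow the standard credit-scoring definitions; under one common sign convention (the package's exact convention may differ), for split bin $i$:

$$
\mathrm{WOE}_i = \ln\frac{\%\,\mathrm{non\text{-}events}_i}{\%\,\mathrm{events}_i},
\qquad
\mathrm{IV} = \sum_i \left(\%\,\mathrm{non\text{-}events}_i - \%\,\mathrm{events}_i\right)\,\mathrm{WOE}_i
$$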
## Installation ⤵
Install the package using pip:
```bash
pip install xbooster
```
## Usage 📝
Here's a quick example of how to use xbooster to construct a scorecard for an XGBoost model:
```python
import pandas as pd
import xgboost as xgb
from xbooster.constructor import XGBScorecardConstructor
from sklearn.model_selection import train_test_split

# Load the example credit dataset
url = (
    "https://github.com/xRiskLab/xBooster/raw/main/examples/data/credit_data.parquet"
)
dataset = pd.read_parquet(url)

features = [
    "external_risk_estimate",
    "revolving_utilization_of_unsecured_lines",
    "account_never_delinq_percent",
    "net_fraction_revolving_burden",
    "num_total_cc_accounts",
    "average_months_in_file",
]
target = "is_bad"

X, y = dataset[features], dataset[target]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train the XGBoost model
best_params = {
    'n_estimators': 100,
    'learning_rate': 0.55,
    'max_depth': 1,
    'min_child_weight': 10,
    'grow_policy': "lossguide",
    'early_stopping_rounds': 5
}
model = xgb.XGBClassifier(**best_params, random_state=62)
# early_stopping_rounds requires an evaluation set; the test split
# doubles as the validation set here for brevity
model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)

# Initialize the XGBScorecardConstructor and build the scorecard
scorecard_constructor = XGBScorecardConstructor(model, X_train, y_train)
scorecard_constructor.construct_scorecard()

# Print the scorecard
print(scorecard_constructor.scorecard)
```
Next, we can assign points to the scorecard and evaluate its Gini score on the test set:
```python
from sklearn.metrics import roc_auc_score
# Create scoring points
xgb_scorecard_with_points = scorecard_constructor.create_points(
    pdo=50, target_points=600, target_odds=50
)
# Make predictions using the scorecard
credit_scores = scorecard_constructor.predict_score(X_test)
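# Negate the scores: higher points correspond to lower risk, so flip the sign for AUC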
gini = roc_auc_score(y_test, -credit_scores) * 2 - 1
print(f"Test Gini score: {gini:.2%}")
```
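For context, `create_points` uses a points-to-double-the-odds (PDO) calibration. A minimal sketch of the usual scorecard formulas, under one common convention (the package's exact implementation may differ):

```python
import math

# Standard PDO scaling: `factor` points are added per doubling of the odds,
# and `offset` anchors target_odds at target_points.
pdo, target_points, target_odds = 50, 600, 50
factor = pdo / math.log(2)
offset = target_points - factor * math.log(target_odds)

# A log-odds of ln(target_odds) maps back to exactly target_points:
print(round(offset + factor * math.log(target_odds)))  # 600
```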
We can also visualize how the score distributions differ between events and non-events:
```python
from xbooster import explainer
explainer.plot_score_distribution(
    y_test,
    credit_scores,
    n_bins=30,
    figsize=(8, 3),
    dpi=100
)
```
We can further examine feature importances.
Below, we visualize global feature importance using Points as the metric:
```python
from xbooster import explainer
explainer.plot_importance(
    scorecard_constructor,
    metric='Points',
    method='global',
    normalize=True,
    figsize=(3, 3)
)
```
Alternatively, we can compute local feature importances, which are particularly useful for boosters with a depth greater than 1.
```python
explainer.plot_importance(
    scorecard_constructor,
    metric='Likelihood',
    method='local',
    normalize=True,
    color='#ffd43b',
    edgecolor='#1e1e1e',
    figsize=(3, 3)
)
```
Finally, we can generate a scorecard in SQL format.
```python
sql_query = scorecard_constructor.generate_sql_query(table_name='my_table')
print(sql_query)
```
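To sanity-check the deployment query, one option is to run it against the scoring data with an in-process SQL engine. A minimal sketch, assuming the `duckdb` package is installed and that the registered table name matches the `table_name` passed above (the output columns depend on the generated query):

```python
import duckdb

con = duckdb.connect()
con.register("my_table", X_test)  # name must match table_name above
scored = con.execute(sql_query).fetchdf()
print(scored.head())
```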
# Parameters 🛠
## `xbooster.constructor` - XGBoost Scorecard Constructor
### Description
A class for generating a scorecard from a trained XGBoost model. The methodology is inspired by the NVIDIA GTC Talk "Machine Learning in Retail Credit Risk" by Paul Edwards.
### Methods
1. `extract_leaf_weights() -> pd.DataFrame`:
- Extracts the leaf weights from the booster's trees and returns a DataFrame.
- **Returns**:
- `pd.DataFrame`: DataFrame containing the extracted leaf weights.
2. `extract_decision_nodes() -> pd.DataFrame`:
- Extracts the split (decision) nodes from the booster's trees and returns a DataFrame.
- **Returns**:
- `pd.DataFrame`: DataFrame containing the extracted split (decision) nodes.
3. `construct_scorecard() -> pd.DataFrame`:
- Constructs a scorecard based on a booster.
- **Returns**:
- `pd.DataFrame`: The constructed scorecard.
4. `create_points(pdo=50, target_points=600, target_odds=19, precision_points=0, score_type='XAddEvidence') -> pd.DataFrame`:
- Creates a points card from a scorecard (see the sketch after this method list).
- **Parameters**:
- `pdo` (int, optional): The points to double the odds. Default is 50.
- `target_points` (int, optional): The standard scorecard points. Default is 600.
- `target_odds` (int, optional): The standard scorecard odds. Default is 19.
- `precision_points` (int, optional): The points decimal precision. Default is 0.
- `score_type` (str, optional): The log-odds to use for the points card. Default is 'XAddEvidence'.
- **Returns**:
- `pd.DataFrame`: The points card.
5. `predict_score(X: pd.DataFrame) -> pd.Series`:
- Predicts the score for a given dataset using the constructed scorecard.
- **Parameters**:
- `X` (`pd.DataFrame`): Features of the dataset.
- **Returns**:
- `pd.Series`: Predicted scores.
6. `sql_query` (property):
- Property that returns the SQL query for deploying the scorecard.
- **Returns**:
- `str`: The SQL query for deploying the scorecard.
7. `generate_sql_query(table_name: str = "my_table") -> str`:
- Converts a scorecard into an SQL format.
- **Parameters**:
- `table_name` (str): The name of the input table in SQL.
- **Returns**:
- `str`: The final SQL query for deploying the scorecard.
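Taken together, a minimal sketch of the constructor workflow (reusing `model`, `X_train`, `y_train`, and `X_test` from the Usage section; the parameter values shown are the documented defaults):

```python
from xbooster.constructor import XGBScorecardConstructor

constructor = XGBScorecardConstructor(model, X_train, y_train)
scorecard = constructor.construct_scorecard()

# Assign points with explicit (default) calibration settings
points_card = constructor.create_points(
    pdo=50, target_points=600, target_odds=19, precision_points=0
)

# Score new data and fetch the deployment SQL
scores = constructor.predict_score(X_test)
sql = constructor.generate_sql_query(table_name="my_table")
```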
## `xbooster.explainer` - XGBoost Scorecard Explainer
This module provides functionality for explaining XGBoost scorecards, including methods to extract split information, build interaction splits, visualize tree structures, plot feature importances, and more.
### Methods
1. `extract_splits_info(features: str) -> list`:
- Extracts split information from the DetailedSplit feature.
- **Inputs**:
- `features` (str): A string containing split information.
- **Outputs**:
- Returns a list of tuples containing split information (feature, sign, value).
2. `build_interactions_splits(scorecard_constructor: Optional[XGBScorecardConstructor] = None, dataframe: Optional[pd.DataFrame] = None) -> pd.DataFrame`:
- Builds interaction splits from the XGBoost scorecard.
- **Inputs**:
- `scorecard_constructor` (Optional[XGBScorecardConstructor]): The XGBoost scorecard constructor.
- `dataframe` (Optional[pd.DataFrame]): The dataframe containing split information.
- **Outputs**:
- Returns a pandas DataFrame containing interaction splits.
3. `split_and_count(scorecard_constructor: Optional[XGBScorecardConstructor] = None, dataframe: Optional[pd.DataFrame] = None, label_column: Optional[str] = None) -> pd.DataFrame`:
- Splits the dataset and counts events for each split.
- **Inputs**:
- `scorecard_constructor` (Optional[XGBScorecardConstructor]): The XGBoost scorecard constructor.
- `dataframe` (Optional[pd.DataFrame]): The dataframe containing features and labels.
- `label_column` (Optional[str]): The label column in the dataframe.
- **Outputs**:
- Returns a pandas DataFrame containing split information and event counts.
4. `plot_importance(scorecard_constructor: Optional[XGBScorecardConstructor] = None, metric: str = "Likelihood", normalize: bool = True, method: Optional[str] = None, dataframe: Optional[pd.DataFrame] = None, **kwargs: Any) -> None`:
- Plots the importance of features based on the XGBoost scorecard.
- **Inputs**:
- `scorecard_constructor` (Optional[XGBScorecardConstructor]): The XGBoost scorecard constructor.
- `metric` (str): Metric to plot ("Likelihood" (default), "NegLogLikelihood", "IV", or "Points").
- `normalize` (bool): Whether to normalize the importance values (default: True).
- `method` (Optional[str]): The method to use for plotting the importance ("global" or "local").
- `dataframe` (Optional[pd.DataFrame]): The dataframe containing features and labels.
- `fontfamily` (str): The font family to use for the plot (default: "Monospace").
- `fontsize` (int): The font size to use for the plot (default: 12).
- `dpi` (int): The DPI of the plot (default: 100).
- `title` (str): The title of the plot (default: "Feature Importance").
- `**kwargs` (Any): Additional Matplotlib parameters.
5. `plot_score_distribution(y_true: pd.Series = None, y_pred: pd.Series = None, n_bins: int = 25, scorecard_constructor: Optional[XGBScorecardConstructor] = None, **kwargs: Any)`:
- Plots the distribution of predicted scores based on actual labels.
- **Inputs**:
- `y_true` (pd.Series): The true labels.
- `y_pred` (pd.Series): The predicted scores.
- `n_bins` (int): Number of bins for histogram (default: 25).
- `scorecard_constructor` (Optional[XGBScorecardConstructor]): The XGBoost scorecard constructor.
- `**kwargs` (Any): Additional Matplotlib parameters.
6. `plot_local_importance(scorecard_constructor: Optional[XGBScorecardConstructor] = None, metric: str = "Likelihood", normalize: bool = True, dataframe: Optional[pd.DataFrame] = None, **kwargs: Any) -> None`:
- Plots the local importance of features based on the XGBoost scorecard.
- **Inputs**:
- `scorecard_constructor` (Optional[XGBScorecardConstructor]): The XGBoost scorecard constructor.
- `metric` (str): Metric to plot ("Likelihood" (default), "NegLogLikelihood", "IV", or "Points").
- `normalize` (bool): Whether to normalize the importance values (default: True).
- `dataframe` (Optional[pd.DataFrame]): The dataframe containing features and labels.
- `fontfamily` (str): The font family to use for the plot (default: "Arial").
- `fontsize` (int): The font size to use for the plot (default: 12).
- `boxstyle` (str): The rounding box style to use for the plot (default: "round").
- `title` (str): The title of the plot (default: "Local Feature Importance").
- `**kwargs` (Any): Additional parameters to pass to the matplotlib function.
7. `plot_tree(tree_index: int, scorecard_constructor: Optional[XGBScorecardConstructor] = None, show_info: bool = True) -> None`:
- Plots the tree structure (see the sketch after this list).
- **Inputs**:
- `tree_index` (int): Index of the tree to plot.
- `scorecard_constructor` (Optional[XGBScorecardConstructor]): The XGBoost scorecard constructor.
- `show_info` (bool): Whether to show additional information (default: True).
- `**kwargs` (Any): Additional Matplotlib parameters.
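A combined sketch of the explainer API (reusing the `scorecard_constructor` from the Usage section; `tree_index=0` simply selects the first boosted tree):

```python
from xbooster import explainer

# Tabulate interaction splits derived from the scorecard
splits = explainer.build_interactions_splits(
    scorecard_constructor=scorecard_constructor
)
print(splits.head())

# Visualize the first boosted tree with split statistics
explainer.plot_tree(
    tree_index=0,
    scorecard_constructor=scorecard_constructor,
    show_info=True,
)
```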
# Contributing 🤝
Contributions are welcome! For bug reports or feature requests, please open an issue.
For code contributions, please open a pull request.
## Version
Current version: 0.2.2
## Changelog
### [0.1.0] - 2024-02-14
- Initial release
### [0.2.0] - 2024-05-03
- Added tree visualization class (`explainer.py`)
- Updated the local explanation algorithm for models with a depth > 1 (`explainer.py`)
- Added a categorical preprocessor (`_utils.py`)
### [0.2.1] - 2024-05-03
- Updated dependencies
### [0.2.2] - 2024-05-08
- Improved kwargs handling in the `explainer.py` module, plus other minor changes
# License 📄
This project is licensed under the MIT License - see the LICENSE file for details.