xingu

name: xingu
version: 1.7.3
summary: Automated ML model training and packaging
author: Avi Alkalay <avi@unix.sh>
home_page: https://github.com/avibrazil/xingu
upload_time: 2024-05-20 14:40:52
requires_python: >=3.8
license: GNU Lesser General Public License v3 (LGPL-3.0)
keywords: mlengineering, xingu
# Xingu for automated ML model training

Xingu is a framework of a few classes that helps you fully industrialize your
machine learning training and deployment pipelines. Just write your
DataProvider class, mostly in a declarative way, and it completely controls
your training and deployment pipeline.

Notebooks are useful during EDA, but when the modeling is ready to become
a product, use the classes Xingu proposes to organize interactions with the DB
(queries), data cleanup, feature engineering, hyperparameter optimization, the
training algorithm, general and custom metrics computation, and estimation
post-processing.

- Don’t save a pickle at the end of your EDA; let Xingu organize a versioned
  inventory of saved models (PKLs) linked to the commit hashes and branches of
  your code.

- Don’t save metrics manually and informally. Metrics are first-class
  citizens, so use Xingu to write methods that compute metrics and let it
  store them in an organized database that can be queried and compared.

- Don’t make ad-hoc plots to understand your data. Plots are important assets
  for measuring the quality of your model, so use Xingu to write methods that
  formally generate versioned plots.

- Don’t worry about, or even write, code that loads pre-req models; use Xingu’s
  pre-req architecture to load pre-req models for you and package them together.

- Don’t save ad-hoc hyperparameters after optimizations. Let Xingu store and
  manage those for you in a way that can be reused in future training runs.
  
- Don’t change your code when you want different functionality. Use Xingu’s
  environment variables or command-line parameters to strategize your training runs.

- Don’t manually copy PKLs to production environments on S3 or other object
  storage. Use Xingu’s deployment tools to automate the deployment step.
  
- Don’t write database integration code. Just provide your queries and Xingu
  will give you the data. Xingu will also maintain a local cache of your data
  so you won’t hammer your database across multiple retrains. The same works
  for static data files (Parquet, CSV) on a local filesystem or object storage.
  
- Xingu can run anywhere, from your laptop with a plain SQLite database to
  large-scale, cloud-powered training pipelines with GitOps, Jenkins, Docker,
  etc. Xingu’s database is used only to collect training information; it isn’t
  required later when the model is used to predict.

## Install
```shell
pip install git+https://github.com/avibrazil/xingu
```

or

```shell
pip install xingu
```

## Use to Train a Model
Check that your project has the necessary files and folders:
```shell
$ find
dataproviders/
dataproviders/my_dataprovider.py
estimators/
estimators/myrandomestimator.py
models/
data/
plots/
```
Train with DataProviders `id_of_my_dataprovider1` and `id_of_my_dataprovider2`, both defined in `dataproviders/my_dataprovider.py`:
```shell
$ xingu \
    --dps id_of_my_dataprovider1,id_of_my_dataprovider2 \
    --databases athena "awsathena+rest://athena.us..." \
    --query-cache-path data \
    --trained-models-path models \
    --debug
```

## Use the API
See the [proof of concept notebook](https://github.com/avibrazil/xingu/blob/main/notebooks/POC%20Use%20Xingu.ipynb) with various usage scenarios (a minimal Python sketch of the first scenario follows this list):

- POC 1. Train some Models
- POC 2. Use Pre-Trained Models for Batch Predict
- POC 3. Assess Metrics and create Comparative Reports
- POC 4. Check and report how Metrics evolved
- POC 5. Play with Xingu barebones
- POC 6. Play with the `ConfigManager`
- POC 7. Xingu Estimators in the Command Line
- POC 8. Deploy Xingu Data and Estimators between environments (laptop, staging, production etc)
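
For orientation, here is a minimal sketch of what POC 1 might look like. The `Coach` class and its `team_train()` method are documented below in this README; the constructor arguments are assumptions modeled on the CLI flags above and may differ from the real API:

```python
# Hedged sketch, not the exact Xingu API: constructor arguments and
# DataProvider discovery are assumptions based on this README.
import xingu

# Assumption: the Coach can be pointed at the same resources the CLI
# flags (--dps, --query-cache-path, --trained-models-path) configure.
coach = xingu.Coach(
    dps=['id_of_my_dataprovider1', 'id_of_my_dataprovider2'],
    query_cache_path='data',
    trained_models_path='models',
)

# team_train() is documented below: it trains all requested
# DataProviders, possibly in parallel.
coach.team_train()
```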

## Procedures defined by Xingu

Xingu classes do all the heavy lifting while you focus on your machine learning
code only.

- Class `Coach` is responsible for coordinating the training process of one or
multiple models. You control parallelism via command line or environment
variables.

- Class `Model` implements standard pipelines for training, training with
hyperparameter optimization, loading and saving pickles, database access, etc.
These pipelines are fully controlled by your DataProvider or the environment.

- Class `DataProvider` is a base class that is constantly queried by the `Model`
to determine how the `Model` should operate. You should create a class derived
from `DataProvider` and reimplement whatever you want to change. This will
completely change the behaviour of `Model` operations, to the point that you
get a completely different model.

    - It is your `DataProvider` that defines the source of training data as SQL
    queries or URLs of parquets, CSVs, JSONs
    - It is your `DataProvider` that defines how multi-source data should be
    integrated
    - It is your `DataProvider` that defines how data should be split into train
    and test sets
    - Your `DataProvider` defines which `Estimator` class to use
    - Your `DataProvider` defines how the `Estimator` should be initialized and
    optimized
    - Your `DataProvider` defines which metrics should be computed, how to
    compute them and against which dataset
    - Your `DataProvider` defines which plots should be created and against
    which dataset
    - See below for when and how each method of your `DataProvider` will be
    called by `xingu.Model`; a hedged sketch of such a class follows this list
    
- Class `Estimator` is another base class (that you can reimplement) to contain
estimator-specific affairs. There will be one `Estimator`-derived class for an
XGBoostRegressor, another for a CatBoostClassifier, another for a
scikit-learn-specific algorithm, each including hyperparameter optimization
logic and libraries. A concrete `Estimator` class can and should be reused
across multiple different models.
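
To make the declarative style concrete, below is a hedged sketch of a custom `DataProvider`. The overridden method names come from the pipeline diagrams below; the `id` attribute, the exact signatures, and the return conventions are assumptions and may differ from the real API:

```python
import xingu

class MyDataProvider(xingu.DataProvider):
    # Assumption: a DataProvider is identified by an `id` attribute that
    # matches what the CLI receives in --dps.
    id = 'id_of_my_dataprovider1'

    def get_dataset_sources_for_train(self):
        # Documented contract: returns a dict of queries and/or URLs.
        return dict(
            sales='SELECT store_id, price, quantity FROM sales',  # SQL query
            stores='data/stores.parquet',                         # static file
        )

    def clean_data_for_train(self, datasets):
        # Receives the dict of DataFrames loaded from the sources above;
        # here we integrate the multi-source data into a single DataFrame.
        return datasets['sales'].merge(datasets['stores'], on='store_id')

    def feature_engineering_for_train(self, df):
        # Compute features; a trivial example.
        df['revenue'] = df['price'] * df['quantity']
        return df
```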

The hierarchical diagrams below expose complete Xingu pipelines with all their
steps. Steps marked with 💫 are where you put your code. All the rest is Xingu
boilerplate code, ready to use.

### `Coach.team_train()`:

Train various Models, possibly all in parallel.

1. `Coach.team_train_parallel()` (background, parallelism controlled by `PARALLEL_TRAIN_MAX_WORKERS`):
    1. `Coach.team_load()` (for pre-req models not trained in this session)
    2. Per DataProvider requested to be trained:
        1. `Coach.team_train_member()` (background):
            1. `Model.fit()` calls:
                1. 💫`DataProvider.get_dataset_sources_for_train()` returns a dict of queries and/or URLs
                2. `Model.data_sources_to_data(sources)`
                3. 💫`DataProvider.clean_data_for_train(dict of DataFrames)`
                4. 💫`DataProvider.feature_engineering_for_train(DataFrame)`
                5. 💫`DataProvider.last_pre_process_for_train(DataFrame)`
                6. 💫`DataProvider.data_split_for_train(DataFrame)` returns a tuple of DataFrames
                7. `Model.hyperparam_optimize()` (decides the origin of hyperparameters)
                    1. 💫`DataProvider.get_estimator_features_list()`
                    2. 💫`DataProvider.get_target()`
                    3. 💫`DataProvider.get_estimator_optimization_search_space()`
                    4. 💫`DataProvider.get_estimator_hyperparameters()`
                    5. 💫`Estimator.hyperparam_optimize()` (SKOpt, GridSearch, etc.)
                    6. 💫`Estimator.hyperparam_exchange()`
                8. 💫`DataProvider.post_process_after_hyperparam_optimize()`
                9. 💫`Estimator.fit()`
                10. 💫`DataProvider.post_process_after_train()`
2. `Coach.post_train_parallel()` (background, only if `POST_PROCESS=true`):
        1. Per trained Model (parallelism controlled by `PARALLEL_POST_PROCESS_MAX_WORKERS`):
            1. `Model.save()` (PKL save in background)
            2. `Model.trainsets_save()` (save the train datasets, background)
            3. `Model.trainsets_predict()`:
                1. `Model.predict_proba()` or `Model.predict()` (see [below](#predict))
                2. 💫`DataProvider.pre_process_for_trainsets_metrics()`
                3. `Model.compute_and_save_metrics(channel=trainsets)` (see [below](#metrics))
                4. 💫`DataProvider.post_process_after_trainsets_metrics()`
            4. `Coach.single_batch_predict()` (see [below](#batch))
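
The parallelism and post-processing switches named above are plain environment variables. A sketch of setting them before Xingu starts; the values are arbitrary examples, and the variables could equally be exported in the shell before running the `xingu` command:

```python
import os

# Variable names as they appear in the pipeline above; values are examples.
os.environ['PARALLEL_TRAIN_MAX_WORKERS'] = '4'         # train up to 4 models at once
os.environ['POST_PROCESS'] = 'true'                    # enable Coach.post_train_parallel()
os.environ['PARALLEL_POST_PROCESS_MAX_WORKERS'] = '2'  # post-process 2 models at once
```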



<a id='batch'></a>
### `Coach.team_batch_predict()`:

Load pre-trained Models from storage and use them to estimate data from a pre-defined SQL query.
The batch-predict SQL query is defined in the DataProvider, and this process will query the database
to get the data.

1. `Coach.team_load()` (for all requested DPs and their pre-reqs)
2. Per loaded model:
    1. `Coach.single_batch_predict()` (background)
        1. `Model.batch_predict()`
            1. 💫`DataProvider.get_dataset_sources_for_batch_predict()`
            2. `Model.data_sources_to_data()`
            3. 💫`DataProvider.clean_data_for_batch_predict()`
            4. 💫`DataProvider.feature_engineering_for_batch_predict()`
            5. 💫`DataProvider.last_pre_process_for_batch_predict()`
            6. `Model.predict_proba()` or `Model.predict()` (see [below](#predict))
        2. `Model.compute_and_save_metrics(channel=batch_predict)` (see [below](#metrics))
        3. `Model.save_batch_predict_estimations()`
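
From Python, this pipeline can be triggered through the `Coach`. A hedged sketch, with the same constructor assumptions as the earlier example:

```python
import xingu

# Assumption: the Coach accepts the same resources the CLI flags configure.
coach = xingu.Coach(
    dps=['id_of_my_dataprovider1'],
    trained_models_path='models',
)
coach.team_batch_predict()  # documented entry point for batch predicts
```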


<a id='predict'></a>
### `Model.predict()` and `Model.predict_proba()`:

1. `Model.generic_predict()`
    1. 💫`DataProvider.pre_process_for_predict()` or `DataProvider.pre_process_for_predict_proba()`
    2. 💫`DataProvider.get_estimator_features_list()`
    3. 💫`Estimator.predict()` or `Estimator.predict_proba()`
    4. 💫`DataProvider.post_process_after_predict()` or `DataProvider.post_process_after_predict_proba()`
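
As an illustration of these hooks, here is a hedged sketch of a post-processing override. The method name comes from the diagram above, while the signature and the column name are assumptions:

```python
import xingu

class MyDataProvider(xingu.DataProvider):
    def post_process_after_predict(self, df):
        # Assumption: the hook receives and returns a DataFrame of estimations.
        # Example: clamp estimations to a non-negative range.
        df['estimation'] = df['estimation'].clip(lower=0)
        return df
```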


<a id='metrics'></a>
### `Model.compute_and_save_metrics()`:

Subsystem to compute various metrics, graphics, and transformations over
a facet of the data.

This is executed right after a Model is trained and also during a batch predict.

Predicted data is computed before `Model.compute_and_save_metrics()` is called,
by `Model.trainsets_predict()` and `Model.batch_predict()`.

1. `Model.save_model_metrics()` calls:
    1. `Model.compute_model_metrics()` calls:
        1. `Model.compute_trainsets_model_metrics()` calls:
            1. All `Model.compute_trainsets_model_metrics_{NAME}()`
            2. All 💫`DataProvider.compute_trainsets_model_metrics_{NAME}()`
        2. `Model.compute_batch_model_metrics()` calls:
            1. All `Model.compute_batch_model_metrics_{NAME}()`
            2. All 💫`DataProvider.compute_batch_model_metrics_{NAME}()`
        3. `Model.compute_global_model_metrics()` calls:
            1. All `Model.compute_global_model_metrics_{NAME}()`
            2. All 💫`DataProvider.compute_global_model_metrics_{NAME}()`
    2. `Model.render_model_plots()` calls:
        1. `Model.render_trainsets_model_plots()` calls:
            1. All `Model.render_trainsets_model_plots_{NAME}()`
            2. All 💫`DataProvider.render_trainsets_model_plots_{NAME}()`
        2. `Model.render_batch_model_plots()` calls:
            1. All `Model.render_batch_model_plots_{NAME}()`
            2. All 💫`DataProvider.render_batch_model_plots_{NAME}()`
        3. `Model.render_global_model_plots()` calls:
            1. All `Model.render_global_model_plots_{NAME}()`
            2. All 💫`DataProvider.render_global_model_plots_{NAME}()`
2. `Model.save_estimation_metrics()` calls:
    1. `Model.compute_estimation_metrics()` calls:
        1. All `Model.compute_estimation_metrics_{NAME}()`
        2. All 💫`DataProvider.compute_estimation_metrics_{NAME}()`
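
The `{NAME}` suffix above means that every method matching the pattern is discovered and called, so adding a metric is just adding a method. A hedged sketch of a custom metric; the signature and the return convention are assumptions:

```python
import xingu

class MyDataProvider(xingu.DataProvider):
    # Assumption: metric methods receive a DataFrame carrying targets and
    # estimations and return a dict mapping metric names to values.
    def compute_trainsets_model_metrics_mape(self, df):
        # Mean absolute percentage error over the trainsets facet.
        mape = ((df['target'] - df['estimation']).abs() / df['target'].abs()).mean()
        return {'mape': float(mape)}
```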

            
