happy-learning

Name: happy-learning
Version: 0.5.0
Home page: https://github.com/GianniBalistreri/happy_learning
Summary: Toolbox for reinforced developing of machine learning models (as proof-of-concept)
Upload time: 2023-05-06 12:10:46
Author: Gianni Francesco Balistreri
Requires Python: >=3.6
License: GNU
Keywords: feature-engineering, feature-selection, evolutionary-algorithm, machine-learning, automl, reinforcement-learning, deep-learning, shapley, clustering, pytorch
Requirements: none recorded
            # Happy ;) Learning

## Description:
Toolbox for the reinforced development of machine learning models (as a proof of concept) in Python.
It is specially designed to evolve and optimize machine learning models using evolutionary algorithms, both on the feature engineering side and on the hyper parameter tuning side.

## Table of Content:
1. Installation
2. Requirements
3. Introduction
    - Practical Usage
    - FeatureEngineer
    - FeatureTournament
    - FeatureSelector
    - FeatureLearning
    - ModelGenerator
    - NetworkGenerator
    - ClusteringGenerator
    - GeneticAlgorithm
    - SwarmIntelligence
    - DataMiner


## 1. Installation:
You can easily install Happy Learning on any operating system via `pip install happy_learning`.

## 2. Requirements:
 - ...

## 3. Introduction:
 - Practical Usage:

It covers all aspects of the development process, such as feature engineering, feature and model selection, as well as hyper parameter optimization.

- Feature Engineer:

Process your tabular data smartly. The Feature Engineer module is equipped with all necessary (tabular) feature processing methods. Moreover, it captures metadata about the data set, such as the scaling measurement types of the features, the processing steps taken, etc.
To scale to big data sets, it generates a temporary data file for each feature separately and loads it only when needed for processing.
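The per-feature file strategy described above can be sketched in plain Python. This is a conceptual illustration only, not the library's own implementation; the function names are hypothetical:

```python
import csv
import os

def split_features_to_files(rows, feature_names, out_dir):
    """Write each feature (column) to its own CSV file so that a single
    feature can later be loaded without reading the whole data set."""
    paths = {}
    for i, name in enumerate(feature_names):
        path = os.path.join(out_dir, f"{name}.csv")
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([name])          # header: feature name
            for row in rows:
                writer.writerow([row[i]])
        paths[name] = path
    return paths

def load_feature(path):
    """Load one feature column back into a Python list (as strings)."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                         # skip header
        return [r[0] for r in reader]
```

Loading one small column file at a time keeps peak memory bounded by a single feature rather than the full table.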

 - Feature Learning:
 
It combines the Feature Engineer module and the Genetic Algorithm module to create a reinforcement learning environment that smartly generates new features.
The module creates separate learning environments for categorical and continuous features: the categorical features are one-hot encoded and then unified (one-hot merging), whereas the (semi-)continuous features are systematically processed using several transformation and interaction methods.
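The one-hot encoding and merging steps can be sketched in plain Python; this is a conceptual illustration (the function names are hypothetical, not the library's API):

```python
def one_hot(values):
    """One-hot encode a categorical feature into a {category: 0/1 column} dict."""
    categories = sorted(set(values))
    return {c: [1 if v == c else 0 for v in values] for c in categories}

def merge_one_hot(a, b):
    """Unify two one-hot columns by element-wise OR ('one-hot merging')."""
    return [int(x or y) for x, y in zip(a, b)]
```

Merging two indicator columns this way produces a new binary feature that fires whenever either original category is present.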

 - Feature Tournament:
 
The feature tournament is a process to evaluate the importance of each feature with respect to a specific target feature. It uses the concept of (additive) Shapley values to calculate the importance score.

    -- Data Typing:

        Check whether the data types represented by Pandas match the real data types occurring in the data
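The Shapley value idea behind the tournament can be illustrated with an exact computation for a tiny coalition game: each feature's importance is its average marginal contribution over all join orders. This is a conceptual sketch (exact enumeration is only feasible for a handful of players), not the library's implementation:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average marginal contribution of each
    player over all join orders of the coalition."""
    shap = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            shap[p] += value(frozenset(coalition)) - before
    n = len(orders)
    return {p: s / n for p, s in shap.items()}
```

By the efficiency property, the scores sum to the value of the full coalition, so they distribute the model's total performance across the features.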

- Feature Selector:

The Feature Selector module applies the feature tournament to calculate feature importance scores and automatically selects the best n features based on the scoring.
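The selection step itself reduces to ranking features by score and keeping the top n; a minimal sketch (hypothetical helper, not the library's API):

```python
def select_top_n(importance, n):
    """Select the n features with the highest importance scores.

    importance: dict mapping feature name -> score
    """
    ranked = sorted(importance, key=importance.get, reverse=True)
    return ranked[:n]
```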

- ModelGenerator:

The ModelGenerator module generates supervised machine learning models and all necessary hyper parameters for structured (tabular) data.

      -- Model / Hyper parameter:

         Classification models ...
            -> Ada Boosting (ada)
            -> Cat Boost (cat)
            -> Gradient Boosting Decision Tree (gbo)
            -> K-Nearest Neighbor (knn)
            -> Linear Discriminant Analysis (lida)
            -> Logistic Regression (log)
            -> Quadratic Discriminant Analysis (qda)
            -> Random Forest (rf)
            -> Support-Vector Machine (svm)
            -> Nu-Support-Vector Machine (nusvm)
            -> Extreme Gradient Boosting Decision Tree (xgb)

         Regression models ...
            -> Ada Boosting (ada)
            -> Cat Boost (cat)
            -> Elastic Net (elastic)
            -> Generalized Additive Models (gam)
            -> Gradient Boosting Decision Tree (gbo)
            -> K-Nearest Neighbor (knn)
            -> Random Forest (rf)
            -> Support-Vector Machine (svm)
            -> Nu-Support-Vector Machine (nusvm)
            -> Extreme Gradient Boosting Decision Tree (xgb)

- NetworkGenerator:

The NetworkGenerator module generates neural network architectures and all necessary hyper parameters for text data using PyTorch.

      -- Model / Hyper parameter:

         -> Attention Network (att)
         -> Gated Recurrent Unit (gru)
         -> Long Short-Term Memory (lstm)
         -> Multi-Layer Perceptron (mlp)
         -> Recurrent Neural Network (rnn)
         -> Recurrent Convolutional Neural Network (rcnn)
         -> Self-Attention (self)
         -> Transformer (trans)

- ClusteringGenerator:

The ClusteringGenerator module generates unsupervised machine learning models and all necessary hyper parameters for text clustering.

      -- Model / Hyper parameter:

         -> Gibbs-Sampling Dirichlet Multinomial Modeling (gsdmm)
         -> Latent Dirichlet Allocation (lda)
         -> Latent Semantic Indexing (lsi)
         -> Non-Negative Matrix Factorization (nmf)

- GeneticAlgorithm:

Reinforcement learning module used either to evaluate the fittest model / hyper parameter configuration or to engineer (tabular) features.
It captures several evaluation statistics regarding the evolution process as well as the model performance metrics.
Moreover, it is able to transfer knowledge across re-trainings.

    -- Model / Hyperparameter Optimization:

        Optimize model / hyper parameter selection ...
            -> Sklearn models
            -> Popular "stand alone" models like XGBoost, CatBoost, etc.
            -> Deep Learning models (using PyTorch only)
            -> Text clustering models (document & short-text)

    -- Feature Engineering / Selection:

        Optimize feature engineering / selection using processing methods from Feature Engineer module ...
            -> Choose only features of fittest models to apply feature engineering based on the action space of the Feature Engineer module
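The evolutionary loop (selection of the fittest, crossover, mutation) can be sketched for a single numeric hyper parameter. This is a minimal conceptual illustration under simplified assumptions (real-valued gene, averaging crossover, Gaussian mutation), not the library's GeneticAlgorithm API:

```python
import random

def genetic_search(fitness, low, high, pop_size=20, generations=40,
                   mutation_rate=0.3, seed=42):
    """Minimal genetic algorithm over one numeric hyper parameter:
    keep the fittest half, breed children by averaging two parents,
    and occasionally mutate with Gaussian noise."""
    rng = random.Random(seed)
    pop = [rng.uniform(low, high) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2.0                 # crossover
            if rng.random() < mutation_rate:      # mutation
                child += rng.gauss(0.0, (high - low) * 0.05)
            children.append(min(high, max(low, child)))
        pop = parents + children
    return max(pop, key=fitness)
```

In practice the fitness function would train and score a model for the candidate configuration; here any callable works.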

- SwarmIntelligence:

Reinforcement learning module used either to evaluate the fittest model / hyper parameter configuration or to engineer (tabular) features.
It captures several evaluation statistics regarding the evolution process as well as the model performance metrics.
Moreover, it is able to transfer knowledge across re-trainings.

    -- Model / Hyper parameter Optimization:

        Optimize model / hyper parameter selection ...
            -> Sklearn models
            -> Popular "stand alone" models like XGBoost, CatBoost, etc.
            -> Deep Learning models (using PyTorch only)
            -> Text clustering models (document & short-text)

    -- Feature Engineering / Selection:

        Optimize feature engineering / selection using processing methods from Feature Engineer module ...
            -> Choose only features of fittest models to apply feature engineering based on the action space of the Feature Engineer module
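Swarm-based search can likewise be sketched as a minimal particle swarm optimization over one numeric hyper parameter: particles move under inertia, attraction to their personal best, and attraction to the global best. A conceptual illustration under simplified assumptions, not the library's SwarmIntelligence API:

```python
import random

def particle_swarm(fitness, low, high, n_particles=15, iterations=60,
                   inertia=0.6, c_personal=1.5, c_global=1.5, seed=0):
    """Minimal particle swarm optimization over one numeric hyper parameter."""
    rng = random.Random(seed)
    pos = [rng.uniform(low, high) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best_pos = pos[:]                       # each particle's personal best
    g_best = max(pos, key=fitness)          # global best so far
    for _ in range(iterations):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (inertia * vel[i]
                      + c_personal * r1 * (best_pos[i] - pos[i])
                      + c_global * r2 * (g_best - pos[i]))
            pos[i] = min(high, max(low, pos[i] + vel[i]))
            if fitness(pos[i]) > fitness(best_pos[i]):
                best_pos[i] = pos[i]
            if fitness(pos[i]) > fitness(g_best):
                g_best = pos[i]
    return g_best
```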

- DataMiner:

Combines all modules for handling structured (tabular) data sets. 
Therefore, it uses the ...
   -> Feature Engineer module to pre-process data in general (imputation, label encoding, date feature processing, etc.)
   -> Feature Learning module to smartly engineer tabular features
   -> Feature Selector module to select the most important features
   -> GeneticAlgorithm / SwarmIntelligence module to find a proper model and hyper parameter configuration by itself.

- TextMiner:

Exploit text data (natural language) by generating various numerical features that describe the text.

    -- Segmentation:

        Categorize potential text features into the following segments ...
            -> Web features
                1) URL
                2) EMail
            -> Enumerated features
            -> Natural language (original text features)
            -> Identifier (original id features)
            -> Unknown

    -- Simple text processing:
        Apply simple processing methods to text features
            -> Merge two text features by given separator
            -> Replace occurrences
            -> Subset data set or feature list by given string
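The three simple operations above amount to element-wise string handling; a minimal sketch with hypothetical helper names (not the library's API):

```python
def merge_text(a, b, separator=" "):
    """Merge two text features row-wise with a given separator."""
    return [f"{x}{separator}{y}" for x, y in zip(a, b)]

def replace_occurrences(texts, old, new):
    """Replace every occurrence of a substring in each text value."""
    return [t.replace(old, new) for t in texts]

def subset_by_string(texts, needle):
    """Keep only the text values that contain a given string."""
    return [t for t in texts if needle in t]
```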

    -- Language methods:
        Apply methods to ...
            -> ... detect language in text
            -> ... translate using Google Translate under the hood

    -- Generate linguistic features:
        Apply semantic text processing to generate numeric features
            -> Clean-text counter (text after removing stop words, punctuation and special characters, and lemmatizing)
            -> Part-of-Speech Tagging counter & labels
            -> Named Entity Recognition counter & labels
            -> Dependencies counter & labels (Tree based / Noun Chunks)
            -> Emoji counter & labels

    -- Generate similarity / clustering features:
        Apply similarity methods to generate continuous features using word embeddings
            -> TF-IDF
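The TF-IDF weighting can be sketched in pure Python over pre-tokenized documents. This is a conceptual illustration using a smoothed inverse document frequency (one common variant; the library may weight differently):

```python
import math
from collections import Counter

def tf_idf(documents):
    """Compute per-document TF-IDF weights from tokenized documents.

    tf  = term count / document length
    idf = log((1 + n_docs) / (1 + doc_freq)) + 1   (smoothed)
    """
    n_docs = len(documents)
    df = Counter()
    for doc in documents:
        df.update(set(doc))                 # document frequency per term
    weights = []
    for doc in documents:
        tf = Counter(doc)
        total = len(doc)
        weights.append({
            term: (count / total) * (math.log((1 + n_docs) / (1 + df[term])) + 1)
            for term, count in tf.items()
        })
    return weights
```

Terms shared by every document get the minimum idf of 1, so rarer terms receive proportionally higher weights.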

## 4. Documentation & Examples:

Check methodology.pdf for the documentation and the Jupyter notebooks for examples. Happy ;) Learning

            
