funpredict

Name: funpredict
Version: 0.0.6
Summary: Introducing Fun Predict, the ultimate time-saver for machine learning! No more complex coding or tedious parameter tuning - just sit back and let Fun Predict build your basic models with ease. It's like having a personal assistant for your machine learning projects, making the process simple, efficient, and, well, Fun! 🛋
Upload time: 2023-11-12 08:55:15
Author: Sushanta Das
Keywords: python, scikit-learn, machine learning, deep learning, computer vision, artificial intelligence
# Fun Predict🤖

Fun Predict is a free, open-source Python library that helps you build and compare machine learning models easily. It trains and evaluates a wide range of standard models on your data without requiring you to write much code or tune hyperparameters, so you can quickly see which approaches are worth pursuing.
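Conceptually, tools like this fit a roster of scikit-learn estimators and tabulate their scores side by side. A minimal sketch of that idea using plain scikit-learn (the three-model roster and metrics here are illustrative, not Fun Predict's actual internals):

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Illustrative roster; Fun Predict's real list is much longer.
candidates = [
    RandomForestClassifier(random_state=42),
    GaussianNB(),
    LogisticRegression(max_iter=5000),
]

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=42)

# Fit every candidate and record its test-set scores.
scores = {}
for model in candidates:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    scores[type(model).__name__] = (
        accuracy_score(y_test, pred),
        f1_score(y_test, pred, average="weighted"),
    )

# Rank models by accuracy, best first.
for name, (acc, f1) in sorted(scores.items(), key=lambda kv: -kv[1][0]):
    print(f"{name:<24} accuracy={acc:.2f}  f1={f1:.2f}")
```

Fun Predict automates exactly this fit-score-rank loop across dozens of estimators and hands back the leaderboard as a table.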



# Installation

To install Fun Predict:

``` 

pip install funpredict

```



# Usage

To use Fun Predict in a project:

```

import funpredict

```



# Classification

Example:

```

from funpredict.fun_model import PlayClassifier

from sklearn.datasets import load_wine

from sklearn.model_selection import train_test_split



# Test with a Classification model

data = load_wine()
X, y = data.data, data.target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)

clf = PlayClassifier(verbose=0, ignore_warnings=True, custom_metric=None)
models, predictions = clf.fit(X_train, X_test, y_train, y_test, 'multiclass')
# Once you've confirmed which model works best, pick it from this dictionary.
model_dictionary = clf.provide_models(X_train, X_test, y_train, y_test)
print(models)



    | Model                         | Accuracy | Balanced Accuracy | F1 Score | Time Taken |
    |:------------------------------|---------:|------------------:|---------:|-----------:|
    | ExtraTreesClassifier          |     1.00 |              1.00 |     1.00 |       0.27 |
    | RandomForestClassifier        |     1.00 |              1.00 |     1.00 |       0.40 |
    | GaussianNB                    |     1.00 |              1.00 |     1.00 |       0.02 |
    | CatBoostClassifier            |     0.99 |              0.99 |     0.99 |       3.32 |
    | KNeighborsClassifier          |     0.99 |              0.99 |     0.99 |       0.03 |
    | RidgeClassifierCV             |     0.99 |              0.99 |     0.99 |       0.02 |
    | PassiveAggressiveClassifier   |     0.99 |              0.99 |     0.99 |       0.04 |
    | LogisticRegression            |     0.99 |              0.99 |     0.99 |       0.03 |
    | NearestCentroid               |     0.98 |              0.98 |     0.98 |       0.03 |
    | LGBMClassifier                |     0.98 |              0.98 |     0.98 |       0.15 |
    | Perceptron                    |     0.98 |              0.98 |     0.98 |       0.04 |
    | SGDClassifier                 |     0.98 |              0.98 |     0.98 |       0.02 |
    | LinearDiscriminantAnalysis    |     0.98 |              0.98 |     0.98 |       0.02 |
    | LinearSVC                     |     0.98 |              0.98 |     0.98 |       0.02 |
    | RidgeClassifier               |     0.98 |              0.98 |     0.98 |       0.02 |
    | NuSVC                         |     0.98 |              0.98 |     0.98 |       0.02 |
    | SVC                           |     0.98 |              0.98 |     0.98 |       0.02 |
    | LabelPropagation              |     0.97 |              0.97 |     0.97 |       0.02 |
    | LabelSpreading                |     0.97 |              0.97 |     0.97 |       0.02 |
    | XGBClassifier                 |     0.97 |              0.97 |     0.97 |       0.10 |
    | BaggingClassifier             |     0.97 |              0.97 |     0.97 |       0.11 |
    | BernoulliNB                   |     0.94 |              0.94 |     0.94 |       0.04 |
    | CalibratedClassifierCV        |     0.94 |              0.94 |     0.94 |       0.15 |
    | AdaBoostClassifier            |     0.93 |              0.93 |     0.93 |       0.29 |
    | QuadraticDiscriminantAnalysis |     0.93 |              0.93 |     0.93 |       0.04 |
    | DecisionTreeClassifier        |     0.88 |              0.88 |     0.88 |       0.04 |
    | ExtraTreeClassifier           |     0.83 |              0.83 |     0.83 |       0.04 |
    | DummyClassifier               |     0.34 |              0.33 |     0.17 |       0.03 |

```

```

# Vertical bar plot

clf.barplot(predictions)

```

![clf-bar](https://github.com/hi-sushanta/funpredict/assets/93595990/6a6ab9fc-ceb1-481e-a73c-1314c59c4562)



```

# Horizontal bar plot

clf.hbarplot(predictions)

```

![clf-hbar](https://github.com/hi-sushanta/funpredict/assets/93595990/9b594717-411e-4295-90bd-37cd4ef9e68e)
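The dictionary returned by `provide_models` maps model names to fitted estimators, so once the leaderboard has identified a winner you can pull it out and predict with it directly. The pattern looks like this, sketched with a hand-built dictionary of fitted scikit-learn models since the exact keys Fun Predict uses may differ in your version:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=42)

# Stand-in for the dictionary returned by provide_models():
# model name -> already-fitted estimator.
model_dictionary = {
    "ExtraTreesClassifier": ExtraTreesClassifier(random_state=42).fit(X_train, y_train),
    "GaussianNB": GaussianNB().fit(X_train, y_train),
}

# Pick the leaderboard winner and use it like any scikit-learn estimator.
best = model_dictionary["ExtraTreesClassifier"]
labels = best.predict(X_test)
print(labels[:10])
```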



# Regression

Example:

```

from funpredict.fun_model import PlayRegressor

from sklearn.datasets import load_diabetes

from sklearn.model_selection import train_test_split



# Test with Regressor Model

data = load_diabetes()
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)

rgs = PlayRegressor(verbose=0, ignore_warnings=True, custom_metric=None)
models, predictions = rgs.fit(X_train, X_test, y_train, y_test)
# Once you've confirmed which model works best, pick it from this dictionary.
model_dictionary = rgs.provide_models(X_train, X_test, y_train, y_test)
print(models)



    | Model                         | Adjusted R-Squared | R-Squared |  RMSE  | Time Taken |
    |:------------------------------|-------------------:|----------:|-------:|-----------:|

    | BayesianRidge                 |      0.45          |   0.48    | 54.46 |    0.04    |

    | ElasticNetCV                  |      0.46          |   0.48    | 54.41 |    0.31    |

    | RidgeCV                       |      0.45          |   0.48    | 54.51 |    0.04    |

    | LinearRegression              |      0.45          |   0.48    | 54.58 |    0.03    |

    | TransformedTargetRegressor    |      0.45          |   0.48    | 54.58 |    0.04    |

    | Lars                          |      0.45          |   0.48    | 54.58 |    0.05    |

    | Ridge                         |      0.45          |   0.48    | 54.59 |    0.03    |

    | Lasso                         |      0.45          |   0.47    | 54.69 |    0.03    |

    | LassoLars                     |      0.45          |   0.47    | 54.69 |    0.03    |

    | LassoCV                       |      0.45          |   0.47    | 54.70 |    0.28    |

    | LassoLarsCV                   |      0.45          |   0.47    | 54.71 |    0.07    |

    | PoissonRegressor              |      0.45          |   0.47    | 54.76 |    0.04    |

    | SGDRegressor                  |      0.45          |   0.47    | 54.76 |    0.04    |

    | OrthogonalMatchingPursuitCV   |      0.45          |   0.47    | 54.80 |    0.06    |

    | HuberRegressor                |      0.44          |   0.47    | 54.96 |    0.06    |

    | LassoLarsIC                   |      0.44          |   0.47    | 55.02 |    0.03    |

    | ElasticNet                    |      0.44          |   0.47    | 55.05 |    0.03    |

    | LarsCV                        |      0.43          |   0.45    | 55.72 |    0.09    |

    | AdaBoostRegressor             |      0.42          |   0.44    | 56.34 |    0.34    |

    | TweedieRegressor              |      0.41          |   0.44    | 56.40 |    0.03    |

    | ExtraTreesRegressor           |      0.41          |   0.44    | 56.60 |    0.40    |

    | PassiveAggressiveRegressor    |      0.41          |   0.44    | 56.61 |    0.03    |

    | GammaRegressor                |      0.41          |   0.43    | 56.79 |    0.02    |

    | LGBMRegressor                 |      0.40          |   0.43    | 57.04 |    0.12    |

    | CatBoostRegressor             |      0.39          |   0.42    | 57.47 |    3.26    |

    | RandomForestRegressor         |      0.38          |   0.41    | 58.00 |    0.79    |

    | HistGradientBoostingRegressor |      0.36          |   0.39    | 58.84 |    0.27    |

    | GradientBoostingRegressor     |      0.36          |   0.39    | 58.95 |    0.31    |

    | BaggingRegressor              |      0.33          |   0.36    | 60.12 |    0.11    |

    | KNeighborsRegressor           |      0.29          |   0.32    | 62.09 |    0.03    |

    | XGBRegressor                  |      0.23          |   0.27    | 64.59 |    0.21    |

    | OrthogonalMatchingPursuit     |      0.23          |   0.26    | 64.86 |    0.05    |

    | RANSACRegressor               |      0.11          |   0.15    | 69.40 |    0.33    |

    | NuSVR                         |      0.07          |   0.11    | 70.99 |    0.08    |

    | LinearSVR                     |      0.07          |   0.11    | 71.11 |    0.03    |

    | SVR                           |      0.07          |   0.11    | 71.23 |    0.04    |

    | DummyRegressor                |     -0.05          |  -0.00    |  75.45 |    0.02    |
    | DecisionTreeRegressor         |     -0.13          |  -0.08    |  78.38 |    0.03    |
    | ExtraTreeRegressor            |     -0.18          |  -0.13    |  80.02 |    0.02    |
    | GaussianProcessRegressor      |     -0.99          |  -0.90    | 104.06 |    0.07    |
    | MLPRegressor                  |     -1.19          |  -1.09    | 109.17 |    1.34    |
    | KernelRidge                   |     -3.91          |  -3.69    | 163.34 |    0.06    |

    |-------------------------------------------------------------------------------------|

```



```

# Vertical bar plot

rgs.barplot(predictions)

```

![rgs-bar](https://github.com/hi-sushanta/funpredict/assets/93595990/d0a92bad-2a2c-4826-99a8-c0561dd71f40)



```

# Horizontal bar plot

rgs.hbarplot(predictions)

```

![rgs-hbar](https://github.com/hi-sushanta/funpredict/assets/93595990/9be834be-64dd-4641-8f04-02d7c635cb13)
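The regression leaderboard reports Adjusted R-Squared alongside R-Squared: the adjusted version penalizes plain R² for the number of features p, via R²_adj = 1 − (1 − R²)(n − 1)/(n − p − 1), where n is the number of test samples. A quick sketch of how these three columns can be computed with standard formulas (this is the textbook calculation, not Fun Predict's internal code):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=42)

model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)

n, p = X_test.shape                              # test samples, features
r2 = r2_score(y_test, pred)                      # R-Squared
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)    # Adjusted R-Squared
rmse = np.sqrt(mean_squared_error(y_test, pred)) # RMSE

print(f"R2={r2:.2f}  adjusted R2={adj_r2:.2f}  RMSE={rmse:.2f}")
```

Because the adjustment subtracts a penalty proportional to p, Adjusted R² is always at or below plain R², which is why the two columns differ slightly for every model in the table above.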


            
