SpheroScan


- Name: SpheroScan
- Version: 0.0.10
- Home page: https://github.com/FunctionalUrology/SpheroScan.git
- Summary: A User-Friendly Deep Learning Tool for Spheroid Image Analysis.
- Upload time: 2023-01-17 16:42:56
- Author: Akshay
- Requires Python: >=3.10
# MLcps
**MLcps: Machine Learning cumulative performance score** is a performance metric that combines multiple performance metrics into a single cumulative score, enabling researchers to compare ML models using one metric. MLcps provides a comprehensive platform for identifying the best-performing ML model on a given dataset.

### ***Note***  
If you want to use MLcps without installing it on your local machine, please click here [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/FunctionalUrology/MLcps.git/main). It will launch a JupyterLab server (with all MLcps requirements pre-installed) where you can run the example Jupyter notebook for MLcps analysis. It may take a while to launch! You can also upload your own data or notebooks to perform the analysis.

# Prerequisites

1. Python >=3.8
2. R >=4.0. R should be accessible through terminal/command prompt.
3. The ```radarchart```, ```tibble```, and ```dplyr``` R packages. MLcps can install these packages at first import if they are unavailable, but we highly recommend installing them before using MLcps. You can run the following R code in an R session to install them:
```r
## Install the required packages if they are not already available
install.packages(c('radarchart','tibble','dplyr'), dependencies = TRUE, repos = "https://cloud.r-project.org")
```
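
Since MLcps calls out to R, it can help to confirm that R is actually reachable from your terminal before the first import. The following optional pre-flight check is not part of MLcps; it is a small sketch using only the Python standard library:

```python
# Optional pre-flight check (not part of MLcps): verify R >= 4.0 is on PATH.
import shutil
import subprocess

if shutil.which("R") is None:
    raise RuntimeError("R was not found on PATH; install R >= 4.0 first.")

# Print the first line of `R --version`, e.g. "R version 4.x.y ...".
out = subprocess.run(["R", "--version"], capture_output=True, text=True)
print(out.stdout.splitlines()[0])
```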

# Installation
```bash
pip install MLcps
```
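
After installation, a quick optional sanity check (not part of the MLcps docs) confirms that the package is importable and reports the installed version via the standard library:

```python
# Optional post-install check: import the package and report its version.
from importlib.metadata import version

import MLcps  # raises ImportError if the installation failed
print("MLcps version:", version("MLcps"))
```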

# Binder environment for MLcps

As an alternative, we have built a Binder environment in which all MLcps requirements are pre-installed.
It allows you to ***use MLcps without any installation***.

Please click here [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/FunctionalUrology/MLcps.git/main) to launch a JupyterLab server where you can run the example Jupyter notebook for MLcps analysis. It may take a while to launch! You can also upload your own data or notebooks to perform the analysis.

# Usage
#### **Quick Start**
```python
# import the MLcps scoring module
from MLcps import getCPS

# calculate the Machine Learning cumulative performance score
cps = getCPS.calculate(object)  # object: a metrics dataframe or a GridSearchCV object (see below)
```  
> * ***object***: A pandas dataframe of metric scores (one row per ML model, one column per metric, as in Example 0.1 below). **Or** a GridSearchCV object.
> * ***cps***: A pandas dataframe with model names and their corresponding MLcps scores. **Or** an updated GridSearchCV object (see Example 2).

#### **Example 0.1**
Create an input dataframe for MLcps, with one row per model and one column per metric.

```python
import pandas as pd
metrics_list=[]

#Metrics from SVC model (kernel=rbf)
acc = 0.88811 #accuracy
bacc = 0.86136 #balanced_accuracy
prec = 0.86 #precision
rec = 0.97727 #recall
f1 = 0.91489 #F1
mcc = 0.76677 #Matthews_correlation_coefficient
metrics_list.append([acc,bacc,prec,rec,f1,mcc])

#Metrics from SVC model (kernel=linear)
acc = 0.88811
bacc = 0.87841
prec = 0.90
rec = 0.92045
f1 = 0.91011
mcc = 0.76235
metrics_list.append([acc,bacc,prec,rec,f1,mcc])

#Metrics from KNN
acc = 0.78811
bacc = 0.82841
prec = 0.80
rec = 0.82
f1 = 0.8911
mcc = 0.71565
metrics_list.append([acc,bacc,prec,rec,f1,mcc])

metrics=pd.DataFrame(metrics_list,index=["SVM rbf","SVM linear","KNN"],
                     columns=["accuracy","balanced_accuracy","precision","recall",
                              "f1","Matthews_correlation_coefficient"])
print(metrics)
```

#### **Example 1**
Calculate MLcps for a pandas dataframe of metric scores (one row per ML model, one column per metric).

```python
# import the MLcps scoring module
from MLcps import getCPS

# read your input data (a dataframe) or load the example data
metrics = getCPS.sample_metrics()

# calculate the Machine Learning cumulative performance score
cpsScore = getCPS.calculate(metrics)
print(cpsScore)

#########################################################
# plot MLcps
import plotly.express as px
from plotly.offline import plot
import plotly.io as pio
pio.renderers.default = 'iframe'  # or pio.renderers.default = 'browser'

# labels must be a dict mapping column names to display names
fig = px.bar(cpsScore, x='Score', y='Algorithms', color='Score',
             labels={'Score': 'MLcps Score'},
             width=700, height=1000, text_auto=True)

fig.update_xaxes(title_text="MLcps")
plot(fig)
fig  # displays the figure inline when run in a notebook
```
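
To keep the chart for later viewing, the figure can also be exported as a standalone HTML file (optional; the file name here is arbitrary):

```python
# Save the interactive chart to a self-contained HTML file.
fig.write_html("mlcps_scores.html", include_plotlyjs="cdn")
```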


#### **Example 2**
Calculate MLcps using the mean test score of every metric available in the given GridSearchCV object, and return an updated GridSearchCV object. The returned object contains ```mean_test_MLcps``` and ```rank_test_MLcps``` arrays, which can be used to rank the models just like any other metric.

```python
# import the MLcps scoring module
from MLcps import getCPS

# load your GridSearchCV object, or a sample one from the package
gsObj = getCPS.sample_GridSearch_Object()

# calculate the Machine Learning cumulative performance score
gsObj_updated = getCPS.calculate(gsObj)

#########################################################
# access the MLcps scores
print("MLcps: ", gsObj_updated.cv_results_["mean_test_MLcps"])

# access the rank array based on MLcps
print("Ranking based on MLcps:", gsObj_updated.cv_results_["rank_test_MLcps"])
```  
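
The rank array follows the usual scikit-learn convention (rank 1 is best), so the best parameter set according to MLcps can be recovered directly from ```cv_results_```. This short sketch continues from the block above and assumes the standard ```cv_results_``` layout:

```python
import numpy as np

# Rank 1 marks the best model according to MLcps.
best_idx = int(np.argmin(gsObj_updated.cv_results_["rank_test_MLcps"]))
print("Best parameters by MLcps:", gsObj_updated.cv_results_["params"][best_idx])
print("Best mean MLcps:", gsObj_updated.cv_results_["mean_test_MLcps"][best_idx])
```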

#### **Example 3**
In some cases certain metrics matter more than others. For example, on an imbalanced dataset a high F1 score may be preferable to high accuracy. In such a scenario, a user can provide weights for the metrics of interest when calculating MLcps. Weights should be a dictionary whose keys are metric names and whose values are the corresponding weights; it is passed as a parameter to the ```getCPS.calculate()``` function, as shown in the examples below.
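
To see why weights can change a ranking, here is a toy illustration. It assumes, purely for the sake of argument, a weighted mean of metric scores; the actual MLcps formula may differ, and the model names and numbers below are made up:

```python
# Toy illustration only: a weighted mean stands in for the (unspecified) MLcps formula.
scores = {"modelA": {"accuracy": 0.92, "f1": 0.80},   # hypothetical metric values
          "modelB": {"accuracy": 0.84, "f1": 0.86}}
weights = {"accuracy": 0.75, "f1": 1.25}              # up-weight f1, down-weight accuracy

for name, m in scores.items():
    plain = sum(m.values()) / len(m)
    weighted = sum(weights[k] * v for k, v in m.items()) / len(m)
    print(f"{name}: unweighted={plain:.4f}  weighted={weighted:.4f}")
# modelA ranks first unweighted (0.8600 vs 0.8500), but modelB overtakes it
# once f1 is up-weighted (0.8450 vs 0.8525).
```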

  * **3.a)**

```python
# import the MLcps scoring module
from MLcps import getCPS

# read your input data (a dataframe) or load the example data
metrics = getCPS.sample_metrics()

# define weights
weights = {"Accuracy": 0.75, "F1": 1.25}

# calculate the weighted Machine Learning cumulative performance score
cpsScore = getCPS.calculate(metrics, weights)
print(cpsScore)

#########################################################
# plot weighted MLcps
import plotly.express as px
from plotly.offline import plot
import plotly.io as pio
pio.renderers.default = 'iframe'  # or pio.renderers.default = 'browser'

# labels must be a dict mapping column names to display names
fig = px.bar(cpsScore, x='Score', y='Algorithms', color='Score',
             labels={'Score': 'MLcps Score'},
             width=700, height=1000, text_auto=True)

fig.update_xaxes(title_text="MLcps")
plot(fig)
fig  # displays the figure inline when run in a notebook
```  
  * **3.b)**
```python
# import the MLcps scoring module
from MLcps import getCPS

#########################################################
# load your GridSearchCV object, or a sample one from the package
gsObj = getCPS.sample_GridSearch_Object()

# define weights
weights = {"accuracy": 0.75, "f1": 1.25}

# calculate the weighted Machine Learning cumulative performance score
gsObj_updated = getCPS.calculate(gsObj, weights)

#########################################################
# access the MLcps scores
print("MLcps: ", gsObj_updated.cv_results_["mean_test_MLcps"])

# access the rank array based on MLcps
print("Ranking based on MLcps:", gsObj_updated.cv_results_["rank_test_MLcps"])
```  

# Links
* The MLcps source code and a Jupyter notebook with sample analyses are available in the **[MLcps GitHub repository](https://github.com/FunctionalUrology/MLcps/blob/main/Example-Notebook.ipynb)** and on Binder [![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/FunctionalUrology/MLcps.git/main).
* Please use the **[MLcps GitHub](https://github.com/FunctionalUrology/MLcps/issues)** issue tracker to report any issues.

# Citations Information
If **MLcps** helps you in your research work in any way, please cite the MLcps publication.
***

            
