vision-models-evaluation

Name: vision-models-evaluation
Version: 0.0.5
Home page: https://github.com/ruescog/vision_models_evaluation
Summary: A library to test fastai learners using some evaluation techniques.
Upload time: 2023-09-26 08:47:42
Author: ruescog
Requires Python: >=3.7
License: Apache Software License 2.0
Keywords: nbdev, jupyter, notebook, python
vision_models_evaluation
================

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Install

To install the library, just run:

``` sh
pip install vision_models_evaluation
```

## How to use

This library provides a method that helps you evaluate deep learning
models. It validates fastai learners using the [scikit-learn validation
techniques](https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators).

To validate your model, you will need to build and train several
versions of it (for example, five-fold cross-validation requires
building and training five separate models, one per fold).
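
The splitting itself comes straight from scikit-learn. As a minimal sketch, here is what a five-fold split looks like over a toy list standing in for your image files:

``` python
from sklearn.model_selection import KFold

# Toy stand-in for a list of image files; each fold trains one model.
items = list(range(10))

for fold, (train_idx, valid_idx) in enumerate(KFold(n_splits=5).split(items)):
    print(f"fold {fold}: train on {len(train_idx)} items, validate on {len(valid_idx)}")
```

Each of the five folds holds out a different fifth of the data for validation, so every item is used for validation exactly once.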

To do so, you need to provide: the `DataBlock` hparams
(hyperparameters), the `DataLoaders` hparams, the technique used to
split the data, the `Learner` construction hparams, the learning mode
(`"finetune"` to fine-tune a pretrained model, `"fit_one_cycle"` to
train from scratch) and the `Learner` training hparams. So, the first
step is to define them all:

``` python
# `codes`, `get_y_fn`, `TargetMaskConvertTransform`, `transformPipeline`
# and `path_images` are task-specific (labels, transforms and paths) and
# must be defined earlier in your own code.
db_hparams = {  # DataBlock construction hparams
    "blocks": (ImageBlock, MaskBlock(codes)),
    "get_items": partial(get_image_files, folders=['train']),
    "get_y": get_y_fn,
    "item_tfms": [Resize((480, 640)), TargetMaskConvertTransform(), transformPipeline],
    "batch_tfms": Normalize.from_stats(*imagenet_stats)
}
dl_hparams = {  # DataLoaders construction hparams
    "source": path_images,
    "bs": 4
}
technique = KFold(n_splits=5)  # scikit-learn splitting technique
learner_hparams = {  # Learner construction hparams
    "arch": resnet18,
    "pretrained": True,
    "metrics": [DiceMulti()]
}
learning_hparams = {  # Learner training hparams
    "epochs": 10,
    "base_lr": 0.001,
    "freeze_epochs": 1
}
learning_mode = "finetune"  # fine-tune the pretrained backbone
```

Then, call the `evaluate` method with the hparams you defined. The
method returns a dictionary of results: for each metric used to test
the model, the value obtained in each fold.

``` python
r = evaluate(
    db_hparams,
    dl_hparams,
    technique,
    learner_hparams,
    learning_hparams,
    learning_mode
)
```
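
The exact keys depend on the metrics you configured. A hypothetical result for a run tracking `DiceMulti` (the numbers are made up for illustration), and one way to summarise it:

``` python
# Hypothetical shape of the returned dictionary: one key per metric,
# one value per fold. Real numbers will vary from run to run.
r = {"DiceMulti": [0.71, 0.74, 0.69, 0.73, 0.72]}

for metric, fold_values in r.items():
    mean = sum(fold_values) / len(fold_values)
    print(f"{metric}: mean {mean:.3f} over {len(fold_values)} folds")
```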

Finally, you can plot the metrics using a boxplot from pandas, for
example:

``` python
import pandas as pd

df = pd.DataFrame(r)
df.boxplot("DiceMulti");  # trailing ";" suppresses the repr in a notebook

print(
    df["DiceMulti"].mean(),
    df["DiceMulti"].std()
)
```

![download.png](index_files/figure-commonmark/406aa26d-1-download.png)

You can use this method to evaluate a single model, but also to compare
several models with distinct hparams: collect the results for each of
them and then compare the averages of their metrics.
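
For instance, assuming you ran `evaluate` twice with different architectures (the per-fold scores below are made up for illustration), pandas makes the side-by-side comparison straightforward:

``` python
import pandas as pd

# Hypothetical per-fold DiceMulti scores from two evaluate() runs.
results = {
    "resnet18": [0.71, 0.74, 0.69, 0.73, 0.72],
    "resnet34": [0.75, 0.77, 0.73, 0.76, 0.74],
}

df = pd.DataFrame(results)
print(df.agg(["mean", "std"]))  # fold-averaged comparison per model
# df.boxplot() would draw side-by-side boxplots of the two models.
```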
