recognite

Name: recognite
Version: 0.0.1a0
Summary: Recognite is a library to kickstart your next PyTorch-based recognition project.
Home page: https://github.com/florisdf/recognite
Author: Floris De Feyter
License: MIT
Requires Python: >=3.6
Keywords: recognite
Upload time: 2023-08-09 15:47:27

# Recognite


[![](https://img.shields.io/pypi/v/recognite.svg)](https://pypi.org/project/recognite/)
[![](https://readthedocs.org/projects/recognite/badge/?version=latest)](https://recognite.readthedocs.io/)


Recognite is a library to kickstart your next PyTorch-based recognition project. Some interesting features include:

- You can choose from nearly **80 different base models** for your recognition model: classics like AlexNet, GoogLeNet, VGG, Inception, ResNet, but also more recent models like ResNeXt, EfficientNet, and transformer-based models like ViT and SwinTransformer.
- You can directly evaluate your model on a **recognition task**, where *query* samples are compared with a *gallery* and none of the samples belong to a class that was used during training.
- By changing only a single argument, you can **cross-validate sets of hyperparameters** without much effort.


## Installation

You can install Recognite with pip:

```bash
pip install recognite
```

## Quickstart

This repo contains a [basic training script](examples/basic/train.py) that lets you quickly start training a recognition model. To use this script in your project, clone the repository, copy the script into your project directory, and install the script's requirements:

```bash
# Clone the Recognite repo
git clone https://github.com/florisdf/recognite

# Copy the training script to your project
cp recognite/examples/basic/train.py path/to/your/recognition_project

# Install the requirements of the training script
pip install -r recognite/examples/basic/requirements.txt
```

> The last command installs the script's requirements, including [Weights and Biases](https://wandb.ai), which is used for logging. Make sure to create an account and run `wandb login` from your command line.

The training script trains a recognition model of your choice on a dataset you define, using tools from the Recognite library. The dataset should be given as a CSV file (`--data_csv`) with two columns: `image` (containing image paths) and `label` (containing the corresponding labels). We split the unique labels of the dataset into 5 folds. Labels in the fold defined by `--val_fold` are used for validation. The others are used for training. During validation, we measure the model's top-1 accuracy when classifying a set of queries by comparing the query embeddings with the embeddings of a set of reference samples (`--num_refs` per validation label). This accuracy is logged to Weights and Biases (see `--wandb_entity` and `--wandb_project`).
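For illustration, the label-fold split described above can be sketched in plain Python. This is a hypothetical sketch, not Recognite's actual implementation: the real fold assignment may shuffle or hash the labels, whereas here they are simply assigned round-robin in sorted order, and `split_label_folds` is an invented helper name.

```python
import csv
import tempfile

NUM_FOLDS = 5  # the training script splits the unique labels into 5 folds


def split_label_folds(csv_path, val_fold, num_folds=NUM_FOLDS):
    """Split the unique labels of a data CSV into train/val label sets.

    Sketch only: labels are assigned to folds round-robin in sorted
    order; Recognite's real assignment may differ.
    """
    with open(csv_path, newline="") as f:
        labels = sorted({row["label"] for row in csv.DictReader(f)})
    val_labels = set(labels[val_fold::num_folds])
    return set(labels) - val_labels, val_labels


# Demo: a tiny CSV with the expected `image` and `label` columns.
with tempfile.NamedTemporaryFile(
    "w", suffix=".csv", delete=False, newline=""
) as f:
    writer = csv.writer(f)
    writer.writerow(["image", "label"])
    for i, label in enumerate("abcdefghij"):
        writer.writerow([f"img_{i}.jpg", label])

train_labels, val_labels = split_label_folds(f.name, val_fold=0)
```

Note that the split is over *labels*, not images: every image of a validation label goes to the validation set, so validation classes are entirely unseen during training.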

Each image is uniformly resized such that its shortest side has a fixed size (`--size`). For training images, we then take a square crop of that size at a random location in the image. For the validation images, we crop out the square center of the image.
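The resize-and-crop geometry can be sketched without any imaging library (the script itself presumably uses torchvision-style transforms); the helpers below are hypothetical and only compute the pixel arithmetic:

```python
import random


def resize_shortest_side(width, height, size):
    """Uniformly scale so the shortest side equals `size` (cf. --size)."""
    scale = size / min(width, height)
    return round(width * scale), round(height * scale)


def center_crop_box(width, height, size):
    """Square center crop, as used for the validation images."""
    left, top = (width - size) // 2, (height - size) // 2
    return left, top, left + size, top + size


def random_crop_box(width, height, size, rng=random):
    """Square crop at a random location, as used for the training images."""
    left = rng.randrange(width - size + 1)
    top = rng.randrange(height - size + 1)
    return left, top, left + size, top + size


# A 640x480 image with --size=224: both sides scale by 224/480.
new_w, new_h = resize_shortest_side(640, 480, 224)   # (299, 224)
val_box = center_crop_box(new_w, new_h, 224)         # (37, 0, 261, 224)
train_box = random_crop_box(new_w, new_h, 224, random.Random(0))
```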

For the model, you can choose from a large number of pretrained classifiers (see `--model_name` and `--model_weights`). The model's final fully-connected layer is adjusted to the number of classes in the training set and is then trained for `--num_epochs` epochs by optimizing the softmax cross-entropy loss with stochastic gradient descent, configured by `--batch_size`, `--lr`, `--momentum` and `--weight_decay`.
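As a rough illustration of that training objective, here is a toy, dependency-free version of softmax cross-entropy with plain SGD on a linear classifier. The real script trains a full PyTorch backbone and also uses batching, momentum, and weight decay, all omitted here; `sgd_step` is a hypothetical helper, not a Recognite function.

```python
import math


def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]


def sgd_step(weights, x, y, lr):
    """One SGD step on the softmax cross-entropy loss of a linear classifier.

    weights: one weight vector per class; x: feature vector; y: true class.
    Returns the loss *before* the update.
    """
    logits = [sum(w_j * x_j for w_j, x_j in zip(w, x)) for w in weights]
    probs = softmax(logits)
    loss = -math.log(probs[y])
    # Gradient of the loss w.r.t. logit c is (p_c - 1[c == y]).
    for c, w in enumerate(weights):
        coeff = probs[c] - (1.0 if c == y else 0.0)
        for j, x_j in enumerate(x):
            w[j] -= lr * coeff * x_j
    return loss


# Two linearly separable toy "embeddings", two classes, zero-init weights.
data = [([1.0, 0.0], 0), ([0.0, 1.0], 1)]
weights = [[0.0, 0.0], [0.0, 0.0]]
losses = [sgd_step(weights, x, y, lr=0.5) for _ in range(20) for x, y in data]
```

With zero-initialised weights the first loss is exactly `log(2)` (uniform probabilities over two classes), and the loss shrinks as the updates proceed.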

For example, with the following command, we train a ResNet-18 model with [default pretrained weights](https://pytorch.org/vision/main/models.html) for 30 epochs on images from `data.csv` using a learning rate of `0.01`, a momentum of `0.9`, and a weight decay of `1e-5`. As validation set, we use the labels of the first fold (index `0`) and we use `1` reference sample per label in the gallery set.


```bash
python train.py \
    --model_name=resnet18 --model_weights=DEFAULT \
    --data_csv=data.csv --val_fold=0 --num_refs=1 --size=224 \
    --num_epochs=30 --lr=0.01 --momentum=0.9 --weight_decay=1e-5 \
    --wandb_entity=your_user_name --wandb_project=your_project
```

For more details on the command-line arguments, run

```bash
python train.py --help
```

## More information

See [the docs](https://recognite.readthedocs.io/) for more information and examples with Recognite.