dinov2

- Name: dinov2
- Version: 0.0.1.dev2
- Home page: https://github.com/Jack-Moo/DinoV2
- Summary: PyTorch code and models for the DINOv2 self-supervised learning method.
- Upload time: 2023-04-22 23:18:37
- Author: FAIR
- Requires Python: >=3.9.0
- License: CC-BY-NC
- Requirements: torch==2.0.0, torchvision==0.15.0, omegaconf, torchmetrics==0.10.3, fvcore, iopath, xformers==0.0.18, submitit, cuml-cu11

# DINOv2: Learning Robust Visual Features without Supervision

**[Meta AI Research, FAIR](https://ai.facebook.com/research/)**

Maxime Oquab,
Timothée Darcet,
Théo Moutakanni,
Huy V. Vo,
Marc Szafraniec,
Vasil Khalidov,
Patrick Labatut,
Armand Joulin,
Piotr Bojanowski

[[`Paper`](https://arxiv.org/abs/2304.07193)] [[`Blog`](https://ai.facebook.com/blog/dino-v2-computer-vision-self-supervised-learning/)] [[`Demo`](https://dinov2.metademolab.com)] [[`BibTeX`](#citing-dinov2)]

PyTorch implementation and pretrained models for DINOv2. For details, see the paper: **[DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193)**.

DINOv2 models produce high-performance visual features that can be directly employed with classifiers as simple as linear layers on a variety of computer vision tasks; these visual features are robust and perform well across domains without any requirement for fine-tuning. The models were pretrained on a dataset of 142 M images without using any labels or annotations.

https://user-images.githubusercontent.com/60359573/230078733-5faffa19-e6ce-4c55-9200-62dd76f8236a.mp4

<div align="center">
  Visualization of the first three principal components of the patch features of all frames, mapped to RGB values.
</div>

## Pretrained models

<table style="margin: auto">
  <tr>
    <th>model</th>
    <th># of<br />params</th>
    <th>ImageNet<br />k-NN</th>
    <th>ImageNet<br />linear</th>
    <th>download</th>
  </tr>
  <tr>
    <td>ViT-S/14 distilled</td>
    <td align="right">21 M</td>
    <td align="right">79.0%</td>
    <td align="right">81.1%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_pretrain.pth">backbone only</a></td>
  </tr>
  <tr>
    <td>ViT-B/14 distilled</td>
    <td align="right">86 M</td>
    <td align="right">82.1%</td>
    <td align="right">84.5%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_pretrain.pth">backbone only</a></td>
  </tr>
  <tr>
    <td>ViT-L/14 distilled</td>
    <td align="right">300 M</td>
    <td align="right">83.5%</td>
    <td align="right">86.3%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_pretrain.pth">backbone only</a></td>
  </tr>
  <tr>
    <td>ViT-g/14</td>
    <td align="right">1,100 M</td>
    <td align="right">83.5%</td>
    <td align="right">86.5%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_pretrain.pth">backbone only</a></td>
  </tr>
</table>

### Pretrained models via PyTorch Hub

Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install PyTorch (the only required dependency for loading the model). Installing PyTorch with CUDA support is strongly recommended.

A corresponding [model card](MODEL_CARD.md) is included in the repository.

```python
import torch

dinov2_vits14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
dinov2_vitb14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitb14')
dinov2_vitl14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitl14')
dinov2_vitg14 = torch.hub.load('facebookresearch/dinov2', 'dinov2_vitg14')
```
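
The loaded module maps an image batch to one feature vector per image. The sketch below is illustrative rather than part of the original README: it uses a random tensor in place of a real, ImageNet-normalized image batch, and the 384-dimensional output assumed in the final comment corresponds to ViT-S/14.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not from the original README): extract frozen features
# with a hub-loaded backbone and attach a linear classifier on top. Real inputs
# should be resized so height/width are multiples of the patch size (14) and
# normalized with ImageNet statistics.
backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
backbone.eval()
for param in backbone.parameters():
    param.requires_grad = False

images = torch.randn(2, 3, 224, 224)   # placeholder batch; 224 = 16 * 14
with torch.no_grad():
    features = backbone(images)        # one feature vector per image

linear_head = nn.Linear(features.shape[-1], 1000)  # e.g. 1000 ImageNet classes
logits = linear_head(features)
print(features.shape, logits.shape)    # e.g. torch.Size([2, 384]), torch.Size([2, 1000])
```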

## Installation

The training and evaluation code requires PyTorch 2.0 and [xFormers](https://github.com/facebookresearch/xformers) 0.0.18, as well as a number of other third-party packages. Note that the code has only been tested with the specified versions and expects a Linux environment. To set up all the required dependencies for training and evaluation, follow the instructions below:

*[conda](https://docs.conda.io/projects/conda/en/latest/user-guide/getting-started.html)* **(Recommended)** - Clone the repository and then create and activate a `dinov2` conda environment using the provided environment definition:

```shell
conda env create -f conda.yaml
conda activate dinov2
```

*[pip](https://pip.pypa.io/en/stable/getting-started/)* - Clone the repository and then use the provided `requirements.txt` to install the dependencies:

```shell
pip install -r requirements.txt
```

## Data preparation

### ImageNet-1k

The root directory of the dataset should hold the following contents:

- `<root>/test/ILSVRC2012_test_00000001.JPEG`
- `<root>/test/[..]`
- `<root>/test/ILSVRC2012_test_00100000.JPEG`
- `<root>/train/n01440764/n01440764_10026.JPEG`
- `<root>/train/[...]`
- `<root>/train/n15075141/n15075141_9993.JPEG`
- `<root>/val/n01440764/ILSVRC2012_val_00000293.JPEG`
- `<root>/val/[...]`
- `<root>/val/n15075141/ILSVRC2012_val_00049174.JPEG`
- `<root>/labels.txt`
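
Before launching training, it can help to sanity check that the dataset root matches this layout. The helper below is hypothetical (not part of the repository) and only verifies that the expected top-level entries exist.

```python
from pathlib import Path

# Hypothetical helper (not part of the repository): verify that an ImageNet-1k
# root follows the layout listed above.
def check_imagenet_layout(root: str) -> None:
    root_path = Path(root)
    expected = ["train", "val", "test", "labels.txt"]
    missing = [name for name in expected if not (root_path / name).exists()]
    if missing:
        raise FileNotFoundError(f"missing entries under {root}: {missing}")
    # train/ and val/ are organized into one sub-directory per synset
    n_classes = sum(1 for entry in (root_path / "train").iterdir() if entry.is_dir())
    print(f"found {n_classes} training class directories")

check_imagenet_layout("<PATH/TO/DATASET>")  # replace with the actual dataset root
```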

### ImageNet-22k

Please adapt the [dataset class](dinov2/data/datasets/image_net_22k.py) to match your local setup.

<br />

:warning: To execute the commands provided in the next sections for training and evaluation, the `dinov2` package should be on the Python module search path, i.e., simply prefix each command with `PYTHONPATH=.`.

## Training

### Fast setup: training DINOv2 ViT-L/16 on ImageNet-1k

Run DINOv2 training on 4 A100-80GB nodes (32 GPUs) in a SLURM cluster environment with submitit:

```shell
python dinov2/run/train/train.py \
    --nodes 4 \
    --config-file dinov2/configs/train/vitl16_short.yaml \
    --output-dir <PATH/TO/OUTPUT/DIR> \
    train.dataset_path=ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```

Training time is approximately 1 day and the resulting checkpoint should reach 81.6% on k-NN eval and 82.9% on linear eval.

The training code saves the weights of the teacher in the `eval` folder every 12500 iterations for evaluation.
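
To confirm that a checkpoint was written as expected, a minimal inspection sketch could look like the following; the exact contents of the checkpoint dictionary are an assumption, so print the keys before extracting a state dict.

```python
import torch

# Hedged sketch: load a teacher checkpoint written during training and list its
# top-level entries. The iteration number reuses the naming from the evaluation
# commands below; point it at a checkpoint that actually exists.
ckpt_path = "<PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth"
checkpoint = torch.load(ckpt_path, map_location="cpu")
print(list(checkpoint.keys()))
```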

### Long setup: training DINOv2 ViT-L/14 on ImageNet-22k

Run DINOv2 training on 12 A100-80GB nodes (96 GPUs) in a SLURM cluster environment with submitit:

```shell
python dinov2/run/train/train.py \
    --nodes 12 \
    --config-file dinov2/configs/train/vitl14.yaml \
    --output-dir <PATH/TO/OUTPUT/DIR> \
    train.dataset_path=ImageNet22k:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```

Training time is approximately 3.3 days and the resulting checkpoint should reach 82.0% on k-NN eval and 84.5% on linear eval.

The training code saves the weights of the teacher in the `eval` folder every 12500 iterations for evaluation.


## Evaluation

The training code regularly saves the teacher weights. To evaluate a model, run the following commands on a single node:

### k-NN classification on ImageNet-1k

```shell
python dinov2/run/eval/knn.py \
    --config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
    --pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
    --output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/knn \
    --train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
    --val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```

### Logistic regression classification on ImageNet-1k

```shell
python dinov2/run/eval/log_regression.py \
    --config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
    --pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
    --output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/logreg \
    --train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
    --val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```

### Linear classification with data augmentation on ImageNet-1k

```shell
python dinov2/run/eval/linear.py \
    --config-file <PATH/TO/OUTPUT/DIR>/config.yaml \
    --pretrained-weights <PATH/TO/OUTPUT/DIR>/eval/training_24999/teacher_checkpoint.pth \
    --output-dir <PATH/TO/OUTPUT/DIR>/eval/training_24999/linear \
    --train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
    --val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```

We release the linear classification head weights obtained from evaluating the different models:

<table style="margin: auto">
  <tr>
    <th>model</th>
    <th>ImageNet<br />top-1</th>
    <th>linear evaluation</th>
  </tr>
  <tr>
    <td>ViT-S/14 distilled</td>
    <td align="right">81.1%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_linear_head.pth">linear head weights</a></td>
  </tr>
  <tr>
    <td>ViT-B/14 distilled</td>
    <td align="right">84.5%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitb14/dinov2_vitb14_linear_head.pth">linear head weights</a></td>
  </tr>
  <tr>
    <td>ViT-L/14 distilled</td>
    <td align="right">86.3%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitl14/dinov2_vitl14_linear_head.pth">linear head weights</a></td>
  </tr>
  <tr>
    <td>ViT-g/14</td>
    <td align="right">86.5%</td>
    <td><a href="https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_linear_head.pth">linear head weights</a></td>
  </tr>
</table>
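
These head weights are plain PyTorch checkpoint files; a minimal sketch for downloading and inspecting one is shown below. How the head attaches to backbone features is not spelled out here and the key layout is an assumption, so check the printed shapes before wiring it up.

```python
import torch

# Hedged sketch: fetch the released ViT-S/14 linear head and print its tensors.
url = "https://dl.fbaipublicfiles.com/dinov2/dinov2_vits14/dinov2_vits14_linear_head.pth"
head_state = torch.hub.load_state_dict_from_url(url, map_location="cpu")
for name, tensor in head_state.items():
    print(name, tuple(tensor.shape))
```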

The performance of the provided pretrained model weights can be evaluated as follows on ImageNet-1k:

```shell
python dinov2/run/eval/linear.py \
    --config-file dinov2/configs/eval/vitg14_pretrain.yaml \
    --pretrained-weights https://dl.fbaipublicfiles.com/dinov2/dinov2_vitg14/dinov2_vitg14_pretrain.pth \
    --train-dataset ImageNet:split=TRAIN:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET> \
    --val-dataset ImageNet:split=VAL:root=<PATH/TO/DATASET>:extra=<PATH/TO/DATASET>
```

## License

DINOv2 code and model weights are released under the CC-BY-NC 4.0 license. See [LICENSE](LICENSE) for additional details.

## Contributing

See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md).

## Citing DINOv2

If you find this repository useful, please consider giving a star :star: and citation :t-rex::

```bibtex
@misc{oquab2023dinov2,
  title={DINOv2: Learning Robust Visual Features without Supervision},
  author={Oquab, Maxime and Darcet, Timothée and Moutakanni, Theo and Vo, Huy V. and Szafraniec, Marc and Khalidov, Vasil and Fernandez, Pierre and Haziza, Daniel and Massa, Francisco and El-Nouby, Alaaeldin and Howes, Russell and Huang, Po-Yao and Xu, Hu and Sharma, Vasu and Li, Shang-Wen and Galuba, Wojciech and Rabbat, Mike and Assran, Mido and Ballas, Nicolas and Synnaeve, Gabriel and Misra, Ishan and Jegou, Herve and Mairal, Julien and Labatut, Patrick and Joulin, Armand and Bojanowski, Piotr},
  journal={arXiv:2304.07193},
  year={2023}
}
```



            
