vision-datasets


Name: vision-datasets
Version: 1.0.19
Home page: https://github.com/microsoft/vision-datasets
Summary: A utility repo for vision dataset access and management.
Upload time: 2024-11-06 01:22:12
Maintainer: None
Docs URL: None
Author: Ping Jin, Shohei Ono
Requires Python: >=3.8
License: MIT
Keywords: vision datasets classification detection
# Vision Datasets

## Introduction

This repo

- defines a unified contract for datasets used for purposes such as training, visualization, and exploration, via `DatasetManifest`, `ImageDataManifest`, etc.
- provides many commonly used dataset operations, such as sampling a dataset by categories, sampling a few-shot sub-dataset, sampling a dataset by ratios, train-test splitting, and merging datasets (see [Operations on manifests](#oom))
- provides an API for organizing and accessing datasets, via `DatasetHub`

Currently, the following `basic` types of data are supported:

- `image_classification_multiclass`: each image is labeled with exactly one label.
- `image_classification_multilabel`: each image is labeled with one or multiple labels (e.g., 'cat', 'animal', 'pet').
- `image_object_detection`: each image is labeled with bounding boxes surrounding the objects of interest.
- `image_text_matching`: each image is associated with a collection of texts describing the image, and whether each text description matches the image or not.
- `image_matting`: each image has a pixel-wise annotation, where each pixel is labeled as 'foreground' or 'background'.
- `image_regression`: each image is labeled with a real-valued numeric regression target.
- `image_caption`: each image is labeled with a few texts describing the image.
- `text_2_image_retrieval`: each image is labeled with a number of text queries describing the image. Optionally, an image is associated with one label.
- `visual_question_answering`: each image is labeled with a number of question-answer pairs.
- `visual_object_grounding`: each image is labeled with a number of question-answer-bboxes triplets.

The `multitask` type is a composition type: one set of images has multiple sets of annotations for different tasks, and each task can be of any basic type.

The `key_value_pair` type is a generalized type, where a sample can be one or multiple images with optional text, labeled with key-value pairs. The keys and values are defined by a schema. Note that all of the above basic types can be defined as this type with specific schemas.
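To make this concrete, below is a purely hypothetical sketch of a schema dictionary for a `key_value_pair` dataset; the key names and structure here are placeholders for illustration, and the authoritative schema format is documented in `DATA_PREPARATION.md`.

```python
# Hypothetical schema sketch (illustration only; see DATA_PREPARATION.md for the actual schema format).
# It declares the keys that each sample's key-value-pair annotation is expected to contain.
schema_dict = {
    'name': 'defect annotation',
    'description': 'Annotate whether a defect is visible in each product image.',
    'fieldSchema': {
        'defect_present': {'type': 'string', 'enum': ['yes', 'no']},
        'defect_description': {'type': 'string'},
    },
}
```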

**Note that `image_caption` and `text_2_image_retrieval` might be merged into `image_text_matching` in the future.**

## Dataset Contracts

We support datasets with two types of annotations:

- single-image annotation (S), and
- multi-image annotation (M)

The table below shows all the supported contracts (a short usage sketch follows the table):
| Annotation | Contract class                       | Explanation                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| :--------- | :----------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| S          | `DatasetManifest`                    | wraps the information about a dataset, including the labelmap, images (width, height, path to image), and annotations. Information about each image is stored in an `ImageDataManifest`. <br>For multitask datasets, the labels stored in the `ImageDataManifest` are a dict mapping from task name to that task's labels. The labelmap stored in `DatasetManifest` is likewise a dict mapping from task name to that task's labelmap. |
| S,M        | `ImageDataManifest`                  | encapsulates image-specific information, such as image id, path, labels, and width/height. One thing to note here is that the image path can be:<br>&nbsp;1. a local path (absolute `c:\images\1.jpg` or relative `images\1.jpg`), <br>&nbsp;2. a local path in a **non-compressed** zip file (absolute `c:\images.zip@1.jpg` or relative `images.zip@1.jpg`), or <br>&nbsp;3. a URL. <br>All three kinds of paths can be loaded by `VisionDataset`. |
| S          | `ImageLabelManifest`                 | encapsulates one single image-level annotation                                                                                                                                                                                                                                                                                                                                                                                                      |
| S          | `CategoryManifest`                   | encapsulates the information about a category, such as its name and super category, if applicable                                                                                                                                                                                                                                                                                                                                                   |
| M          | `MultiImageLabelManifest`            | is an abstract class. It encapsulates one annotation associated with one or multiple images; each image is referenced by an image index.                                                                                                                                                                                                                                                                                                            |
| M          | `DatasetManifestWithMultiImageLabel` | supports annotations associated with one or multiple images. Each annotation is represented by `MultiImageLabelManifest` class, and each image is represented by `ImageDataManifest`.                                                                                                                                                                                                                                                               |
| M          | `KeyValuePairDatasetManifest`        | inherits from `DatasetManifestWithMultiImageLabel`; each sample has a `KeyValuePairLabelManifest` label, and the dataset is associated with a schema that defines the expected keys and values.                                                                                                                                                                                                                                                     |
| M          | `KeyValuePairLabelManifest`          | inherits from `MultiImageLabelManifest` and encapsulates the label information of `KeyValuePairDatasetManifest`. Each label has fields `img_ids` (associated images), `text` (associated text input), and `fields` (a dictionary of the field keys and values of interest).                                                                                                                                                                        |
| S,M        | `VisionDataset`                      | is an iterable dataset class that consumes the information from `DatasetManifest` or `DatasetManifestWithMultiImageLabel`                                                                                                                                                                                                                                                                                                                           |
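
As a quick orientation, here is a minimal sketch of walking a single-image-annotation `DatasetManifest`; the attribute names follow the descriptions in the table above, so treat it as illustrative rather than exhaustive.

```python
# Illustrative only: attribute names follow the table above (ImageDataManifest: id, img_path, labels, width/height).
def summarize(dataset_manifest, max_images=3):
    for image in dataset_manifest.images[:max_images]:  # each entry is an ImageDataManifest
        print(image.id, image.img_path, image.width, image.height)
        for label in image.labels:  # each entry is an ImageLabelManifest
            print('  label:', label)
```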

### Creating DatasetManifest

In addition to loading a serialized `DatasetManifest` for instantiation, this repo currently supports two data formats that can instantiate a `DatasetManifest` via
`DatasetManifest.create_dataset_manifest(dataset_info, usage, container_sas_or_root_dir)`: `COCO` and `IRIS` (legacy).

`DatasetInfo`, the first argument, wraps the meta information about the dataset, such as its name and the locations of the images and annotation files. See the sections below for examples for the different data formats.
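
For example, a minimal sketch (assuming `dataset_info_dict` holds a `DatasetInfo` dictionary like the coco-format example shown below, and that `DatasetManifest` and `Usages` are importable from `vision_datasets.common` as in the other examples in this README):

```python
from vision_datasets.common import DatasetInfoFactory, DatasetManifest, Usages

# dataset_info_dict: a DatasetInfo dictionary, e.g. the coco-format example shown below
dataset_info = DatasetInfoFactory.create(dataset_info_dict)

container_sas_or_root_dir = './data'  # an Azure Blob container SAS URL or a local root directory
dataset_manifest = DatasetManifest.create_dataset_manifest(dataset_info, Usages.TRAIN, container_sas_or_root_dir)
```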

Once a `DatasetManifest` is created, you can create a `VisionDataset` for accessing the data in the dataset, especially the image data, for training, visualization, etc:

```python
dataset = VisionDataset(dataset_info, dataset_manifest, coordinates='relative')
```


### Creating KeyValuePairDatasetManifest

You can use `CocoManifestAdaptorFactory` to create the manifest from COCO-format data and a schema. A COCO data example can be found in `COCO_DATA_FORMAT.md`, and a schema example (a dictionary) can be found in `DATA_PREPARATION.md`.

```python
from vision_datasets.common import CocoManifestAdaptorFactory, DatasetTypes
# check the schema dictionary example from `DATA_PREPARATION.md`
adaptor = CocoManifestAdaptorFactory.create(DatasetTypes.KEY_VALUE_PAIR, schema=schema_dict)
key_value_pair_dataset_manifest = adaptor.create_dataset_manifest(coco_file_path_or_url='test.json', url_or_root_dir='data/')  # image paths in test.json are relative to url_or_root_dir
# test the first sample
print(
    key_value_pair_dataset_manifest.images[0].img_path,'\n',
    key_value_pair_dataset_manifest.annotations[0].fields,'\n',
    key_value_pair_dataset_manifest.annotations[0].text,'\n',
)
```

Once a `KeyValuePairDatasetManifest` is created, combine it with a `dataset_info` to create a `VisionDataset` for accessing the data in the dataset.

```python
from vision_datasets.common import DatasetInfoFactory, VisionDataset
# check the dataset information dictionary example from `DATA_PREPARATION.md`
dataset_info = DatasetInfoFactory.create(dataset_info_dict)
dataset = VisionDataset(dataset_info, key_value_pair_dataset_manifest)
# test the first sample
imgs, target, _ = dataset[0]
print(imgs)
print(target)
```

### Loading IC/OD/VQA Datasets in KeyValuePair (KVP) Format

You can convert an existing IC/OD/VQA `VisionDataset` to the generalized KVP format using the following adapters:

```python
# For MultiClass and MultiLabel IC dataset
from vision_datasets.image_classification import MulticlassClassificationAsKeyValuePairDataset, MultilabelClassificationAsKeyValuePairDataset
sample_multiclass_ic_dataset = VisionDataset(dataset_info, dataset_manifest)
kvp_dataset = MulticlassClassificationAsKeyValuePairDataset(sample_multiclass_ic_dataset)
sample_multilabel_ic_dataset = VisionDataset(dataset_info, dataset_manifest)
kvp_dataset = MultilabelClassificationAsKeyValuePairDataset(sample_multilabel_ic_dataset)


# For OD dataset
from vision_datasets.image_object_detection import DetectionAsKeyValuePairDataset, DetectionAsKeyValuePairDatasetForMultilabelClassification
sample_od_dataset = VisionDataset(dataset_info, dataset_manifest)
kvp_dataset = DetectionAsKeyValuePairDataset(sample_od_dataset)
kvp_dataset_for_multilabel_classification = DetectionAsKeyValuePairDatasetForMultilabelClassification(sample_od_dataset)

# For VQA dataset
from vision_datasets.visual_question_answering import VQAAsKeyValuePairDataset
sample_vqa_dataset = VisionDataset(dataset_info, dataset_manifest)
kvp_dataset = VQAAsKeyValuePairDataset(sample_vqa_dataset)
```


#### Coco format

Here is an annotated example of what a `DatasetInfo` looks like for the coco format, when serialized into json:

```json
    {
        "name": "sampled-ms-coco",
        "version": 1,
        "description": "A sampled ms-coco dataset.",
        "type": "object_detection",
        "format": "coco", // indicating the annotation data are stored in coco format
        "root_folder": "detection/coco2017_20200401", // a root folder for all files listed
        "train": {
            "index_path": "train.json", // coco json file for training, see next section for example
            "files_for_local_usage": [ // associated files including data such as images
                "images/train_images.zip"
            ]
        },
        "val": {
            "index_path": "val.json",
            "files_for_local_usage": [
                "images/val_images.zip"
            ]
        },
        "test": {
            "index_path": "test.json",
            "files_for_local_usage": [
                "images/test_images.zip"
            ]
        }
    }
```

Coco annotation format details w.r.t. `image_classification_multiclass/label`, `image_object_detection`, `image_caption`, `image_text_matching`, `key_value_pair`, and `multitask` can be found in `COCO_DATA_FORMAT.md`.

The index file can be put into a zip file as well (e.g., `annotations.zip@train.json`); there is no need to add this zip to "files_for_local_usage" explicitly.
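
For orientation, a minimal object-detection index file might look roughly like the sketch below; the field names follow the standard COCO layout and the `zip@file` path convention mentioned above, but `COCO_DATA_FORMAT.md` remains the authoritative reference.

```json
{
    "images": [
        {"id": 1, "width": 640, "height": 480, "file_name": "train_images.zip@1.jpg"}
    ],
    "categories": [
        {"id": 1, "name": "car"}
    ],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [10, 20, 100, 80]}
    ]
}
```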

#### Iris format

Iris format is a legacy format; its details can be found in `IRIS_DATA_FORMAT.md`. Only `multiclass/label_classification`, `object_detection` and `multitask` are supported.

## Dataset management and access

Check [DATA_PREPARATION.md](DATA_PREPARATION.md) for a complete, step-by-step guide on how to prepare datasets.

Once you have multiple datasets, it is more convenient to have all the `DatasetInfo` in one place and instantiate a `DatasetManifest` or even a `VisionDataset` by just using the dataset name, usage (train, val, test), and version.

This repo offers the class `DatasetHub` for this purpose. Once it is instantiated with a JSON string containing the `DatasetInfo` for all datasets, you can retrieve a `VisionDataset` by

```python
import pathlib
from vision_datasets.common import Usages, DatasetHub

dataset_infos_json_path = 'datasets.json'
blob_container_sas = '<SAS URL to the Azure Blob container storing the data>'  # can be None when all data is on local disk
local_dir = './data'  # local directory where the data is looked up and downloaded to
dataset_hub = DatasetHub(pathlib.Path(dataset_infos_json_path).read_text(), blob_container_sas, local_dir)
stanford_cars = dataset_hub.create_vision_dataset('stanford-cars', version=1, usage=Usages.TRAIN)

# Note that you can pass multiple datasets.json contents to DatasetHub; it combines them all.
# Example: DatasetHub([ds_json1, ds_json2, ...])
# Note that you can specify multiple usages in the create_manifest_dataset call.
# Example: dataset_hub.create_manifest_dataset('stanford-cars', version=1, usage=[Usages.TRAIN, Usages.VAL])

for img, targets, sample_idx_str in stanford_cars:
    if isinstance(img, list):  # for key_value_pair dataset, the first item is a list of images
        img = img[0]
    img.show()
    img.close()
    print(targets)
```

Note that this hub class works with data saved both in an Azure Blob container and on local disk.

If `local_dir`:

1. is provided, the hub will look for the resources locally and **download the data** (files included in "files_for_local_usage", the index files, and, for the iris format, the metadata and labelmap files) from `blob_container_sas` if they are not present locally
2. is NOT provided (i.e. `None`), the hub will create a manifest dataset that directly consumes data from the blob indicated by `blob_container_sas`. Note that this does not work if data are stored in zipped files; you will have to unzip your data in the Azure blob. (Index files require no update if image paths point into zip files, e.g., `a.zip@1.jpg`.) This kind of Azure-based dataset is good for exploring large datasets, but can be slow for training.

When data exists on local disk, `blob_container_sas` can be `None`.
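
For instance, a purely local setup (no Azure storage involved) can reuse the same `DatasetHub` call shown above:

```python
import pathlib
from vision_datasets.common import DatasetHub, Usages

# All data already lives under ./data, so no SAS URL is needed.
dataset_hub = DatasetHub(pathlib.Path('datasets.json').read_text(), None, './data')
local_dataset = dataset_hub.create_vision_dataset('stanford-cars', version=1, usage=Usages.TRAIN)
```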

## Operations on manifests {#oom}

Various operations on manifests are supported for different data types, such as split, merge, and sample. You can run

`vision_list_supported_operations -d {DATA_TYPE}`

to see the supported operations for a specific data type. You can use the factory classes in `vision_datasets.common.factory` to create operations for a certain data type.

```python
from vision_datasets.common import DatasetTypes, SplitFactory, SplitConfig


data_manifest = ...  # a DatasetManifest created as shown above
splitter = SplitFactory.create(DatasetTypes.IMAGE_CLASSIFICATION_MULTICLASS, SplitConfig(ratio=0.3))
manifest_1, manifest_2 = splitter.run(data_manifest)
```

### Training with PyTorch

Training with PyTorch is easy. After instantiating a `VisionDataset`, simply pass it to `vision_datasets.common.dataset.TorchDataset` together with your `transform`, and you are ready to use the PyTorch DataLoader for training.
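
A minimal sketch, assuming the `TorchDataset` import path mentioned above and that the transform receives the image and target of each sample (check the `TorchDataset` implementation for its exact transform contract):

```python
import torch
from torchvision.transforms import functional as F
from vision_datasets.common.dataset import TorchDataset  # import path as referenced above


def transform(image, target):
    # Assumption: the transform receives (PIL image, target) and returns the transformed pair.
    return F.to_tensor(image), target


# dataset: a VisionDataset instantiated as shown earlier in this README
torch_dataset = TorchDataset(dataset, transform=transform)
data_loader = torch.utils.data.DataLoader(torch_dataset, batch_size=8, collate_fn=list)  # pass-through collate
for batch in data_loader:
    pass  # each item in batch is one transformed sample; plug into your training loop
```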


## Helpful commands

A few commands come with this repo once it is installed, such as dataset check and download, conversion of detection datasets to classification datasets, and so on. Check [`UTIL_COMMANDS.md`](./UTIL_COMMANDS.md) for details.

            
