dataset-iterator

Name: dataset-iterator
Version: 0.4.1
Home page: https://github.com/jeanollion/dataset_iterator.git
Summary: Keras-style data iterator for images contained in dataset files such as hdf5 or PIL-readable files. Images can be contained in several files.
Upload time: 2024-04-23 08:09:46
Author: Jean Ollion
Requires Python: >=3
Keywords: iterator, dataset, image, numpy
# Dataset Iterator
This repo contains Keras iterator classes for multi-channel (time-lapse) images contained in dataset files such as hdf5.

## Dataset structure
One dataset file can contain several sub-datasets (dataset_name0, dataset_name1, etc.); the iterator iterates through all of them as if they were concatenated.

    .
    ├── ...
    ├── dataset_name0                    
    │   ├── channel0          
    │   └── channel1   
    │   └── ...
    ├── dataset_name1                    
    │   ├── channel0          
    │   └── channel1   
    │   └── ...
    └── ...

Each dataset contains channels (channel0, channel1, ...) that must have the same shape. All datasets must have the same number of channels, and shapes (except the batch dimension) must be equal across datasets.
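
For reference, here is a minimal sketch (not part of the original README) of how such a file could be created with `h5py`; the dataset names, shapes and dtypes below are arbitrary placeholders:

```python
import h5py
import numpy as np

# Create an HDF5 file with two sub-datasets, each holding two channels of
# identical shape (here: 100 images of 64x64 pixels). Names are arbitrary.
with h5py.File("dataset.h5", "w") as f:
    for name in ("dataset_name0", "dataset_name1"):
        f.create_dataset(f"{name}/channel0", data=np.random.rand(100, 64, 64).astype("float32"))
        f.create_dataset(f"{name}/channel1", data=np.zeros((100, 64, 64), dtype="uint8"))
```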

## Groups

There can be additional folder levels, for instance to store train and test sets in the same file:

    .
    ├── ...
    ├── experiment1                    
    │   ├── train          
    │   │   ├── raw
    │   │   └── labels
    │   └── test   
    │       ├── raw
    │       └── labels
    ├── experiment2                    
    │   ├── train          
    │   │   ├── raw
    │   │   └── labels
    │   └── test   
    │       ├── raw
    │       └── labels
    └── ...
The `group_keyword` argument selects which group is iterated over:

```python
from dataset_iterator import MultiChannelIterator

train_it = MultiChannelIterator(dataset_file_path=file_path, channel_keywords=["/raw", "/labels"], group_keyword="train")
test_it = MultiChannelIterator(dataset_file_path=file_path, channel_keywords=["/raw", "/labels"], group_keyword="test")
```
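
A minimal usage sketch, assuming the iterators behave like Keras `Sequence` objects (supporting `len()`, integer indexing, and direct use in `model.fit`); the exact structure of a batch depends on the iterator configuration:

```python
# Assumption: MultiChannelIterator behaves like a Keras Sequence.
n_batches = len(train_it)   # number of batches per epoch
batch = train_it[0]         # first batch; structure depends on configuration

# A Keras model could then be trained directly on the iterators, e.g.:
# model.fit(train_it, validation_data=test_it, epochs=10)
```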
# Image formats
- These iterators use an object of class `DatasetIO` to access the data.
- There are currently implementations of `DatasetIO` for .h5 files (`H5pyIO`) and for datasets composed of multiple image files readable by Pillow (`MultipleFileIO`).
- One can also concatenate datasets from different files:
  - if a dataset is split across several files that contain the same channels, use `ConcatenateDatasetIO`
  - if a dataset has its channels in different files, use `MultipleDatasetIO`

# Demo
See this notebook for a demo: [![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1J-UPivwyNTpyLhOMfzhfG0pIl6gDD9I5)

            
