tfrecord

Name: tfrecord
Version: 1.14.5
Home page: https://github.com/vahidk/tfrecord
Summary: TFRecord reader
Author: Vahid Kazemi
License: MIT
Upload time: 2024-07-07 22:18:15

# TFRecord reader and writer

This library allows reading and writing TFRecord files efficiently in Python. It also provides an IterableDataset reader for TFRecord files in PyTorch. Both uncompressed and gzip-compressed TFRecords are currently supported.

## Installation

```
pip3 install tfrecord[torch]
```

## Usage

It's recommended to create an index file for each TFRecord file. An index file must be provided when using multiple workers; otherwise the loader may return duplicate records. You can create an index file for an individual TFRecord file with this utility program:
```
python3 -m tfrecord.tools.tfrecord2idx <tfrecord path> <index path>
```

To create "*.tfindex" files for all "*.tfrecord" files in a directory, run:
```
tfrecord2idx <data dir>
```
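
If you prefer to stay in Python, the indexing tool can also be called programmatically. A minimal sketch, assuming the `create_index(tfrecord_path, index_path)` helper exposed by `tfrecord.tools.tfrecord2idx` (verify against your installed version):
```python
from tfrecord.tools.tfrecord2idx import create_index

# Build an index next to the record file (assumed helper; see note above).
create_index("/tmp/data.tfrecord", "/tmp/data.index")
```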

## Reading & Writing tf.train.Example

### Reading tf.Example records in PyTorch
Use `TFRecordDataset` to read TFRecord files in PyTorch.
```python
import torch
from tfrecord.torch.dataset import TFRecordDataset

tfrecord_path = "/tmp/data.tfrecord"
index_path = None
description = {"image": "byte", "label": "float"}
dataset = TFRecordDataset(tfrecord_path, index_path, description)
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

data = next(iter(loader))
print(data)
```
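
As noted under Usage, an index file is required as soon as you load with multiple workers; without it, workers may return duplicate records. A minimal multi-worker sketch:
```python
import torch
from tfrecord.torch.dataset import TFRecordDataset

# The index lets each worker read a distinct part of the file.
dataset = TFRecordDataset("/tmp/data.tfrecord",
                          index_path="/tmp/data.index",
                          description={"image": "byte", "label": "float"})
loader = torch.utils.data.DataLoader(dataset, batch_size=32, num_workers=4)

data = next(iter(loader))
print(data)
```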

Use `MultiTFRecordDataset` to read multiple TFRecord files. This class samples from the given TFRecord files with the given probabilities.
```python
import torch
from tfrecord.torch.dataset import MultiTFRecordDataset

tfrecord_pattern = "/tmp/{}.tfrecord"
index_pattern = "/tmp/{}.index"
splits = {
    "dataset1": 0.8,
    "dataset2": 0.2,
}
description = {"image": "byte", "label": "int"}
dataset = MultiTFRecordDataset(tfrecord_pattern, index_pattern, splits, description)
loader = torch.utils.data.DataLoader(dataset, batch_size=32)

data = next(iter(loader))
print(data)
```

### Infinite and finite PyTorch dataset

By default, `MultiTFRecordDataset` is infinite, meaning that it samples the data forever. You can make it finite by providing the appropriate flag:
```
dataset = MultiTFRecordDataset(..., infinite=False)
```
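
If you keep the default infinite behavior, bound the iteration on the consumer side instead, for example with `itertools.islice` (using the `loader` from the `MultiTFRecordDataset` example above):
```python
from itertools import islice

# Consume exactly 100 batches, then stop, even though the dataset is infinite.
for batch in islice(loader, 100):
    ...  # training step
```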

### Shuffling the data

Both `TFRecordDataset` and `MultiTFRecordDataset` automatically shuffle the data when you provide a queue size.
```
dataset = TFRecordDataset(..., shuffle_queue_size=1024)
```
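
Queue-based shuffling trades memory for randomness: records are buffered and emitted in random order, so a larger queue shuffles more thoroughly. Conceptually it works like the illustrative sketch below (not the library's actual implementation):
```python
import random

def shuffle_iterator(iterator, queue_size):
    # Fill a buffer, then emit one random buffered item per incoming record.
    buffer = []
    for item in iterator:
        buffer.append(item)
        if len(buffer) >= queue_size:
            yield buffer.pop(random.randrange(len(buffer)))
    # Drain whatever is left in random order.
    random.shuffle(buffer)
    yield from buffer
```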

### Transforming input data

You can optionally pass a function as the `transform` argument to post-process features before they are returned. This can be used, for example, to decode images, normalize colors to a certain range, or pad variable-length sequences.
```python
import cv2
from tfrecord.torch.dataset import TFRecordDataset

def decode_image(features):
    # decode the raw bytes into a BGR image array
    features["image"] = cv2.imdecode(features["image"], -1)
    return features


description = {
    "image": "byte",
}

dataset = TFRecordDataset("/tmp/data.tfrecord",
                          index_path=None,
                          description=description,
                          transform=decode_image)

data = next(iter(dataset))
print(data)
```
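
The same hook covers the color-normalization case mentioned above. A sketch, assuming the `image` feature has already been decoded to a `uint8` array (e.g. by `decode_image`):
```python
import numpy as np

def normalize_image(features):
    # Scale uint8 pixel values into the [0, 1] range as float32.
    features["image"] = features["image"].astype(np.float32) / 255.0
    return features

# Transforms compose like ordinary functions:
# transform=lambda feats: normalize_image(decode_image(feats))
```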

### Writing tf.Example records in Python
```python
import tfrecord

# image_bytes, label and index are placeholders for your own data
writer = tfrecord.TFRecordWriter("/tmp/data.tfrecord")
writer.write({
    "image": (image_bytes, "byte"),
    "label": (label, "float"),
    "index": (index, "int")
})
writer.close()
```
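
Writing a whole dataset is just a loop over `write` calls. A sketch, where `samples` is a hypothetical iterable of `(image_bytes, label, index)` tuples:
```python
import tfrecord

samples = [(b"\x00\x01", 0.5, 0), (b"\x02\x03", 1.5, 1)]  # stand-in data

writer = tfrecord.TFRecordWriter("/tmp/data.tfrecord")
for image_bytes, label, index in samples:
    writer.write({
        "image": (image_bytes, "byte"),
        "label": (label, "float"),
        "index": (index, "int"),
    })
writer.close()  # close before indexing or reading the file back
```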

### Reading tf.Example records in Python
```python
import tfrecord

loader = tfrecord.tfrecord_loader("/tmp/data.tfrecord", None, {
    "image": "byte",
    "label": "float",
    "index": "int"
})
for record in loader:
    print(record["label"])
```
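
Gzip-compressed TFRecords (mentioned in the introduction) are read the same way. This sketch assumes the loader accepts a `compression_type` argument, as in recent versions of the library:
```python
import tfrecord

# compression_type="gzip" is assumed to be supported; check your version.
loader = tfrecord.tfrecord_loader("/tmp/data.tfrecord.gz", None,
                                  {"image": "byte", "label": "float"},
                                  compression_type="gzip")
for record in loader:
    print(record["label"])
```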

## Reading & Writing tf.train.SequenceExample

SequenceExamples can be read and written using the same methods shown above, with one extra argument
(`sequence_description` for reading and `sequence_datum` for writing) that causes the respective
read/write functions to treat the data as a SequenceExample.

### Writing SequenceExamples to file

```python
import tfrecord

writer = tfrecord.TFRecordWriter("/tmp/data.tfrecord")
# The first dict holds context features, the second the per-step sequence features.
writer.write({'length': (3, 'int'), 'label': (1, 'int')},
             {'tokens': ([[0, 0, 1], [0, 1, 0], [1, 0, 0]], 'int'), 'seq_labels': ([0, 1, 1], 'int')})
writer.write({'length': (2, 'int'), 'label': (1, 'int')},
             {'tokens': ([[0, 0, 1], [1, 0, 0]], 'int'), 'seq_labels': ([0, 1], 'int')})
writer.close()
```

### Reading SequenceExamples in Python

Reading a SequenceExample yields a tuple of two elements: the context features and the sequence features.

```python
import tfrecord

context_description = {"length": "int", "label": "int"}
sequence_description = {"tokens": "int", "seq_labels": "int"}
loader = tfrecord.tfrecord_loader("/tmp/data.tfrecord", None,
                                  context_description,
                                  sequence_description=sequence_description)

for context, sequence_feats in loader:
    print(context["label"])
    print(sequence_feats["seq_labels"])
```

### Reading SequenceExamples in PyTorch

As described in the section on transforming input data, one can pass a function as the `transform`
argument to post-process features. This is especially important for sequence features, which are
variable-length and need to be padded before being batched.

```python
import torch
import numpy as np
from tfrecord.torch.dataset import TFRecordDataset

PAD_WIDTH = 5
def pad_sequence_feats(data):
    # Pad every sequence feature up to a fixed length of PAD_WIDTH steps.
    context, features = data
    for k, v in features.items():
        features[k] = np.pad(v, ((0, PAD_WIDTH - len(v)), (0, 0)), 'constant')
    return (context, features)

context_description = {"length": "int", "label": "int"}
sequence_description = {"tokens": "int", "seq_labels": "int"}
dataset = TFRecordDataset("/tmp/data.tfrecord",
                          index_path=None,
                          description=context_description,
                          transform=pad_sequence_feats,
                          sequence_description=sequence_description)
loader = torch.utils.data.DataLoader(dataset, batch_size=32)
data = next(iter(loader))
print(data)
```

Alternatively, you can implement a custom `collate_fn` to assemble the batch yourself, for example
to perform dynamic padding, in which case the fixed-width `transform` above is no longer needed.

```python
import torch
from torch.nn.utils import rnn
from torch.utils.data._utils import collate
from tfrecord.torch.dataset import TFRecordDataset

def collate_fn(batch):
    # Each batch element is a (context, sequence_features) tuple.
    context, feats = zip(*batch)
    feats_ = {k: [torch.Tensor(d[k]) for d in feats] for k in feats[0]}
    # Pad each sequence feature dynamically to the longest item in the batch.
    return (collate.default_collate(context),
            {k: rnn.pad_sequence(f, True) for (k, f) in feats_.items()})

context_description = {"length": "int", "label": "int"}
sequence_description = {"tokens": "int", "seq_labels": "int"}
dataset = TFRecordDataset("/tmp/data.tfrecord",
                          index_path=None,
                          description=context_description,
                          sequence_description=sequence_description)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, collate_fn=collate_fn)
data = next(iter(loader))
print(data)
```

