# numpydantic

[![PyPI - Version](https://img.shields.io/pypi/v/numpydantic)](https://pypi.org/project/numpydantic)
[![Documentation Status](https://readthedocs.org/projects/numpydantic/badge/?version=latest)](https://numpydantic.readthedocs.io/en/latest/?badge=latest)
[![Coverage Status](https://coveralls.io/repos/github/p2p-ld/numpydantic/badge.svg)](https://coveralls.io/github/p2p-ld/numpydantic)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

A python package for specifying, validating, and serializing arrays with arbitrary backends in pydantic.

**Problem:** 
1) Pydantic is great for modeling data. 
2) Arrays are one of a few elemental types in computing,

but ...

3) Typical type annotations would only work for a single array library implementation
4) They wouldn’t allow you to specify array shapes and dtypes, and
5) If you try to specify an array in pydantic, this happens:

```python
>>> from pydantic import BaseModel
>>> import numpy as np

>>> class MyModel(BaseModel):
...     array: np.ndarray
pydantic.errors.PydanticSchemaGenerationError: 
Unable to generate pydantic-core schema for <class 'numpy.ndarray'>. 
Set `arbitrary_types_allowed=True` in the model_config to ignore this error 
or implement `__get_pydantic_core_schema__` on your type to fully support it.
```

**Solution**

Numpydantic allows you to do this:

```python
from pydantic import BaseModel
from numpydantic import NDArray, Shape

class MyModel(BaseModel):
    array: NDArray[Shape["3 x, 4 y, * z"], int]
```

And use it with your favorite array library:

```python
import numpy as np
import dask.array as da
import zarr

# numpy
model = MyModel(array=np.zeros((3, 4, 5), dtype=int))
# dask
model = MyModel(array=da.zeros((3, 4, 5), dtype=int))
# hdf5 datasets
model = MyModel(array=('data.h5', '/nested/dataset'))
# zarr arrays
model = MyModel(array=zarr.zeros((3,4,5), dtype=int))
model = MyModel(array='data.zarr')
model = MyModel(array=('data.zarr', '/nested/dataset'))
# video files
model = MyModel(array="data.mp4")
```
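
However the array is provided, the validated `model.array` behaves like the
underlying array. A quick sketch continuing the numpy case above:

```python
model = MyModel(array=np.zeros((3, 4, 5), dtype=int))
model.array.shape        # (3, 4, 5)
model.array[0, 0, :2]    # array([0, 0])
```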

`numpydantic` supports pydantic, but none of its behavior depends on it!
Use the `NDArray` type annotation like a regular type outside
of pydantic -- e.g. to validate an array anywhere, use `isinstance`:

```python
array_type = NDArray[Shape["1, 2, 3"], int]
isinstance(np.zeros((1,2,3), dtype=int), array_type)
# True
isinstance(zarr.zeros((1,2,3), dtype=int), array_type)
# True
isinstance(np.zeros((4,5,6), dtype=int), array_type)
# False
isinstance(np.zeros((1,2,3), dtype=float), array_type)
# False
```

Or use it as a convenient callable shorthand for validating and working with
array types that usually don't have an array-like API.

```python
>>> rgb_video_type = NDArray[Shape["* t, 1920 x, 1080 y, 3 rgb"], np.uint8]
>>> video = rgb_video_type('data.mp4')
>>> video.shape
(10, 1920, 1080, 3)
>>> video[0, 0:3, 0:3, 0]
array([[0, 0, 0],
       [0, 0, 0],
       [0, 0, 0]], dtype=uint8)
```


## Features:
- **Types** - Annotations (based on [nptyping](https://github.com/ramonhagenaars/nptyping))
  for specifying arrays in pydantic models
- **Validation** - Shape, dtype, and other array validations
- **Interfaces** - Works with [`numpy`](https://numpydantic.readthedocs.io/en/latest/api/interface/numpy.html), 
  [`dask`](https://numpydantic.readthedocs.io/en/latest/api/interface/dask.html), 
  [`hdf5`](https://numpydantic.readthedocs.io/en/latest/api/interface/hdf5.html),
  [`video`](https://numpydantic.readthedocs.io/en/latest/api/interface/video.html), 
  [`zarr`](https://numpydantic.readthedocs.io/en/latest/api/interface/zarr.html),
  and a simple extension system to make it work with whatever else you want!
- **Serialization** - Dump an array as a JSON-compatible array-of-arrays with enough metadata to be able to 
  recreate the model in the native format
- **Schema Generation** - Correct JSON Schema for arrays, complete with shape and dtype constraints, to
  make your models interoperable 
- **Fast** - The validation codepath is careful to take quick exits and not perform unnecessary work,
  and interfaces use whatever tools are available to validate against array metadata and lazily load to avoid
  expensive i/o operations. Our goal is to make numpydantic a tool you don't ever need to think about.

Coming soon:
- **Metadata** - This package was built to be used with [linkml arrays](https://linkml.io/linkml/schemas/arrays.html),
  so we will be extending it to carry arbitrary metadata from the type annotation object into the JSON schema representation.
- **Extensible Specification** - for v1, we are implementing the existing nptyping syntax, but 
  for v2 we will be updating that to an extensible specification syntax to allow interfaces to validate additional
  constraints like chunk sizes, as well as make array specifications more introspectable and friendly to runtime usage.
- **Advanced dtype handling** - handling dtypes that only exist in some array backends, allowing
  minimum and maximum precision ranges, and so on as type maps provided by interface classes :)
- (see [todo](https://numpydantic.readthedocs.io/en/latest/todo.html))

## Installation

numpydantic tries to keep dependencies minimal, so by default it only comes with 
dependencies to use the numpy interface. Add the extra relevant to your favorite
array library to be able to use it!

```shell
pip install numpydantic
# dask
pip install 'numpydantic[dask]'
# hdf5
pip install 'numpydantic[hdf5]'
# video
pip install 'numpydantic[video]'
# zarr
pip install 'numpydantic[zarr]'
# all array formats
pip install 'numpydantic[array]'
```

## Usage

> [!TIP]
> The README is just a sample! See the full documentation at 
> https://numpydantic.readthedocs.io

Specify an array using [nptyping syntax](https://github.com/ramonhagenaars/nptyping/blob/master/USERDOCS.md)
and use it with your favorite array library :)

Use the `NDArray` class like you would any other python type,
combine it with `Union`, make it `Optional`, etc.
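
Making a field `Optional` works as it would for any other type; a minimal
sketch with a hypothetical model:

```python
from typing import Optional

from pydantic import BaseModel
from numpydantic import NDArray, Shape

class MaybeImage(BaseModel):
    # the array may be omitted entirely
    array: Optional[NDArray[Shape["* x, * y"], float]] = None
```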

For example, to specify a very special type of image that can be either
- a 2D float array where the axes can be any size,
- a 3D uint8 array where the third axis must be size 3, or
- a 1080p video

```python
from typing import Union
from pydantic import BaseModel
import numpy as np

from numpydantic import NDArray, Shape

class Image(BaseModel):
    array: Union[
        NDArray[Shape["* x, * y"], float],
        NDArray[Shape["* x, * y, 3 rgb"], np.uint8],
        NDArray[Shape["* t, 1080 y, 1920 x, 3 rgb"], np.uint8]
    ]
```

And then use that as a transparent interface to your favorite array library!

### Interfaces

#### Numpy

The Coca-Cola of array libraries

```python
import numpy as np
# works
frame_gray = Image(array=np.ones((1280, 720), dtype=float))
frame_rgb  = Image(array=np.ones((1280, 720, 3), dtype=np.uint8))

# fails
wrong_n_dimensions = Image(array=np.ones((1280,), dtype=float))
wrong_shape = Image(array=np.ones((1280,720,10), dtype=np.uint8))

# shapes and types are checked together, so this also fails
wrong_shape_dtype_combo = Image(array=np.ones((1280, 720, 3), dtype=float))
```
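
The failing lines above raise pydantic's standard `ValidationError`, so they can
be handled like any other validation failure:

```python
from pydantic import ValidationError

try:
    Image(array=np.ones((1280,), dtype=float))
except ValidationError as e:
    # the error reports each union member and why it didn't match
    print(e)
```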

#### Dask

High performance chunked arrays! The backend for many new array libraries! 

Works exactly the same as numpy arrays

```python
import dask.array as da

# validate a humongous image without having to load it into memory
video_array = da.zeros(shape=(1e10,1e20,3), dtype=np.uint8)
dask_video = Image(array=video_array)
```
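
Validation only needs the dask array's shape and dtype metadata, so nothing is
computed or loaded; assuming the interface stores the dask array as-is, a quick
check:

```python
# still a lazy dask array; no chunks were materialized by validation
assert isinstance(dask_video.array, da.Array)
```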

#### HDF5

Arrays increasingly can't fit in memory, but dealing with arrays on disk
can become a pain in concurrent applications. Numpydantic allows you to 
specify the location of an array within an hdf5 file on disk and use it just like
any other array!

E.g., make an array on disk...

```python
from pathlib import Path
import h5py
import numpy as np
from numpydantic.interface.hdf5 import H5ArrayPath

h5f_file = Path('my_file.h5')
array_path = "/nested/array"

# make an HDF5 array
h5f = h5py.File(h5f_file, "w")
array = np.random.randint(0, 255, (1920,1080,3), np.uint8)
h5f.create_dataset(array_path, data=array)
h5f.close()
```

Then use it in your model! numpydantic only opens the file for as long as it's needed

```python
>>> h5f_image = Image(array=H5ArrayPath(file=h5f_file, path=array_path))
>>> h5f_image.array[0:5,0:5,0]
array([[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]], dtype=uint8)
>>> h5f_image.array[0:2,0:2,0] = 1
>>> h5f_image.array[0:5,0:5,0]
array([[1, 1, 0, 0, 0],
       [1, 1, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]], dtype=uint8)
```

Numpydantic tries to be a smart but transparent proxy, exposing the methods and attributes
of the source type even when we aren't directly using them, like when dealing with on-disk HDF5 arrays.

If you want, you can take full control and directly interact with the underlying `h5py.Dataset`
object and leave the file open between calls:

```python
>>> dataset = h5f_image.array.open()
>>> # do some stuff that requires the dataset to be held open
>>> h5f_image.array.close()
```

#### Video

Videos are just arrays with fancy encoding! Numpydantic can validate shape and dtype,
and lazily load chunks of frames with array-like syntax!

Say we have some video `data.mp4` ...
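
If you don't have one handy, here's a minimal sketch that writes a short test
video with opencv (which the `video` extra installs); whether the `mp4v` codec
is available depends on your opencv build:

```python
import cv2
import numpy as np

# write 10 black 1080p frames to data.mp4
writer = cv2.VideoWriter(
    "data.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    30,             # frames per second
    (1920, 1080),   # frame size as (width, height)
)
for _ in range(10):
    writer.write(np.zeros((1080, 1920, 3), dtype=np.uint8))
writer.release()
```

Then validate and slice it like any other array: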

```python
video = Image(array='data.mp4')
# get a single frame
video.array[5]
# or a range of frames!
video.array[5:10]
# or whatever slicing you want to do!
video.array[5:50:5, 0:10, 50:70]
```

As elsewhere, a proxy class is a transparent pass-through interface to the underlying
opencv class, so we can get the rest of the video properties ...

```python
import cv2

# get the total frames from opencv
video.array.get(cv2.CAP_PROP_FRAME_COUNT)
# the proxy class also provides a convenience property
video.array.n_frames
```

#### Zarr

Zarr works similarly!

Use it with any of Zarr's backends: Nested, Zipfile, S3, it's all the same!

E.g., create a nested zarr array on disk and use it...

```python
import numpy as np
import zarr
from numpydantic.interface.zarr import ZarrArrayPath

array_file = 'data/array.zarr'
nested_path = 'data/sets/here'

root = zarr.open(array_file, mode='w')
nested_array = root.zeros(
    nested_path, 
    shape=(1000, 1080, 1920, 3), 
    dtype=np.uint8
)

# validates just fine!
zarr_video = Image(array=ZarrArrayPath(array_file, nested_path))
# or just pass a tuple, the interface can discover it's a zarr array
zarr_video = Image(array=(array_file, nested_path))
```
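
The zip-file case works similarly; a rough sketch using zarr-python 2.x's
`ZipStore` (the file and array names here are just examples), validating the
array object directly:

```python
import numpy as np
import zarr

store = zarr.ZipStore("data.zip", mode="w")
root = zarr.group(store=store)
frames = root.zeros("frames", shape=(10, 1080, 1920, 3), dtype=np.uint8)

# validate the in-memory handle like any other zarr array
zipped_video = Image(array=frames)
store.close()
```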

### JSON Schema

Numpydantic generates JSON Schema for all its array specifications, so for the above
model, we get a schema for each of the possible array types that properly handles
the shape and dtype constraints and includes the originating numpy type as a `dtype` annotation.

```python
Image.model_json_schema()
```

```json
{
  "properties": {
    "array": {
      "anyOf": [
        {
          "items": {"items": {"type": "number"}, "type": "array"},
          "type": "array"
        },
        {
          "dtype": "numpy.uint8",
          "items": {
            "items": {
              "items": {
                "maximum": 255,
                "minimum": 0,
                "type": "integer"
              },
              "maxItems": 3,
              "minItems": 3,
              "type": "array"
            },
            "type": "array"
          },
          "type": "array"
        },
        {
          "dtype": "numpy.uint8",
          "items": {
            "items": {
              "items": {
                "items": {
                  "maximum": 255,
                  "minimum": 0,
                  "type": "integer"
                },
                "maxItems": 3,
                "minItems": 3,
                "type": "array"
              },
              "maxItems": 1920,
              "minItems": 1920,
              "type": "array"
            },
            "maxItems": 1080,
            "minItems": 1080,
            "type": "array"
          },
          "type": "array"
        }
      ],
      "title": "Array"
    }
  },
  "required": ["array"],
  "title": "Image",
  "type": "object"
}
```
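
Aside from the extra `dtype` annotation (which standard validators ignore as an
unknown keyword), this is ordinary JSON Schema, so dumped models can be checked
with generic tooling. A hedged sketch using the third-party `jsonschema` package:

```python
import json

import numpy as np
from jsonschema import validate

# dump a valid Image and check it against the generated schema
instance = Image(array=np.ones((2, 2), dtype=float))
validate(json.loads(instance.model_dump_json()), Image.model_json_schema())
```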

numpydantic can even handle shapes with unbounded numbers of dimensions by using
recursive JSON Schema!!!

So the any-shaped array (using nptyping's ellipsis notation):

```python
class AnyShape(BaseModel):
    array: NDArray[Shape["*, ..."], np.uint8]
```

is rendered to JSON Schema like this:

```json
{
  "$defs": {
    "any-shape-array-9b5d89838a990d79": {
      "anyOf": [
        {
          "items": {
            "$ref": "#/$defs/any-shape-array-9b5d89838a990d79"
          },
          "type": "array"
        },
        {"maximum": 255, "minimum": 0, "type": "integer"}
      ]
    }
  },
  "properties": {
    "array": {
      "dtype": "numpy.uint8",
      "items": {"$ref": "#/$defs/any-shape-array-9b5d89838a990d79"},
      "title": "Array",
      "type": "array"
    }
  },
  "required": ["array"],
  "title": "AnyShape",
  "type": "object"
}
```

where the key `"any-shape-array-9b5d89838a990d79"` uses a (blake2b) hash of the
inner dtype specification so that multiple any-shaped arrays in a single
model schema are deduplicated without conflicts.

### Dumping

One of the main reasons to use chunked array libraries like zarr is to avoid
needing to load the entire array into memory. When dumping data to JSON, numpydantic 
tries to mirror this behavior, by default only dumping the metadata that is
necessary to identify the array.

For example, with zarr:

```python
array = zarr.array([[1,2,3],[4,5,6],[7,8,9]], dtype=float)
instance = Image(array=array)
dumped = instance.model_dump_json()
```

```json
{
  "array":
  {
    "Chunk shape": "(3, 3)",
    "Chunks initialized": "1/1",
    "Compressor": "Blosc(cname='lz4', clevel=5, shuffle=SHUFFLE, blocksize=0)",
    "Data type": "float64",
    "No. bytes": "72",
    "No. bytes stored": "421",
    "Order": "C",
    "Read-only": "False",
    "Shape": "(3, 3)",
    "Storage ratio": "0.2",
    "Store type": "zarr.storage.KVStore",
    "Type": "zarr.core.Array",
    "hexdigest": "c51604eace325fe42bbebf39146c0956bd2ed13c"
  }
}
```

To dump the whole array, we use pydantic's serialization context:

```python
dumped = instance.model_dump_json(context={'zarr_dump_array': True})
```
```json
{
  "array":
  {
    "same thing,": "except also...",
    "array": [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
    "hexdigest": "c51604eace325fe42bbebf39146c0956bd2ed13c"
  }
}
```
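
Since the full dump is plain JSON, getting the values back into memory is just a
round-trip through the standard library; a quick sketch:

```python
import json

import numpy as np

data = json.loads(dumped)
restored = np.array(data["array"]["array"])  # shape (3, 3), dtype float64
```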

## Vendored Dependencies

We have vendored dependencies in the `src/numpydantic/vendor` package,
and reproduced their licenses in the `licenses` directory.

- [nptyping](https://github.com/ramonhagenaars/nptyping) - `numpydantic.vendor.nptyping` - `/licenses/nptyping.txt`
            
