| Field | Value |
| --- | --- |
| Name | dataglasses |
| Version | 0.6.0 |
| Summary | Simple creation of dataclasses from JSON |
| Author email | Uri Granta <uri.granta+python@gmail.com> |
| License | MIT |
| Requires Python | >=3.10 |
| Keywords | dataclasses, json, json schema |
| Requirements | none recorded |
| Upload time | 2024-11-03 23:00:18 |
# dataglasses
[![PyPi](https://img.shields.io/pypi/v/dataglasses)](https://pypi.python.org/pypi/dataglasses)
[![Python](https://img.shields.io/pypi/pyversions/dataglasses)](https://pypi.python.org/pypi/dataglasses)
[![License](https://img.shields.io/pypi/l/dataglasses)](LICENSE)
[![Actions status](https://img.shields.io/github/actions/workflow/status/Udzu/dataglasses/quality_checks.yaml?logo=github&label=quality%20checks)](https://github.com/Udzu/dataglasses/actions/workflows/quality_checks.yaml)
[![Ruff](https://img.shields.io/badge/code%20style-ruff-000000.svg)](https://github.com/astral-sh/ruff)
A small package to simplify creating dataclasses from JSON and validating that JSON.
## Installation
```
$ pip install dataglasses
```
## Requirements
Requires Python 3.10 or later.
If you wish to validate arbitrary JSON data against the generated JSON schemas in Python, consider installing [jsonschema](https://github.com/python-jsonschema/jsonschema), though this is unnecessary when using `dataglasses` to convert JSON into dataclasses.
## Quick start
```python
>>> from dataclasses import dataclass
>>> from dataglasses import from_dict, to_json_schema
>>> from json import dumps
>>> @dataclass
... class InventoryItem:
... name: str
... unit_price: float
... quantity_on_hand: int = 0
>>> from_dict(InventoryItem, { "name": "widget", "unit_price": 3.0})
InventoryItem(name='widget', unit_price=3.0, quantity_on_hand=0)
>>> print(dumps(to_json_schema(InventoryItem), indent=2))
```
<details>
<summary>print output...</summary>
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$ref": "#/$defs/InventoryItem",
"$defs": {
"InventoryItem": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"unit_price": {
"type": "number"
},
"quantity_on_hand": {
"type": "integer",
"default": 0
}
},
"required": [
"name",
"unit_price"
]
}
}
}
```
</details>
## Objective
The purpose of this library is to speed up rapid development by making it trivial to populate type-annotated dataclasses with dictionary data extracted from JSON, as well as to perform basic validation on that data. The library contains just one file and two functions, so can even be directly copied into a project.
It is not intended for complex validation or high performance. For those, consider using [pydantic](https://github.com/pydantic/pydantic).
## Usage
The package contains just two functions:
```python
def from_dict(
cls: type[T],
value: Any,
*,
strict: bool = False,
transform: Optional[TransformRules] = None,
local_refs: Optional[set[type]] = None,
) -> T
```
This converts a nested dictionary `value` of input data into the given dataclass type `cls`, raising an exception if the conversion is not possible. (The optional keyword arguments are described further down.)
```python
def to_json_schema(
cls: type,
*,
strict: bool = False,
transform: Optional[TransformRules] = None,
local_refs: Optional[set[type]] = None,
) -> dict[str, Any]:
```
This generates a JSON schema representing valid inputs for the dataclass type `cls`, raising an exception if the class cannot be represented in JSON. (Again, the optional keyword arguments are described further down.)
Below is a summary of the different supported use cases:
### Nested structures
Dataclasses can be nested, using either global or local definitions.
```python
>>> @dataclass
... class TrackedItem:
...
... @dataclass
... class GPS:
... lat: float
... long: float
...
... item: InventoryItem
... location: GPS
>>> from_dict(TrackedItem, {
... "item": { "name": "pie", "unit_price": 42},
... "location": { "lat": 52.2, "long": 0.1 } })
TrackedItem(item=InventoryItem(name='pie', unit_price=42, quantity_on_hand=0),
location=TrackedItem.GPS(lat=52.2, long=0.1))
>>> print(dumps(to_json_schema(TrackedItem), indent=2))
```
<details>
<summary>print output...</summary>
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$ref": "#/$defs/TrackedItem",
"$defs": {
"TrackedItem": {
"type": "object",
"properties": {
"item": {
"$ref": "#/$defs/InventoryItem"
},
"location": {
"$ref": "#/$defs/TrackedItem.GPS"
}
},
"required": [
"item",
"location"
]
},
"InventoryItem": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"unit_price": {
"type": "number"
},
"quantity_on_hand": {
"type": "integer",
"default": 0
}
},
"required": [
"name",
"unit_price"
]
},
"TrackedItem.GPS": {
"type": "object",
"properties": {
"lat": {
"type": "number"
},
"long": {
"type": "number"
}
},
"required": [
"lat",
"long"
]
}
}
}
```
</details>
### Collection types
There is automatic support for the generic collection types most compatible with JSON: `list[T]`, `tuple[...]` and `Sequence[T]` (encoded as arrays) and `dict[str, T]` and `Mapping[str, T]` (encoded as objects).
```python
>>> from collections.abc import Mapping, Sequence
>>> @dataclass
... class Catalog:
... items: Sequence[InventoryItem]
... publisher: tuple[str, int]
... purchases: Mapping[str, int]
>>> from_dict(Catalog, {
... "items": [{ "name": "widget", "unit_price": 3.0}],
... "publisher": ["ACME", 1982],
... "purchases": { "Wile E. Coyote": 52}})
Catalog(items=[InventoryItem(name='widget', unit_price=3.0, quantity_on_hand=0)],
publisher=('ACME', 1982), purchases={'Wile E. Coyote': 52})
>>> print(dumps(to_json_schema(Catalog), indent=2))
```
<details>
<summary>print output...</summary>
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$ref": "#/$defs/Catalog",
"$defs": {
"Catalog": {
"type": "object",
"properties": {
"items": {
"type": "array",
"items": {
"$ref": "#/$defs/InventoryItem"
}
},
"publisher": {
"type": "array",
"prefixItems": [
{
"type": "string"
},
{
"type": "integer"
}
],
"minItems": 2,
"maxItems": 2
},
"purchases": {
"type": "object",
"patternProperties": {
"^.*$": {
"type": "integer"
}
}
}
},
"required": [
"items",
"publisher",
"purchases"
]
},
"InventoryItem": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"unit_price": {
"type": "number"
},
"quantity_on_hand": {
"type": "integer",
"default": 0
}
},
"required": [
"name",
"unit_price"
]
}
}
}
```
</details>
Unrestricted types like `list` or `dict` (or `set` or `Any`) and mappings with non-`str` keys can be used with `from_dict` but not with `to_json_schema`. Alternatively, these, alongside unsupported generic types like `set[T]`, can be used with both `from_dict` and `to_json_schema` by defining an appropriate encoding transformation (see section below).
### Optional and Union types
Union types (`S | T` or `Union[S, T, ...]`) are matched against all their permitted subtypes in order, returning the first successful match, or raising an exception if there are none. Optional types (`T | None` or `Optional[T]`) are handled similarly. Note that an optional type is not the same as an optional field (i.e. one with a default): a field with an optional type is still a required field unless it has a default value (which could be `None` but could also be something else).
```python
>>> from typing import Optional
>>> @dataclass
... class ItemPurchase:
... items: Sequence[InventoryItem | TrackedItem]
... invoice: Optional[int] = None
>>> from_dict(ItemPurchase, {
... "items": [{
... "item": { "name": "pie", "unit_price": 42},
... "location": { "lat": 52.2, "long": 0.1 } }],
... "invoice": 1234})
ItemPurchase(items=[TrackedItem(item=
InventoryItem(name='pie', unit_price=42, quantity_on_hand=0),
location=TrackedItem.GPS(lat=52.2, long=0.1))], invoice=1234)
>>> print(dumps(to_json_schema(ItemPurchase), indent=2))
```
<details>
<summary>print output...</summary>
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$ref": "#/$defs/ItemPurchase",
"$defs": {
"ItemPurchase": {
"type": "object",
"properties": {
"items": {
"type": "array",
"items": {
"anyOf": [
{
"$ref": "#/$defs/InventoryItem"
},
{
"$ref": "#/$defs/TrackedItem"
}
]
}
},
"invoice": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null
}
},
"required": [
"items"
]
},
"InventoryItem": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"unit_price": {
"type": "number"
},
"quantity_on_hand": {
"type": "integer",
"default": 0
}
},
"required": [
"name",
"unit_price"
]
},
"TrackedItem": {
"type": "object",
"properties": {
"item": {
"$ref": "#/$defs/InventoryItem"
},
"location": {
"$ref": "#/$defs/TrackedItem.GPS"
}
},
"required": [
"item",
"location"
]
},
"TrackedItem.GPS": {
"type": "object",
"properties": {
"lat": {
"type": "number"
},
"long": {
"type": "number"
}
},
"required": [
"lat",
"long"
]
}
}
}
```
</details>
### Enum and Literal types
Both `Enum` and `Literal` types can be used to match explicit enumerations. By default, `Enum` types match both the values and symbolic names (preferring the former in case of a clash). This behaviour can be overridden using a transformation if desired (see section below).
```python
>>> from enum import auto, StrEnum
>>> from typing import Literal
>>> class BuildType(StrEnum):
... DEBUG = auto()
... OPTIMIZED = auto()
>>> @dataclass
... class Release:
... build: BuildType
... approved: Literal["Yes", "No"]
>>> from_dict(Release, {"build": "debug", "approved": "Yes"})
Release(build=<BuildType.DEBUG: 'debug'>, approved='Yes')
>>> print(dumps(to_json_schema(Release), indent=2))
```
<details>
<summary>print output...</summary>
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$ref": "#/$defs/Release",
"$defs": {
"Release": {
"type": "object",
"properties": {
"build": {
"enum": [
"debug",
"optimized",
"DEBUG",
"OPTIMIZED"
]
},
"approved": {
"enum": [
"Yes",
"No"
]
}
},
"required": [
"build",
"approved"
]
}
}
}
```
</details>
### Annotated types
`Annotated` types can be used to populate `"description"` properties in the generated JSON schema.
```python
>>> from typing import Annotated
>>> @dataclass
... class InventoryItem:
... name: Annotated[str, "item name"]
... unit_price: Annotated[float, "unit price"]
... quantity_on_hand: Annotated[int, "quantity on hand"] = 0
>>> from_dict(InventoryItem, { "name": "widget", "unit_price": 3.0})
InventoryItem(name='widget', unit_price=3.0, quantity_on_hand=0)
>>> print(dumps(to_json_schema(InventoryItem), indent=2))
```
<details>
<summary>print output...</summary>
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$ref": "#/$defs/InventoryItem",
"$defs": {
"InventoryItem": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "item name"
},
"unit_price": {
"type": "number",
"description": "unit price"
},
"quantity_on_hand": {
"type": "integer",
"description": "quantity on hand",
"default": 0
}
},
"required": [
"name",
"unit_price"
]
}
}
}
```
</details>
### Forward references
Forward reference types (written as string literals or `ForwardRef` objects) are supported, permitting recursive dataclasses. Global and class-scoped references are handled automatically:
```python
>>> @dataclass
... class Cons:
... head: "Head"
... tail: Optional["Cons"] = None
...
... @dataclass
... class Head:
... v: int
...
... def __repr__(self):
... return f"{self.head.v}::{self.tail}"
>>> from_dict(Cons, {"head": {"v": 1}, "tail": {"head": {"v": 2}}})
1::2::None
>>> print(dumps(to_json_schema(Cons), indent=2))
```
<details>
<summary>print output...</summary>
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$ref": "#/$defs/Cons",
"$defs": {
"Cons": {
"type": "object",
"properties": {
"head": {
"$ref": "#/$defs/Cons.Head"
},
"tail": {
"anyOf": [
{
"$ref": "#/$defs/Cons"
},
{
"type": "null"
}
],
"default": null
}
},
"required": [
"head"
]
},
"Cons.Head": {
"type": "object",
"properties": {
"v": {
"type": "integer"
}
},
"required": [
"v"
]
}
}
}
```
</details>
Locally-scoped references, however, must be specified using the `local_refs` keyword:
```python
>>> def reverse_cons(seq):
...
... @dataclass
... class Cons:
... head: int
... tail: Optional["Cons"] = None
...
... def __repr__(self):
... return f"{self.head}::{self.tail}"
...
... value = None
... for x in seq: value = { "head": x, "tail": value }
... return from_dict(Cons, value, local_refs={Cons})
>>> reverse_cons([1,2,3])
3::2::1::None
```
### Strict mode
Both `from_dict` and `to_json_schema` default to ignoring additional properties that are not part of a dataclass (similar to `additionalProperties` defaulting to true in JSON schemas). This can be disabled with the `strict` keyword.
```python
>>> value = { "name": "widget", "unit_price": 4.0, "comment": "too expensive"}
>>> from_dict(InventoryItem, value)
InventoryItem(name='widget', unit_price=4.0, quantity_on_hand=0)
>>> from_dict(InventoryItem, value, strict=True)
TypeError: Unexpected <class '__main__.InventoryItem'> fields {'comment'}
>>> print(dumps(to_json_schema(InventoryItem, strict=True), indent=2))
```
<details>
<summary>print output...</summary>
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$ref": "#/$defs/InventoryItem",
"$defs": {
"InventoryItem": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "item name"
},
"unit_price": {
"type": "number",
"description": "unit price"
},
"quantity_on_hand": {
"type": "integer",
"description": "quantity on hand",
"default": 0
}
},
"required": [
"name",
"unit_price"
],
"additionalProperties": false
}
}
}
```
</details>
### Transformations
Transformations allow you to override the handling of specific types or dataclass fields, and can be used to normalise inputs or convert them into different types, including ones that aren't normally supported. Transformations are specified with the `transform` keyword, using a mapping:
* the mapping keys are either:
* a type used somewhere in the output dataclass: e.g. `str` or `set[int]`
* a dataclass field specified by a class-name tuple: e.g. `(InventoryItem, "name")` or `(Cons, "head")`
* the mapping values are a tuple consisting of:
* the JSON-serialisable input type that we want to represent this output type or field
* a callable function to convert from that input type to the output type
Note that the input type can be the same as the output type. Note also that transformations only apply when converting JSON *into* dataclasses; they don't help with serialising the dataclasses back into JSON from non-serialisable types.
```python
>>> @dataclass
... class Person:
... name : str
... aliases: set[str]
>>> transform = {
... str: (str, str.title),
... set[str]: (list[str], set),
... (Person, "name"): (str, lambda s: s + "!")}
>>> from_dict(Person, {"name": "robert", "aliases": ["bob", "bobby"]}, transform=transform)
Person(name='Robert!', aliases={'Bobby', 'Bob'})
>>> print(dumps(to_json_schema(Person, transform=transform), indent=2))
```
<details>
<summary>print output...</summary>
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"$ref": "#/$defs/Person",
"$defs": {
"Person": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"aliases": {
"type": "array",
"items": {
"type": "string"
}
}
},
"required": [
"name",
"aliases"
]
}
}
}
```
</details>
## Contributions
Bug reports, feature requests and contributions are very welcome. Note that PRs must include tests with 100% code coverage and pass the quality checks [defined here](.github/workflows/quality_checks.yaml). More development details will be added shortly, once the project has stabilised...