> [!NOTE]
> The primary reason for this fork is to enable full round-trip serialization and deserialization of NumPy scalars and 0-dimensional arrays to JSON and back. This feature is essential for applications that require precise data preservation when working with NumPy data types.
Despite contributing this enhancement to the original project (see [Pull Request #99](https://github.com/mverleg/pyjson_tricks/pull/99)), there was a difference in opinion with the maintainer regarding its inclusion. As a result, this fork aims to continue development with this functionality integrated.
# ro_json
The `ro_json` package brings several pieces of
functionality to Python's handling of JSON files:
1. **Store and load numpy arrays** in human-readable format.
2. **Store and load class instances** both generic and customized.
3. **Store and load date/times** as a dictionary (including timezone).
4. **Preserve map order** `{}` using `OrderedDict`.
5. **Allow for comments** in json files by starting lines with `#`.
6. Sets, complex numbers, `Decimal`, `Fraction`, enums, pathlib Paths,
   bytes, and more.
As well as compression and disallowing duplicate keys.
* Code: <https://github.com/ramonaoptics/ro-json>
<!-- * Documentation: <http://ro-json.readthedocs.org/en/latest/> -->
* PIP: <https://pypi.python.org/pypi/ro-json>
Several keys of the format `__keyname__` have special meanings, and more
might be added in future releases.
If you're considering JSON-but-with-comments as a config file format,
have a look at [HJSON](https://github.com/hjson/hjson-py), it might be
more appropriate. For other purposes, keep reading!
Thanks for all the GitHub stars ⭐!
# Installation and use
You can install using
``` bash
pip install ro_json
```
Decoding of some data types needs the corresponding package to be
installed, e.g. `numpy` for arrays, `pandas` for dataframes and `pytz`
for timezone-aware datetimes.
You can import the usual json functions dump(s) and load(s), as well as
a separate comment removal function, as follows:
``` python
from ro_json import dump, dumps, load, loads, strip_comments
```
The exact signatures of these and other functions are in the [documentation](http://json-tricks.readthedocs.org/en/latest/#main-components).
Several older versions of Python are supported. For an up-to-date list, see [the automated tests](./.github/workflows/tests.yml).
# Features
## Numpy arrays
When not compressed, the array is encoded in a sort-of-readable, very
flexible, and portable format, like so:
``` python
from numpy import arange, uint8
from ro_json import dumps

arr = arange(0, 10, 1, dtype=uint8).reshape((2, 5))
print(dumps({'mydata': arr}))
```
this yields:
``` javascript
{
    "mydata": {
        "dtype": "uint8",
        "shape": [2, 5],
        "Corder": true,
        "__ndarray__": [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
    }
}
```
which will be converted back to a numpy array when using
`ro_json.loads`. Note that the memory order (`Corder`) is only
stored in v3.1 and later and for arrays with at least 2 dimensions.
As you see, this uses the magic key `__ndarray__`. Don't use
`__ndarray__` as a dictionary key unless you're trying to make a numpy
array (and know what you're doing).
Numpy scalars are also serialized (v3.5+). They are represented by the
closest python primitive type. A special representation was not
feasible, because Python's json implementation serializes some numpy
types as primitives, without consulting custom encoders. If you want to
preserve the exact numpy type, use
[encode_scalars_inplace](https://json-tricks.readthedocs.io/en/latest/#ro_json.np_utils.encode_scalars_inplace).
There is also a compressed format (thanks to `claydugo` for the fix). From
the next major release, this will be the default when using compression.
For now, you can use it as:
``` python
dumps(data, compression=True, properties={'ndarray_compact': True})
```
This compressed format encodes the array data in base64, with gzip
compression for the array, unless 1) compression has little effect for
that array, or 2) the whole file is already compressed. If you only want
compact format for large arrays, pass the number of elements to
`ndarray_compact`.
Example:
``` python
from numpy import linspace, array, pi, exp
from ro_json import dumps

data = [linspace(0, 10, 9), array([pi, exp(1)])]
dumps(data, compression=False, properties={'ndarray_compact': 8})
```

which produces:

``` javascript
[{
    "__ndarray__": "b64.gz:H4sIAAAAAAAC/2NgQAZf7CE0iwOE5oPSIlBaEkrLQegGRShfxQEAz7QFikgAAAA=",
    "dtype": "float64",
    "shape": [9]
}, {
    "__ndarray__": [3.141592653589793, 2.718281828459045],
    "dtype": "float64",
    "shape": [2]
}]
```
## Class instances
`ro_json` can serialize class instances.
If the class behaves normally (not generated dynamically, no `__new__` or
`__metaclass__` magic, etc.) *and* all its attributes are serializable,
then this should work by default.
``` python
# ro_json/test_class.py
class MyTestCls:
    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            setattr(self, k, v)

cls_instance = MyTestCls(s='ub', dct={'7': 7})

json = dumps(cls_instance, indent=4)
cls_instance_again = loads(json)
```
You'll get your instance back. Here the json looks like this:
``` javascript
{
    "__instance_type__": [
        "ro_json.test_class",
        "MyTestCls"
    ],
    "attributes": {
        "s": "ub",
        "dct": {
            "7": 7
        }
    }
}
```
As you can see, this stores the module and class name. The class must be
importable from the same module when decoding (and should not have
changed). If it isn't, you have to manually provide a dictionary to
`cls_lookup_map` when loading in which the class name can be looked up.
Note that if the class is imported, then `globals()` is such a
dictionary (so try `loads(json, cls_lookup_map=globals())`). Also note
that if the class is defined in the 'top' script (that you're calling
directly), then this isn't a module and the import part cannot be
extracted. Only the class name will be stored; it can then only be
deserialized in the same script, or if you provide `cls_lookup_map`.
Note that this also works with `slots` without having to do anything
(thanks to `koffie` and `dominicdoty`), which encodes like this (custom
indentation):
``` javascript
{
    "__instance_type__": ["module.path", "ClassName"],
    "slots": {"slotattr": 37},
    "attributes": {"dictattr": 42}
}
```
If the instance doesn't serialize automatically, or if you want custom
behaviour, then you can implement `__json_encode__(self)` and
`__json_decode__(self, **attributes)` methods, like so:
``` python
class CustomEncodeCls:
    def __init__(self):
        self.relevant = 42
        self.irrelevant = 37

    def __json_encode__(self):
        # should return primitive, serializable types like dict, list, int, string, float...
        return {'relevant': self.relevant}

    def __json_decode__(self, **attrs):
        # should initialize all properties; note that __init__ is not called implicitly
        self.relevant = attrs['relevant']
        self.irrelevant = 12
```
As you've seen, this uses the magic key `__instance_type__`. Don't use
`__instance_type__` as a dictionary key unless you know what you're
doing.
## Date, time, datetime and timedelta
Date, time, datetime and timedelta objects are stored as dictionaries of
"day", "hour", "millisecond", etc. keys, one for each nonzero property.
The timezone name is also stored if it is set, as is DST (thanks to `eumir`).
You'll need to have `pytz` installed to use timezone-aware date/times,
it's not needed for naive date/times.
``` javascript
{
    "__datetime__": null,
    "year": 1988,
    "month": 3,
    "day": 15,
    "hour": 8,
    "minute": 3,
    "second": 59,
    "microsecond": 7,
    "tzinfo": "Europe/Amsterdam"
}
```
This approach was chosen over timestamps for readability and consistency
between date and time, and over a single string to prevent parsing
problems and reduce dependencies. Note that if `primitives=True`,
date/times are encoded as ISO 8601, but they won't be restored
automatically.
Don't use `__date__`, `__time__`, `__datetime__`, `__timedelta__` or
`__tzinfo__` as dictionary keys unless you know what you're doing, as
they have special meaning.
## Order
Given an ordered dictionary like this (see the tests for a longer one):
``` python
from collections import OrderedDict

ordered = OrderedDict((
    ('elephant', None),
    ('chicken', None),
    ('tortoise', None),
))
```
Converting to json and back will preserve the order:
``` python
from ro_json import dumps, loads
json = dumps(ordered)
ordered = loads(json, preserve_order=True)
```
where `preserve_order=True` is added for emphasis; it can be left out
since it's the default.
As a note on [performance](http://stackoverflow.com/a/8177061/723090),
both dicts and OrderedDicts have the same scaling for getting and
setting items (`O(1)`). In Python versions before 3.5, OrderedDicts were
implemented in Python rather than C, so were somewhat slower; since
Python 3.5 both are implemented in C. In summary, you should have no
scaling problems and probably no performance problems at all, especially
in Python 3. Python 3.6+ preserves order of dictionaries by default
making this redundant, but this is an implementation detail that should
not be relied on.
## Comments
*Warning: in the next major version, comment parsing will be opt-in, not
default anymore (for performance reasons). Update your code now to pass
`ignore_comments=True` explicitly if you want comment parsing.*
This package uses `#` and `//` for comments, which seem to be the most
common conventions, though only the latter is valid javascript.
For example, you could call `loads` on the following string:
``` javascript
{ # "comment 1
    "hello": "Wor#d", "Bye": "\"M#rk\"", "yes\\\"": 5,# comment" 2
    "quote": "\"th#t's\" what she said", // comment "3"
    "list": [1, 1, "#", "\"", "\\", 8], "dict": {"q": 7} #" comment 4 with quotes
}
// comment 5
```
And it would return the de-commented version:
``` javascript
{
    "hello": "Wor#d", "Bye": "\"M#rk\"", "yes\\\"": 5,
    "quote": "\"th#t's\" what she said",
    "list": [1, 1, "#", "\"", "\\", 8], "dict": {"q": 7}
}
```
Since comments aren't stored in the Python representation of the data,
loading and then saving a json file will remove the comments (it also
likely changes the indentation).
The implementation of comments is a bit crude, which means that there are
some exceptional cases that aren't handled correctly ([#57](https://github.com/mverleg/pyjson_tricks/issues/57)).
It is also not very fast. For that reason, if `ignore_comments` wasn't
explicitly set to True, then ro_json first tries to parse without
ignoring comments. If that fails, then it will automatically re-try
with comment handling. This makes the no-comment case faster at the cost
of the comment case, so if you are expecting comments make sure to set
`ignore_comments` to True.
## Other features
* Special floats like `NaN`, `Infinity` and
`-0` using the `allow_nan=True` argument
([non-standard](https://stackoverflow.com/questions/1423081/json-left-out-infinity-and-nan-json-status-in-ecmascript)
json, may not decode in other implementations).
* Sets are serializable and can be loaded. By default the set json
representation is sorted, to have a consistent representation.
* Save and load complex numbers (py3) with `1+2j` serializing as
`{'__complex__': [1, 2]}`.
* Save and load `Decimal` and `Fraction` (including NaN, infinity, -0
for Decimal).
* Save and load `Enum` (thanks to `Jenselme`), either built-in in
python3.4+, or with the [enum34](https://pypi.org/project/enum34/)
package in earlier versions. `IntEnum` needs
[encode_intenums_inplace](https://json-tricks.readthedocs.io/en/latest/#ro_json.utils.encode_intenums_inplace).
* `ro_json` allows for gzip compression using the
`compression=True` argument (off by default).
* `ro_json` can check for duplicate keys in maps by setting
`allow_duplicates` to False. These are [kind of
allowed](http://stackoverflow.com/questions/21832701/does-json-syntax-allow-duplicate-keys-in-an-object),
but are handled inconsistently between json implementations. In
Python, for `dict` and `OrderedDict`, duplicate keys are silently
overwritten.
* Save and load `pathlib.Path` objects (e.g., the current path,
`Path('.')`, serializes as `{"__pathlib__": "."}`)
(thanks to `bburan`).
* Save and load bytes (python 3+ only), which will be encoded as utf8 if
that is valid, or as base64 otherwise. Base64 is always used if
primitives are requested. Serialized as
`[{"__bytes_b64__": "aGVsbG8="}]` vs `[{"__bytes_utf8__": "hello"}]`.
* Save and load slices (thanks to `claydugo`).
# Preserve type vs use primitive
By default, types are encoded such that they can be restored to their
original type when loaded with `ro-json`. Example encodings in this
documentation refer to that format.
You can also choose to store things as their closest primitive type
(e.g. arrays and sets as lists, decimals as floats). This may be
desirable if you don't care about the exact type, or you are loading
the json in another language (which doesn't restore python types).
It's also smaller.
To forego meta data and store primitives instead, pass `primitives` to
`dump(s)`. This is available in version `3.8` and later. Example:
``` python
from datetime import datetime
from decimal import Decimal
from fractions import Fraction
from numpy import arange

data = [
    arange(0, 10, 1, dtype=int).reshape((2, 5)),
    datetime(year=2017, month=1, day=19, hour=23, minute=0, second=0),
    1 + 2j,
    Decimal(42),
    Fraction(1, 3),
    MyTestCls(s='ub', dct={'7': 7}),  # see above
    set(range(7)),
]
# Encode with metadata to preserve types when decoding
print(dumps(data))
```
``` javascript
// (comments added and indenting changed)
[
    // numpy array
    {
        "__ndarray__": [
            [0, 1, 2, 3, 4],
            [5, 6, 7, 8, 9]],
        "dtype": "int64",
        "shape": [2, 5],
        "Corder": true
    },
    // datetime (naive)
    {
        "__datetime__": null,
        "year": 2017,
        "month": 1,
        "day": 19,
        "hour": 23
    },
    // complex number
    {
        "__complex__": [1.0, 2.0]
    },
    // decimal & fraction
    {
        "__decimal__": "42"
    },
    {
        "__fraction__": true,
        "numerator": 1,
        "denominator": 3
    },
    // class instance
    {
        "__instance_type__": [
            "tests.test_class",
            "MyTestCls"
        ],
        "attributes": {
            "s": "ub",
            "dct": {"7": 7}
        }
    },
    // set
    {
        "__set__": [0, 1, 2, 3, 4, 5, 6]
    }
]
```
``` python
# Encode as primitive types; more simple but loses type information
print(dumps(data, primitives=True))
```
``` javascript
// (comments added and indentation changed)
[
    // numpy array
    [[0, 1, 2, 3, 4],
     [5, 6, 7, 8, 9]],
    // datetime (naive)
    "2017-01-19T23:00:00",
    // complex number
    [1.0, 2.0],
    // decimal & fraction
    42.0,
    0.3333333333333333,
    // class instance
    {
        "s": "ub",
        "dct": {"7": 7}
    },
    // set
    [0, 1, 2, 3, 4, 5, 6]
]
```
Note that valid json is produced either way: `ro_json` stores meta
data as normal json, but other packages probably won't interpret it.
# Usage & contributions
Code is under [Revised BSD License](LICENSE.txt)
so you can use it for most purposes including commercially.
Contributions are very welcome! Bug reports, feature suggestions and
code contributions help this project become more useful for everyone!
There is a short [contribution
guide](CONTRIBUTING.md).
Contributors not yet mentioned: `janLo` (performance boost).
# Tests
Tests are run automatically for commits to the repository, for all
supported versions.
To run the tests manually for your version, see [this guide](tests/run_locally.md).