Name | partd
Version | 1.4.2
Summary | Appendable key-value storage
home_page | None
upload_time | 2024-05-06 19:51:41
maintainer | None
docs_url | None
author | None
requires_python | >=3.9
license | BSD
keywords | None
requirements | locket, toolz
Travis-CI | No Travis.
coveralls test coverage | No coveralls.
PartD
=====
|Build Status| |Version Status|
Key-value byte store with appendable values
    Partd stores key-value pairs.
    Values are raw bytes.
    New writes append to existing values.
Partd excels at shuffling operations.
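To illustrate why appendable values suit shuffles, here is a minimal pure-Python sketch (using a plain dict as a stand-in rather than partd itself): records streaming in are routed to partition keys, and each append concatenates onto whatever bytes the key already holds.

```python
from collections import defaultdict

# Minimal stand-in for an appendable byte store: append concatenates
# onto existing values instead of overwriting them (partd's core idea).
store = defaultdict(bytes)

def append(data):
    for key, value in data.items():
        store[key] += value

# A shuffle routes incoming records to partitions; many small appends
# per partition accumulate into one contiguous byte string per key.
records = [(0, b'a'), (1, b'b'), (0, b'c'), (1, b'd'), (0, b'e')]
for partition, payload in records:
    append({f'partition-{partition}': payload})

print(store['partition-0'])  # b'ace'
print(store['partition-1'])  # b'bd'
```

The hypothetical ``partition-N`` key names are purely illustrative; partd itself imposes no key scheme.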
Operations
----------
PartD has two main operations, ``append`` and ``get``.
Example
-------
1. Create a Partd backed by a directory::

       >>> import partd
       >>> p = partd.File('/path/to/new/dataset/')

2. Append key-byte pairs to the dataset::

       >>> p.append({'x': b'Hello ', 'y': b'123'})
       >>> p.append({'x': b'world!', 'y': b'456'})

3. Get the bytes associated with keys::

       >>> p.get('x')          # One key
       b'Hello world!'

       >>> p.get(['y', 'x'])   # List of keys
       [b'123456', b'Hello world!']

4. Destroy the partd dataset::

       >>> p.drop()
That's it.
Implementations
---------------
We can back a partd with an in-memory dictionary::

    >>> p = Dict()

For larger amounts of data, or to share data between processes, we back a partd
with a directory of files. This uses file-based locks for consistency::

    >>> p = File('/path/to/dataset/')
However, this can fail under many small writes. In these cases you may wish to
buffer one partd with another, keeping a fixed maximum of data in the buffering
partd. This writes the larger elements of the first partd to the second partd
when space runs low::

    >>> p = Buffer(Dict(), File(), available_memory=2e9)  # 2GB memory buffer
You might also want to have many distributed processes write to a single partd
consistently. This can be done with a server:

* Server process::

    >>> p = Buffer(Dict(), File(), available_memory=2e9)  # 2GB memory buffer
    >>> s = Server(p, address='ipc://server')

* Worker processes::

    >>> p = Client('ipc://server')  # Client machine talks to remote server
Encodings and Compression
-------------------------
Once we can robustly and efficiently append bytes to a partd, we consider
compression and encodings. This is generally available with the ``Encode``
partd, which accepts three functions: one to apply to bytes as they are
written, one to apply to bytes as they are read, and one to join bytestreams.
Common configurations already exist for common data and compression formats.
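To make the three-function pattern concrete, here is a self-contained sketch that mimics an ``Encode``-style wrapper over a plain dict. This is an illustration of the idea, not partd's actual ``Encode`` class, whose constructor signature may differ.

```python
import zlib

class EncodeSketch:
    """Illustrative stand-in for an Encode-style partd wrapper:
    encode on write, decode on read, join to merge the pieces
    that successive appends produced."""

    def __init__(self, encode, decode, join):
        self.encode = encode
        self.decode = decode
        self.join = join
        self.store = {}  # key -> list of encoded frames

    def append(self, data):
        for key, value in data.items():
            self.store.setdefault(key, []).append(self.encode(value))

    def get(self, key):
        frames = [self.decode(frame) for frame in self.store.get(key, [])]
        return self.join(frames)

# A ZLib-like configuration: compress each write, decompress each
# frame on read, and concatenate the decompressed bytestreams.
p = EncodeSketch(zlib.compress, zlib.decompress, b''.join)
p.append({'x': b'Hello '})
p.append({'x': b'world!'})
print(p.get('x'))  # b'Hello world!'
```

Each append stores one encoded frame, so the join function is what reassembles a key's value; for compression, plain byte concatenation of the decoded frames suffices.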
We may wish to compress and decompress data transparently as we interact with a
partd. Objects like ``BZ2``, ``Blosc``, ``ZLib``, and ``Snappy`` exist and take
another partd as an argument::

    >>> p = File(...)
    >>> p = ZLib(p)
These work exactly as before; compression and decompression happen automatically.
Common data formats like Python lists, NumPy arrays, and Pandas
dataframes are also supported out of the box::

    >>> p = File(...)
    >>> p = NumPy(p)
    >>> p.append({'x': np.array([...])})
This lets us forget about bytes and think instead in our normal data types.
Composition
-----------
In principle we want to compose all of these choices together:
1. Write policy: ``Dict``, ``File``, ``Buffer``, ``Client``
2. Encoding: ``Pickle``, ``Numpy``, ``Pandas``, ...
3. Compression: ``Blosc``, ``Snappy``, ...
Partd objects compose by nesting. Here we make a partd that writes
pickle-encoded, BZ2-compressed bytes directly to disk::

    >>> p = Pickle(BZ2(File('foo')))
We could construct more complex systems that include compression,
serialization, buffering, and remote access::

    >>> server = Server(Buffer(Dict(), File(), available_memory=2e9))

    >>> client = Pickle(Snappy(Client(server.address)))
    >>> client.append({'x': [1, 2, 3]})
.. |Build Status| image:: https://github.com/dask/partd/workflows/CI/badge.svg
   :target: https://github.com/dask/partd/actions?query=workflow%3ACI
.. |Version Status| image:: https://img.shields.io/pypi/v/partd.svg
   :target: https://pypi.python.org/pypi/partd/