sharedbuffers

Name: sharedbuffers
Version: 0.4.5
Home page: https://github.com/jampp/sharedbuffers/
Summary: Shared-memory structured buffers
Upload time: 2017-10-12 15:18:59
Author: Claudio Freire
License: BSD 3-Clause
Requirements: lz4, xxhash, numpy, Cython (>=0.22), chorde

sharedbuffers
=============

This library implements shared-memory typed buffers that can be read and manipulated (and we'll eventually 
support writes too) efficiently without serialization or deserialization.

The main way of obtaining shared memory is by memory-mapping files, but the library also supports mapping
buffers (anonymous mmap objects), although those are harder to share among processes.

Currently, most primitive types and collections are supported, except `dicts`.

Supported primitive types:

    * int (up to 64 bit precision)
    * str (bytes)
    * unicode
    * frozenset
    * tuple / list

Primitive types are cloned into their actual builtin objects when accessed. Although fast, this does imply that containers
will take up a lot of process-local memory when accessed. Support for collection proxies that take the place of
builtin containers is planned for a future release.

Objects can be registered with schema serializers, so composite types can be mapped as well. For this to function
properly, objects need a class attribute specifying the attributes they hold and the types of those attributes. When an
attribute doesn't have a clearly defined type, it can be wrapped in an RTTI-containing container by specifying it as
type `object`.

For example:

.. code:: python

    class SomeStruct(object):
        __slot_types__ = {
            'a' : int,
            'b' : float,
            's' : str,
            'u' : unicode,
            'fset' : frozenset,
            'l' : list,
            'o' : object,
        }
        __slots__ = __slot_types__.keys()

Adding `__slot_types__`, however, isn't enough to make the object mappable. A schema definition needs to be created,
which can be used to map files or buffers and obtain proxies to the information within:

.. code:: python

    class SomeStruct(object):
        __slot_types__ = {
            'a' : int,
            'b' : float,
            's' : str,
            'u' : unicode,
            'fset' : frozenset,
            'l' : list,
            'o' : object,
        }
        __slots__ = __slot_types__.keys()
        __schema__ = mapped_struct.Schema.from_typed_slots(__slot_types__)

Using the schema is thus straightforward:

.. code:: python

    s = SomeStruct()
    s.a = 3
    s.s = 'blah'
    s.fset = frozenset([1,3])
    s.o = 3
    s.__schema__.pack(s) # returns a bytearray

    buf = bytearray(1000)
    s.__schema__.pack_into(s, buf, 10) # writes in offset 10 of buf, returns the size of the written object
    p = s.__schema__.unpack_from(buf, 10) # returns a proxy for the object just packed into buf, does not deserialize
    print p.a
    print p.s
    print p.fset

Typed objects can be nested, but for that a typecode must be assigned to each type in order for RTTI to properly
identify the custom types:

.. code:: python

    SomeStruct.__mapped_type__ = mapped_struct.mapped_object.register_schema(
        SomeStruct, SomeStruct.__schema__, 'S')

From then on, `SomeStruct` can be used as any other type when declaring field types.
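
For illustration, a minimal sketch of such a nested declaration, continuing the running example above. The
`ContainerStruct` name and its fields are hypothetical, not part of the library; they simply follow the same
pattern as `SomeStruct`:

.. code:: python

    class ContainerStruct(object):
        # Hypothetical nested struct: 'inner' is typed as SomeStruct,
        # which was registered above under typecode 'S'.
        __slot_types__ = {
            'name' : str,
            'inner' : SomeStruct,
        }
        __slots__ = __slot_types__.keys()
        __schema__ = mapped_struct.Schema.from_typed_slots(__slot_types__)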

High-level typed container classes can be created by inheriting the proper base class. Currently, there are only
arrays and two kinds of mappings: string-to-object and uint-to-object.

.. code:: python

    class StructArray(mapped_struct.MappedArrayProxyBase):
        schema = SomeStruct.__schema__
    class StructNameMapping(mapped_struct.MappedMappingProxyBase):
        IdMapper = mapped_struct.StringIdMapper
        ValueArray = StructArray
    class StructIdMapping(mapped_struct.MappedMappingProxyBase):
        IdMapper = mapped_struct.NumericIdMapper
        ValueArray = StructArray

The API for these high-level container objects is aimed at collections that don't really fit in RAM in their
pure-python form, so they must be built using an iterator over the items (ideally a generator that doesn't
put the whole collection in memory at once), and then mapped from the resulting file or buffer. An example:

.. code:: python

    with tempfile.NamedTemporaryFile() as destfile:
        arr = StructArray.build([SomeStruct(), SomeStruct()], destfile=destfile)
        print arr[0]

    with tempfile.NamedTemporaryFile() as destfile:
        arr = StructNameMapping.build(dict(a=SomeStruct(), b=SomeStruct()).iteritems(), destfile=destfile)
        print arr['a']

    with tempfile.NamedTemporaryFile() as destfile:
        arr = StructIdMapping.build({1:SomeStruct(), 3:SomeStruct()}.iteritems(), destfile=destfile)
        print arr[3]
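
The examples above read back the proxy returned by `build`; the prose also mentions mapping the resulting file
afterwards. A minimal sketch of that step, continuing the running example, assuming this version exposes a
`map_file` classmethod on the proxy base classes (verify against the installed `mapped_struct` API) and using
`structs.bin` as an illustrative file name:

.. code:: python

    # Build into a regular file, then map it back; mapping memory-maps the
    # file instead of deserializing it, so other processes can map it too.
    with open('structs.bin', 'w+b') as destfile:
        StructArray.build([SomeStruct(), SomeStruct()], destfile=destfile)
        destfile.flush()

        arr = StructArray.map_file(destfile)
        print arr[0]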

When using nested hierarchies, it's possible to unify references to the same object by specifying an idmap dict.
However, since the idmap will map objects by their `id()`, objects must be kept alive by holding references to
them while they're still referenced in the idmap, so its usage is non-trivial. An example technique:

.. code:: python

    def all_structs(idmap):
        iter_all = iter(some_generator)
        while True:
            idmap.clear()
    
            sstructs = list(itertools.islice(iter_all, 10000))
            if not sstructs:
                break
    
            for ss in sstructs:
                # mapping from "s" attribute to struct
                yield (ss.s, ss)
            del sstructs
    
    idmap = {}
    name_mapping = StructNameMapping.build(all_structs(idmap), 
        destfile = destfile, idmap = idmap)

The above code syncs the lifetime of the objects and their idmap entries to avoid mapping issues. If the invariant
isn't maintained (that objects referenced in the idmap stay alive, keeping their `id()` values unique), the result will be
silent corruption of the resulting mapping due to object identity mix-ups.

There are variants of the mapping proxy classes and their associated id mapper classes that implement multi-maps.
That is, mappings that, when fed multiple values for a key, will return a list of values for that key rather
than a single value. Their in-memory representation is identical, but their querying API returns all matching values
rather than just the first one, so multi-maps and simple mappings are binary compatible.

Multi-maps with string keys can also be approximate, meaning the original keys are discarded and the mapping works
only with hashes, making the map much faster and more compact, at the expense of some inaccuracy: the returned values
may include extra values corresponding to other keys whose hashes collide with the one being requested.
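
As a sketch of how such a multi-map class might be declared, mirroring the mapping classes above. The
`MappedMultiMappingProxyBase` and `StringIdMultiMapper` names are assumptions about this version's API, not
confirmed by this README; check `mapped_struct` for the actual multi-map base class and id mapper names:

.. code:: python

    # Assumed class names; consult mapped_struct for the exact ones.
    class StructNameMultiMapping(mapped_struct.MappedMultiMappingProxyBase):
        IdMapper = mapped_struct.StringIdMultiMapper
        ValueArray = StructArray

    # Querying returns every value stored under the key, not just the first:
    #   multi_mapping['a']  ->  all SomeStruct proxies added under 'a'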

Tests
=====

Tests can be run locally or on Docker, using the script `run-tests.sh`:

.. code:: shell

  $> virtualenv venv
  $> . venv/bin/activate
  $> sh ./run-tests.sh


Alternatively, the tests can be run on Docker with the following command:

.. code:: shell

  $> docker run -v ${PWD}:/opt/sharedbuffers -w /opt/sharedbuffers python:2.7 /bin/sh run-tests.sh


            
