avro-gen3
=========

Name: avro-gen3
Version: 0.7.12
Home page: https://github.com/acryldata/avro_gen
Summary: Avro record class and specific record reader generator
Author: Harshal Sheth
License: OSI Approved :: Apache Software License
Keywords: avro, class, generator
Upload time: 2024-03-01 20:53:03
AVRO-GEN
========

[![Build Status](https://travis-ci.org/rbystrit/avro_gen.svg?branch=master)](https://travis-ci.org/rbystrit/avro_gen)
[![codecov](https://codecov.io/gh/rbystrit/avro_gen/branch/master/graph/badge.svg)](https://codecov.io/gh/rbystrit/avro_gen)
##### Avro record class and specific record reader generator.

The current Avro implementation in Python is completely typeless and operates on dicts.
While in many cases this is convenient and Pythonic, not being able to discover the schema
by looking at the code, not having the schema enforced during record construction, and not
getting any context help from the IDE can hamper developer productivity and introduce bugs.

This project aims to rectify the situation by providing a generator that constructs concrete
record classes, plus a reader that wraps the Avro DatumReader and returns concrete classes
instead of dicts. So as not to interfere with Avro internals, this functionality is built strictly
on top of the DatumReader, and all the specific record classes are dict wrappers that define
accessor properties with proper type hints for each field in the schema. For this reason the
generator does not provide an overloaded DictWriter; each specific record appears to be just a
regular dictionary.

This is a fork of [https://github.com/rbystrit/avro_gen](https://github.com/rbystrit/avro_gen).
It adds better Python 3 support, including types, better namespace handling, support for
documentation generation, and JSON (de-)serialization.

```sh
pip install avro-gen3
```
 
##### Usage:
    from avrogen import write_schema_files

    schema_json = "....."
    output_directory = "....."

    write_schema_files(schema_json, output_directory)
    
The generator will create the output directory if it does not exist and put the generated
files there. The generated files will be:

>  OUTPUT_DIR
>  + \_\_init\_\_.py   
>  + schema_classes.py 
>  + submodules*
 
In order to deal with Avro namespaces, and because Python doesn't support circular imports, the
generator will emit all records into schema_classes.py as nested classes. The top-level class
there will be SchemaClasses, whose children will be classes representing namespaces. Each
namespace class will in turn contain classes for the records belonging to that namespace.

Consider the following schema:
 
    {"type": "record", "name": "tweet", "namespace": "com.twitter.avro", "fields": [{"name": "ID", "type": "long"}]}
 
 Then schema_classes.py would contain:
 
    class SchemaClasses(object):
        class com(object):
            class twitter(object):
                class avro(object):
                    class tweetClass(DictWrapper):
                        def __init__(self, inner_dict=None):
                            ....
                        @property
                        def ID(self):
                            """
                            :rtype: long
                            """
                            return self._inner_dict.get('ID', None)

                        @ID.setter
                        def ID(self, value):
                            """
                            :param long value:
                            """
                            self._inner_dict['ID'] = value
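
The wrapper pattern above can be sketched in plain Python, independent of the generated code. The `DictWrapper` here is a minimal stand-in for avrogen's base class, not its actual implementation:

```python
class DictWrapper(object):
    # Minimal stand-in for avrogen's DictWrapper base class (illustrative only).
    def __init__(self, inner_dict=None):
        self._inner_dict = inner_dict if inner_dict is not None else {}


class tweetClass(DictWrapper):
    @property
    def ID(self):
        """
        :rtype: long
        """
        return self._inner_dict.get('ID', None)

    @ID.setter
    def ID(self, value):
        """
        :param long value:
        """
        self._inner_dict['ID'] = value


t = tweetClass()
t.ID = 1
print(t._inner_dict)  # the record is still just a dict underneath: {'ID': 1}
```

This is why no special writer is needed: the accessor properties only read and write the inner dict, so the record can be handed to a plain DatumWriter as-is.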
    
In order to map specific record types and namespaces to modules, so that proper importing can
be supported, the generator will create a sub-module under the output directory for each namespace,
which will export the names of all types contained in that namespace. Types declared with an empty
namespace will be exported from the root module.
 
So for the example above, the output directory will look as follows:
 
 >  OUTPUT_DIR
 >  + \_\_init\_\_.py
 >  + schema_classes.py
 >  + com
 >   + twitter
 >     + avro
 >       + \_\_init\_\_.py  

The contents of OUTPUT_DIR/com/twitter/avro/\_\_init\_\_.py will be:
    
    from ....schema_classes import SchemaClasses
    tweet = SchemaClasses.com.twitter.avro.tweetClass
    
So in your code you will be able to say:
    
    from OUTPUT_DIR.com.twitter.avro import tweet
    from OUTPUT_DIR import SpecificDatumReader as TweetReader, SCHEMA as your_schema
    from avro import datafile, io
    my_tweet = tweet()
    
    my_tweet.ID = 1
    with open('somefile', 'w+b') as f:
        writer = datafile.DataFileWriter(f, io.DatumWriter(), your_schema)
        writer.append(my_tweet)
        writer.close()
    
    with open('somefile', 'rb') as f:
        reader = datafile.DataFileReader(f, TweetReader(readers_schema=your_schema))
        my_tweet1 = next(reader)
        reader.close()
        
       
### Avro protocol support

Avro protocol support is implemented the same way as schema support. To generate classes 
for a protocol:

    protocol_json = "....."
    output_directory = "....."
    from avrogen import write_protocol_files
    
    write_protocol_files(protocol_json, output_directory)
    
The structure of the generated code will be exactly the same as for schemas, but in addition to
the regular types, *Request types will be generated in the root namespace of the protocol, one
for each message defined.

### Logical types support

Avrogen implements logical types on top of the standard avro package and supports generating
classes typed accordingly. To enable logical type support, pass **use_logical_types=True** to the
schema and protocol generators. If you implement custom logical types that map to types other
than simple types, datetime.*, or decimal.*, then pass the **custom_imports** parameter to the
generator functions so that your types are imported. The types implemented out of the box are:

- decimal (using string representation only)
- date
- time-millis
- time-micros
- timestamp-millis
- timestamp-micros
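
For example, the decimal logical type is handled purely through its string representation, so values round-trip through str() rather than a binary encoding. The snippet below is a simplified illustration of that idea, not avrogen's exact code path:

```python
import decimal

# A decimal value is serialized as its string form...
original = decimal.Decimal("1234.5678")
encoded = str(original)  # '1234.5678'

# ...and parsed back from that string on read, losing no precision.
decoded = decimal.Decimal(encoded)
assert decoded == original
```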

To register your custom logical type, inherit from avrogen.logical.LogicalTypeProcessor, implement
its abstract methods, and add an instance to the avrogen.logical.DEFAULT_LOGICAL_TYPES dictionary
under the name of your logical type. A sample implementation looks as follows:

    import datetime

    from avro import schema
    from avrogen.logical import LogicalTypeProcessor

    # Assumed constants: the Unix epoch and the number of seconds in a day.
    EPOCH_DATE = datetime.date(1970, 1, 1)
    SECONDS_IN_DAY = 24 * 60 * 60

    class DateLogicalTypeProcessor(LogicalTypeProcessor):
        _matching_types = {'int', 'long', 'float', 'double'}

        def can_convert(self, writers_schema):
            return isinstance(writers_schema, schema.PrimitiveSchema) and writers_schema.type == 'int'

        def validate(self, expected_schema, datum):
            return isinstance(datum, datetime.date)

        def convert(self, writers_schema, value):
            # Encode a date as the number of days since the Unix epoch.
            if not isinstance(value, datetime.date):
                raise Exception("Wrong type for date conversion")
            return (value - EPOCH_DATE).total_seconds() // SECONDS_IN_DAY

        def convert_back(self, writers_schema, readers_schema, value):
            # Decode days-since-epoch back into a datetime.date.
            return EPOCH_DATE + datetime.timedelta(days=int(value))

        def does_match(self, writers_schema, readers_schema):
            if isinstance(writers_schema, schema.PrimitiveSchema):
                if writers_schema.type in DateLogicalTypeProcessor._matching_types:
                    return True
            return False

        def typename(self):
            return 'datetime.date'

        def initializer(self, value=None):
            return (('logical.DateLogicalTypeProcessor().convert_back(None, None, %s)' % value)
                    if value is not None
                    else 'datetime.datetime.today().date()')
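
The convert/convert_back arithmetic can be checked in plain Python, assuming EPOCH_DATE is the Unix epoch and SECONDS_IN_DAY is 86400, as in the sample above:

```python
import datetime

EPOCH_DATE = datetime.date(1970, 1, 1)
SECONDS_IN_DAY = 24 * 60 * 60

def convert(value):
    # date -> days since epoch, mirroring DateLogicalTypeProcessor.convert
    return (value - EPOCH_DATE).total_seconds() // SECONDS_IN_DAY

def convert_back(value):
    # days since epoch -> date, mirroring convert_back
    return EPOCH_DATE + datetime.timedelta(days=int(value))

d = datetime.date(2024, 3, 1)
assert convert_back(convert(d)) == d   # the pair round-trips exactly
assert convert(EPOCH_DATE) == 0        # the epoch itself maps to day 0
```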


To read and write data with logical type support, use the generated SpecificDatumReader
and the LogicalDatumWriter from avrogen.logical.
 



    

            
