Booklet
==================================

Introduction
------------
Booklet is a pure Python key-value file database. It allows multiple serializers for both keys and values. Booklet implements the `MutableMapping <https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes>`_ API, the same interface as Python's dict, plus some `dbm <https://docs.python.org/3/library/dbm.html>`_ methods (e.g. sync and prune).
It is thread-safe on writes (using thread locks) and multiprocessing-safe (using file locks). Reads are not thread-safe.

When an error occurs (e.g. trying to access a key that doesn't exist), booklet will properly close the file and remove the file locks. This will not sync any changes, so the user will lose any changes that were not synced. There are still circumstances that can prevent the file from closing properly, so care still needs to be taken.

Installation
------------
Install via pip::

  pip install booklet

Or conda::

  conda install -c mullenkamp booklet


I'll probably put it on conda-forge once I feel like it's up to an appropriate standard...


Serialization
-----------------------------
Both the keys and values stored in booklet must be bytes when written to disk, and bytes are the default when "open" is called. Booklet allows various serializers to be used to convert input keys and values to bytes. There are many built-in serializers; check the booklet.available_serializers list for what's available. Some serializers require additional packages to be installed (e.g. orjson, zstd). If you want to serialize to JSON, it is highly recommended to use orjson or msgpack, as they are substantially faster than the standard json module.

If built-in serializers are assigned at initial file creation, they are saved to the file and applied on future reading and writing (i.e. they don't need to be passed after the first time). Setting a serializer to None will not do any serializing, and the input must be bytes.
The user can also pass custom serializers to the key_serializer and value_serializer parameters. These must have "dumps" and "loads" static methods. This allows the user to chain a serializer and a compressor together if desired. Custom serializers must be passed on every open, for both writing and reading, as they are not stored in the booklet file.

.. code:: python

  import booklet

  print(booklet.available_serializers)
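
For example, with the serializers left as None (the default, as noted above), keys and values must already be bytes. A minimal sketch:

.. code:: python

  import booklet

  # no serializers assigned: booklet stores the raw bytes as-is
  with booklet.open('raw.blt', 'n') as db:
    db[b'raw_key'] = b'raw_value'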


Usage
-----
The docstrings have a lot of info about the classes and methods. Files should be opened with the booklet.open function; read its docstring for more details.

Write data using the context manager
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: python

  import booklet

  with booklet.open('test.blt', 'n', value_serializer='pickle', key_serializer='str', n_buckets=12007) as db:
    db['test_key'] = ['one', 2, 'three', 4]


Read data using the context manager
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: python

  with booklet.open('test.blt', 'r') as db:
    test_data = db['test_key']

Notice that you don't need to pass serializer parameters when reading (or when writing again) if built-in serializers were used; booklet stores this info at initial file creation.

In most cases, the user should use Python's "with" context manager when reading and writing data. This ensures data is properly written and file locks are released. If the context manager is not used, the user must run db.sync() (or db.close()) at the end of a series of writes to ensure the data has been fully written to disk. Only after the writes have been synced can additional reads occur. Make sure you close your file or you'll run into file deadlocks!

Write data without using the context manager
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: python

  import booklet

  db = booklet.open('test.blt', 'n', value_serializer='pickle', key_serializer='str')

  db['test_key'] = ['one', 2, 'three', 4]
  db['2nd_test_key'] = ['five', 6, 'seven', 8]

  db.sync()  # Normally not necessary if the user closes the file after writing
  db.close() # Will also run sync as part of the closing process


Read data without using the context manager
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: python

  db = booklet.open('test.blt') # 'r' is the default flag

  test_data1 = db['test_key']
  test_data2 = db['2nd_test_key']

  db.close()


Prune deleted items
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a key/value is "deleted", it's actually just flagged internally as deleted, and the item is ignored on subsequent requests. The same happens to keys that get reassigned. To remove these deleted items from the file completely, the user can run the "prune" method. This should only be performed after a large number of deletes/overwrites, as prune can be computationally intensive. There is no performance improvement from removing these items from the file; it's purely to regain space.

.. code:: python

  with booklet.open('test.blt', 'w') as db:
    del db['test_key']
    db.sync()
    db.prune()


File metadata
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The user can assign overall metadata to the file as a JSON-serializable object (i.e. dict or list) using the set_metadata and get_metadata methods. The metadata is independent of all of the key/value pairs assigned in the normal way and won't be returned by any other methods. If metadata has not been assigned, the get_metadata method returns None.

.. code:: python

  with booklet.open('test.blt', 'w') as db:
    db.set_metadata({'meta_key1': 'This is stored as metadata'})
    meta = db.get_metadata()


Item timestamps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Timestamps associated with each assigned item have been implemented, but they must be turned on at file initialization (off by default). Timestamps are stored and returned as an int of the number of microseconds in POSIX UTC time. There are new methods to set and get the timestamps. It's quite new... so I won't supply more info until it's further tested.


Custom serializers
~~~~~~~~~~~~~~~~~~
.. code:: python

import booklet
  import orjson

  class Orjson:
    @staticmethod
    def dumps(obj):
        return orjson.dumps(obj, option=orjson.OPT_NON_STR_KEYS | orjson.OPT_OMIT_MICROSECONDS | orjson.OPT_SERIALIZE_NUMPY)
    @staticmethod
    def loads(obj):
        return orjson.loads(obj)

  with booklet.open('test.blt', 'n', value_serializer=Orjson, key_serializer='str') as db:
    db['test_key'] = ['one', 2, 'three', 4]


The Orjson class is actually already built into the package. You can pass the string 'orjson' to either serializer parameter to use the above serializer. This is just an example of a custom serializer.
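
For instance, the built-in version of the example above can be selected by name:

.. code:: python

  with booklet.open('test.blt', 'n', value_serializer='orjson', key_serializer='str') as db:
    db['test_key'] = ['one', 2, 'three', 4]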

Here's another example with compression.

.. code:: python

import booklet
  import orjson
  import zstandard as zstd

  class OrjsonZstd:
    @staticmethod
    def dumps(obj):
        return zstd.compress(orjson.dumps(obj, option=orjson.OPT_NON_STR_KEYS | orjson.OPT_OMIT_MICROSECONDS | orjson.OPT_SERIALIZE_NUMPY))
    @staticmethod
    def loads(obj):
        return orjson.loads(zstd.decompress(obj))

  with booklet.open('test.blt', 'n', value_serializer=OrjsonZstd, key_serializer='str') as db:
    db['big_test'] = list(range(1000000))

  with booklet.open('test.blt', 'r', value_serializer=OrjsonZstd) as db:
    big_test_data = db['big_test']

If you use a custom serializer, then you'll always need to pass it to booklet.open for additional reading and writing.


The open flag follows the standard dbm options:

+---------+-------------------------------------------+
| Value   | Meaning                                   |
+=========+===========================================+
| ``'r'`` | Open existing database for reading only   |
|         | (default)                                 |
+---------+-------------------------------------------+
| ``'w'`` | Open existing database for reading and    |
|         | writing                                   |
+---------+-------------------------------------------+
| ``'c'`` | Open database for reading and writing,    |
|         | creating it if it doesn't exist           |
+---------+-------------------------------------------+
| ``'n'`` | Always create a new, empty database, open |
|         | for reading and writing                   |
+---------+-------------------------------------------+

Design
-------
VariableValue (default)
~~~~~~~~~~~~~~~~~~~~~~~~
A booklet file contains some initial bytes for parameters (the sub index) plus two groups: the bucket index group and the data block group. The sub index is 200 bytes long, but currently only 37 bytes are used.

The bucket index group contains the "hash table". It holds a fixed number of buckets (n_buckets), and each bucket contains a 6-byte integer giving the position of the first data block associated with that bucket. When the user requests a value for a key, the key is hashed and the modulus by n_buckets determines which bucket to read. The 6 bytes are read from that bucket and converted to an integer, which tells booklet where the first data block is located in the file.

The data block group contains all of the data blocks, each of which contains the key hash, next data block pos, key length, value length, timestamp (if initialized with timestamps), key, and value (in this order).
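
As a rough sketch of the lookup arithmetic described above (the hash function, byte order, and offsets here are illustrative assumptions, not booklet's actual internals):

.. code:: python

  import hashlib

  SUB_INDEX_LEN = 200  # parameter bytes at the start of the file
  BUCKET_LEN = 6       # each bucket stores a 6-byte data block position

  def first_data_block_pos(f, key_bytes, n_buckets):
    # blake2b is a stand-in for booklet's (unspecified) key hash
    key_hash = hashlib.blake2b(key_bytes, digest_size=13).digest()
    bucket = int.from_bytes(key_hash, 'little') % n_buckets
    f.seek(SUB_INDEX_LEN + bucket * BUCKET_LEN)
    return int.from_bytes(f.read(BUCKET_LEN), 'little')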

The number of bytes per data block field:

- key hash: 13
- next data block pos: 6
- key length: 2
- value length: 4
- timestamp: either 0 (if initialized without timestamps) or 7
- key: variable
- value: variable
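
A sketch of how a data block with these field sizes could be packed (byte order and encoding are assumptions for illustration):

.. code:: python

  def pack_data_block(key_hash, next_pos, key, value, timestamp=None):
    # field sizes follow the list above: 13 + 6 + 2 + 4 (+ 7) bytes
    parts = [
        key_hash,                          # 13-byte key hash
        next_pos.to_bytes(6, 'little'),    # next data block pos
        len(key).to_bytes(2, 'little'),    # key length
        len(value).to_bytes(4, 'little'),  # value length
    ]
    if timestamp is not None:
        parts.append(timestamp.to_bytes(7, 'little'))  # microseconds, POSIX UTC
    return b''.join(parts) + key + value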

When the first data block pos has been determined through the initial key hashing and bucket reading, the first 19 bytes (key hash and next data block pos) of the data block are read. Booklet then checks the next data block pos (ndbp). If the ndbp is 0, then the block has been assigned the delete flag and is ignored. Otherwise, the key hash from the data block is compared to the key hash of the input. If they are the same, then this is the data block we want. If they are different, then we look again at the ndbp: if it is 1, then this is the last data block associated with that key hash and the input key doesn't exist; if it is > 1, then we move to the next data block at that position and repeat the cycle until we either hit a dead end or find a matching key hash.

When we find the matching key hash, booklet reads 6 bytes (key length and value length) to determine how many bytes need to be read to get the key/value (since they are variable). Depending on whether the user wants the key, value, and/or timestamp, booklet will read 7 bytes (the timestamp length) plus the number of bytes for the key and value.
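
Putting the steps together, the chain walk might look roughly like this (a sketch assuming timestamps are off; ndbp semantics follow the text above):

.. code:: python

  def find_value(f, pos, target_hash):
    while pos > 1:
        f.seek(pos)
        block_hash = f.read(13)                     # key hash
        ndbp = int.from_bytes(f.read(6), 'little')  # next data block pos
        if ndbp == 0:                               # delete flag: block is ignored
            return None
        if block_hash == target_hash:
            key_len = int.from_bytes(f.read(2), 'little')
            value_len = int.from_bytes(f.read(4), 'little')
            f.seek(key_len, 1)                      # skip over the key
            return f.read(value_len)
        if ndbp == 1:                               # dead end: key doesn't exist
            return None
        pos = ndbp
    return None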

A delete sets the block's ndbp to 0 and assigns its original ndbp to the prior data block in the chain. This essentially just removes the data block from the key hash data block chain.
A delete also happens when a user "overwrites" an existing key.

A "prune" method has been created that allows the user to remove "deleted" items. It has two optional parameters. If timestamps have been initialized in booklet, then the user can pass a timestamp that will remove all items older than that timestamp. The reindexing option allows the user to increase the n_buckets when the number items greatly exceeds the initialized n_buckets. The implementation essentially just clears the original index then iterates through all data blocks and rewrites only the data blocks that haven't been deleted. In the case of the reindexing, it determines the difference between the old index size and the new index size, expands the file by that difference, moves all of the data blocks to the end of the file, and then writes the newer (and longer) index to the file. Then it continues with the normal pruning procedure. 

FixedValue
~~~~~~~~~~~
The main difference from VariableValue is that the value length is globally fixed. The data block in a FixedValue object does not contain the value length, as the value will always be the same global length. The main advantage is that overwrites of the same key can be written back to the same location in the file instead of always being appended to the end. If a use case involves many overwrites and the values are always the same size, the FixedValue object is ideal.

There are currently no timestamps in the FixedValue. This could be enabled in the future.

Limitations
-----------
The main limitation is that booklet does not have automatic reindexing (increasing n_buckets). In the current design, reindexing is computationally intensive when the file is large. The user should assign an appropriate n_buckets at initialization: approximately the expected number of key/value pairs. The default is 12007. The "prune" method now has a reindexing option that allows the user to deliberately update/increase the index.
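
A minimal sketch of sizing n_buckets up front (the specific numbers are just examples):

.. code:: python

  import booklet

  expected_items = 100_000

  # size the hash table to roughly the expected number of items,
  # since growing it later (via prune's reindexing) is expensive
  with booklet.open('big.blt', 'n', key_serializer='str',
                    value_serializer='orjson',
                    n_buckets=expected_items) as db:
    for i in range(expected_items):
        db[f'key_{i}'] = {'n': i}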

Benchmarks
-----------
From my initial tests, the performance is comparable to other very fast key-value databases (e.g. gdbm, lmdb) and faster than sqlitedict.

