abnosql

Name: abnosql
Version: 0.0.23
Summary: NoSQL Abstraction Library
Home page: https://github.com/rog555/abnosql
Author / Maintainer: Roger Foskett
License: MIT
Requires Python: <4.0,>=3.9
Keywords: nosql, azure cosmos, aws dynamodb
Upload time: 2024-04-26 22:50:32

# NoSQL Abstraction Library

Basic CRUD and query support for NoSQL databases, allowing for portable cloud native applications

- AWS DynamoDB <img height="15" width="15" src="https://unpkg.com/simple-icons@v9/icons/amazondynamodb.svg" />
- Azure Cosmos NoSQL <img height="15" width="15" src="https://unpkg.com/simple-icons@v9/icons/microsoftazure.svg" />
- Google Firestore <img height="15" width="15" src="https://unpkg.com/simple-icons@v9/icons/firebase.svg" />

This library is not intended to create databases or tables; use Terraform/ARM/CloudFormation etc. for that

Why not just use the name 'nosql' or 'pynosql'? Because they already exist on PyPI :-)

[![tests](https://github.com/rog555/abnosql/actions/workflows/python-package.yml/badge.svg)](https://github.com/rog555/abnosql/actions/workflows/python-package.yml)[![codecov](https://codecov.io/gh/rog555/abnosql/branch/main/graph/badge.svg?token=9gTkGPgASh)](https://codecov.io/gh/rog555/abnosql)

- [NoSQL Abstraction Library](#nosql-abstraction-library)
  - [Installation](#installation)
- [Usage](#usage)
  - [API Docs](#api-docs)
  - [Querying](#querying)
  - [Indexes](#indexes)
  - [Updates](#updates)
  - [Existence Checking](#existence-checking)
  - [Schema Validation](#schema-validation)
  - [Partition Keys](#partition-keys)
  - [Pagination](#pagination)
  - [Audit](#audit)
  - [Change Feed / Stream Support](#change-feed--stream-support)
  - [Client Side Encryption](#client-side-encryption)
- [Configuration](#configuration)
  - [AWS DynamoDB](#aws-dynamodb)
  - [Azure Cosmos NoSQL](#azure-cosmos-nosql)
  - [Google Firestore](#google-firestore)
- [Plugins and Hooks](#plugins-and-hooks)
- [Testing](#testing)
  - [AWS DynamoDB](#aws-dynamodb-1)
  - [Azure Cosmos NoSQL](#azure-cosmos-nosql-1)
  - [Google Firestore](#google-firestore-1)
- [CLI](#cli)
- [Future Enhancements / Ideas](#future-enhancements--ideas)


## Installation

```
pip install 'abnosql[dynamodb]'
pip install 'abnosql[cosmos]'
pip install 'abnosql[firestore]'
```

For optional [client side](#client-side-encryption) field level envelope encryption

```
pip install 'abnosql[aws-kms]'
pip install 'abnosql[azure-kms]'
```

By default, abnosql does not include database dependencies.  This is to facilitate packaging
abnosql into AWS Lambda or Azure Functions (for example) without bloating the packages

# Usage

```
from abnosql import table
import os

os.environ['ABNOSQL_DB'] = 'dynamodb'
os.environ['ABNOSQL_KEY_ATTRS'] = 'hk,rk'

item = {
    'hk': '1',
    'rk': 'a',
    'num': 5,
    'obj': {
        'foo': 'bar',
        'num': 5,
        'list': [1, 2, 3],
    },
    'list': [1, 2, 3],
    'str': 'str'
}

tb = table('mytable')

# create/replace
tb.put_item(item)

# update - using ABNOSQL_KEY_ATTRS
updated_item = tb.put_item(
    {'hk': '1', 'rk': 'a', 'str': 'STR'},
    update=True
)
assert updated_item['str'] == 'STR'

# bulk
tb.put_items([item])

# note partition/hash key should be first kwarg
assert tb.get_item(hk='1', rk='a') == item

assert tb.query({'hk': '1'})['items'] == [item]

# scan
assert tb.query()['items'] == [item]

# be careful not to use cloud specific statements!
assert tb.query_sql(
    'SELECT * FROM mytable WHERE mytable.hk = @hk AND mytable.num > @num',
    {'@hk': '1', '@num': 4}
)['items'] == [item]

tb.delete_item({'hk': '1', 'rk': 'a'})
```

## API Docs

See [API Docs](https://rog555.github.io/abnosql/docs/abnosql/table.html)

## Querying

`query()` performs DynamoDB [Query](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html) using KeyConditionExpression (if `key` supplied) and exact match on FilterExpression if filters are supplied.  For Cosmos, SQL is generated.  This is the safest/most cloud agnostic way to query and probably OK for most use cases.

`query_sql()` performs DynamoDB [ExecuteStatement](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ExecuteStatement.html) passing in the supplied [PartiQL](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.html) statement.  Cosmos uses the NoSQL [SELECT](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/query/select) syntax.

During mocked tests, [SQLGlot](https://sqlglot.com/) is used to [execute](https://sqlglot.com/sqlglot.html#sql-execution) the statement, so results may differ slightly from the real providers

Care should be taken with `query_sql()` not to use SQL features that are specific to any one provider (breaking the abstraction that is the point of using abnosql in the first place)

The Firestore plugin uses sqlglot to parse simple SQL statements (eg only AND is supported)
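
A short sketch of the two styles; note the `filters` kwarg shown is an assumption based on the description above (check the API docs for the exact signature):

```
# key condition only - portable across providers
res = tb.query({'hk': '1'})

# exact-match filters (assumed kwarg - see API docs)
res = tb.query({'hk': '1'}, filters={'num': 5})

# parameterised SQL - avoid provider specific syntax
res = tb.query_sql(
    'SELECT * FROM mytable WHERE mytable.hk = @hk AND mytable.num > @num',
    {'@hk': '1', '@num': 4}
)
```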

## Indexes

Beyond partition and range keys defined on the table, indexes currently have limited support within abnosql

 - The DynamoDB implementation of `query()` allows a [secondary index](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SecondaryIndexes.html) to be specified via the optional `index` kwarg (see the sketch after this list)
 - [Cosmos](https://learn.microsoft.com/en-us/azure/cosmos-db/index-overview) has Range, Spatial and Composite indexes, however the abnosql library does not yet do anything with the `index` kwarg in its `query()` implementation.
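
A minimal sketch of the DynamoDB case (`mytable-idx` is a hypothetical index name):

```
res = tb.query({'hk': '1'}, index='mytable-idx')
```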

## Updates

`put_item()` and `put_items()` support an `update` boolean kwarg, which if supplied will do an `update_item()` on DynamoDB and a `patch_item()` on Cosmos.  For this to work, however, you must specify the key attribute names, either via the `ABNOSQL_KEY_ATTRS` env var as a comma separated list (useful when multiple tables share a common partition/range key scheme), or as the `key_attrs` config item when instantiating the table, eg:

```
tb = table('mytable', {'key_attrs': ['hk', 'rk']})
```

If you don't need to do any updates and only need create/replace, then these key attribute names do not need to be supplied

All items being updated must already exist, or an exception is raised

Firestore does not return the updated item, so if this is required set the `put_get` config attribute to `True`
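
For example, a sketch combining `key_attrs` with `put_get` so updates also return the item on Firestore:

```
tb = table('mytable', {'key_attrs': ['hk', 'rk'], 'put_get': True})
updated_item = tb.put_item({'hk': '1', 'rk': 'a', 'str': 'STR'}, update=True)
```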


## Existence Checking

If `check_exists` config attribute is `True`, then CRUD operations will raise exceptions as follows:

- `get_item()` raises `NotFoundException` if the item doesn't exist
- `put_item()` raises `ExistsException` if the item already exists
- `put_item(update=True)` raises `NotFoundException` if the item to update doesn't exist
- `delete_item()` raises `NotFoundException` if the item doesn't exist

This adds some latency overhead, as abnosql must first check whether the item exists

This can also be enabled by setting environment variable `ABNOSQL_CHECK_EXISTS=TRUE`

If for some reason you need to override this behaviour for the `put_item()` create operation once enabled,
you can pass `abnosql_check_exists=False` in the item (this gets popped out, so it is not persisted), which
allows the create operation to overwrite the existing item without raising `ExistsException`
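
For example:

```
tb = table('mytable', {'check_exists': True})

tb.put_item({'hk': '1', 'rk': 'a'})  # raises ExistsException if the item exists

# bypass the create check for a single item (the attribute is popped, not persisted)
tb.put_item({'hk': '1', 'rk': 'a', 'abnosql_check_exists': False})
```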

## Schema Validation

`config` can define a jsonschema to validate create or update operations (via `put_item()`)

A combination of the following config attributes is supported:

- `schema` : jsonschema dict or yaml string, applied to both create and update
- `create_schema` : jsonschema dict/yaml only on create
- `update_schema` : jsonschema dict/yaml only on update
- `schema_errmsg` : override default error message on both create and update
- `create_schema_errmsg` : override default error message on create
- `update_schema_errmsg` : override default error message on update

You can get details of validation errors through `e.to_problem()` or `e.detail`
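
A hedged sketch using a YAML string schema; the caught exception type is kept generic here (see the API docs for the specific validation exception):

```
from abnosql import table

schema = '''
type: object
properties:
  hk: {type: string}
  rk: {type: string}
  num: {type: number}
required: [hk, rk]
'''

tb = table('mytable', {
    'key_attrs': ['hk', 'rk'],
    'schema': schema,
    'schema_errmsg': 'item failed validation'
})

try:
    tb.put_item({'hk': '1'})   # missing rk -> validation error
except Exception as e:         # generic here; the real class is in the API docs
    print(e.detail)
```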

NOTE: `key_attrs` required when updating (see [Updates](#updates))

## Partition Keys

A few methods such as `get_item()`, `delete_item()` and `query()` need to know partition/hash keys as defined on the table.  To avoid having to configure this or lookup from the provider, the convention used is that the first kwarg or dictionary item is the partition key, and if supplied the 2nd is the range/sort key.
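
For example:

```
# first kwarg / dict key = partition key, second = range/sort key
item = tb.get_item(hk='1', rk='a')
tb.delete_item({'hk': '1', 'rk': 'a'})
```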

## Pagination

`query` and `query_sql` accept `limit` and `next` optional kwargs and return `next` in response. Use these to paginate.

This works for AWS DynamoDB & Firestore, however Azure Cosmos has a limitation with continuation tokens for cross-partition queries (see the [Python SDK documentation](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/cosmos/azure-cosmos)).  For Cosmos, abnosql appends OFFSET and LIMIT to the SQL statement if not already present, and returns `next`.  `limit` defaults to 100.  See the tests for examples
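
A minimal pagination loop sketch (the page size is illustrative):

```
res = tb.query({'hk': '1'}, limit=50)
items = res['items']
while res.get('next'):
    res = tb.query({'hk': '1'}, limit=50, next=res['next'])
    items.extend(res['items'])
```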

## Audit

`put_item()` and `put_items()` take an optional `audit_user` kwarg.  If supplied, abnosql will add the following to the item:

- `createdBy` - value of `audit_user`, added if it does not already exist in the item supplied to `put_item()`
- `createdDate` - UTC ISO timestamp string, added if it does not already exist
- `modifiedBy` - value of `audit_user`, always added
- `modifiedDate` - UTC ISO timestamp string, always added

You can also specify `audit_user` as a config attribute on the table.  If you prefer snake_case over CamelCase, set the `ABNOSQL_CAMELCASE` env var to `FALSE`

NOTE: the created* attributes are only added if `update` is not True in a `put_item()` operation
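
For example (`someuser` is a placeholder):

```
tb.put_item({'hk': '1', 'rk': 'a'}, audit_user='someuser')

# the stored item now also contains (CamelCase by default):
#   createdBy:    someuser
#   createdDate:  <UTC ISO timestamp>
#   modifiedBy:   someuser
#   modifiedDate: <UTC ISO timestamp>
```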

## Change Feed / Stream Support

**AWS DynamoDB** [Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html) allow Lambda functions to be triggered upon create, update and delete table operations.  The event sent to the lambda (see [aws docs](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.Lambda.Tutorial2.html)) contains `eventName` and `eventSourceARN`, where:

- `eventName` - name of event, eg `INSERT`, `MODIFY` or `REMOVE` (see [here](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_Record.html))
- `eventSourceARN` - ARN identifying the source table

This allows a single stream processor lambda to process events from multiple tables (eg for writing into ElasticSearch)

Like DynamoDB, **Azure CosmosDB** supports [change feeds](https://learn.microsoft.com/en-us/azure/cosmos-db/change-feed), however the event sent to the function (currently) omits the event source (table name), and delete event names are only available if a [preview change feed mode](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/change-feed-modes) is explicitly enabled.

Because both the eventName and eventSource are ideally needed (irrespective of preview mode or not), the abnosql library automatically adds a `changeMetadata` attribute to the item during create, update and delete, eg:

```
item = {
    "hk": "1",
    "rk": "a",
    "changeMetadata": {
        "eventName": "INSERT",
        "eventSource": "sometable"
    }
}
```

Because no REMOVE event is sent at all without the preview change feed mode above, abnosql must first update the item and then delete it.  This is also needed for the eventSource / table name to be captured in the event, so unfortunately, until Cosmos supports both attributes, an update is needed before a delete.  A 5 second synchronous sleep is added by default between the update and delete to allow CosmosDB to send the update event (0 seconds results in no update event).  This can be controlled with the `ABNOSQL_COSMOS_CHANGE_META_SLEEPSECS` env var (defaults to `5` seconds), and disabled by setting it to `0`

This behaviour is enabled by default but can be disabled by setting the `ABNOSQL_COSMOS_CHANGE_META` env var to `FALSE`, or `cosmos_change_meta=False` in the table config.  The `ABNOSQL_CAMELCASE` = `FALSE` env var can also be used to switch the attribute names to snake_case if needed

To write an Azure Function / AWS Lambda that is able to process both DynamoDB and Cosmos events, look for `changeMetadata` first and, if present, use that; otherwise look for `eventName` and `eventSourceARN` in the event payload, assuming it's DynamoDB
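
A sketch of that dispatch logic, where `record` is a single simplified event record (real DynamoDB stream events nest records under `Records`):

```
def get_event_info(record):
    # Cosmos: abnosql adds changeMetadata to the item itself
    meta = record.get('changeMetadata')
    if meta:
        return meta['eventName'], meta['eventSource']
    # DynamoDB: stream records carry these at the record level
    return record['eventName'], record['eventSourceARN']
```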

**Google Firestore** should support [triggering functions](https://firebase.google.com/docs/functions/firestore-events?gen=2nd#python-preview) similarly to DynamoDB Streams, so `changeMetadata` is not required

## Client Side Encryption

If the `kms` attribute is present in the table config, abnosql will perform client side encryption using AWS KMS or Azure KeyVault

Each attribute value defined in the config is encrypted with a 256-bit AES-GCM data key generated for each attribute value:

- `aws` uses [AWS Encryption SDK for Python](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/python.html)
- `azure` uses [python cryptography](https://cryptography.io/en/latest/hazmat/primitives/aead/#cryptography.hazmat.primitives.ciphers.aead.AESGCM.generate_key) to generate an AES-GCM data key, encrypts the attribute value, and then uses an RSA CMK in Azure KeyVault to wrap/unwrap (envelope encryption) the AES-GCM data key.  The module uses the [azure-keyvault-keys](https://learn.microsoft.com/en-us/python/api/overview/azure/keyvault-keys-readme?view=azure-python) python SDK for wrap/unwrap functionality of the generated data key (Azure doesn't support generating a data key as AWS does)

Both providers use a [256-bit AES-GCM](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/supported-algorithms.html) generated data key with AAD/encryption context (the Azure provider uses a 96-bit nonce).  AES-GCM is an authenticated symmetric encryption scheme used by both AWS and Azure (and [Hashicorp Vault](https://developer.hashicorp.com/vault/docs/secrets/transit#aes256-gcm96))

See also [AWS Encryption Best Practices](https://docs.aws.amazon.com/prescriptive-guidance/latest/encryption-best-practices/welcome.html)

Example config:

```
{
    'kms': {
        'key_ids': ['https://foo.vault.azure.net/keys/bar/45e36a1024a04062bd489db0d9004d09'],
        'key_attrs': ['hk', 'rk'],
        'attrs': ['obj', 'str']
    }
}
```

Where:
- `key_ids`: list of AWS KMS key ARNs or Azure KeyVault identifiers (URLs to RSA CMKs).  This is picked up via the `ABNOSQL_KMS_KEYS` env var as a comma separated list (*NOTE: the env var is recommended to avoid provider specific code*)
- `key_attrs`: list of key attributes in the item from which the AAD/encryption context is set.  Taken from the `ABNOSQL_KEY_ATTRS` env var, or from the table `key_attrs` if defined there
- `attrs`: list of attribute keys to encrypt
- `key_bytes`: optional for Azure; if specified your own AES-GCM key is used, otherwise one is generated

If the `kms` config attribute is present, abnosql will look for the `ABNOSQL_KMS` env var to load the appropriate provider KMS module (eg "aws" or "azure"), and if not set will use a default depending on the database (eg cosmos uses azure, dynamodb uses aws)

In the example above, the key_attrs `['hk', 'rk']` define the encryption context / AAD used, and the attrs `['obj', 'str']` define which attributes to encrypt/decrypt

With an item:

```
{
    'hk': '1',
    'rk': 'b',
    'obj': {'foo':'bar'},
    'str': 'foobar'
}
```

The encryption context / AAD is set to hk=1 and rk=b, and the obj and str values are encrypted

If you don't want to use any of these providers, then you can use `put_item_pre` and `get_item_post` hooks to perform your own client side encryption

See also [AWS Multi-region encryption keys](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/configure.html#config-mrks), and set the `ABNOSQL_KMS_KEYS` env var as a comma separated list of ARNs
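
Putting it together, a sketch attaching the `kms` config to a table (the key ARN is an illustrative placeholder):

```
import os

from abnosql import table

# illustrative placeholder - use your own KMS key ARN (or KeyVault key URL on Azure)
os.environ['ABNOSQL_KMS_KEYS'] = 'arn:aws:kms:eu-west-1:123456789012:key/00000000-0000-0000-0000-000000000000'

tb = table('mytable', {
    'kms': {
        'key_attrs': ['hk', 'rk'],   # sets the AAD / encryption context
        'attrs': ['obj', 'str']      # attributes to encrypt
    }
})

tb.put_item({'hk': '1', 'rk': 'b', 'obj': {'foo': 'bar'}, 'str': 'foobar'})
item = tb.get_item(hk='1', rk='b')  # values decrypted transparently
```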

# Configuration

It is recommended to use environment variables where possible to avoid provider specific application code

If the `ABNOSQL_DB` env var is not set, abnosql will attempt to apply defaults based on available environment variables:

- `AWS_DEFAULT_REGION` - sets database to `dynamodb` (see [aws docs](https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html))
- `FUNCTIONS_WORKER_RUNTIME` - sets database to `cosmos` (see [azure docs](https://learn.microsoft.com/en-us/azure/azure-functions/functions-app-settings#functions_worker_runtime))
- `K_SERVICE` - sets database to `firestore` (though this could also be triggered when running on Knative)


## AWS DynamoDB

Set the following environment variable, and use the usual AWS environment variables that boto3 uses:

- `ABNOSQL_DB` = "dynamodb"

Or set the boto3 session in the config

```
from abnosql import table
import boto3

tb = table(
    'mytable',
    config={'session': boto3.Session()},
    database='dynamodb'
)
```

## Azure Cosmos NoSQL

Set the following environment variables:

- `ABNOSQL_DB` = "cosmos"
- `ABNOSQL_COSMOS_ACCOUNT` = your database account
- `ABNOSQL_COSMOS_ENDPOINT` = derived from `ABNOSQL_COSMOS_ACCOUNT` if not set
- `ABNOSQL_COSMOS_CREDENTIAL` = your cosmos credential; use [Azure Key Vault References](https://learn.microsoft.com/en-us/azure/app-service/app-service-key-vault-references?tabs=azure-cli) if using Azure Functions.  Leave unset to use DefaultAzureCredential / managed identity.
- `ABNOSQL_COSMOS_DATABASE` = cosmos database

**OR** - use the connection string format:

- `ABNOSQL_DB` = "cosmos://account@credential:database" or "cosmos://account@:database" to use managed identity (credential could also be "DefaultAzureCredential")

Alternatively, define in config (though ideally you want to use env vars to avoid application / environment specific code).

```
from abnosql import table

tb = table(
    'mytable',
    config={'account': 'foo', 'database': 'bar'},
    database='cosmos'
)
```


## Google Firestore

Set the following environment variables:

- `ABNOSQL_DB` = "firestore"
- `ABNOSQL_FIRESTORE_PROJECT` or `GOOGLE_CLOUD_PROJECT` = google cloud project
- `ABNOSQL_FIRESTORE_DATABASE` = Firestore database
- `ABNOSQL_FIRESTORE_CREDENTIALS` = optional oauth credentials; if using the gcloud CLI, these are also picked up from `~/.config/gcloud/application_default_credentials.json` if found

**OR** - use the connection string format:

- `ABNOSQL_DB` = "firestore://project@credential:database"

Alternatively, define in config (though ideally you want to use env vars to avoid application / environment specific code).

```
from abnosql import table

tb = table(
    'mytable',
    config={'project': 'foo', 'database': 'bar'},
    database='firestore'
)
```

See also https://cloud.google.com/firestore/docs/authentication



# Plugins and Hooks

abnosql uses pluggy and registers hooks in the `abnosql.table` namespace

The following hooks are available:

- `set_config` - set config
- `get_item_post` - called after `get_item()`, can return modified data
- `put_item_pre`
- `put_item_post`
- `put_items_post`
- `delete_item_post`

See the [TableSpecs](https://github.com/rog555/abnosql/blob/main/abnosql/table.py#L16) and example [test_hooks()](https://github.com/rog555/abnosql/blob/main/tests/common.py#L70)
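
A hedged sketch of a hook implementation, assuming pluggy's standard `HookimplMarker` pattern in the `abnosql.table` namespace (the `get_item_post` signature shown is illustrative; see TableSpecs for the real one, and test_hooks() for how plugins are registered):

```
import pluggy

hookimpl = pluggy.HookimplMarker('abnosql.table')


class MyPlugin:

    @hookimpl
    def get_item_post(self, table, item):
        # decorate every item returned by get_item()
        item['decorated'] = True
        return item
```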

# Testing

## AWS DynamoDB

Use the `moto` package and `abnosql.mocks.mock_dynamodbx`

`mock_dynamodbx` is used for `query_sql()` and is only needed if/until moto provides full PartiQL support

Example:

```
from abnosql.mocks import mock_dynamodbx 
from moto import mock_dynamodb

@mock_dynamodb
@mock_dynamodbx  # needed for query_sql only
def test_something():
    ...
```

More examples in [tests/test_dynamodb.py](./tests/test_dynamodb.py)

## Azure Cosmos NoSQL

Use the `responses` package and `abnosql.mocks.mock_cosmos`

Example:

```
from abnosql.mocks import mock_cosmos
import responses

@mock_cosmos
@responses.activate
def test_something():
    ...
```

More examples in [tests/test_cosmos.py](./tests/test_cosmos.py)


## Google Firestore

Use [python-mock-firestore](https://github.com/mdowds/python-mock-firestore) and pass `MockFirestore()` to the table config as the `client` attribute

Example:

```
from abnosql import table
from mockfirestore import MockFirestore


def test_something():
    tb = table('mytable', {'client': MockFirestore()})
    item = tb.get_item(foo='bar')

```

# CLI

A small abnosql CLI is installed with a few of the commands above

```
Usage: abnosql [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  delete-item
  get-item
  put-item
  put-items
  query
  query-sql
```

To install the CLI dependencies:

```
pip install 'abnosql[cli]'
```

Example querying a table in Azure Cosmos, with a cosmos.json config file containing endpoint, credential and database:

```
$ abnosql query-sql mytable 'SELECT * FROM mytable' -d cosmos -c cosmos.json
partkey      id      num  obj                                          list       str
-----------  ----  -----  -------------------------------------------  ---------  -----
p1           p1.1      5  {'foo': 'bar', 'num': 5, 'list': [1, 2, 3]}  [1, 2, 3]  str
p2           p2.1      5  {'foo': 'bar', 'num': 5, 'list': [1, 2, 3]}  [1, 2, 3]  str
p2           p2.2      5  {'foo': 'bar', 'num': 5, 'list': [1, 2, 3]}  [1, 2, 3]  str
```

# Future Enhancements / Ideas

- [x] client side encryption
- [x] test pagination & exception handling
- [x] [Google Firestore](https://cloud.google.com/python/docs/reference/firestore/latest) support, ideally in the core library (though it could be added outside via the plugin system).  Would need something like [FireSQL](https://firebaseopensource.com/projects/jsayol/firesql/) implemented for Python, maybe via sqlglot
- [ ] [Google Vault](https://cloud.google.com/python/docs/reference/cloudkms/latest/) KMS support
- [ ] [Hashicorp Vault](https://github.com/hashicorp/vault-examples/blob/main/examples/_quick-start/python/example.py) KMS support
- [ ] Simple caching (maybe) using globals (used for AWS Lambda / Azure Functions)
- [ ] PostgreSQL support using a JSONB column (see [here](https://medium.com/geekculture/json-and-postgresql-using-json-to-mimic-nosqls-storage-benefits-1564c69f61fc) for example).  Would be nice to avoid an ORM and having to define a model for each table...
- [ ] blob storage backend? could use something similar to [NoDB](https://github.com/Miserlou/NoDB) but maybe combined with [smart_open](https://github.com/RaRe-Technologies/smart_open) and DuckDB's [Hive Partitioning](https://duckdb.org/docs/data/partitioning/hive_partitioning.html)
- [ ] Redis
- [ ] Hook implementations to write to ElasticSearch / OpenSearch for better searching.  Useful when not able to use [AWS Stream Processors](https://aws.amazon.com/blogs/compute/indexing-amazon-dynamodb-content-with-amazon-elasticsearch-service-using-aws-lambda/), [Azure Change Feed](https://learn.microsoft.com/en-us/azure/cosmos-db/change-feed), or [Elasticstore](https://github.com/acupofjose/elasticstore). Why? Because not all databases support stream processing, and if they do you don't want the hassle of using [CDC](https://berbagimadani.medium.com/sync-postgresql-to-elasticsearch-and-cdc-change-data-capture-b847e8bcf568)

            
