cdd-python-gae
==============
![Python version range](https://img.shields.io/badge/python-3.6%20|%203.7%20|%203.8%20|%203.9%20|%203.10%20|%203.11-blue.svg)
![Python implementation](https://img.shields.io/badge/implementation-cpython-blue.svg)
[![License](https://img.shields.io/badge/license-Apache--2.0%20OR%20MIT-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![Linting, testing, coverage, and release](https://github.com/offscale/cdd-python-gae/workflows/Linting,%20testing,%20coverage,%20and%20release/badge.svg)](https://github.com/offscale/cdd-python-gae/actions)
![Tested OSs, others may work](https://img.shields.io/badge/Tested%20on-Linux%20|%20macOS%20|%20Windows-green)
![Documentation coverage](https://raw.githubusercontent.com/offscale/cdd-python-gae/master/.github/doccoverage.svg)
[![codecov](https://codecov.io/gh/offscale/cdd-python-gae/branch/master/graph/badge.svg)](https://codecov.io/gh/offscale/cdd-python-gae)
[![black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort)
[![PyPi: release](https://img.shields.io/pypi/v/python-cdd-gae.svg?maxAge=3600)](https://pypi.org/project/python-cdd-gae)

Migration tooling from Google App Engine (webapp2, ndb) to python-cdd-supported frameworks (FastAPI, SQLalchemy).

The public SDK works with filenames, source code, and even in-memory constructs (e.g., objects imported into your REPL).
A CLI is also available.

Note: Parquet files are supported because running NDB queries to batch-acquire entities and batch-insert them into SQL takes too long.

## Install package

### PyPI

    pip install python-cdd-gae

### Master

    pip install -r https://raw.githubusercontent.com/offscale/cdd-python-gae/master/requirements.txt
    pip install https://api.github.com/repos/offscale/cdd-python-gae/zipball#egg=cdd

## Goal

Migrate from Google App Engine to a cloud-independent runtime (e.g., vanilla CPython 3.11 with SQLite).

## Relation to other projects

This was created independently of the `cdd-python` project for two reasons:

  0. It is unidirectional;
  1. It is relevant to fewer people.

## SDK

### Approach

Traverse the AST of the input Python source for ndb and webapp2 constructs.
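
As a minimal sketch of the idea (illustrative only, using the standard `ast` module; the real parsers in `cdd_gae` handle many more cases), finding NDB model classes might look like:

```python
# Minimal sketch of the approach: walk the AST and collect classes that
# subclass `ndb.Model` (or a bare `Model`). Illustrative only.
import ast


def find_ndb_models(source):
    """Return names of classes in `source` whose bases include ndb.Model."""
    models = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            for base in node.bases:
                if (isinstance(base, ast.Attribute) and base.attr == "Model") or (
                    isinstance(base, ast.Name) and base.id == "Model"
                ):
                    models.append(node.name)
    return models


print(find_ndb_models("class User(ndb.Model):\n    name = ndb.StringProperty()"))  # ['User']
```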

## Advantages

  - 

## Disadvantages

  - 

## Alternatives

  - 

## Minor other use-cases this facilitates

  - 

## CLI for this project

    $ python -m cdd_gae --help
    usage: python -m cdd_gae gen [-h] [--parse {ndb,parquet,webapp2}] --emit
                                 {argparse,class,function,json_schema,pydantic,sqlalchemy,sqlalchemy_table}
                                 -i INPUT_FILE -o OUTPUT_FILE [--name NAME]
                                 [--dry-run]
    
    options:
      -h, --help            show this help message and exit
      --parse {ndb,parquet,webapp2}
                            What type the input is.
      --emit {argparse,class,function,json_schema,pydantic,sqlalchemy,sqlalchemy_table}
                            What type to generate.
      -i INPUT_FILE, --input-file INPUT_FILE
                            Python file to parse NDB `class`es out of
      -o OUTPUT_FILE, --output-file OUTPUT_FILE
                            Empty file to generate SQLalchemy classes to
      --name NAME           Name of function/class to emit, defaults to inferring
                            from filename
      --dry-run             Show what would be created; don't actually write to
                            the filesystem.

### `python -m cdd_gae gen`

    $ python -m cdd_gae gen --help
    usage: python -m cdd_gae gen [-h] [--parse {ndb,parquet,webapp2}] --emit
                                 {argparse,class,function,json_schema,pydantic,sqlalchemy,sqlalchemy_table}
                                 -i INPUT_FILE -o OUTPUT_FILE [--name NAME]
                                 [--dry-run]
    
    options:
      -h, --help            show this help message and exit
      --parse {ndb,parquet,webapp2}
                            What type the input is.
      --emit {argparse,class,function,json_schema,pydantic,sqlalchemy,sqlalchemy_table}
                            What type to generate.
      -i INPUT_FILE, --input-file INPUT_FILE
                            Python file to parse NDB `class`es out of
      -o OUTPUT_FILE, --output-file OUTPUT_FILE
                            Empty file to generate SQLalchemy classes to
      --name NAME           Name of function/class to emit, defaults to inferring
                            from filename
      --dry-run             Show what would be created; don't actually write to
                            the filesystem.
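
For example, to emit SQLalchemy classes from the NDB models in a hypothetical `models.py` (filenames here are placeholders): `python -m cdd_gae gen --parse ndb --emit sqlalchemy -i models.py -o models_sqlalchemy.py`.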

### `python -m cdd_gae ndb2sqlalchemy_migrator`

    $ python -m cdd_gae ndb2sqlalchemy_migrator --help
    usage: python -m cdd_gae ndb2sqlalchemy_migrator [-h] --ndb-file NDB_FILE
                                                     --sqlalchemy-file
                                                     SQLALCHEMY_FILE
                                                     --ndb-mod-to-import
                                                     NDB_MOD_TO_IMPORT
                                                     --sqlalchemy-mod-to-import
                                                     SQLALCHEMY_MOD_TO_IMPORT -o
                                                     OUTPUT_FOLDER [--dry-run]
    
    options:
      -h, --help            show this help message and exit
      --ndb-file NDB_FILE   Python file containing the NDB `class`es
      --sqlalchemy-file SQLALCHEMY_FILE
                            Python file containing the SQLalchemy `class`es
      --ndb-mod-to-import NDB_MOD_TO_IMPORT
                            NDB module name that the entity will be imported from
      --sqlalchemy-mod-to-import SQLALCHEMY_MOD_TO_IMPORT
                            SQLalchemy module name that the entity will be
                            imported from
      -o OUTPUT_FOLDER, --output-folder OUTPUT_FOLDER
                            Empty folder to generate scripts that migrate from one
                            NDB class to one SQLalchemy class
      --dry-run             Show what would be created; don't actually write to
                            the filesystem.

### `python -m cdd_gae parquet2table`

    $ python -m cdd_gae parquet2table --help
    usage: python -m cdd_gae parquet2table [-h] -i FILENAME
                                           [--database-uri DATABASE_URI]
                                           [--table-name TABLE_NAME] [--dry-run]
    
    options:
      -h, --help            show this help message and exit
      -i FILENAME, --input-file FILENAME
                            Parquet file
      --database-uri DATABASE_URI
                            Database connection string. Defaults to `RDBMS_URI` in
                            your env vars.
      --table-name TABLE_NAME
                            Table name to use, else use penultimate underscore
                            surrounding word from filename basename
      --dry-run             Show what would be created; don't actually write to
                            the filesystem.
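
To illustrate the default table-name inference (a sketch of that description, assuming shard filenames like those produced by the export step below): for `2023-01-18_0_kind0_000000000000`, the penultimate underscore-separated field is `kind0`.

```python
# Sketch of the default table-name inference: take the second-to-last
# underscore-separated field of the filename's basename.
from os.path import basename

filename = "/data/kind0/2023-01-18_0_kind0_000000000000"  # hypothetical path
print(basename(filename).split("_")[-2])  # kind0
```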

---

## Data migration

The most efficient way seems to be:

  0. Backup from NDB to Google Cloud Storage
  1. Import from Google Cloud Storage to Google BigQuery
  2. Export from Google BigQuery to Apache Parquet files in Google Cloud Storage
  3. Download and parse the Parquet files, then insert into SQL

(For the following scripts, set `GOOGLE_PROJECT_ID`, `GOOGLE_BUCKET_NAME`, `NAMESPACE`, and `GOOGLE_LOCATION`.)

### Backup from NDB to Google Cloud Storage
```sh
for entity in kind0 kind1; do
  gcloud datastore export 'gs://'"$GOOGLE_BUCKET_NAME" --project "$GOOGLE_PROJECT_ID" --kinds "$entity" --async &
done
```
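
Because the exports run with `--async`, the commands return immediately; you can check on their progress with `gcloud datastore operations list`.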

### Import from Google Cloud Storage to Google BigQuery
```sh
printf 'bq mk "%s"\n' "$NAMESPACE" > migrate.bash
gsutil ls 'gs://'"$GOOGLE_BUCKET_NAME"'/**/all_namespaces/kind_*' | python3 -c 'import sys, posixpath, fileinput; f=fileinput.input(encoding="utf-8"); d=dict(map(lambda e: (posixpath.basename(posixpath.dirname(e)), posixpath.dirname(e)), sorted(f))); f.close(); print("\n".join(map(lambda k: "( bq mk \"'"$NAMESPACE"'.{k}\" && bq --location='"$GOOGLE_LOCATION"' load --source_format=DATASTORE_BACKUP \"'"$NAMESPACE"'.{k}\" \"{v}/all_namespaces_{k}.export_metadata\" ) &".format(k=k, v=d[k]), sorted(d.keys()))),sep="");' >> migrate.bash
# Then run `bash migrate.bash`
```
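
The inline Python above is dense; this is an equivalent, more readable sketch of the same logic (reading the `gsutil ls` output from stdin, and taking `NAMESPACE` / `GOOGLE_LOCATION` from the environment instead of splicing them into the shell string):

```python
# Readable equivalent of the one-liner above: map each exported kind to
# its directory, then print one `bq mk && bq load` command per kind.
import posixpath
import sys
from os import environ

namespace = environ["NAMESPACE"]
location = environ["GOOGLE_LOCATION"]

# Each stdin line is an object path under .../all_namespaces/kind_<Kind>/
kind_to_dir = {}
for line in sorted(sys.stdin):
    dirname = posixpath.dirname(line.strip())
    kind_to_dir[posixpath.basename(dirname)] = dirname

for kind in sorted(kind_to_dir):
    print(
        '( bq mk "{ns}.{k}" && bq --location={loc} load '
        '--source_format=DATASTORE_BACKUP "{ns}.{k}" '
        '"{d}/all_namespaces_{k}.export_metadata" ) &'.format(
            ns=namespace, k=kind, loc=location, d=kind_to_dir[kind]
        )
    )
```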

### Export from Google BigQuery to Apache Parquet files in Google Cloud Storage
```sh
for entity in kind0 kind1; do
  bq extract --location="$GOOGLE_LOCATION" --destination_format='PARQUET' "$NAMESPACE"'.kind_'"$entity" 'gs://'"$GOOGLE_BUCKET_NAME"'/'"$entity"'/*' &
done
```
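
The trailing `*` in the destination URI tells `bq` to shard the output, writing one or more numbered Parquet files per table; those shard names are what the scripts below iterate over.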

### Download and parse the Parquet files, then insert into SQL
Download from the Google Cloud Storage bucket:
```sh
gcloud storage cp -R 'gs://'"$GOOGLE_BUCKET_NAME"'/folder/*' '/data'
```

Use this script to create SQLalchemy files from Parquet files:
```bash
#!/usr/bin/env bash

module_dir='parquet_to_postgres'
mkdir -p "$module_dir"
main_py="$module_dir"'/__main__.py'
printf '%s\n' \
       'from os import environ' \
       'from sqlalchemy import create_engine' '' '' \
       'if __name__ == "__main__":' \
       '    engine = create_engine(environ["RDBMS_URI"])' \
       '    print("Creating tables")' \
       '    Base.metadata.create_all(engine)' > "$main_py"
printf '%s\n' \
       'from sqlalchemy.orm import declarative_base' '' \
       'Base = declarative_base()' \
       '__all__ = ["Base"]' > "$module_dir"'/__init__.py'

declare -a extra_imports=()

for parquet_file in 2023-01-18_0_kind0_000000000000 2023-01-18_0_kind1_000000000000; do
  base="${parquet_file##*/}"; base="${base%%.*}"  # strip any directory prefix and extension
  IFS='_' read -r _ _ table_name _ <<< "$base"    # e.g. 2023-01-18_0_kind0_... -> kind0
  py_file="$module_dir"'/'"$table_name"'.py'
  python -m cdd_gae gen --parse 'parquet' --emit 'sqlalchemy' -i "$parquet_file" -o "$py_file" --name "$table_name"
  printf '%s\n' 'from . import Base' | cat - "$py_file" | sponge "$py_file"  # `sponge` is from moreutils
  printf -v table_import 'from %s.%s import %s' "$module_dir" "$table_name" "$table_name"
  extra_imports+=("$table_import")
done

extra_imports+=('from . import Base')

( IFS=$'\n'; echo -e "${extra_imports[*]}" ) | cat - "$main_py" | sponge "$main_py"
```
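
Each generated `"$module_dir"/<table>.py` might look roughly like the following (an illustrative sketch; the actual emitted columns and types depend on your Parquet schema):

```python
# Illustrative sketch of an emitted module for a hypothetical `kind0`
# table; real output from `cdd_gae gen --emit sqlalchemy` will differ.
from sqlalchemy import Column, Integer, String

from . import Base  # prepended by the script above


class kind0(Base):
    __tablename__ = "kind0"

    id = Column(Integer, primary_key=True)
    name = Column(String)
```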

Then run `python -m "$module_dir"` to execute the `CREATE TABLE`s.

Finally, to batch-insert into your tables concurrently, set `RDBMS_URI` to your database connection string:
```sh
export RDBMS_URI='postgresql://username:password@host/database'
for parquet_file in 2023-01-18_0_kind0_000000000000 2023-01-18_0_kind1_000000000000; do
  python -m cdd_gae parquet2table -i "$parquet_file" &
done
# Or with `fd`, which runs the commands concurrently:
# fd -tf . '/data' -E 'exclude_tbl' -x python -m cdd_gae parquet2table -i
# Or with explicit table_name from parent folder's basename:
# fd -tf . '/data' -E 'exclude_tbl' -x bash -c 'python -m cdd_gae parquet2table --table-name "$(basename ${0%/*})" -i "$0"' {}
```

---

## License

Licensed under either of

- Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or <https://www.apache.org/licenses/LICENSE-2.0>)
- MIT license ([LICENSE-MIT](LICENSE-MIT) or <https://opensource.org/licenses/MIT>)

at your option.

### Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted
for inclusion in the work by you, as defined in the Apache-2.0 license, shall be
dual licensed as above, without any additional terms or conditions.
