glue-utils

Name: glue-utils
Version: 0.9.1
Summary: Reusable utilities for working with Glue PySpark jobs
Home page: https://github.com/dashmug/glue-utils
Author: Noel Llevares
License: MIT
Requires Python: <4.0,>=3.9
Keywords: aws, glue, pyspark, spark, etl, data, data-engineering
Upload time: 2024-11-14 10:57:20
# glue-utils

[![PyPI - Version](https://img.shields.io/pypi/v/glue-utils)](https://pypi.org/project/glue-utils/)
[![PyPI - Downloads](https://img.shields.io/pypi/dw/glue-utils)](https://pypistats.org/packages/glue-utils)
[![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)

[![Continuous Integration](https://github.com/dashmug/glue-utils/actions/workflows/ci.yml/badge.svg)](https://github.com/dashmug/glue-utils/actions/workflows/ci.yml)
[![CodeQL](https://github.com/dashmug/glue-utils/actions/workflows/github-code-scanning/codeql/badge.svg)](https://github.com/dashmug/glue-utils/actions/workflows/github-code-scanning/codeql)
[![Dependabot Updates](https://github.com/dashmug/glue-utils/actions/workflows/dependabot/dependabot-updates/badge.svg)](https://github.com/dashmug/glue-utils/actions/workflows/dependabot/dependabot-updates)

[![Lines of Code](https://sonarcloud.io/api/project_badges/measure?project=dashmug_glue-utils&metric=ncloc)](https://sonarcloud.io/summary/overall?id=dashmug_glue-utils)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=dashmug_glue-utils&metric=alert_status)](https://sonarcloud.io/summary/overall?id=dashmug_glue-utils)
[![Coverage](https://sonarcloud.io/api/project_badges/measure?project=dashmug_glue-utils&metric=coverage)](https://sonarcloud.io/summary/overall?id=dashmug_glue-utils)
[![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=dashmug_glue-utils&metric=reliability_rating)](https://sonarcloud.io/summary/overall?id=dashmug_glue-utils)
[![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=dashmug_glue-utils&metric=security_rating)](https://sonarcloud.io/summary/overall?id=dashmug_glue-utils)
[![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=dashmug_glue-utils&metric=sqale_rating)](https://sonarcloud.io/summary/overall?id=dashmug_glue-utils)

`glue-utils` is a Python library designed to enhance the developer experience when working with AWS Glue ETL and Python Shell jobs. It reduces boilerplate code, increases type safety, and improves IDE auto-completion, making Glue development easier and more efficient.

![Example of a concise Glue job written with glue-utils](docs/images/concise.py.png)

- [Usage in AWS Glue](#usage-in-aws-glue)
- [Usage when developing jobs locally](#usage-when-developing-jobs-locally)
- [Main Features](#main-features)
  - [`BaseOptions`](#baseoptions)
  - [`GluePySparkContext`](#gluepysparkcontext)
  - [`GluePySparkJob`](#gluepysparkjob)
- [Other features](#other-features)

## Usage in AWS Glue

To use `glue-utils` in AWS Glue, add it as an
[additional Python module](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-python-libraries.html#addl-python-modules-support)
in your Glue job.

You can do this by adding an `--additional-python-modules` job parameter
with the value `glue_utils==0.9.1`. For more information about setting
job parameters, see [AWS Glue job parameters](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html).
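
For example, here is a minimal sketch of setting that parameter when
creating a job with `boto3` (the job name, role, and script location
below are placeholders, not part of this library):

```python
import boto3

glue = boto3.client("glue")

# Placeholder name, role, and script location -- substitute your own.
glue.create_job(
    Name="my-etl-job",
    Role="arn:aws:iam::123456789012:role/MyGlueJobRole",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-bucket/scripts/job.py",
        "PythonVersion": "3",
    },
    GlueVersion="4.0",
    DefaultArguments={"--additional-python-modules": "glue_utils==0.9.1"},
)
```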

## Usage when developing jobs locally

This library does not declare `pyspark` and `aws-glue-libs` as
dependencies because both are pre-installed in Glue's runtime
environment.

To develop your Glue jobs locally in your IDE, install `pyspark` and
`aws-glue-libs` yourself. Unfortunately, `aws-glue-libs` is not
available on PyPI, so it can only be installed from its Git repository.

```sh
# Glue 4.0 uses PySpark 3.3.0
pip install pyspark==3.3.0
pip install git+https://github.com/awslabs/aws-glue-libs.git@master
pip install glue-utils
```

## Main Features

- `BaseOptions`
  - a dataclass that parses the options supplied via command-line
    arguments
- `GluePySparkContext`
  - a subclass of `awsglue.context.GlueContext` that adds convenient
    type-safe methods (methods that ensure the correct data types are
    used) for the most common connection types.
- `GluePySparkJob`
  - a convenient class that simplifies and reduces the boilerplate
    code needed in Glue jobs.

### `BaseOptions`

`BaseOptions` resolves the required arguments into a dataclass to help
your IDE auto-complete and detect potential `KeyError`s. It also makes
type checkers such as `pyright` and `mypy` detect those errors at
design or build time instead of at runtime.

```python
from dataclasses import dataclass
from glue_utils import BaseOptions


@dataclass
class Options(BaseOptions):
    start_date: str
    end_date: str


args = Options.from_sys_argv()

print(f"The day partition key is: {args.start_date}")
```

_Note: Similar to the behavior of `awsglue.utils.getResolvedOptions`,
all arguments arrive as **strings**. A warning is raised when a field
is declared with any other type. We aim to auto-cast those values in
the future._
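
Until then, cast non-string values yourself after parsing. A minimal
sketch, using a hypothetical `batch_size` argument supplied as
`--batch_size 500`:

```python
from dataclasses import dataclass

from glue_utils import BaseOptions


@dataclass
class Options(BaseOptions):
    # Declared as str to match how Glue delivers it; cast after parsing.
    batch_size: str


args = Options.from_sys_argv()

# Manual cast until auto-casting is supported.
batch_size = int(args.batch_size)
```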

### `GluePySparkContext`

`GluePySparkContext` is a subclass of
[`awsglue.context.GlueContext`](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-glue-context.html)
that adds the following convenience methods for creating and writing
`DynamicFrame`s for the common connection types. The method signatures
ensure that you pass the right connection options and/or format options
for the chosen connection type (see the usage sketch after the list).

- MySQL
  - `create_dynamic_frame_from_mysql`
  - `write_dynamic_frame_to_mysql`
- Oracle
  - `create_dynamic_frame_from_oracle`
  - `write_dynamic_frame_to_oracle`
- PostgreSQL
  - `create_dynamic_frame_from_postgresql`
  - `write_dynamic_frame_to_postgresql`
- SQL Server
  - `create_dynamic_frame_from_sqlserver`
  - `write_dynamic_frame_to_sqlserver`
- S3
  - JSON
    - `create_dynamic_frame_from_s3_json`
    - `write_dynamic_frame_to_s3_json`
  - CSV
    - `create_dynamic_frame_from_s3_csv`
    - `write_dynamic_frame_to_s3_csv`
  - Parquet
    - `create_dynamic_frame_from_s3_parquet`
    - `write_dynamic_frame_to_s3_parquet`
  - XML
    - `create_dynamic_frame_from_s3_xml`
    - `write_dynamic_frame_to_s3_xml`
- DynamoDB
  - `create_dynamic_frame_from_dynamodb`
  - `create_dynamic_frame_from_dynamodb_export`
  - `write_dynamic_frame_to_dynamodb`
- Kinesis
  - `create_dynamic_frame_from_kinesis`
  - `write_dynamic_frame_to_kinesis`
- Kafka
  - `create_dynamic_frame_from_kafka`
  - `write_dynamic_frame_to_kafka`
- OpenSearch
  - `create_dynamic_frame_from_opensearch`
  - `write_dynamic_frame_to_opensearch`
- DocumentDB
  - `create_dynamic_frame_from_documentdb`
  - `write_dynamic_frame_to_documentdb`
- MongoDB
  - `create_dynamic_frame_from_mongodb`
  - `write_dynamic_frame_to_mongodb`
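
As a rough usage sketch, here is what reading JSON from S3 and writing
it back out as Parquet might look like. The keyword arguments shown are
assumptions based on the usual `GlueContext` S3 conventions, not
confirmed signatures:

```python
from glue_utils.pyspark import GluePySparkJob

job = GluePySparkJob()

# Read JSON from S3. The connection_options keys follow GlueContext's
# S3 conventions and are assumed here, not confirmed signatures.
dyf = job.glue_context.create_dynamic_frame_from_s3_json(
    connection_options={"paths": ["s3://example-bucket/input/"]},
)

# Write the same records back out as Parquet.
job.glue_context.write_dynamic_frame_to_s3_parquet(
    frame=dyf,
    connection_options={"path": "s3://example-bucket/output/"},
)

job.commit()
```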

### `GluePySparkJob`

`GluePySparkJob` reduces boilerplate by providing reasonable defaults
while still allowing customization through keyword arguments.

In its simplest form, it takes care of instantiating
`awsglue.context.GlueContext` and initializing `awsglue.job.Job`.

```python
from glue_utils.pyspark import GluePySparkJob

# Instantiate with defaults.
job = GluePySparkJob()

# This is the SparkContext object.
sc = job.sc

# This is the GluePySparkContext(GlueContext) object.
glue_context = job.glue_context

# This is the SparkSession object.
spark = job.spark

# The rest of your job's logic.

# Commit the job if necessary (e.g. when using bookmarks).
job.commit()
```

#### `options_cls`

You may pass a subclass of `BaseOptions` to make the resolved options
available in `job.options`.

```python
from dataclasses import dataclass
from glue_utils import BaseOptions
from glue_utils.pyspark import GluePySparkJob


@dataclass
class Options(BaseOptions):
    # Specify the arguments as field names
    start_date: str
    end_date: str
    source_path: str


# Instantiate with the above Options class.
job = GluePySparkJob(options_cls=Options)

# Access the resolved values through the fields on job.options.
print(f"The S3 path is {job.options.source_path}")
```

#### `log_level`

You may configure the logging level. The default is
`GluePySparkJob.LogLevel.WARN`.

```python
from glue_utils.pyspark import GluePySparkJob


# Log only errors.
job = GluePySparkJob(log_level=GluePySparkJob.LogLevel.ERROR)
```

#### `spark_conf`

You may set Spark configuration values by instantiating a custom
`pyspark.SparkConf` object to pass to `GluePySparkJob`.

```python
from pyspark import SparkConf
from glue_utils.pyspark import GluePySparkJob

# Instantiate a SparkConf and set the desired config keys/values.
spark_conf = SparkConf()
spark_conf.set("spark.driver.maxResultSize", "4g")

# Instantiate with the above custom SparkConf.
job = GluePySparkJob(spark_conf=spark_conf)
```

#### `glue_context_options`

You may set options that are passed to `awsglue.context.GlueContext`.

```python
from glue_utils.pyspark import GlueContextOptions, GluePySparkJob

job = GluePySparkJob(glue_context_options={
    "minPartitions": 2,
    "targetPartitions": 10,
})

# Alternatively, you can use the GlueContextOptions TypedDict.
job = GluePySparkJob(glue_context_options=GlueContextOptions(
    minPartitions=2,
    targetPartitions=10,
))
```

## Other features

The following modules contain useful `TypedDict`s for defining
connection options or format options to pass as arguments to various
`awsglue.context.GlueContext` methods:

- `glue_utils.pyspark.connection_options`
  - for defining `connection_options` for various connection types
- `glue_utils.pyspark.format_options`
  - for defining `format_options` for various formats
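
As a rough illustration of the pattern, the sketch below defines a
stand-in `TypedDict` by hand. The name and fields are hypothetical; the
real definitions live in the two modules above and cover the options
each connection type or format accepts:

```python
from typing import TypedDict


# Hypothetical stand-in for the TypedDicts shipped in
# glue_utils.pyspark.format_options -- check the module for the real
# class names and fields.
class CSVFormatOptions(TypedDict, total=False):
    withHeader: bool
    separator: str


# Type checkers now validate the keys and value types.
format_options: CSVFormatOptions = {"withHeader": True, "separator": ","}
```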

            
