<p align="center">
  <img src="/etc/dbt-logo-full.svg" alt="dbt logo" width="500"/>
</p>

**[dbt](https://www.getdbt.com/)** enables data analysts and engineers to transform their data using the same practices that software engineers use to build applications.
dbt is the T in ELT. Organize, cleanse, denormalize, filter, rename, and pre-aggregate the raw data in your warehouse so that it's ready for analysis.

# dbt-glue

The `dbt-glue` package implements the [dbt adapter](https://docs.getdbt.com/docs/contributing/building-a-new-adapter) protocol for AWS Glue's Spark engine.
It supports running dbt against Spark through the new Glue Interactive Sessions API.



## Installation

The package can be installed from PyPI with:

```bash
$ pip install dbt-glue
```
For further (and likely more up-to-date) information, see the [README](https://github.com/aws-samples/dbt-glue#readme).
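
Once installed, a quick way to confirm that dbt picked up the adapter is to list the registered plugins. This is a sanity check rather than an official setup step:

```bash
# The adapter should appear under "Plugins" in the output.
$ dbt --version
```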


## Connection Methods


### Configuring your AWS profile for Glue Interactive Session
There are two IAM principals used with interactive sessions:
- Client principal: The principal (either user or role) calling the AWS APIs (Glue, Lake Formation, Interactive Sessions)
from the local client. This is the principal configured in the AWS CLI.
- Service role: The IAM role that AWS Glue assumes to run your session. This is the same role configuration used for AWS Glue
ETL jobs.

Read [this documentation](https://docs.aws.amazon.com/glue/latest/dg/glue-is-security.html) to configure these principals.

To use all the features of the **`dbt-glue`** adapter, attach the three AWS managed policies below to the service role:

| Service | Managed policy required |
|---|---|
| Amazon S3 | AmazonS3FullAccess |
| AWS Glue | AWSGlueConsoleFullAccess |
| AWS Lake Formation | AWSLakeFormationDataAdmin |
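
As an illustration, these policies could be attached from the CLI; a sketch, assuming a role named `GlueInteractiveSessionRole` as in the example config below:

```bash
# Attach each AWS managed policy to the Glue interactive session role.
$ for policy in AmazonS3FullAccess AWSGlueConsoleFullAccess AWSLakeFormationDataAdmin; do
>   aws iam attach-role-policy \
>     --role-name GlueInteractiveSessionRole \
>     --policy-arn "arn:aws:iam::aws:policy/$policy"
> done
```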

### Configuration of the local environment

Because **`dbt`** and the **`dbt-glue`** adapter are compatible with Python versions 3.7, 3.8, and 3.9, first check your Python version:

```bash
$ python3 --version
```

Configure a Python virtual environment to isolate package versions and code dependencies:

```bash
$ sudo yum install git
$ python3 -m pip install --upgrade pip
$ python3 -m venv dbt_venv
$ source dbt_venv/bin/activate
$ python3 -m pip install --upgrade pip
```

Install the latest version of the AWS CLI:

```bash
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
```

Install the aws-glue-sessions package:

```bash
$ sudo yum install gcc krb5-devel.x86_64 python3-devel.x86_64 -y
$ pip3 install --upgrade boto3
$ pip3 install --upgrade aws-glue-sessions
```

### Example config
The following target options go in your `profiles.yml`:

```yml
type: glue
query-comment: This is a glue dbt example
role_arn: arn:aws:iam::1234567890:role/GlueInteractiveSessionRole
region: us-east-1
workers: 2
worker_type: G.1X
idle_timeout: 10
schema: "dbt_demo"
database: "dbt_demo"
session_provisioning_timeout_in_seconds: 120
location: "s3://dbt_demo_bucket/dbt_demo_data"
```

The table below describes all the options.

| Option | Description | Mandatory |
|---|---|---|
| project_name | The dbt project name. This must be the same as the one configured in the dbt project. | yes |
| type | The driver to use. | yes |
| query-comment | A string to inject as a comment in each query that dbt runs. | no |
| role_arn | The ARN of the interactive session role created as part of the CloudFormation template. | yes |
| region | The AWS Region where you run the data pipeline. | yes |
| workers | The number of workers of a defined workerType that are allocated when a job runs. | yes |
| worker_type | The type of predefined worker that is allocated when a job runs. Accepts a value of Standard, G.1X, or G.2X. | yes |
| schema | The schema used to organize data stored in Amazon S3. | yes |
| database | The database in Lake Formation. The database stores metadata tables in the Data Catalog. | yes |
| session_provisioning_timeout_in_seconds | The timeout in seconds for AWS Glue interactive session provisioning. | yes |
| location | The Amazon S3 location of your target data. | yes |
| idle_timeout | The AWS Glue session idle timeout in minutes. (The session stops after being idle for the specified amount of time.) | no |
| glue_version | The version of AWS Glue for this session to use. Currently, the only valid options are 2.0 and 3.0. The default value is 2.0. | no |
| security_configuration | The security configuration to use with this session. | no |
| connections | A comma-separated list of connections to use in the session. | no |
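
The keys in the example above are only the target options. In a complete `profiles.yml` they are nested under a profile name and an output name; a minimal sketch, where the `dbt_glue_demo` profile and `dev` output names are illustrative:

```yml
# profiles.yml (sketch) -- profile and output names are placeholders
dbt_glue_demo:
  target: dev
  outputs:
    dev:
      type: glue
      query-comment: This is a glue dbt example
      role_arn: arn:aws:iam::1234567890:role/GlueInteractiveSessionRole
      region: us-east-1
      workers: 2
      worker_type: G.1X
      idle_timeout: 10
      schema: "dbt_demo"
      database: "dbt_demo"
      session_provisioning_timeout_in_seconds: 120
      location: "s3://dbt_demo_bucket/dbt_demo_data"
```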

## Configs

### Configuring tables

When materializing a model as `table`, you may include several optional configs that are specific to the dbt-glue plugin, in addition to the standard [model configs](model-configs).

| Option  | Description                                        | Required?               | Example                  |
|---------|----------------------------------------------------|-------------------------|--------------------------|
| file_format | The file format to use when creating tables (`parquet`, `csv`, `json`, `text`, `jdbc` or `orc`). | Optional | `parquet`|
| partition_by  | Partition the created table by the specified columns. A directory is created for each partition. | Optional                | `date_day`              |
| clustered_by  | Each partition in the created table will be split into a fixed number of buckets by the specified columns. | Optional               | `country_code`              |
| buckets  | The number of buckets to create while clustering. | Required if `clustered_by` is specified                | `8`              |
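
As an illustration, a model combining these options might be configured like the sketch below; the model body and column names are placeholders:

```sql
{{ config(
    materialized='table',
    file_format='parquet',
    partition_by=['date_day'],
    clustered_by=['country_code'],
    buckets=8
) }}

-- One directory per date_day partition; within each partition,
-- rows are hashed into 8 buckets by country_code.
select * from {{ ref('events') }}
```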

## Incremental models

dbt seeks to offer useful, intuitive modeling abstractions by means of its built-in configurations and materializations. 

For that reason, the dbt-glue plugin leans heavily on the [`incremental_strategy` config](configuring-incremental-models#about-incremental_strategy). This config tells the incremental materialization how to build models in runs beyond their first. It can be set to one of three values:
 - **`append`**: Insert new records without updating or overwriting any existing data.
 - **`insert_overwrite`**: If `partition_by` is specified, overwrite partitions in the table with new data. If no `partition_by` is specified, overwrite the entire table with new data.
 - **`merge`** (Apache Hudi only): Match records based on a `unique_key`; update old records, insert new ones. (If no `unique_key` is specified, all new data is inserted, similar to `append`.)

Each of these strategies has its pros and cons, which we'll discuss below. As with any model config, `incremental_strategy` may be specified in `dbt_project.yml` or within a model file's `config()` block.

**Note:**
The default strategy for dbt-glue is **`insert_overwrite`**; see the `dbt_project.yml` sketch below for setting it explicitly.
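
For example, to pin the strategy explicitly (or switch it project-wide), it can be set in `dbt_project.yml`; a sketch, where `my_project` stands in for your project name:

```yml
# dbt_project.yml (sketch)
models:
  my_project:
    # Applies to every incremental model in the project unless a
    # model-level config() block overrides it.
    +incremental_strategy: "insert_overwrite"
```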

### The `append` strategy

Following the `append` strategy, dbt will perform an `insert into` statement with all new data. The appeal of this strategy is that it is straightforward and functional across all platforms, file types, connection methods, and Apache Spark versions. However, this strategy _cannot_ update, overwrite, or delete existing data, so it is likely to insert duplicate records for many data sources.

#### Source code
```sql
{{ config(
    materialized='incremental',
    incremental_strategy='append',
) }}

--  All rows returned by this query will be appended to the existing table

select * from {{ ref('events') }}
{% if is_incremental() %}
  where event_ts > (select max(event_ts) from {{ this }})
{% endif %}
```
#### Run Code
```sql
create temporary view spark_incremental__dbt_tmp as

    select * from analytics.events

    where event_ts > (select max(event_ts) from analytics.spark_incremental)

;

insert into table analytics.spark_incremental
    select `date_day`, `users` from spark_incremental__dbt_tmp
```

### The `insert_overwrite` strategy

This strategy is most effective when specified alongside a `partition_by` clause in your model config. dbt will run an [atomic `insert overwrite` statement](https://spark.apache.org/docs/latest/sql-ref-syntax-dml-insert-overwrite-table.html) that dynamically replaces all partitions included in your query. Be sure to re-select _all_ of the relevant data for a partition when using this incremental strategy.

If no `partition_by` is specified, then the `insert_overwrite` strategy will atomically replace all contents of the table, overriding all existing data with only the new records. The column schema of the table remains the same, however. This can be desirable in some limited circumstances, since it minimizes downtime while the table contents are overwritten. The operation is comparable to running `truncate` + `insert` on other databases. For atomic replacement of Delta-formatted tables, use the `table` materialization (which runs `create or replace`) instead.

#### Source Code
```sql
{{ config(
    materialized='incremental',
    partition_by=['date_day'],
    file_format='parquet'
) }}

/*
  Every partition returned by this query will be overwritten
  when this model runs
*/

with new_events as (

    select * from {{ ref('events') }}

    {% if is_incremental() %}
    where date_day >= date_add(current_date, -1)
    {% endif %}

)

select
    date_day,
    count(*) as users

from new_events
group by 1
```

#### Run Code

```sql
create temporary view spark_incremental__dbt_tmp as

    with new_events as (

        select * from analytics.events


        where date_day >= date_add(current_date, -1)


    )

    select
        date_day,
        count(*) as users

    from new_events
    group by 1

;

insert overwrite table analytics.spark_incremental
    partition (date_day)
    select `date_day`, `users` from spark_incremental__dbt_tmp
```

Specifying `insert_overwrite` as the incremental strategy is optional, since it's the default strategy used when none is specified.

### The `merge` strategy

**Usage notes:** The `merge` incremental strategy requires:
- `file_format: hudi`
- AWS Glue runtime 2 with the Hudi libraries as extra JARs

You can add the Hudi libraries as extra JARs on the classpath using the `extra_jars` option in your `profiles.yml`.
Here is an example:
```yml
extra_jars: "s3://dbt-glue-hudi/Dependencies/hudi-spark.jar,s3://dbt-glue-hudi/Dependencies/spark-avro_2.11-2.4.4.jar"
```

dbt will run an [atomic `merge` statement](https://hudi.apache.org/docs/writing_data#spark-datasource-writer) which looks nearly identical to the default merge behavior on Snowflake and BigQuery. If a `unique_key` is specified (recommended), dbt will update old records with values from new records that match on the key column. If a `unique_key` is not specified, dbt will forgo match criteria and simply insert all new records (similar to `append` strategy).

#### Source Code
```sql
{{ config(
    materialized='incremental',
    incremental_strategy='merge',
    unique_key='user_id',
    file_format='hudi'
) }}

with new_events as (

    select * from {{ ref('events') }}

    {% if is_incremental() %}
    where date_day >= date_add(current_date, -1)
    {% endif %}

)

select
    user_id,
    max(date_day) as last_seen

from new_events
group by 1
```



## Persisting model descriptions

Relation-level docs persistence has been supported since dbt v0.17.0. For more
information on configuring docs persistence, see [the docs](resource-configs/persist_docs).

When the `persist_docs` option is configured appropriately, you'll be able to
see model descriptions in the `Comment` field of `describe [table] extended`
or `show table extended in [database] like '*'`.
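
As an example, relation-level persistence can be enabled in `dbt_project.yml`; a sketch, with `my_project` as a placeholder (column-level comments are not yet supported on Glue; see the Caveats below):

```yml
# dbt_project.yml (sketch)
models:
  my_project:
    # Persist each model's description as a table-level comment.
    +persist_docs:
      relation: true
```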

## Always `schema`, never `database`

Apache Spark uses the terms "schema" and "database" interchangeably. dbt understands
`database` to exist at a higher level than `schema`. As such, you should _never_
use or set `database` as a node config or in the target profile when running dbt-glue.

If you want to control the schema/database in which dbt will materialize models,
use the `schema` config and `generate_schema_name` macro _only_.
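
For reference, dbt's built-in `generate_schema_name` macro behaves roughly like the sketch below; placing an override in your project's `macros/` directory is how you customize it:

```sql
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- set default_schema = target.schema -%}
    {%- if custom_schema_name is none -%}
        {# No custom schema configured: use the target schema as-is. #}
        {{ default_schema }}
    {%- else -%}
        {# Default dbt behavior: append the custom schema to the target schema. #}
        {{ default_schema }}_{{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
```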

## Caveats

### Supported Functionality

Most dbt Core functionality is supported, but some features are only available with Apache Hudi.

Apache Hudi-only features:
1. Incremental model updates by `unique_key` instead of `partition_by` (see [`merge` strategy](glue-configs#the-merge-strategy))


Some dbt features, available on the core adapters, are not yet supported on Glue:
1. [Persisting](persist_docs) column-level descriptions as database comments
2. [Snapshots](snapshots)

---
For more information on dbt:
- Read the [introduction to dbt](https://docs.getdbt.com/docs/introduction).
- Read the [dbt viewpoint](https://docs.getdbt.com/docs/about/viewpoint).
- Join the [dbt community](http://community.getdbt.com/).
---

## Security

See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.

## License

This project is licensed under the Apache-2.0 License.



            
