- **Name:** macrometa-target-bigquery
- **Version:** 1.0.0
- **Summary:** Macrometa target bigquery connector for loading data to BigQuery
- **Home page:** https://github.com/Macrometacorp/macrometa-target-bigquery
- **Author:** Macrometa
- **Upload time:** 2023-08-01 08:48:47

# macrometa-target-bigquery

Macrometa target bigquery connector that loads data into BigQuery following the [Singer spec](https://github.com/singer-io/getting-started/blob/master/docs/SPEC.md).


## How to use it

If you want to run this target connector independently, please read further.

## Install

First, make sure Python 3 is installed on your system or follow these
installation instructions for [Mac](http://docs.python-guide.org/en/latest/starting/install3/osx/) or
[Ubuntu](https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-ubuntu-16-04).

It's recommended to use a virtualenv:

```bash
make venv
```
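
If you prefer not to use the Makefile, a minimal manual setup is sketched below (assuming Python 3, `venv`, and `pip` are available; the virtualenv path is just an example):

```bash
# Create and activate a dedicated virtualenv, then install the published package from PyPI.
python3 -m venv ~/venvs/macrometa-target-bigquery
source ~/venvs/macrometa-target-bigquery/bin/activate
pip install macrometa-target-bigquery
```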

### To run

Like any other target connector that follows the Singer specification:

`some-singer-source(tap) | macrometa-target-bigquery --config [config.json]`

It reads incoming messages from STDIN and uses the properties in `config.json` to upload data into BigQuery.

**Note**: To avoid version conflicts, run sources (taps) and targets in separate virtual environments.
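
For example, a full pipeline run might look like the sketch below. The tap name, config file names, and virtualenv paths are placeholders; the point is that the tap and the target each run from their own virtualenv and are connected with a pipe:

```bash
# Hypothetical tap piped into macrometa-target-bigquery; adjust names and paths to your setup.
~/venvs/some-singer-tap/bin/some-singer-tap --config tap_config.json \
  | ~/venvs/macrometa-target-bigquery/bin/macrometa-target-bigquery --config config.json
```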

### Configuration settings

Running the target connector requires a `config.json` file. An example with the minimal settings:

```json
{
  "project_id": "mygbqproject"
}
```

Full list of options in `config.json`:

| Property                                | Type      | Required?    | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| -------------------------------------   | --------- | ------------ | ---------------------------------------------------------------                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| project_id                              | String    | Yes          | BigQuery project                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| location                                | String    |              | Region where BigQuery stores your dataset                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
| default_target_schema                   | String    |              | Name of the schema where the tables will be created. If `schema_mapping` is not defined then every stream sent by the tap is loaded into this schema.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| default_target_schema_select_permission | String    |              | Grant USAGE privilege on newly created schemas and grant SELECT privilege on newly created tables. |
| schema_mapping                          | Object    |              | Useful if you want to load multiple streams from one source to multiple BigQuery schemas.<br><br>If the source sends the `stream_id` in `<schema_name>-<table_name>` format then this option overwrites the `default_target_schema` value. Note that using `schema_mapping` you can overwrite the `default_target_schema_select_permission` value to grant SELECT permissions to different groups per schema, or optionally you can create indices automatically for the replicated tables. |
| batch_size_rows                         | Integer   |              | (Default: 100000) Maximum number of rows in each batch. At the end of each batch, the rows in the batch are loaded into BigQuery.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| batch_wait_limit_seconds                | Integer   |              | (Default: None) Maximum time to wait for batch to reach `batch_size_rows`. |
| flush_all_streams                       | Boolean   |              | (Default: False) Flush and load every stream into BigQuery when one batch is full. Warning: This may trigger the transfer of data with a low number of records, and may cause performance problems. |
| parallelism                             | Integer   |              | (Default: 0) The number of threads used to flush tables. 0 will create a thread for each stream, up to `max_parallelism`. -1 will create a thread for each CPU core. Any other positive number will create that number of threads, up to `max_parallelism`. |
| max_parallelism                         | Integer   |              | (Default: 16) Max number of parallel threads to use when flushing tables.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
| add_metadata_columns                    | Boolean   |              | (Default: False) Metadata columns add extra row-level information about data ingestion (e.g. when the row was read from the source, when it was inserted or deleted in BigQuery, etc.). Metadata columns are created automatically by adding extra columns to the tables with the column prefix `_sdc_`. The column names follow the Stitch naming conventions documented at https://www.stitchdata.com/docs/data-structure/integration-schemas#sdc-columns. Enabling metadata columns will flag deleted rows by setting the `_sdc_deleted_at` metadata column. Without the `add_metadata_columns` option, deleted rows from sources will not be recognisable in BigQuery. |
| hard_delete                             | Boolean   |              | (Default: False) When the `hard_delete` option is true, DELETE SQL commands are performed in BigQuery to delete rows in tables. This is achieved by continuously checking the `_sdc_deleted_at` metadata column sent by the source. Because deleting rows requires metadata columns, the `hard_delete` option automatically enables the `add_metadata_columns` option as well. |
| hard_delete_mapping                     | Object    |              | This is useful if you want to set `hard_delete` for some streams but not others. This should contain a mapping of `stream_id: <Boolean>`. This boolean overrides the default behaviour set with `hard_delete` for that stream. If a stream is not defined in `hard_delete_mapping` it behaves according to `hard_delete`. When hard delete is enabled for a stream, DELETE SQL commands are performed in BigQuery to delete rows in tables. This is achieved by continuously checking the `_sdc_deleted_at` metadata column sent by the singer source. Because deleting rows requires metadata columns, enabling hard delete automatically enables the `add_metadata_columns` option as well. |
| data_flattening_max_level               | Integer   |              | (Default: 0) Object type RECORD items from sources can be loaded into VARIANT columns as JSON (default) or we can flatten the schema by creating columns automatically.<br><br>When value is 0 (default) then flattening functionality is turned off.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| primary_key_required                    | Boolean   |              | (Default: True) Log based and Incremental replications on tables with no Primary Key cause duplicates when merging UPDATE events. When set to true, stop loading data if no Primary Key is defined.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| validate_records                        | Boolean   |              | (Default: False) Validate every single record message against the corresponding JSON schema. This option is disabled by default and invalid RECORD messages will fail only at load time by BigQuery. Enabling this option will detect invalid records earlier but could cause performance degradation. |
| temp_schema                             | String    |              | Name of the schema where the temporary tables will be created. Will default to the same schema as the target tables                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| use_partition_pruning                   | Boolean   |              | (Default: False) If `true` then BigQuery table partition pruning will be used for tables which have partitioning enabled. This partitioning should be on a column which is immutable such as an integer primary key or a `created_at` column. The partitioning should be set up manually by the user. This feature can dramatically reduce the cost of each `MERGE` for large tables.                                                                                                                                                                                                                                                                                                                                                                                                                                   |
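
For reference, a more complete and purely illustrative `config.json` built only from the options documented above might look like this; the project, dataset, and stream names are placeholders:

```json
{
  "project_id": "mygbqproject",
  "location": "US",
  "default_target_schema": "raw_data",
  "batch_size_rows": 50000,
  "batch_wait_limit_seconds": 300,
  "add_metadata_columns": true,
  "hard_delete": false,
  "hard_delete_mapping": {
    "public-orders": true
  },
  "use_partition_pruning": true
}
```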


### Schema Changes

This Macrometa target connector follows the [PipelineWise specification](https://transferwise.github.io/pipelinewise/user_guide/schema_changes.html) for schema changes, except for column versioning, because of the way BigQuery works.

BigQuery does not allow column renames, so a column modification works like this instead:

#### Versioning columns

The target connector versions columns **when a data type change is detected** in the source
table. Versioning a column means that the old column with the old data type is kept
and a new column is created with the new data type, named by adding a type-dependent suffix
(and, for structs and arrays, also a timestamp) to the original column name. This new column is added to the table.

For example, if the data type of ``COLUMN_THREE`` changes from ``INTEGER`` to ``VARCHAR``,
the target connector will replicate data in this order:

1. Before changing the data type, ``COLUMN_THREE`` is ``INTEGER`` just like in the source table:

| **COLUMN_ONE** | **COLUMN_TWO** | **COLUMN_THREE** (INTEGER) |
|----------------|----------------|----------------------------|
| text           | text           | 1                          |
| text           | text           | 2                          |
| text           | text           | 3                          |

2. After the data type change, ``COLUMN_THREE`` remains ``INTEGER`` with
the old data, and a new ``COLUMN_THREE__st`` column is created with ``STRING`` type that keeps
data only from after the change.

| **COLUMN_ONE** | **COLUMN_TWO** | **COLUMN_THREE** (INTEGER) | **COLUMN_THREE__st** (VARCHAR) |
|----------------|----------------|----------------------------|--------------------------------|
| text           | text           | 111                        |                                |
| text           | text           | 222                        |                                |
| text           | text           | 333                        |                                |
| text           | text           |                            | 444-ABC                        |
| text           | text           |                            | 555-DEF                        |

> **Warning:** Note the ``NULL`` values in the ``COLUMN_THREE`` and ``COLUMN_THREE__st`` columns.
> **Historical values are not converted to the new data types!**
> If you need the actual representation of the table after data type changes then
> you need to resync the table.


#### Column clustering

This target connector tries to speed up querying of the resulting tables by clustering each
table on the primary key of the stream.

The clustering keys are chosen and ordered to match the
`key_properties` columns in the stream's `SCHEMA` messages.

BigQuery places a limit on the number of clustering keys (4 as of 2022-08-02), so if the
number of clustering keys is greater than 4, this target will simply use the first 4
columns defined in the `key_properties` property.
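
As an illustration, for a stream whose `SCHEMA` message looks like the hypothetical example below (the stream and column names are placeholders), the resulting table would be clustered on `customer_id` and then `order_id`:

```json
{
  "type": "SCHEMA",
  "stream": "public-orders",
  "key_properties": ["customer_id", "order_id"],
  "schema": {
    "properties": {
      "customer_id": {"type": "integer"},
      "order_id": {"type": "integer"},
      "status": {"type": ["null", "string"]}
    }
  }
}
```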

### To run tests:

1. Define the environment variables required to run the tests:
```bash
  export GOOGLE_APPLICATION_CREDENTIALS=<credentials-json-file>
  export MACROMETA_TARGET_BIGQUERY_PROJECT=<bigquery project to run your tests on>
  export MACROMETA_TARGET_BIGQUERY_SCHEMA=<temporary schema for running the tests>
```

2. Install python dependencies in a virtual env:
```bash
make venv
```

3. To run unit tests:
```bash
make unit_test
```

4. To run integration tests:
```bash
make integration_test
```

### To run pylint:

1. Install python dependencies and run the python linter:
```bash
make venv pylint
```

## License

Apache License Version 2.0

See [LICENSE](LICENSE) to see the full text.

            
