# target-db2
`target-db2` is a Singer target for IBM Db2.

Built with the [Meltano Target SDK](https://sdk.meltano.com) by the [Infostrux Team](https://www.infostrux.com/).
## Installation
Install from PyPI:
```bash
pipx install target-db2
```
Install from GitHub:
```bash
pipx install git+https://github.com/Infostrux-Solutions/target-db2.git@main
```
### Install via Meltano Configuration (meltano.yml)
```yaml
loaders:
- name: target-db2
  namespace: target_db2
  # uncomment one of the following
  # pip_url: git+https://github.com/Infostrux-Solutions/target-db2.git@main
  # pip_url: target-db2
```
_Complete the Meltano installation with the appropriate configuration settings described in the [Settings](#settings) section below._
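
If you prefer to manage configuration through the Meltano CLI rather than editing `meltano.yml` directly, the required connection settings from the [Settings](#settings) table below can be supplied like this (a minimal sketch; all values shown are placeholders):

```bash
# Placeholder values; see the Settings table for the full list of options.
meltano config target-db2 set host db2.example.com
meltano config target-db2 set port 50000
meltano config target-db2 set user db2inst1
meltano config target-db2 set database testdb
# Use a real secret here; shown as a placeholder only.
meltano config target-db2 set password 'REDACTED'
```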
## Configuration
<!--
Developer TODO: Each time the project's version is bumped, recreate these sections
1. Capabilities
2. Settings
3. Supported Python Versions
This section can be created by copy-pasting the CLI output from:
```
target-db2 --about --format=markdown
```
-->
## Capabilities
* `about`
* `stream-maps`
* `schema-flattening`
* `validate-records`
## Settings
| Setting | Required | Default | Description |
|:--------|:--------:|:-------:|:------------|
| host | True | None | IBM Db2 Database Host |
| port | True | None | IBM Db2 Database Port |
| user | True | None | IBM Db2 Database User Name |
| password | True | None | IBM Db2 Database User Password |
| database | True | None | IBM Db2 Database Name |
| varchar_size | False | None | Field size for the VARCHAR type. Defaults to 10000. <BR/>Since JSON values are serialized to VARCHAR, <BR/>it may be necessary to increase this value. <BR/>Maximum possible value: 32764 |
| add_record_metadata | False | None | Add metadata to records. |
| load_method | False | TargetLoadMethods.APPEND_ONLY | The method to use when loading data into the destination. `append-only` will always write all input records whether those records already exist or not. `upsert` will update existing records and insert new records. `overwrite` will delete all existing records and insert all input records. |
| batch_size_rows | False | None | Maximum number of rows in each batch. |
| validate_records | False | 1 | Whether to validate the schema of the incoming streams. |
| stream_maps | False | None | Config object for stream maps capability. For more information check out [Stream Maps](https://sdk.meltano.com/en/latest/stream_maps.html). |
| stream_map_config | False | None | User-defined config values to be used within map expressions. |
| faker_config | False | None | Config for the [`Faker`](https://faker.readthedocs.io/en/master/) instance variable `fake` used within map expressions. Only applicable if the plugin specifies `faker` as an additional dependency (through the `singer-sdk` `faker` extra or directly). |
| faker_config.seed | False | None | Value to seed the Faker generator for deterministic output: https://faker.readthedocs.io/en/master/#seeding-the-generator |
| faker_config.locale | False | None | One or more LCID locale strings to produce localized output for: https://faker.readthedocs.io/en/master/#localization |
| flattening_enabled | False | None | 'True' to enable schema flattening and automatically expand nested properties. |
| flattening_max_depth | False | None | The max depth to flatten schemas. |
A full list of supported settings and capabilities is available by running: `target-db2 --about`
## Supported Python Versions
* 3.8
* 3.9
* 3.10
* 3.11
* 3.12
### Configure using environment variables
This Singer target will automatically import any environment variables defined in the working directory's
`.env` file if `--config=ENV` is provided. Config values will be read from matching environment variables
set either in the terminal context or in the `.env` file.
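
For example, assuming the Singer SDK convention of upper-cased `TARGET_DB2_<SETTING>` environment variable names (an assumption; verify against your own setup), a run configured entirely from the environment could look like this:

```bash
# Assumed naming convention: <PLUGIN_NAME>_<SETTING_NAME>, upper-cased.
export TARGET_DB2_HOST=db2.example.com
export TARGET_DB2_PORT=50000
export TARGET_DB2_USER=db2inst1
export TARGET_DB2_PASSWORD='REDACTED'   # placeholder secret
export TARGET_DB2_DATABASE=testdb

# Read settings from the environment / .env file instead of a config file:
tap-carbon-intensity | target-db2 --config=ENV
```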
### Db2 Authentication and Authorization
Currently, only username / password (UID / PWD) based authentication is supported. If you need support for additional authentication mechanisms, please open an issue.
The username and password can be provided through `meltano.yml` or `target-db2`'s `config.json`. The user must have the following permissions in order to load data into Db2.
## Minimal Permissions Required on Db2
This library will perform the following actions on Db2:
* CREATE TABLE
* DROP TABLE
* ALTER TABLE ADD COLUMN
* ALTER TABLE ALTER COLUMN
* INSERT INTO TABLE
* MERGE INTO TABLE USING
* [OPTIONALLY] CREATE SCHEMA
_NOTE: `CREATE SCHEMA` is used to create a new schema where data will be loaded. If the target schema specified via `default_target_schema` already exists, this library will not issue a `CREATE SCHEMA` command._
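
As a rough, non-authoritative illustration of what such permissions could look like on Db2 LUW, the grants below would cover most of the actions listed above for a dedicated load user (`ETL_USER` and `TARGET_SCHEMA` are placeholders; adapt to your own security model):

```bash
# Hypothetical grants issued through the Db2 command line processor.
db2 "GRANT CONNECT, CREATETAB ON DATABASE TO USER ETL_USER"
db2 "GRANT CREATEIN, ALTERIN, DROPIN ON SCHEMA TARGET_SCHEMA TO USER ETL_USER"
```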
## Known Limitations & Issues
### Complex Data Structures (arrays & maps)
Complex values such as `dict` or `list` are JSON-encoded and stored as `VARCHAR`. The `VARCHAR` column has a default size of 10000, configurable via the `varchar_size` setting.

IBM Db2 allows VARCHAR columns of up to 32704 bytes.

This target currently does not write to CLOB fields; PRs welcome!
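
If your streams contain large JSON payloads, the limit can be raised, for example through Meltano (the value shown is illustrative; keep it within the maximum noted above):

```bash
# Raise the size of the VARCHAR column used for serialized JSON values.
meltano config target-db2 set varchar_size 32000
```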
## Usage
You can easily run `target-db2` by itself or in a pipeline using [Meltano](https://meltano.com/).
### Executing the Target Directly
_Note: the example below requires installing `tap-carbon-intensity`._
```bash
target-db2 --version
target-db2 --help
# Test using the "Carbon Intensity" sample:
tap-carbon-intensity | target-db2 --config /path/to/target-db2-config.json
```
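
The config file referenced above is a plain JSON document containing the settings listed in the [Settings](#settings) section. A minimal sketch (values and types are placeholders; check `target-db2 --about` for the expected types):

```bash
# Write a minimal config file with the required settings (placeholder values).
cat > target-db2-config.json <<'EOF'
{
  "host": "localhost",
  "port": 50000,
  "user": "db2inst1",
  "password": "REDACTED",
  "database": "testdb"
}
EOF
```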
## Developer Resources
Follow these instructions to contribute to this project.
### Initialize your Development Environment
```bash
pipx install poetry
poetry install
```
### Create and Run Tests
Create tests within the `tests` subfolder and then run:
```bash
poetry run pytest
```
You can also test the `target-db2` CLI interface directly using `poetry run`:
```bash
poetry run target-db2 --help
```
### Testing with [Meltano](https://meltano.com/)
_**Note:** This target will work in any Singer environment and does not require Meltano.
Examples here are for convenience and to streamline end-to-end orchestration scenarios._
This project ships with a custom `meltano.yml` project file, as well as a `docker-compose.yml` file to set up source and target systems for end-to-end testing.
Next, install Meltano (if you haven't already) and any needed plugins:
```bash
# Install meltano
pipx install meltano
# Initialize meltano within this directory
cd target-db2
meltano install
```
Now you can test and orchestrate using Meltano:
```bash
# Test invocation:
meltano invoke target-db2 --version
# OR run a test `elt` pipeline with the Carbon Intensity sample tap:
docker compose up -d
meltano run tap-carbon-intensity target-db2
# after testing is complete, remember to shut down the container resources
docker compose down
# OR run a test `elt` pipeline with the supplied postgresql
# source & sample data generator
docker compose up -d
python generate_postgresql_data.py
meltano run tap-postgres target-db2
# after testing is complete, remember to shut down the container resources
docker compose down
```
### SDK Dev Guide
See the [dev guide](https://sdk.meltano.com/en/latest/dev_guide.html) for more instructions on how to use the Meltano Singer SDK to
develop your own Singer taps and targets.