db-schemachange

- Name: db-schemachange
- Version: 0.1.1
- Home page: https://github.com/LTranData/db-schemachange
- Summary: A simple, lightweight Python-based Database Change Management tool for various databases
- Upload time: 2025-07-12 08:09:50
- Author: LTranData
- Requires Python: >=3.8
- License: Apache-2.0
- Requirements: Jinja2, PyYAML, structlog, marshmallow, sqlparse, pyarrow, psycopg, pymssql, mysql-connector-python, oracledb, snowflake-connector-python, databricks-sql-connector, databricks-sdk, pytest, pytest-cov
# db-schemachange

[![pytest](https://github.com/LTranData/db-schemachange/actions/workflows/master-pytest.yml/badge.svg)](https://github.com/LTranData/db-schemachange/actions/workflows/master-pytest.yml)
[![PyPI](https://img.shields.io/pypi/v/db-schemachange.svg)](https://pypi.org/project/db-schemachange)

## Overview

`db-schemachange` is a simple, lightweight Python-based tool to manage database objects for Databricks, Snowflake, MySQL, Postgres, SQL Server, and Oracle. It
follows an imperative-style approach to Database Change Management (DCM) and was inspired by
the [Flyway database migration tool](https://flywaydb.org). When combined with a version control system and a CI/CD
tool, database changes can be approved and deployed through a pipeline using modern software delivery practices. As such,
`db-schemachange` plays a critical role in enabling Database (or Data) DevOps.

For the complete list of changes made to schemachange check out the [CHANGELOG](CHANGELOG.md).

To learn more about making a contribution to schemachange, please see our [Contributing guide](.github/CONTRIBUTING.md).

## Installation options

You can install the `db-schemachange` package with all available connectors, or you can choose a specific one that suits your needs for a lighter installation.

```bash
pip install --upgrade db-schemachange # Install the package WITHOUT connectors
pip install --upgrade "db-schemachange[all]" # Install the package with all connectors
pip install --upgrade "db-schemachange[postgres]" # Install the package with Postgres connector
pip install --upgrade "db-schemachange[sqlserver]" # Install the package with SQL Server connector
pip install --upgrade "db-schemachange[mysql]" # Install the package with MySQL connector
pip install --upgrade "db-schemachange[oracle]" # Install the package with Oracle connector
pip install --upgrade "db-schemachange[snowflake]" # Install the package with Snowflake connector
pip install --upgrade "db-schemachange[databricks]" # Install the package with Databricks connector
```

## Table of Contents

- [Overview](#overview)
- [Installation options](#installation-options)
- [Project Structure](#project-structure)
- [Change Scripts](#change-scripts)
  - [Versioned Script Naming](#versioned-script-naming)
  - [Repeatable Script Naming](#repeatable-script-naming)
  - [Always Script Naming](#always-script-naming)
  - [Script Requirements](#script-requirements)
  - [Using Variables in Scripts](#using-variables-in-scripts)
    - [Secrets filtering](#secrets-filtering)
  - [Jinja templating engine](#jinja-templating-engine)
- [Change History Table](#change-history-table)
- [Configuration](#configuration)
  - [db-schemachange configuration](#db-schemachange-configuration)
    - [CLI usage](#cli-usage)
      - [deploy](#deploy)
      - [render](#render)
    - [YAML config file](#yaml-config-file)
  - [connections-config.yml](#connections-configyml)
- [Authentication](#authentication)
  - [Databricks](#databricks)
  - [MySQL](#mysql)
  - [Oracle](#oracle)
  - [Postgres](#postgres)
  - [Snowflake](#snowflake)
  - [SQL Server](#sql-server)
- [Yaml Jinja support](#yaml-jinja-support)
  - [env_var](#env_var)
- [Running schemachange](#running-schemachange)
  - [Prerequisites](#prerequisites)
  - [Running the Script](#running-the-script)
  - [Using Docker](#using-docker)
- [Maintainers](#maintainers)
- [Demo](#demo)

## Project Structure

```
(project_root)
|
|-- folder_1
    |-- V1.1.1__first_change.sql
    |-- V1.1.2__second_change.sql
    |-- R__sp_add_sales.sql
    |-- R__fn_get_timezone.sql
|-- folder_2
    |-- folder_3
        |-- V1.1.3__third_change.sql
        |-- R__fn_sort_ascii.sql
```

The `db-schemachange` folder structure is very flexible. The `project_root` folder is specified with the `-f`
or `--root-folder` argument. `db-schemachange` only pays attention to the filenames, not the paths. Therefore, under
the `project_root` folder you are free to arrange the change scripts any way you see fit. You can have as many
subfolders (and nested subfolders) as you would like.

## Change Scripts

### Versioned Script Naming

Versioned change scripts follow a similar naming convention to that used
by [Flyway Versioned Migrations](https://flywaydb.org/documentation/migrations#versioned-migrations). The script name
must follow this pattern (image taken
from [Flyway docs](https://flywaydb.org/documentation/migrations#versioned-migrations)):

<img src="images/flyway-naming-convention.png" alt="Flyway naming conventions" title="Flyway naming conventions" width="300" />

With the following rules for each part of the filename:

- **Prefix**: The letter 'V' for versioned change
- **Version**: A unique version number with dots or underscores separating as many number parts as you like
- **Separator**: \_\_ (two underscores)
- **Description**: An arbitrary description with words separated by underscores or spaces (cannot include two
  underscores)
- **Suffix**: .sql or .sql.jinja

For example, a script name that follows this convention is: `V1.1.1__first_change.sql`. As with Flyway, the unique
version string is very flexible. You just need to be consistent and always use the same convention, like 3 sets of
numbers separated by periods. Here are a few valid version strings:

- 1.1
- 1_1
- 1.2.3
- 1_2_3

Every script within a database folder must have a unique version number. `db-schemachange` will check for duplicate version
numbers and throw an error if it finds any. This helps to ensure that developers who are working in parallel don't
accidentally (re-)use the same version number.
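
The naming convention and the duplicate-version check can be sketched in Python (an illustration of the rules above, not the tool's actual implementation):

```python
import re
from collections import Counter

# Versioned scripts: V<version>__<description>.sql or .sql.jinja (sketch of the convention)
VERSIONED = re.compile(r"^V(?P<version>\d+(?:[._]\d+)*)__(?P<description>.+?)\.sql(?:\.jinja)?$")

def parse_version(filename):
    """Return the version string of a versioned script, or None for other files."""
    match = VERSIONED.match(filename)
    return match.group("version") if match else None

def find_duplicate_versions(filenames):
    """Return version strings that appear on more than one script."""
    versions = [v for v in map(parse_version, filenames) if v is not None]
    return [v for v, count in Counter(versions).items() if count > 1]

print(parse_version("V1.1.1__first_change.sql"))         # 1.1.1
print(find_duplicate_versions(
    ["V1.1__a.sql", "V1.1__b.sql", "V1.2__c.sql"]))      # ['1.1']
```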

### Repeatable Script Naming

Repeatable change scripts follow a similar naming convention to that used
by [Flyway Repeatable Migrations](https://flywaydb.org/documentation/concepts/migrations.html#repeatable-migrations). The
script name must follow this pattern (image taken
from [Flyway docs](https://flywaydb.org/documentation/concepts/migrations.html#repeatable-migrations)):

<img src="images/flyway-repeatable-naming-convention.png" alt="Flyway naming conventions" title="Flyway naming conventions" width="300" />

e.g.:

- R\_\_sp_add_sales.sql
- R\_\_fn_get_timezone.sql
- R\_\_fn_sort_ascii.sql

A repeatable change script is applied each time the utility is run, provided the file has changed since the last run.
Repeatable scripts are useful for maintaining code that always needs to be applied in its entirety, e.g. stored
procedures, functions, and view definitions.

Just like Flyway, within a single migration run, repeatable scripts are always applied after all pending versioned
scripts have been executed. Repeatable scripts are applied in alphabetical order of their description.

### Always Script Naming

Always change scripts are executed with every run of `db-schemachange`. This is an addition to the migration types
implemented by [Flyway](https://flywaydb.org/documentation/concepts/migrations.html#repeatable-migrations).
The script name must follow this pattern:

`A__Some_description.sql`

e.g.

- A\_\_add_user.sql
- A\_\_assign_roles.sql

This type of change script is useful for environment setup after cloning. Always scripts are always applied last.

### Script Requirements

`db-schemachange` is designed to be very lightweight and not impose too many limitations. Each change script can have any
number of SQL statements within it and must supply the necessary context, like catalog/database and schema names. `db-schemachange` will simply run the contents of each script against
the target database, in the correct order. After each script, `db-schemachange` will reset the context (catalog/database, schema) to the values used to configure the connector.

### Using Variables in Scripts

`db-schemachange` supports the Jinja engine for variable replacement. One important use of variables is to support
multiple environments (dev, test, prod) in a single database by dynamically changing the database name during
deployment. To use a variable in a change script, use this syntax anywhere in the script: `{{ variable1 }}`.

To pass variables to `db-schemachange`, check out the [Configuration](#configuration) section below. You can either use
the `--vars` command line parameter or the YAML config file `schemachange-config.yml`. For the command line version you
can pass variables like this: `--vars '{"variable1": "value", "variable2": "value2"}'`. This parameter accepts a flat
JSON object formatted as a string.

> _Nested objects and arrays don't make sense at this point and aren't supported._

`db-schemachange` will replace any variable placeholders before running your change script code and will throw an error if it
finds any variable placeholders that haven't been replaced.
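
For example, a versioned script might parameterise the target database name (the script name, schema, and `database_name` variable here are hypothetical):

```sql
-- V1.1.1__create_customers.sql
CREATE TABLE {{ database_name }}.SALES.CUSTOMERS
(
    ID   BIGINT,
    NAME VARCHAR(100)
);
```

Deploying with `--vars '{"database_name": "DEV_DB"}'` runs the statement against `DEV_DB.SALES.CUSTOMERS`, while the same script can target a different database in production.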

#### Secrets filtering

While many CI/CD tools already have the capability to filter secrets, it is best that any tool also avoids outputting
secrets to the console or logs. `db-schemachange` implements secrets filtering in a number of areas to ensure secrets are not
written to the console or logs. The only exception is the `render` command, which will display secrets.

A secret is just a standard variable that has been tagged as a secret. This is determined using a naming convention and
either of the following will tag a variable as a secret:

1. The variable name has the word `secret` in it.
   ```yaml
   config-version: 1
   vars:
     bucket_name: S3://...... # not a secret
     secret_key: 567576D8E # a secret
   ```
2. The variable is a child of a key named `secrets`.
   ```yaml
   config-version: 1
   vars:
     secrets:
       my_key: 567576D8E # a secret
     aws:
       bucket_name: S3://...... # not a secret
       secrets:
         encryption_key: FGDSUUEHDHJK # a secret
         us_east_1:
           encryption_key: sdsdsd # a secret
   ```
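
Taken together, the two rules amount to a simple check over a variable's key path, sketched here in Python (an illustration of the convention, not schemachange's actual code):

```python
def is_secret(key_path):
    """True if any key on the variable's path tags it as a secret.

    Rule 1: the variable name itself contains the word "secret".
    Rule 2: any ancestor key is named "secrets".
    """
    *parents, name = key_path
    return "secret" in name.lower() or "secrets" in (p.lower() for p in parents)

print(is_secret(["vars", "secret_key"]))                        # True (rule 1)
print(is_secret(["vars", "aws", "secrets", "encryption_key"]))  # True (rule 2)
print(is_secret(["vars", "aws", "bucket_name"]))                # False
```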

### Jinja templating engine

`db-schemachange` uses the Jinja templating engine internally and
supports: [expressions](https://jinja.palletsprojects.com/en/3.0.x/templates/#expressions), [macros](https://jinja.palletsprojects.com/en/3.0.x/templates/#macros), [includes](https://jinja.palletsprojects.com/en/3.0.x/templates/#include)
and [template inheritance](https://jinja.palletsprojects.com/en/3.0.x/templates/#template-inheritance).

These files can be stored in the root folder, but `db-schemachange` also provides a separate modules
folder via `--modules-folder`. This allows common logic to be stored outside of the main change scripts.

The Jinja auto-escaping feature is disabled in `db-schemachange`, because that feature is designed for cases where the
output language is HTML/XML. If you are using `db-schemachange` with untrusted inputs, you will need to handle escaping within
your change scripts.
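
For instance, a shared macro could live in the modules folder and be imported from a change script (the file and macro names here are hypothetical, and this assumes the modules folder is on the template search path):

```jinja
{#- modules/common.j2 -#}
{%- macro fq_table(database, schema, table) -%}
{{ database }}.{{ schema }}.{{ table }}
{%- endmacro -%}
```

A change script could then start with `{% from 'common.j2' import fq_table %}` and render a fully qualified name with `{{ fq_table('DEV_DB', 'SALES', 'CUSTOMERS') }}`.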

## Change History Table

`db-schemachange` records all applied change scripts in the change history table. By default, `db-schemachange` will attempt to
log all activities to the `METADATA.[SCHEMACHANGE].CHANGE_HISTORY` table (the exact form depends on the database you are using). The name and location of the change history
table can be overridden via a command line argument (`-c` or `--change-history-table`) or the `schemachange-config.yml`
file (`change-history-table`). The value passed to the parameter can be a one-, two-, or three-part name (e.g.
"TABLE_NAME", "SCHEMA_NAME.TABLE_NAME", or "DATABASE_NAME.SCHEMA_NAME.TABLE_NAME"). This can be used to support
multiple environments (dev, test, prod).

By default, `db-schemachange` will not try to create the change history table, and it will fail if the table does not exist.
This behavior can be altered by passing in the `--create-change-history-table` argument or adding
`create-change-history-table: true` to the `schemachange-config.yml` file. Even with the `--create-change-history-table`
parameter, `db-schemachange` will not attempt to create the database for the change history table. That must be created
before running `db-schemachange`.

The structure of the `CHANGE_HISTORY` table is as follows:

| Column Name    | Type          | Example                    |
| -------------- | ------------- | -------------------------- |
| VERSION        | VARCHAR(1000) | 1.1.1                      |
| DESCRIPTION    | VARCHAR(1000) | First change               |
| SCRIPT         | VARCHAR(1000) | V1.1.1\_\_first_change.sql |
| SCRIPT_TYPE    | VARCHAR(1000) | V                          |
| CHECKSUM       | VARCHAR(1000) | 38e5ba03b1a6d2...          |
| EXECUTION_TIME | BIGINT        | 4                          |
| STATUS         | VARCHAR(1000) | Success                    |
| INSTALLED_BY   | VARCHAR(1000) | DATABASE_USER              |
| INSTALLED_ON   | TIMESTAMP     | 2020-03-17 12:54:33.123    |

A new row will be added to this table every time a change script has been applied to the database. `db-schemachange` will use
this table to identify which changes have been applied to the database and will not apply the same version more than
once.
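
The apply/skip decision described above can be sketched as follows (a simplification; the checksum algorithm and the history representation here are assumptions, not the tool's actual implementation):

```python
import hashlib

def should_apply(script_name, script_type, content, history):
    """Decide whether to run a script, given prior CHANGE_HISTORY rows.

    history maps script name -> checksum recorded on the last successful run.
    """
    checksum = hashlib.sha256(content.encode()).hexdigest()
    if script_type == "V":                        # versioned: apply at most once
        return script_name not in history
    if script_type == "R":                        # repeatable: apply only when the file changed
        return history.get(script_name) != checksum
    return True                                   # "A" (always): apply on every run

history = {"V1.1.1__first_change.sql": "38e5ba03b1a6d2"}
print(should_apply("V1.1.1__first_change.sql", "V", "CREATE ...", history))   # False
print(should_apply("V1.1.2__second_change.sql", "V", "ALTER ...", history))   # True
```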

Here is the current schema DDL for the change history table (found in the [schemachange/cli.py](schemachange/cli.py)
script), in case you choose to create it manually and not use the `--create-change-history-table` parameter:

```sql
CREATE TABLE IF NOT EXISTS METADATA.[SCHEMACHANGE].CHANGE_HISTORY
(
    VERSION VARCHAR(1000),
    DESCRIPTION VARCHAR(1000),
    SCRIPT VARCHAR(1000),
    SCRIPT_TYPE VARCHAR(1000),
    CHECKSUM VARCHAR(1000),
    EXECUTION_TIME BIGINT,
    STATUS VARCHAR(1000),
    INSTALLED_BY VARCHAR(1000),
    INSTALLED_ON TIMESTAMP
)
```

## Configuration

### db-schemachange configuration

Schemachange-specific parameters can be supplied in two different ways (in order of priority):

1. Command Line Arguments
2. YAML config file

`vars` provided via command-line argument will be merged with vars provided via YAML config.
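
Given the stated priority order, a CLI-supplied value wins when the same key appears in both places; in Python terms the merge behaves like this (an illustrative sketch, not the tool's code):

```python
yaml_vars = {"database_name": "PROD_DB", "region": "us-east-1"}   # from schemachange-config.yml
cli_vars = {"database_name": "DEV_DB"}                            # from --vars '{"database_name": "DEV_DB"}'

merged = {**yaml_vars, **cli_vars}  # later dict wins on key conflicts
print(merged)  # {'database_name': 'DEV_DB', 'region': 'us-east-1'}
```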

#### CLI usage

##### deploy

This is the main command that runs the deployment process.

```bash
usage: schemachange deploy [-h] \
  [--config-folder CONFIG_FOLDER] \
  [--config-file-name CONFIG_FILE_NAME] \
  [-f ROOT_FOLDER] \
  [-m MODULES_FOLDER] \
  [--vars VARS] \
  [--db-type DB_TYPE] \
  [--connections-file-path CONNECTIONS_FILE_PATH] \
  [-c CHANGE_HISTORY_TABLE] \
  [--create-change-history-table] \
  [--query-tag QUERY_TAG] \
  [-v] \
  [-ac] \
  [--dry-run]
```

| Parameter                                                            | Description                                                                                                                                                                                                            |
| -------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| -h, --help                                                           | Show the help message and exit                                                                                                                                                                                         |
| --config-folder CONFIG_FOLDER                                        | The folder to look in for the schemachange config file (the default is the current working directory)                                                                                                                  |
| --config-file-name CONFIG_FILE_NAME                                  | The file name of the schemachange config file. (the default is schemachange-config.yml)                                                                                                                                |
| -f ROOT_FOLDER, --root-folder ROOT_FOLDER                            | The root folder for the database change scripts. The default is the current directory.                                                                                                                                 |
| -m MODULES_FOLDER, --modules-folder MODULES_FOLDER                   | The modules folder for jinja macros and templates to be used across multiple scripts                                                                                                                                   |
| --vars VARS                                                          | Define values for the variables to be replaced in change scripts, given in JSON format. Vars supplied via the command line will be merged with YAML-supplied vars (e.g. '{"variable1": "value1", "variable2": "value2"}') |
| -v, --verbose                                                        | Display verbose debugging details during execution. The default is 'False'.                                                                                                                                            |
| --db-type                                                            | Database type to run schemachange against. Should be one of [DATABRICKS, MYSQL, ORACLE, POSTGRES, SNOWFLAKE, SQL_SERVER]                                                                                               |
| --connections-file-path CONNECTIONS_FILE_PATH                        | YAML file containing connection details such as username, password, database, etc.                                                                                                                                     |
| -c CHANGE_HISTORY_TABLE, --change-history-table CHANGE_HISTORY_TABLE | Used to override the default name of the change history table (which is METADATA.[SCHEMACHANGE].CHANGE_HISTORY)                                                                                                        |
| --create-change-history-table                                        | Create the change history table if it does not exist. The default is 'False'.                                                                                                                                          |
| -ac, --autocommit                                                    | Enable autocommit feature for DML commands. The default is 'False'.                                                                                                                                                    |
| --dry-run                                                            | Run schemachange in dry run mode. The default is 'False'.                                                                                                                                                              |
| --query-tag                                                          | A string to include in the QUERY_TAG that is attached to every SQL statement executed.                                                                                                                                 |

##### render

This subcommand is used to render a single script to the console. It is intended to support the development and
troubleshooting of scripts that use features from the Jinja templating engine.

```bash
usage: schemachange render [-h] \
  [--config-folder CONFIG_FOLDER] \
  [-f ROOT_FOLDER] \
  [-m MODULES_FOLDER] \
  [--vars VARS] \
  [-v] script
```

| Parameter                                          | Description                                                                                                                               |
| -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| --config-folder CONFIG_FOLDER                      | The folder to look in for the schemachange-config.yml file (the default is the current working directory)                                 |
| -f ROOT_FOLDER, --root-folder ROOT_FOLDER          | The root folder for the database change scripts                                                                                           |
| -m MODULES_FOLDER, --modules-folder MODULES_FOLDER | The modules folder for jinja macros and templates to be used across multiple scripts                                                      |
| --vars VARS                                        | Define values for the variables to be replaced in change scripts, given in JSON format (e.g. {"variable1": "value1", "variable2": "value2"}) |
| -v, --verbose                                      | Display verbose debugging details during execution (the default is False)                                                                 |

#### YAML config file

By default, `db-schemachange` expects the YAML config file to be named `schemachange-config.yml`, located in the current
working directory. The file name can be overridden with the
`--config-file-name` [command-line argument](#cli-usage), and the folder with the
`--config-folder` [command-line argument](#cli-usage).

Here is the list of available configurations in the `schemachange-config.yml` file:

```yaml
# Database type
db-type: MYSQL

# Path to connection detail file
connections-file-path: null

# The root folder for the database change scripts
root-folder: "/path/to/folder"

# The modules folder for jinja macros and templates to be used across multiple scripts.
modules-folder: null

# Used to override the default name of the change history table (the default is METADATA.SCHEMACHANGE.CHANGE_HISTORY)
change-history-table: null

# Create the change history schema and table, if they do not exist (the default is False)
create-change-history-table: false

# Define values for the variables to be replaced in change scripts. Vars supplied via the command line will be merged into YAML-supplied vars
vars:
  var1: "value1"
  var2: "value2"
  secrets:
    var3: "value3" # This is considered a secret and will not be displayed in any output

# Enable autocommit feature for DML commands (the default is False)
autocommit: false

# Display verbose debugging details during execution (the default is False)
verbose: false

# Run schemachange in dry run mode (the default is False)
dry-run: false

# A string to include in the QUERY_TAG that is attached to every SQL statement executed
query-tag: "QUERY_TAG"
```

### connections-config.yml

The connection details for the target database are supplied via the `--connections-file-path` CLI parameter or the `connections-file-path` property in the YAML config file. Please refer to [Authentication](#authentication) for sample files.

## Authentication

`db-schemachange` supports many of the authentication methods offered by each database connector.
See the sample YAML files below for the content to pass via the `--connections-file-path` parameter.

### Databricks

Allowed parameters are listed in https://docs.databricks.com/aws/en/dev-tools/python-sql-connector#connection-class, plus a few other options

```yaml
server_hostname: "<ws_id>.cloud.databricks.com"
http_path: "/sql/1.0/warehouse/<warehouse_id>"
access_token: "<access_token>"
auth_type: "<oauth>"
credentials_provider:
  client_id: "<client_id>"
  client_secret: "<client_secret>"
password: <password>
username: <username>
session_configuration: # Spark session configuration parameters
  spark.sql.variable.substitute: true
http_headers:
  - !!python/tuple ["header_1", "value_1"]
  - !!python/tuple ["header_2", "value_2"]
catalog: <catalog>
schema: <schema>
use_cloud_fetch: false
user_agent_entry: "<application_name>"
use_inline_params: false
oauth_client_id: "<oauth_client_id>"
oauth_redirect_port: 443
```

### MySQL

Allowed parameters are listed in https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html, plus a few other options

```yaml
database: "<database>"
user: "<user>"
password: "<password>"
host: "<host>"
port: 3306
# ...others
```

### Oracle

Allowed parameters are listed in `oracledb/connection.py::connect` of the Python Oracle connector package `oracledb`

```yaml
user: "<user>"
password: "<password>"
host: "<host>"
port: "<port>"
service_name: "<service_name>"
# ...others
```

### Postgres

Allowed parameters are listed in https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS, plus a few other options in `psycopg/connection.py::connect` of the `psycopg` package

```yaml
host: "<host>"
port: "<port>"
dbname: "<dbname>"
user: "<user>"
password: "<password>"
# ...others
```

### Snowflake

Allowed parameters are listed in `snowflake/connector/connection.py::DEFAULT_CONFIGURATION` of the `snowflake-connector-python` package

```yaml
user: "<user>"
password: "<password>"
host: "<host>"
port: "<port>"
database: "<database>"
schema: "<schema>"
warehouse: "<warehouse>"
role: "<role>"
account: "<account>"
# ...others
```

### SQL Server

Allowed parameters are listed in `pymssql/_pymssql.pyi::connect` of the `pymssql` package

```yaml
server: "<server>"
user: "<user>"
password: "<password>"
database: "<database>"
# ...others
```

## Yaml Jinja support

The YAML config file supports the Jinja templating language and has a custom function `env_var` to access environment
variables. Jinja variables are not available at this point, since variables are supplied by the YAML file itself and have
not yet been loaded. Customisation of the YAML file can therefore only happen through values passed via environment variables.

### env_var

Provides access to environment variables. The function can be used in two different ways.

Return the value of the environment variable if it exists; otherwise return the default value.

```jinja
{{ env_var('<environmental_variable>', 'default') }}
```

Return the value of the environment variable if it exists; otherwise raise an error.

```jinja
{{ env_var('<environmental_variable>') }}
```
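
For example, a `schemachange-config.yml` could pull environment-specific values at load time (the environment variable names here are hypothetical):

```yaml
db-type: "{{ env_var('SCHEMACHANGE_DB_TYPE', 'MYSQL') }}"
root-folder: "{{ env_var('SCHEMACHANGE_ROOT', '.') }}"
vars:
  database_name: "{{ env_var('TARGET_DATABASE') }}" # raises an error if TARGET_DATABASE is unset
```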

## Running schemachange

### Prerequisites

In order to run schemachange you must have the following:

- You will need a recent version of Python 3 installed
- You will need to create the change history table used by schemachange in the database (
  see [Change History Table](#change-history-table) above for more details)
  - First, you will need to create a database/catalog to store your change history table (schemachange will not help you with
    this).
  - Second, you will need to create the change history schema and table. You can do this manually (
    see [Change History Table](#change-history-table) above for the DDL) or have schemachange create them by running
    it with the `--create-change-history-table` parameter (just make sure the user you're running
    schemachange with has privileges to create a schema and table in that database)
- You will need to create (or choose) a user account that has privileges to apply the changes in your change script
  - Don't forget that this user also needs the SELECT and INSERT privileges on the change history table

### Running the Script

`db-schemachange` is a single Python script located at [schemachange/cli.py](schemachange/cli.py). It can be executed as
follows:

```bash
python -m schemachange.cli [subcommand] [-h] \
  [--config-folder CONFIG_FOLDER] \
  [--config-file-name CONFIG_FILE_NAME] \
  [-f ROOT_FOLDER] \
  [-m MODULES_FOLDER] \
  [--vars VARS] \
  [--db-type DB_TYPE] \
  [--connections-file-path CONNECTIONS_FILE_PATH] \
  [-c CHANGE_HISTORY_TABLE] \
  [--create-change-history-table] \
  [--query-tag QUERY_TAG] \
  [-v] \
  [-ac] \
  [--dry-run]
```

Or, once installed from source or via PyPI, it can be executed as follows:

```bash
# Build library from source
pip install --upgrade build
pip install --upgrade -r requirements.txt
python -m build
pip install dist/db_schemachange-*-py3-none-any.whl

# Or install via PyPI
pip install --upgrade "db-schemachange[all]" # Install the package with all connectors
pip install --upgrade "db-schemachange[databricks]" # Or install the package with specific connector

# Run the command
schemachange [subcommand] [-h] \
  [--config-folder CONFIG_FOLDER] \
  [--config-file-name CONFIG_FILE_NAME] \
  [-f ROOT_FOLDER] \
  [-m MODULES_FOLDER] \
  [--vars VARS] \
  [--db-type DB_TYPE] \
  [--connections-file-path CONNECTIONS_FILE_PATH] \
  [-c CHANGE_HISTORY_TABLE] \
  [--create-change-history-table] \
  [--query-tag QUERY_TAG] \
  [-v] \
  [-ac] \
  [--dry-run]
```

1. Make sure you've completed the [Prerequisites](#prerequisites) steps above
1. Get a copy of this schemachange repository (either via a clone or download)
1. Open a shell and change directory to your copy of the schemachange repository
1. Run schemachange (see [Running the Script](#running-the-script) above) with your database connection details and
   respective demo project as the root folder (make sure you use the full path)

### Using Docker

```bash
docker run -it --rm \
  --name schemachange-script \
  -v "$PWD":/usr/src/schemachange \
  -w /usr/src/schemachange \
  python:3 /bin/bash -c "pip install --upgrade 'db-schemachange[all]' && schemachange [subcommand] [-h] [--config-folder CONFIG_FOLDER] [--config-file-name CONFIG_FILE_NAME] [-f ROOT_FOLDER] [-m MODULES_FOLDER] [--vars VARS] [--db-type DB_TYPE] [--connections-file-path CONNECTIONS_FILE_PATH] [-c CHANGE_HISTORY_TABLE] [--create-change-history-table] [--query-tag QUERY_TAG] [-v] [-ac] [--dry-run]"
```

## Maintainers

- Lam Tran (@LTranData)

## Demo

```bash
schemachange deploy \
  --verbose \
  --vars '{"secret":"abc"}' \
  --connections-file-path demo/mysql/config/connections-config.yml \
  --config-folder demo/mysql/config \
  --root-folder demo/mysql/scripts \
  --db-type MYSQL \
  --create-change-history-table
```

This helps ensure that developers who are working in parallel don't\naccidentally (re-)use the same version number.\n\n### Repeatable Script Naming\n\nRepeatable change scripts follow a similar naming convention to that used\nby [Flyway Repeatable Migrations](https://flywaydb.org/documentation/concepts/migrations.html#repeatable-migrations). The\nscript name must follow this pattern (image taken\nfrom [Flyway docs](https://flywaydb.org/documentation/concepts/migrations.html#repeatable-migrations)):\n\n<img src=\"images/flyway-repeatable-naming-convention.png\" alt=\"Flyway naming conventions\" title=\"Flyway naming conventions\" width=\"300\" />\n\ne.g.:\n\n- R\\_\\_sp_add_sales.sql\n- R\\_\\_fn_get_timezone.sql\n- R\\_\\_fn_sort_ascii.sql\n\nAll repeatable change scripts are applied each time the utility is run, if there is a change in the file.\nRepeatable scripts can be used for maintaining code that always needs to be applied in its entirety, e.g. stored\nprocedures, functions, and view definitions.\n\nJust like Flyway, within a single migration run, repeatable scripts are always applied after all pending versioned\nscripts have been executed. Repeatable scripts are applied in alphabetical order of their description.\n\n### Always Script Naming\n\nAlways change scripts are executed with every run of `db-schemachange`. This is an addition to the implementation\nof [Flyway Versioned Migrations](https://flywaydb.org/documentation/concepts/migrations.html#repeatable-migrations).\nThe script name must follow this pattern:\n\n`A__Some_description.sql`\n\ne.g.\n\n- A\\_\\_add_user.sql\n- A\\_\\_assign_roles.sql\n\nThis type of change script is useful for setting up an environment after cloning. Always scripts are always applied last.\n\n### Script Requirements\n\n`db-schemachange` is designed to be very lightweight and not impose too many limitations. 
Each change script can have any\nnumber of SQL statements within it and must supply the necessary context, like catalog/database and schema names. `db-schemachange` will simply run the contents of each script against\nthe target database, in the correct order. After each script, `db-schemachange` will \"reset\" the context (catalog/database, schema) to the values used to configure the connector.\n\n### Using Variables in Scripts\n\n`db-schemachange` supports the Jinja engine for variable replacement. One important use of variables is to support\nmultiple environments (dev, test, prod) in a single database by dynamically changing the database name during\ndeployment. To use a variable in a change script, use this syntax anywhere in the script: `{{ variable1 }}`.\n\nTo pass variables to `db-schemachange`, check out the [Configuration](#configuration) section below. You can either use\nthe `--vars` command line parameter or the YAML config file `schemachange-config.yml`. For the command line version you\ncan pass variables like this: `--vars '{\"variable1\": \"value\", \"variable2\": \"value2\"}'`. This parameter accepts a flat\nJSON object formatted as a string.\n\n> _Nested objects and arrays don't make sense at this point and aren't supported._\n\n`db-schemachange` will replace any variable placeholders before running your change script code and will throw an error if it\nfinds any variable placeholders that haven't been replaced.\n\n#### Secrets filtering\n\nWhile many CI/CD tools already have the capability to filter secrets, it is best that any tool also does not output\nsecrets to the console or logs. Schemachange implements secrets filtering in a number of areas to ensure secrets are not\nwritten to the console or logs. The only exception is the `render` command, which will display secrets.\n\nA secret is just a standard variable that has been tagged as a secret. 
This is determined using a naming convention, and\neither of the following will tag a variable as a secret:\n\n1. The variable name has the word `secret` in it.\n   ```yaml\n   config-version: 1\n   vars:\n     bucket_name: S3://...... # not a secret\n     secret_key: 567576D8E # a secret\n   ```\n2. The variable is a child of a key named `secrets`.\n   ```yaml\n   config-version: 1\n   vars:\n     secrets:\n       my_key: 567576D8E # a secret\n     aws:\n       bucket_name: S3://...... # not a secret\n       secrets:\n         encryption_key: FGDSUUEHDHJK # a secret\n         us_east_1:\n           encryption_key: sdsdsd # a secret\n   ```\n\n### Jinja templating engine\n\n`db-schemachange` uses the Jinja templating engine internally and\nsupports: [expressions](https://jinja.palletsprojects.com/en/3.0.x/templates/#expressions), [macros](https://jinja.palletsprojects.com/en/3.0.x/templates/#macros), [includes](https://jinja.palletsprojects.com/en/3.0.x/templates/#include)\nand [template inheritance](https://jinja.palletsprojects.com/en/3.0.x/templates/#template-inheritance).\n\nThese files can be stored in the root folder, but `db-schemachange` also provides a separate modules\nfolder `--modules-folder`. This allows common logic to be stored outside of the main change scripts.\n\nThe Jinja auto-escaping feature is disabled in `db-schemachange`; this feature in Jinja is designed for cases where the\noutput language is HTML/XML. So if you are using `db-schemachange` with untrusted inputs, you will need to handle escaping within\nyour change scripts.\n\n## Change History Table\n\n`db-schemachange` records all applied change scripts to the change history table. By default, `db-schemachange` will attempt to\nlog all activities to the `METADATA.[SCHEMACHANGE].CHANGE_HISTORY` table, based on the database you are using. 
The name and location of the change history\ntable can be overridden via a command line argument (`-c` or `--change-history-table`) or the `schemachange-config.yml`\nfile (`change-history-table`). The value passed to the parameter can have a one-, two-, or three-part name (e.g.\n\"TABLE_NAME\", \"SCHEMA_NAME.TABLE_NAME\", or \"DATABASE_NAME.SCHEMA_NAME.TABLE_NAME\"). This can be used to support\nmultiple environments (dev, test, prod).\n\nBy default, `db-schemachange` will not try to create the change history table, and it will fail if the table does not exist.\nThis behavior can be altered by passing in the `--create-change-history-table` argument or adding\n`create-change-history-table: true` to the `schemachange-config.yml` file. Even with the `--create-change-history-table`\nparameter, `db-schemachange` will not attempt to create the database for the change history table. That must be created\nbefore running `db-schemachange`.\n\nThe structure of the `CHANGE_HISTORY` table is as follows:\n\n| Column Name    | Type          | Example                    |\n| -------------- | ------------- | -------------------------- |\n| VERSION        | VARCHAR(1000) | 1.1.1                      |\n| DESCRIPTION    | VARCHAR(1000) | First change               |\n| SCRIPT         | VARCHAR(1000) | V1.1.1\\_\\_first_change.sql |\n| SCRIPT_TYPE    | VARCHAR(1000) | V                          |\n| CHECKSUM       | VARCHAR(1000) | 38e5ba03b1a6d2...          |\n| EXECUTION_TIME | BIGINT        | 4                          |\n| STATUS         | VARCHAR(1000) | Success                    |\n| INSTALLED_BY   | VARCHAR(1000) | DATABASE_USER              |\n| INSTALLED_ON   | TIMESTAMP     | 2020-03-17 12:54:33.123    |\n\nA new row will be added to this table every time a change script has been applied to the database. 
`db-schemachange` will use\nthis table to identify which changes have been applied to the database and will not apply the same version more than\nonce.\n\nHere is the current schema DDL for the change history table (found in the [schemachange/cli.py](schemachange/cli.py)\nscript), in case you choose to create it manually and not use the `--create-change-history-table` parameter:\n\n```sql\nCREATE TABLE IF NOT EXISTS METADATA.[SCHEMACHANGE].CHANGE_HISTORY\n(\n    VERSION VARCHAR(1000),\n    DESCRIPTION VARCHAR(1000),\n    SCRIPT VARCHAR(1000),\n    SCRIPT_TYPE VARCHAR(1000),\n    CHECKSUM VARCHAR(1000),\n    EXECUTION_TIME BIGINT,\n    STATUS VARCHAR(1000),\n    INSTALLED_BY VARCHAR(1000),\n    INSTALLED_ON TIMESTAMP\n)\n```\n\n## Configuration\n\n### db-schemachange configuration\n\nSchemachange-specific parameters can be supplied in two different ways (in order of priority):\n\n1. Command Line Arguments\n2. YAML config file\n\n`vars` provided via command-line argument will be merged with vars provided via YAML config.\n\n#### CLI usage\n\n##### deploy\n\nThis is the main command that runs the deployment process.\n\n```bash\nusage: schemachange deploy [-h] \\\n  [--config-folder CONFIG_FOLDER] \\\n  [--config-file-name CONFIG_FILE_NAME] \\\n  [-f ROOT_FOLDER] \\\n  [-m MODULES_FOLDER] \\\n  [--vars VARS] \\\n  [--db-type DB_TYPE] \\\n  [--connections-file-path CONNECTIONS_FILE_PATH] \\\n  [-c CHANGE_HISTORY_TABLE] \\\n  [--create-change-history-table] \\\n  [--query-tag QUERY_TAG] \\\n  [-v] \\\n  [-ac] \\\n  [--dry-run]\n```\n\n| Parameter                                                            | Description                                                                                                                                                                                                            |\n| -------------------------------------------------------------------- | 
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| -h, --help                                                           | Show the help message and exit                                                                                                                                                                                         |\n| --config-folder CONFIG_FOLDER                                        | The folder to look in for the schemachange config file (the default is the current working directory)                                                                                                                  |\n| --config-file-name CONFIG_FILE_NAME                                  | The file name of the schemachange config file (the default is schemachange-config.yml)                                                                                                                                 |\n| -f ROOT_FOLDER, --root-folder ROOT_FOLDER                            | The root folder for the database change scripts. The default is the current directory.                                                                                                                                 |\n| -m MODULES_FOLDER, --modules-folder MODULES_FOLDER                   | The modules folder for jinja macros and templates to be used across multiple scripts                                                                                                                                   |\n| --vars VARS                                                          | Define values for the variables to be replaced in change scripts, given in JSON format. Vars supplied via the command line will be merged with YAML-supplied vars (e.g. 
'{\"variable1\": \"value1\", \"variable2\": \"value2\"}') |\n| -v, --verbose                                                        | Display verbose debugging details during execution. The default is 'False'.                                                                                                                                            |\n| --db-type                                                            | Database type to run schemachange against. Should be one of [DATABRICKS, MYSQL, ORACLE, POSTGRES, SNOWFLAKE, SQL_SERVER]                                                                                               |\n| --connections-file-path CONNECTIONS_FILE_PATH                        | YAML file for connection details such as username, password, database,...                                                                                                                                              |\n| -c CHANGE_HISTORY_TABLE, --change-history-table CHANGE_HISTORY_TABLE | Used to override the default name of the change history table (which is METADATA.[SCHEMACHANGE].CHANGE_HISTORY)                                                                                                        |\n| --create-change-history-table                                        | Create the change history table if it does not exist. The default is 'False'.                                                                                                                                          |\n| -ac, --autocommit                                                    | Enable autocommit feature for DML commands. The default is 'False'.                                                                                                                                                    |\n| --dry-run                                                            | Run schemachange in dry run mode. The default is 'False'.                                                               
|\n| --query-tag                                                          | A string to include in the QUERY_TAG that is attached to every SQL statement executed.                                                                                                                                 |\n\n##### render\n\nThis subcommand is used to render a single script to the console. It is intended to support the development and\ntroubleshooting of scripts that use features from the Jinja template engine.\n\n```bash\nusage: schemachange render [-h] \\\n  [--config-folder CONFIG_FOLDER] \\\n  [-f ROOT_FOLDER] \\\n  [-m MODULES_FOLDER] \\\n  [--vars VARS] \\\n  [-v] script\n```\n\n| Parameter                                          | Description                                                                                                                               |\n| -------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |\n| --config-folder CONFIG_FOLDER                      | The folder to look in for the schemachange-config.yml file (the default is the current working directory)                                 |\n| -f ROOT_FOLDER, --root-folder ROOT_FOLDER          | The root folder for the database change scripts                                                                                           |\n| -m MODULES_FOLDER, --modules-folder MODULES_FOLDER | The modules folder for jinja macros and templates to be used across multiple scripts                                                      |\n| --vars VARS                                        | Define values for the variables to be replaced in change scripts, given in JSON format (e.g. 
{\"variable1\": \"value1\", \"variable2\": \"value2\"}) |\n| -v, --verbose                                      | Display verbose debugging details during execution (the default is False)                                                                 |\n\n#### YAML config file\n\nBy default, Schemachange expects the YAML config file to be named `schemachange-config.yml`, located in the current\nworking directory. The YAML file name can be overridden with the\n`--config-file-name` [command-line argument](#cli-usage). The folder can be overridden by using the\n`--config-folder` [command-line argument](#cli-usage).\n\nHere is the list of available configurations in the `schemachange-config.yml` file:\n\n```yaml\n# Database type\ndb-type: MYSQL\n\n# Path to connection detail file\nconnections-file-path: null\n\n# The root folder for the database change scripts\nroot-folder: \"/path/to/folder\"\n\n# The modules folder for jinja macros and templates to be used across multiple scripts.\nmodules-folder: null\n\n# Used to override the default name of the change history table (the default is METADATA.SCHEMACHANGE.CHANGE_HISTORY)\nchange-history-table: null\n\n# Create the change history schema and table, if they do not exist (the default is False)\ncreate-change-history-table: false\n\n# Define values for the variables to be replaced in change scripts. 
vars supplied via the command line will be merged into YAML-supplied vars\nvars:\n  var1: \"value1\"\n  var2: \"value2\"\n  secrets:\n    var3: \"value3\" # This is considered a secret and will not be displayed in any output\n\n# Enable autocommit feature for DML commands (the default is False)\nautocommit: false\n\n# Display verbose debugging details during execution (the default is False)\nverbose: false\n\n# Run schemachange in dry run mode (the default is False)\ndry-run: false\n\n# A string to include in the QUERY_TAG that is attached to every SQL statement executed\nquery-tag: \"QUERY_TAG\"\n```\n\n### connections-config.yml\n\nConnection details for the database are passed via the `--connections-file-path` CLI parameter or the `connections-file-path` property in the YAML config file. Please refer to [Authentication](#authentication).\n\n## Authentication\n\nSchemachange supports many of the authentication methods supported by each database connector.\nPlease see the sample YAML files below for the file passed via the `--connections-file-path` parameter.\n\n### Databricks\n\nAllowed parameters in https://docs.databricks.com/aws/en/dev-tools/python-sql-connector#connection-class and a few other options\n\n```yaml\nserver_hostname: \"<ws_id>.cloud.databricks.com\"\nhttp_path: \"/sql/1.0/warehouse/<warehouse_id>\"\naccess_token: \"<access_token>\"\nauth_type: \"<oauth>\"\ncredentials_provider:\n  client_id: \"<client_id>\"\n  client_secret: \"<client_secret>\"\npassword: <password>\nusername: <username>\nsession_configuration: # Spark session configuration parameters\n  spark.sql.variable.substitute: true\nhttp_headers:\n  - !!python/tuple [\"header_1\", \"value_1\"]\n  - !!python/tuple [\"header_2\", \"value_2\"]\ncatalog: <catalog>\nschema: <schema>\nuse_cloud_fetch: false\nuser_agent_entry: \"<application_name>\"\nuse_inline_params: false\noauth_client_id: \"<oauth_client_id>\"\noauth_redirect_port: 443\n```\n\n### MySQL\n\nAllowed parameters in 
https://dev.mysql.com/doc/connector-python/en/connector-python-connectargs.html and a few other options\n\n```yaml\ndatabase: \"<database>\"\nuser: \"<user>\"\npassword: \"<password>\"\nhost: \"<host>\"\nport: 3306\n# ...others\n```\n\n### Oracle\n\nAllowed parameters in `oracledb/connection.py::connect` of Python Oracle connector package `oracledb`\n\n```yaml\nuser: \"<user>\"\npassword: \"<password>\"\nhost: \"<host>\"\nport: \"<port>\"\nservice_name: \"<service_name>\"\n# ...others\n```\n\n### Postgres\n\nAllowed parameters in https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS and a few other options in `psycopg/connection.py::connect` of `psycopg` package\n\n```yaml\nhost: \"<host>\"\nport: \"<port>\"\ndbname: \"<dbname>\"\nuser: \"<user>\"\npassword: \"<password>\"\n# ...others\n```\n\n### Snowflake\n\nAllowed parameters in `snowflake/connector/connection.py::DEFAULT_CONFIGURATION` of `snowflake-connector-python` package\n\n```yaml\nuser: \"<user>\"\npassword: \"<password>\"\nhost: \"<host>\"\nport: \"<port>\"\ndatabase: \"<database>\"\nschema: \"<schema>\"\nwarehouse: \"<warehouse>\"\nrole: \"<role>\"\naccount: \"<account>\"\n# ...others\n```\n\n### SQL Server\n\nAllowed parameters in `pymssql/_pymssql.pyi::connect` of `pymssql` package\n\n```yaml\nserver: \"<server>\"\nuser: \"<user>\"\npassword: \"<password>\"\ndatabase: \"<database>\"\n# ...others\n```\n\n## Yaml Jinja support\n\nThe YAML config file supports the Jinja templating language and has a custom function \"env_var\" to access environmental\nvariables. Jinja variables are not available here, since variables are themselves loaded from the YAML file; customisation of\nthe YAML file can only happen through values passed via environment variables.\n\n### env_var\n\nProvides access to environmental variables. 
The function can be used in two different ways.\n\nReturn the value of the environmental variable if it exists, otherwise return the default value.\n\n```jinja\n{{ env_var('<environmental_variable>', 'default') }}\n```\n\nReturn the value of the environmental variable if it exists, otherwise raise an error.\n\n```jinja\n{{ env_var('<environmental_variable>') }}\n```\n\n## Running schemachange\n\n### Prerequisites\n\nIn order to run schemachange, you must have the following:\n\n- You will need to have a recent version of Python 3 installed\n- You will need to create the change history table used by schemachange in the database (\n  see [Change History Table](#change-history-table) above for more details)\n  - First, you will need to create a database/catalog to store your change history table (schemachange will not help you with\n    this).\n  - Second, you will need to create the change history schema and table. You can do this manually (\n    see [Change History Table](#change-history-table) above for the DDL) or have schemachange create them by running\n    it with the `--create-change-history-table` parameter (just make sure the user you're running\n    schemachange with has privileges to create a schema and table in that database)\n- You will need to create (or choose) a user account that has privileges to apply the changes in your change scripts\n  - Don't forget that this user also needs the SELECT and INSERT privileges on the change history table\n\n### Running the Script\n\nschemachange is a single Python script located at [schemachange/cli.py](schemachange/cli.py). 
It can be executed as\nfollows:\n\n```bash\npython -m schemachange.cli [subcommand] [-h] \\\n  [--config-folder CONFIG_FOLDER] \\\n  [--config-file-name CONFIG_FILE_NAME] \\\n  [-f ROOT_FOLDER] \\\n  [-m MODULES_FOLDER] \\\n  [--vars VARS] \\\n  [--db-type DB_TYPE] \\\n  [--connections-file-path CONNECTIONS_FILE_PATH] \\\n  [-c CHANGE_HISTORY_TABLE] \\\n  [--create-change-history-table] \\\n  [--query-tag QUERY_TAG] \\\n  [-v] \\\n  [-ac] \\\n  [--dry-run]\n```\n\nOr if installed via `pip`, it can be executed as follows:\n\n```bash\n# Build library from source\npip install --upgrade build\npip install --upgrade -r requirements.txt\npython -m build\npip install dist/db_schemachange-*-py3-none-any.whl\n\n# Or install via PyPI\npip install --upgrade \"db-schemachange[all]\" # Install the package with all connectors\npip install --upgrade \"db-schemachange[databricks]\" # Or install the package with specific connector\n\n# Run the command\nschemachange [subcommand] [-h] \\\n  [--config-folder CONFIG_FOLDER] \\\n  [--config-file-name CONFIG_FILE_NAME] \\\n  [-f ROOT_FOLDER] \\\n  [-m MODULES_FOLDER] \\\n  [--vars VARS] \\\n  [--db-type DB_TYPE] \\\n  [--connections-file-path CONNECTIONS_FILE_PATH] \\\n  [-c CHANGE_HISTORY_TABLE] \\\n  [--create-change-history-table] \\\n  [--query-tag QUERY_TAG] \\\n  [-v] \\\n  [-ac] \\\n  [--dry-run]\n```\n\n1. Make sure you've completed the [Prerequisites](#prerequisites) steps above\n1. Get a copy of this schemachange repository (either via a clone or download)\n1. Open a shell and change directory to your copy of the schemachange repository\n1. 
Run schemachange (see [Running the Script](#running-the-script) above) with your database connection details and\n   respective demo project as the root folder (make sure you use the full path)\n\n### Using Docker\n\n```bash\ndocker run -it --rm \\\n  --name schemachange-script \\\n  -v \"$PWD\":/usr/src/schemachange \\\n  -w /usr/src/schemachange \\\n  python:3 /bin/bash -c \"pip install --upgrade 'db-schemachange[all]' && schemachange [subcommand] [-h] [--config-folder CONFIG_FOLDER] [--config-file-name CONFIG_FILE_NAME] [-f ROOT_FOLDER] [-m MODULES_FOLDER] [--vars VARS] [--db-type DB_TYPE] [--connections-file-path CONNECTIONS_FILE_PATH] [-c CHANGE_HISTORY_TABLE] [--create-change-history-table] [--query-tag QUERY_TAG] [-v] [-ac] [--dry-run]\"\n```\n\n## Maintainers\n\n- Lam Tran (@LTranData)\n\n## Demo\n\n```bash\nschemachange deploy \\\n  --verbose \\\n  --vars '{\"secret\":\"abc\"}' \\\n  --connections-file-path demo/mysql/config/connections-config.yml \\\n  --config-folder demo/mysql/config \\\n  --root-folder demo/mysql/scripts \\\n  --db-type MYSQL \\\n  --create-change-history-table\n```\n",
    "bugtrack_url": null,
    "license": "Apache-2.0",
    "summary": "A simple, lightweight Python-based Database Change Management tool for various databases",
    "version": "0.1.1",
    "project_urls": {
        "Homepage": "https://github.com/LTranData/db-schemachange"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "83ce103d1d960d0d76b70423a61d51fe174ebc6d5b88c60eb061becbc6f0348c",
                "md5": "3279a66f4a7c4232ed23462ce087b5cd",
                "sha256": "a3e85e740b19869775f957e36a658b3eaba5f48312c83f272cbf7b3c17daabb0"
            },
            "downloads": -1,
            "filename": "db_schemachange-0.1.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "3279a66f4a7c4232ed23462ce087b5cd",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 44678,
            "upload_time": "2025-07-12T08:09:48",
            "upload_time_iso_8601": "2025-07-12T08:09:48.514373Z",
            "url": "https://files.pythonhosted.org/packages/83/ce/103d1d960d0d76b70423a61d51fe174ebc6d5b88c60eb061becbc6f0348c/db_schemachange-0.1.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "23cb96dd1fa222941e2e2be84a89940180f412020c56c805ed73af840c783748",
                "md5": "4f0d60b6ae89dbf1a61769f3ef83fe94",
                "sha256": "3b1d6ac0ea56608e41b54e8cba8aa0bd7276b143c35a81f317f6d53fdb514ddf"
            },
            "downloads": -1,
            "filename": "db_schemachange-0.1.1.tar.gz",
            "has_sig": false,
            "md5_digest": "4f0d60b6ae89dbf1a61769f3ef83fe94",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 43273,
            "upload_time": "2025-07-12T08:09:50",
            "upload_time_iso_8601": "2025-07-12T08:09:50.090406Z",
            "url": "https://files.pythonhosted.org/packages/23/cb/96dd1fa222941e2e2be84a89940180f412020c56c805ed73af840c783748/db_schemachange-0.1.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-12 08:09:50",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "LTranData",
    "github_project": "db-schemachange",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [
        {
            "name": "Jinja2",
            "specs": [
                [
                    "==",
                    "3.1.6"
                ]
            ]
        },
        {
            "name": "PyYAML",
            "specs": [
                [
                    "==",
                    "6.0.2"
                ]
            ]
        },
        {
            "name": "structlog",
            "specs": [
                [
                    "==",
                    "25.4.0"
                ]
            ]
        },
        {
            "name": "marshmallow",
            "specs": [
                [
                    "==",
                    "4.0.0"
                ]
            ]
        },
        {
            "name": "sqlparse",
            "specs": [
                [
                    "==",
                    "0.5.3"
                ]
            ]
        },
        {
            "name": "pyarrow",
            "specs": [
                [
                    "==",
                    "18.1.0"
                ]
            ]
        },
        {
            "name": "psycopg",
            "specs": [
                [
                    "==",
                    "3.2.9"
                ]
            ]
        },
        {
            "name": "pymssql",
            "specs": [
                [
                    "==",
                    "2.3.6"
                ]
            ]
        },
        {
            "name": "mysql-connector-python",
            "specs": [
                [
                    "==",
                    "9.3.0"
                ]
            ]
        },
        {
            "name": "oracledb",
            "specs": [
                [
                    "==",
                    "3.2.0"
                ]
            ]
        },
        {
            "name": "snowflake-connector-python",
            "specs": [
                [
                    "==",
                    "3.16.0"
                ]
            ]
        },
        {
            "name": "databricks-sql-connector",
            "specs": [
                [
                    "==",
                    "4.0.5"
                ]
            ]
        },
        {
            "name": "databricks-sdk",
            "specs": [
                [
                    "==",
                    "0.57.0"
                ]
            ]
        },
        {
            "name": "pytest",
            "specs": [
                [
                    "==",
                    "8.4.1"
                ]
            ]
        },
        {
            "name": "pytest-cov",
            "specs": [
                [
                    "==",
                    "6.2.1"
                ]
            ]
        }
    ],
    "lcname": "db-schemachange"
}
        