simple-ddl-parser

Name: simple-ddl-parser
Version: 1.1.0
Home page: https://github.com/xnuinside/simple-ddl-parser
Summary: Simple DDL Parser to parse SQL & dialects like HQL, TSQL (MSSQL), Oracle, AWS Redshift, Snowflake, MySQL, PostgreSQL, etc. DDL files to JSON/Python dict with full information about columns: types, defaults, primary keys, etc.; sequences, alters, custom types & other entities from DDL.
Upload time: 2024-04-21 18:11:25
Author: Iuliia Volkova
Requires Python: <4.0,>=3.6
License: MIT

Simple DDL Parser
-----------------


.. image:: https://img.shields.io/pypi/v/simple-ddl-parser
   :target: https://img.shields.io/pypi/v/simple-ddl-parser
   :alt: badge1
 
.. image:: https://img.shields.io/pypi/l/simple-ddl-parser
   :target: https://img.shields.io/pypi/l/simple-ddl-parser
   :alt: badge2
 
.. image:: https://img.shields.io/pypi/pyversions/simple-ddl-parser
   :target: https://img.shields.io/pypi/pyversions/simple-ddl-parser
   :alt: badge3
 
.. image:: https://github.com/xnuinside/simple-ddl-parser/actions/workflows/main.yml/badge.svg
   :target: https://github.com/xnuinside/simple-ddl-parser/actions/workflows/main.yml/badge.svg
   :alt: workflow


Built with ply (lex & yacc in Python). A lot of samples are in ``tests/``.

Is it Stable?
^^^^^^^^^^^^^

Yes, the library already has about 9000+ downloads per day - https://pypistats.org/packages/simple-ddl-parser.

As the maintainer, I guarantee that backward-incompatible changes will not be made in patch or minor versions. But! Pay attention that output keys can sometimes change in a minor version to fix previously wrong behaviour.

Updates in version 1.x
^^^^^^^^^^^^^^^^^^^^^^

The full list of updates can be found in the Changelog below (at the end of README).

Version 1.0.0 was released due to significant changes in the output structure and a stricter approach regarding the scope of the produced output. Now, you must provide the argument 'output_mode=name_of_your_dialect' if you wish to see arguments/properties specific to a particular dialect.

How does it work?
^^^^^^^^^^^^^^^^^

Parser supports: 


* SQL
* HQL (Hive)
* MSSQL dialect
* Oracle dialect
* MySQL dialect
* PostgreSQL dialect
* BigQuery
* Redshift
* Snowflake
* SparkSQL
* IBM DB2 dialect

You can check the dialect sections after the ``Supported Statements`` section for more information about which statements from each dialect are already supported by the parser. If you need more statements or new dialects - feel free to open an issue.

Feel free to open Issue with DDL sample
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Pay attention that I'm adding functional tests for all supported statements, so if you see that your statement fails and you don't see it in the tests, it is 99.9% likely that I didn't have a sample with such a SQL statement - so feel free to open an issue and I will add support for it.

**If you need some statement that is not supported by the parser yet**\ : please provide a DDL example & information about which SQL dialect or DB it comes from.

The types used in your DB do not matter, so the parser should work successfully with any DDL for a SQL DB. The parser is NOT case sensitive; it does not expect all queries to be in upper case or lower case. So you can write statements like this:

.. code-block:: sql


       Alter Table Persons ADD CONSTRAINT CHK_PersonAge CHECK (Age>=18 AND City='Sandnes');

It will be parsed as is without errors.
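
For instance, a minimal sketch of running such mixed-case DDL through the parser (the ``Persons`` table definition is invented here so the ALTER statement has a table to attach to):

.. code-block:: python


       from simple_ddl_parser import DDLParser

       ddl = """
       create table Persons (ID int, Age int, City varchar(255));
       Alter Table Persons ADD CONSTRAINT CHK_PersonAge CHECK (Age>=18 AND City='Sandnes');
       """

       result = DDLParser(ddl).run()
       print(result[0]["table_name"])  # 'Persons' - parsed despite the mixed casing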

If you have samples that cause an error - please open an issue (but don't forget to add the DDL example); I will be glad to fix it.

A lot of statements and output results can be found in the tests on GitHub - https://github.com/xnuinside/simple-ddl-parser/tree/main/tests .

How to install
^^^^^^^^^^^^^^

.. code-block:: bash


       pip install simple-ddl-parser

How to use
----------

Extract additional information from HQL (& other dialects)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In some dialects like HQL there is a lot of additional information about the table: for example, whether it is an external table, STORED AS, location & etc. These properties will always be empty in a 'classic' SQL DB like PostgreSQL or MySQL, and this is the reason why this information is 'hidden' by default.
Also, some fields are hidden in HQL output because they simply do not exist in HIVE, for example 'deferrable_initially'.
To get these 'hql'-specific details about the table in the output, please use the 'output_mode' argument in the run() method.

example:

.. code-block:: python


       from simple_ddl_parser import DDLParser

       ddl = """
       CREATE TABLE IF NOT EXISTS default.salesorderdetail(
           SalesOrderID int,
           ProductID int,
           OrderQty int,
           LineTotal decimal
           )
       PARTITIONED BY (batch_id int, batch_id2 string, batch_32 some_type)
       LOCATION 's3://datalake/table_name/v1'
       ROW FORMAT DELIMITED
           FIELDS TERMINATED BY ','
           COLLECTION ITEMS TERMINATED BY '\002'
           MAP KEYS TERMINATED BY '\003'
       STORED AS TEXTFILE
       """

       result = DDLParser(ddl).run(output_mode="hql")
       print(result)

And you will get output with additional keys 'stored_as', 'location', 'external', etc.

.. code-block:: python


       # additional keys examples
     {
       ...,
       'location': "'s3://datalake/table_name/v1'",
       'map_keys_terminated_by': "'\\003'",
       'partitioned_by': [{'name': 'batch_id', 'size': None, 'type': 'int'},
                           {'name': 'batch_id2', 'size': None, 'type': 'string'},
                           {'name': 'batch_32', 'size': None, 'type': 'some_type'}],
       'primary_key': [],
       'row_format': 'DELIMITED',
       'schema': 'default',
       'stored_as': 'TEXTFILE',
       ... 
     }

If you run the parser from the command line, add the flag '-o=hql' or '--output-mode=hql' to get the same result.

Possible output_modes: ['redshift', 'spark_sql', 'mysql', 'bigquery', 'mssql', 'databricks', 'sqlite', 'vertics', 'ibm_db2', 'postgres', 'oracle', 'hql', 'snowflake', 'sql']
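
The same list is also available programmatically via the ``supported_dialects`` dict (see the Changelog notes on v1.0.0 below):

.. code-block:: python


       from simple_ddl_parser import supported_dialects

       # dialect names that can be passed as output_mode
       print(supported_dialects)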

From python code
^^^^^^^^^^^^^^^^

.. code-block:: python

       from simple_ddl_parser import DDLParser


       parse_results = DDLParser("""create table dev.data_sync_history(
           data_sync_id bigint not null,
           sync_count bigint not null,
           sync_mark timestamp  not  null,
           sync_start timestamp  not null,
           sync_end timestamp  not null,
           message varchar(2000) null,
           primary key (data_sync_id, sync_start)
       ); """).run()

       print(parse_results)

To parse from file
^^^^^^^^^^^^^^^^^^

.. code-block:: python


       from simple_ddl_parser import parse_from_file

       result = parse_from_file('tests/sql/test_one_statement.sql')
       print(result)

From command line
^^^^^^^^^^^^^^^^^

simple-ddl-parser is installed into the environment as the command **sdp**

.. code-block:: bash


       sdp path_to_ddl_file

       # for example:

       sdp tests/sql/test_two_tables.sql

You will see the output in the **schemas** folder, in a file named **test_two_tables_schema.json**

If you also want output in the console, use the **-v** flag for verbose mode.

.. code-block:: bash


       sdp tests/sql/test_two_tables.sql -v

If you don't want to dump the schema to a file and just want to print the result to the console, use the **--no-dump** flag:

.. code-block:: bash


       sdp tests/sql/test_two_tables.sql --no-dump

You can provide the target path where you want to dump the result with the argument **-t**\ , **--target**\ :

.. code-block:: bash


       sdp tests/sql/test_two_tables.sql -t dump_results/

Get Output in JSON
^^^^^^^^^^^^^^^^^^

If you want to get JSON output on stdout, use the argument **json_dump=True** in the **.run()** method:

.. code-block:: python

       from simple_ddl_parser import DDLParser


       parse_results = DDLParser("""create table dev.data_sync_history(
           data_sync_id bigint not null,
           sync_count bigint not null,
       ); """).run(json_dump=True)

       print(parse_results)

Output will be:

.. code-block:: json

   [{"columns": [{"name": "data_sync_id", "type": "bigint", "size": null, "references": null, "unique": false, "nullable": false, "default": null, "check": null}, {"name": "sync_count", "type": "bigint", "size": null, "references": null, "unique": false, "nullable": false, "default": null, "check": null}], "primary_key": [], "alter": {}, "checks": [], "index": [], "partitioned_by": [], "tablespace": null, "schema": "dev", "table_name": "data_sync_history"}]

More details
^^^^^^^^^^^^

``DDLParser(ddl).run()``
The .run() method accepts several arguments that change the output. As you saw above, there is the ``output_mode`` argument, which lets you set a dialect and get more fields in the output relative to the chosen dialect, for example 'hql'. Possible output_modes: ['redshift', 'spark_sql', 'mysql', 'bigquery', 'mssql', 'databricks', 'sqlite', 'vertics', 'ibm_db2', 'postgres', 'oracle', 'hql', 'snowflake', 'sql']

The .run() method also has the argument ``group_by_type`` (by default: False). By default the parser output is a list of dicts where each dict == one entity from the DDL (table, sequence, type, etc). To understand what the current entity is, you need to inspect the dict: if 'table_name' is in the dict - it is a table; if 'type_name' - it is a type; & etc.
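
A sketch of that per-entity check (assuming ``ddl`` holds your DDL text; the key names come from the output examples in this README):

.. code-block:: python


       from simple_ddl_parser import DDLParser

       for entity in DDLParser(ddl).run():
           if "table_name" in entity:
               ...  # it is a table
           elif "type_name" in entity:
               ...  # it is a custom type
           elif "sequence_name" in entity:
               ...  # it is a sequence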

To make this a little easier, you can set group_by_type=True and you will get the output already grouped by entity type, like:

.. code-block:: python


       { 
           'tables': [all_parsed_tables], 
           'sequences': [all_parsed_sequences], 
           'types': [all_parsed_types], 
           'domains': [all_parsed_domains],
           ...
       }

For example:

.. code-block:: python


       ddl = """
       CREATE TYPE "schema--notification"."ContentType" AS
           ENUM ('TEXT','MARKDOWN','HTML');
           CREATE TABLE "schema--notification"."notification" (
               content_type "schema--notification"."ContentType"
           );
       CREATE SEQUENCE dev.incremental_ids
           INCREMENT 10
           START 0
           MINVALUE 0
           MAXVALUE 9223372036854775807
           CACHE 1;
       """

       result = DDLParser(ddl).run(group_by_type=True)

       # result will be:

       {'sequences': [{'cache': 1,
                       'increment': 10,
                       'maxvalue': 9223372036854775807,
                       'minvalue': 0,
                       'schema': 'dev',
                       'sequence_name': 'incremental_ids',
                       'start': 0}],
       'tables': [{'alter': {},
                   'checks': [],
                   'columns': [{'check': None,
                               'default': None,
                               'name': 'content_type',
                               'nullable': True,
                               'references': None,
                               'size': None,
                               'type': '"schema--notification"."ContentType"',
                               'unique': False}],
                   'index': [],
                   'partitioned_by': [],
                   'primary_key': [],
                   'schema': '"schema--notification"',
                   'table_name': '"notification"'}],
       'types': [{'base_type': 'ENUM',
                   'properties': {'values': ["'TEXT'", "'MARKDOWN'", "'HTML'"]},
                   'schema': '"schema--notification"',
                   'type_name': '"ContentType"'}]}

ALTER statements
^^^^^^^^^^^^^^^^

Right now, support is added only for ALTER statements with FOREIGN KEY.

For example, if in your ddl after table definitions (create table statements) you have ALTER table statements like this:

.. code-block:: sql


   ALTER TABLE "material_attachments" ADD FOREIGN KEY ("material_id", "material_title") REFERENCES "materials" ("id", "title");

These statements will be parsed, and the information about them is placed inside the 'alter' key of the table's dict.
For example, please check alter statement tests - **tests/test_alter_statements.py**
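
A minimal sketch of this (the two table definitions are invented here so the ALTER statement has targets):

.. code-block:: python


       from simple_ddl_parser import DDLParser

       ddl = """
       CREATE TABLE materials (id int, title varchar(200));
       CREATE TABLE material_attachments (material_id int, material_title varchar(200));
       ALTER TABLE material_attachments ADD FOREIGN KEY (material_id, material_title)
           REFERENCES materials (id, title);
       """

       result = DDLParser(ddl).run()
       # the foreign key information lands under the 'alter' key of the second table's dict
       print(result[1]["alter"])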

More examples & tests
^^^^^^^^^^^^^^^^^^^^^

You can find them in the **tests/** folder.

Dump result in json
^^^^^^^^^^^^^^^^^^^

To dump the result to JSON, use the argument .run(dump=True).

You can also provide the path where you want the schema dumps to be placed with the argument .run(dump_path='folder_that_use_for_dumps/').
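
A short sketch combining both arguments:

.. code-block:: python


       from simple_ddl_parser import DDLParser

       # dumps the parsed schema as JSON into the given folder
       DDLParser("create table t (id int);").run(dump=True, dump_path="dump_results/")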

Raise error if DDL cannot be parsed by Parser
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default the parser does not raise an error if some statement cannot be parsed - it just skips it & produces empty output.

To change this behavior, you can pass the 'silent=False' argument to the main parser class, like:

.. code-block::

   DDLParser(.., silent=False)
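
A sketch of catching the resulting error (the concrete exception type is not documented here, so a broad ``except`` is used):

.. code-block:: python


       from simple_ddl_parser import DDLParser

       try:
           DDLParser("CREATE TABLE (", silent=False).run()  # deliberately unparseable
       except Exception as err:
           print(f"parse failed: {err}")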


Normalize names
^^^^^^^^^^^^^^^

Use the DDLParser(.., normalize_names=True) flag to change the output of the parser:
if the flag is True (default 'False'), then all identifiers will be returned without '[', '"' and the other delimiters that are used in different SQL dialects to separate custom names from reserved words & statements.
For example, if the flag is set to 'True' and you pass this input:

.. code-block:: sql


       CREATE TABLE [dbo].[TO_Requests](
           [Request_ID] [int] IDENTITY(1,1) NOT NULL,
           [user_id] [int]

In output you will have names like 'dbo' and 'TO_Requests', not '[dbo]' and '[TO_Requests]'.
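
A sketch of that (bracketed type specifiers are left out here for simplicity):

.. code-block:: python


       from simple_ddl_parser import DDLParser

       ddl = 'CREATE TABLE [dbo].[TO_Requests] ([Request_ID] int NOT NULL);'
       result = DDLParser(ddl, normalize_names=True).run()
       print(result[0]["schema"], result[0]["table_name"])  # dbo TO_Requests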

Supported Statements
--------------------


* 
  CREATE [OR REPLACE] TABLE [ IF NOT EXISTS ] + columns definition, columns attributes: column name + type + type size (for example, varchar(255)), UNIQUE, PRIMARY KEY, DEFAULT, CHECK, NULL/NOT NULL, REFERENCES, ON DELETE, ON UPDATE, NOT DEFERRABLE, DEFERRABLE INITIALLY, GENERATED ALWAYS, STORED, COLLATE

* 
  STATEMENTS: PRIMARY KEY, CHECK, FOREIGN KEY in table definitions (in create table();)

* 
  ALTER TABLE STATEMENTS: ADD CHECK (with CONSTRAINT), ADD FOREIGN KEY (with CONSTRAINT), ADD UNIQUE, ADD DEFAULT FOR, ALTER TABLE ONLY, ALTER TABLE IF EXISTS; ALTER .. PRIMARY KEY; ALTER .. USING INDEX TABLESPACE; ALTER .. ADD; ALTER .. MODIFY; ALTER .. ALTER COLUMN; etc

* 
  PARTITION BY statement

* 
  CREATE SEQUENCE with words: INCREMENT [BY], START [WITH], MINVALUE, MAXVALUE, CACHE

* 
  CREATE TYPE statement:  AS TABLE, AS ENUM, AS OBJECT, INTERNALLENGTH, INPUT, OUTPUT

* 
  LIKE statement (in this case, and only this case, a 'like' key is added to the output with information about the table we copied from - 'like': {'schema': None, 'table_name': 'Old_Users'}).

* 
  TABLESPACE statement

* 
  COMMENT ON statement

* 
  CREATE SCHEMA [IF NOT EXISTS] ... [AUTHORIZATION] ...

* 
  CREATE DOMAIN [AS]

* 
  CREATE [SMALLFILE | BIGFILE] [TEMPORARY] TABLESPACE statement

* 
  CREATE DATABASE + Properties parsing

SparkSQL Dialect statements
^^^^^^^^^^^^^^^^^^^^^^^^^^^


* USING

HQL Dialect statements
^^^^^^^^^^^^^^^^^^^^^^


* PARTITIONED BY statement
* ROW FORMAT, ROW FORMAT SERDE
* WITH SERDEPROPERTIES ("input.regex" =  "..some regex..")
* STORED AS (AVRO, PARQUET, etc), STORED AS INPUTFORMAT, OUTPUTFORMAT
* COMMENT
* LOCATION
* FIELDS TERMINATED BY, LINES TERMINATED BY, COLLECTION ITEMS TERMINATED BY, MAP KEYS TERMINATED BY
* TBLPROPERTIES ('parquet.compression'='SNAPPY' & etc.)
* SKEWED BY
* CLUSTERED BY 

MySQL
^^^^^


* ON UPDATE in column without reference 

MSSQL
^^^^^


* CONSTRAINT [CLUSTERED]... PRIMARY KEY
* CONSTRAINT ... WITH statement
* PERIOD FOR SYSTEM_TIME in CREATE TABLE statement
* ON [PRIMARY] after CREATE TABLE statement (sample in test files test_mssql_specific.py)
* WITH statement for TABLE properties
* TEXTIMAGE_ON statement
* DEFAULT NEXT VALUE FOR in COLUMN DEFAULT

MSSQL / MySQL / Oracle
^^^^^^^^^^^^^^^^^^^^^^


* type IDENTITY statement
* FOREIGN KEY REFERENCES statement
* 'max' specifier in column size
* CONSTRAINT ... UNIQUE, CONSTRAINT ... CHECK, CONSTRAINT ... FOREIGN KEY, CONSTRAINT ... PRIMARY KEY
* CREATE CLUSTERED INDEX
* CREATE TABLE (...) ORGANIZATION INDEX 

Oracle
^^^^^^


* ENCRYPT column property [+ NO SALT, SALT, USING]
* STORAGE column property

PostgreSQL
^^^^^^^^^^


* INHERITS table statement - https://postgrespro.ru/docs/postgresql/14/ddl-inherit 

AWS Redshift Dialect statements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


* ENCODE column property
* SORTKEY, DISTSTYLE, DISTKEY, ENCODE table properties
* 
  CREATE TEMP / TEMPORARY TABLE

* 
  syntax like with LIKE statement:

  ``create temp table tempevent(like event);``

Snowflake Dialect statements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^


* CREATE .. CLONE statements for table, database and schema
* CREATE TABLE [or REPLACE] [ TRANSIENT | TEMPORARY ] .. CLUSTER BY ..
* CONSTRAINT .. [NOT] ENFORCED 
* COMMENT = in CREATE TABLE & CREATE SCHEMA statements
* WITH MASKING POLICY
* WITH TAG, including multiple tags in the same statement.
* DATA_RETENTION_TIME_IN_DAYS
* MAX_DATA_EXTENSION_TIME_IN_DAYS
* CHANGE_TRACKING

BigQuery
^^^^^^^^


* OPTION in CREATE SCHEMA statement
* OPTION in CREATE TABLE statement
* OPTION in column definition statement

Parser settings
^^^^^^^^^^^^^^^

Logging
~~~~~~~


#. Logging to file

To get logging output to a file, provide the parser with the 'log_file' argument containing a path or file name:

.. code-block:: python


       DDLParser(ddl, log_file='parser221.log').run(group_by_type=True)


#. Logging level

To set the logging level, provide the argument 'log_level':

.. code-block:: python


       import logging

       DDLParser(ddl, log_level=logging.INFO).run(group_by_type=True)

Thanks for involving & contributions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The biggest 'Thanks' ever goes to these contributors to the parser:
https://github.com/dmaresma
https://github.com/cfhowes
https://github.com/swiatek25
https://github.com/slurpyb
https://github.com/PBalsdon

Big thanks for involvement & contributions with test cases, DDL samples & opening issues go to:


* https://github.com/kukigai , 
* https://github.com/kliushnichenko ,
* https://github.com/geob3d

For help with debugging & testing support for BigQuery dialect DDLs, thanks go to:


* https://github.com/ankitdata ,
* https://github.com/kalyan939

Changelog
---------

**v1.1.0**

Improvements
^^^^^^^^^^^^

MySQL:


#. Added support for INDEX statement inside table definition
#. Added support for MySQL INVISIBLE/VISIBLE statement - https://github.com/xnuinside/simple-ddl-parser/issues/243

Snowflake:


#. Added support for cluster by statement before columns definition - https://github.com/xnuinside/simple-ddl-parser/issues/234

**v1.0.4**

Improvements
^^^^^^^^^^^^


#. Support functions with schema prefix in ``DEFAULT`` and ``CHECK`` statements. https://github.com/xnuinside/simple-ddl-parser/issues/240

Fixes
^^^^^


#. Fix for REFERENCES NOT NULL - https://github.com/xnuinside/simple-ddl-parser/issues/239
#. Fix for snowflake stage name location format bug - https://github.com/xnuinside/simple-ddl-parser/pull/241

**v1.0.3**

Improvements
^^^^^^^^^^^^


#. Fixed bug with ``CREATE OR REPLACE SCHEMA``.
#. Added support for creating empty tables without columns: CREATE TABLE tablename (); (valid syntax in SQL).

Snowflake
^^^^^^^^^


#. Fixed bug with Snowflake ``stage_`` fileformat option values equal to a single string, such as ``FIELD_OPTIONALLY_ENCLOSED_BY = '\"'``\ , ``FIELD_DELIMITER = '|'``.
#. Improved parsing of Snowflake fileformat key-equals-value pairs into dict type.

**v1.0.2**

Improvements
^^^^^^^^^^^^


#. Fixed bug that placed the first table property value in the 'authorization' key. Now the real property name is used.
#. Fixed typo in the Databricks dialect.
#. Improved support for equals symbols within COMMENT statements.
#. Turned regexps into functions.

MySQL Improvements
^^^^^^^^^^^^^^^^^^


#. UNSIGNED property after int is now parsed validly.

Snowflake
^^^^^^^^^


#. Snowflake TAG now available on SCHEMA definitions.

**v1.0.1**

Minor Fixes
^^^^^^^^^^^


#. When using ``normalize_names=True`` do not remove ``[]`` from types like ``decimal(21)[]``.
#. When using ``normalize_names=True`` ensure that ``"complex"."type"`` style names convert to ``complex.type``.

**v1.0.0**
Important changes were made to the output structure that can, in theory, break code.

Important changes
^^^^^^^^^^^^^^^^^


#. Important change: 

all custom table properties that are defined after the column definitions in a 'CREATE TABLE' statement and relate to only one dialect (only to SparkSQL, or HQL, etc.), for example, like here:
https://github.com/xnuinside/simple-ddl-parser/blob/main/tests/dialects/test_snowflake.py#L767  or https://github.com/xnuinside/simple-ddl-parser/blob/main/tests/dialects/test_spark_sql.py#L133 will now be saved in the property ``table_properties`` as a dict.
Previously they were placed at the same level of the table output as ``columns``\ , ``alter``\ , etc. Now they are grouped and moved to the key ``table_properties``.


#. 
   The formatted parser result is now represented by 2 classes - Output & TableData - which makes it more strict and readable.

#. 
   The output mode now functions more strictly. If you want to obtain output fields specific to a certain dialect, 
   use output_mode='snowflake' for Snowflake or output_mode='hql' for HQL, etc. 
   Previously, some keys appeared in the result without being filtered by dialect. 
   For example, if 'CLUSTER BY' was in the DDL, it would show up in the 'cluster_by' field regardless of the output mode. 
   However, now all fields that only work in certain dialects and are not part of the basic SQL notation will only be shown 
   if you choose the correct output_mode.

New Dialects support
^^^^^^^^^^^^^^^^^^^^


#. Added new dialects as possible output_modes: 


* Databricks SQL as 'databricks', 
* Vertica as 'vertica', 
* SQLite as 'sqlite',
* PostgreSQL as 'postgres'

The full list of supported dialects can be found in the dict ``supported_dialects``\ :

``from simple_ddl_parser import supported_dialects``

Currently supported: ['redshift', 'spark_sql', 'mysql', 'bigquery', 'mssql', 'databricks', 'sqlite', 'vertics', 'ibm_db2', 'postgres', 'oracle', 'hql', 'snowflake', 'sql']

If you don't see the dialect you want to use - open an issue with a description and links to the database docs, or use one of the existing dialects.

Snowflake updates:
^^^^^^^^^^^^^^^^^^


#. For some reason, the 'CLONE' statement in SNOWFLAKE was parsed into the 'like' key in the output. It has now been changed to 'clone' - the inner structure of the output stays the same as previously.

MySQL updates:
^^^^^^^^^^^^^^


#. The ENGINE statement is now parsed correctly. Previously, the output was always '='.

BigQuery updates:
^^^^^^^^^^^^^^^^^


#. The word 'schema' was removed entirely from the output. ``Dataset`` is used instead of ``schema`` in the BigQuery dialect.

**v0.32.1**

Minor Fixes
^^^^^^^^^^^


#. Removed debug print

**v0.32.0**

Improvements
^^^^^^^^^^^^


#. Added support for several ALTER statements (ADD, DROP, RENAME, etc) - https://github.com/xnuinside/simple-ddl-parser/issues/215
   Several keys were added to the 'alter' output:

   #. 'dropped_columns' - to store information about columns that were in the table but were later dropped by an alter
   #. 'renamed_columns' - to store information about columns that were renamed
   #. 'modified_columns' - to track ALTER COLUMN changes for defaults, data type, etc. Stores the previous column states.

Fixes
^^^^^


#. Include source column names in FOREIGN KEY references. Fix for: https://github.com/xnuinside/simple-ddl-parser/issues/196
#. ALTER statements are now parsed correctly if names & schemas are written differently in the ``create table`` statement and the alter. 
   For example, if in the create table you used quotes like "schema_name"."table_name", but the alter used schema_name.table_name - previously this didn't work, but now the parser understands that it is the same table.

**v0.31.3**

Improvements
^^^^^^^^^^^^

Snowflake update:
~~~~~~~~~~~~~~~~~


#. Added support for Snowflake Virtual Column definition in External Column  ``AS ()`` statement - https://github.com/xnuinside/simple-ddl-parser/issues/218
#. Enforced support for Snowflake _FILE_FORMAT options in External Column DDL statements - https://github.com/xnuinside/simple-ddl-parser/issues/221

Others
~~~~~~


#. Support for KEY statement in CREATE TABLE statements. KEY statements will now create INDEX entries in the DDL parser.

**v0.31.2**

Improvements
^^^^^^^^^^^^

Snowflake update:
~~~~~~~~~~~~~~~~~


#. Added support for Snowflake AUTOINCREMENT | IDENTITY column definitions with optional parameter ``ORDER|NOORDER`` statement - https://github.com/xnuinside/simple-ddl-parser/issues/213

Common
~~~~~~


#. Added param 'encoding' to parse_from_file function - https://github.com/xnuinside/simple-ddl-parser/issues/142.
   Default encoding is utf-8.

**v0.31.1**

Improvements
^^^^^^^^^^^^

Snowflake update:
~~~~~~~~~~~~~~~~~


#. Support multiple tag definitions in a single ``WITH TAG`` statement.
#. Added support for Snowflake double single quotes - https://github.com/xnuinside/simple-ddl-parser/issues/208

**v0.31.0**

Fixes:
^^^^^^


#. Move inline flag in regexp (issue with python 3.11) - https://github.com/xnuinside/simple-ddl-parser/pull/200
   Fix for: https://github.com/xnuinside/simple-ddl-parser/issues/199

Improvements:
^^^^^^^^^^^^^


#. Added ``Snowflake Table DDL support of WITH MASKING POLICY column definition`` - https://github.com/xnuinside/simple-ddl-parser/issues/201

Updates:
^^^^^^^^


#. All deps updated to the latest versions.

**v0.30.0**

Fixes:
^^^^^^


#. IDENTITY now parsed normally as a separate column property. Issue: https://github.com/xnuinside/simple-ddl-parser/issues/184

New Features:
^^^^^^^^^^^^^


#. 
   IN TABLESPACE IBM DB2 statement now is parsed into 'tablespace' key. Issue: https://github.com/xnuinside/simple-ddl-parser/issues/194.
   INDEX IN also parsed to 'index_in' key.
   Added support for ORGANIZE BY statement

#. 
   Added support for PostgreSQL INHERITS statement. Issue: https://github.com/xnuinside/simple-ddl-parser/issues/191

**v0.29.1**

Important updates:
^^^^^^^^^^^^^^^^^^


#. Python 3.6 is deprecated in tests and by default; try to move to Python 3.7, or better 3.8, because 3.7 will be deprecated in 2023.

Fixes
^^^^^


#. Fix for https://github.com/xnuinside/simple-ddl-parser/issues/177

Improvements
^^^^^^^^^^^^


#. Added support for Oracle 2 component size for types, like '30 CHAR'. From https://github.com/xnuinside/simple-ddl-parser/issues/176

**v0.29.0**

Fixes
^^^^^


#. The AUTOINCREMENT statement is now parsed validly, the same way as AUTO_INCREMENT, and shows up in the output as the 'autoincrement' property of the column.
   Fix for: https://github.com/xnuinside/simple-ddl-parser/issues/170
#. Fixed the issue "TypeError: argument of type 'NoneType' is not iterable" on some foreign keys - https://github.com/xnuinside/simple-ddl-parser/issues/148

New Features
^^^^^^^^^^^^


#. Support for non-numeric column type parameters https://github.com/xnuinside/simple-ddl-parser/issues/171
   It shows in column attribute 'type_parameters'.

**v0.28.1**
Improvements:


#. Lines starting with an INSERT INTO statement are now successfully ignored by the parser (so you can keep them in the DDL - they will just be skipped).

Fixes:


#. Important fix for multiline comments

**v0.28.0**

Important Changes (Pay attention):


#. Because AUTO_INCREMENT is now parsed as a separate column property, the previous output has changed.
   Previously it was parsed as part of the type, like 'INT AUTO_INCREMENT'.
   Now the type will be only 'INT', but in the column properties you will see 'autoincrement': True.

Amazing innovation:


#. It is weird to write in a changelog, but only in version 0.28.0 did I recognize that floats were not supported by the parser - this is now fixed.
   Thanks for the sample in the issue: https://github.com/xnuinside/simple-ddl-parser/issues/163

Improvements:
MariaDB:


#. Added support for MariaDB AUTO_INCREMENT (from the DDL here - https://github.com/xnuinside/simple-ddl-parser/issues/144).
   If a column is auto-incremented, it is indicated as 'autoincrement': True in the column definition.

Common:


#. Added parsing for multiline comments in DDL with ``/* */`` syntax.
#. Comments from the DDL are now all placed in the 'comments' key if you use the ``group_by_type=`` arg in the parser.
#. Added the argument 'parser_settings={}' (dict type) to the method parse_from_file() - this way you can pass any arguments that you want to DDLParser (& that are supported by it).
   So, if you want to set log_level=logging.WARNING for the parser - just use it as:
   parse_from_file('path_to_file', parser_settings={'log_level': logging.WARNING}). For issue: https://github.com/xnuinside/simple-ddl-parser/issues/160

**v0.27.0**

Fixes:


#. Fixed parsing CHECKS with IN statement - https://github.com/xnuinside/simple-ddl-parser/issues/150
#. @# symbols added to ID token - (partially) https://github.com/xnuinside/simple-ddl-parser/issues/146

Improvements:


#. Added support for '*' in size column (ORACLE dialect) - https://github.com/xnuinside/simple-ddl-parser/issues/151
#. Added arg 'debug' to the parser; it works the same way as 'silent' - to get clearer error output.

New features:


#. Added support for ORACLE 'ORGANIZATION INDEX'
#. Added support for SparkSQL Partition by with procedure call - https://github.com/xnuinside/simple-ddl-parser/issues/154
#. Added support for DEFAULT CHARSET statement MySQL - https://github.com/xnuinside/simple-ddl-parser/issues/153

**v0.26.5**

Fixes:


#. Parsetab included in builds.
#. Added additional argument log_file='path_to_file' to enable logging to a file with the provided name.

**v0.26.4**


#. Bugfix for (support CREATE OR REPLACE with additional keys like transient/temporary): https://github.com/xnuinside/simple-ddl-parser/issues/133

**v0.26.3**

Improvements:


#. Added support for OR REPLACE in CREATE TABLE: https://github.com/xnuinside/simple-ddl-parser/issues/131
#. Added support for AUTO INCREMENT in column: https://github.com/xnuinside/simple-ddl-parser/issues/130

**v0.26.2**

Fixes:


#. Fixed a huge bug that incorrectly parsed lines with 'USE' & 'GO' strings inside.
#. Fixed parsing of CREATE SCHEMA for Snowflake & Oracle DDLs.

Improvements:


#. Added COMMENT statement for CREATE TABLE DDL (for SNOWFLAKE dialect support).
#. Added COMMENT statement for CREATE SCHEMA DDL (for SNOWFLAKE dialect support).

**v0.26.1**

Fixes:


#. Support for multiple SERDEPROPERTIES - https://github.com/xnuinside/simple-ddl-parser/issues/126
#. Fix for issue with LOCATION and TBLPROPERTIES clauses in CREATE TABLE LIKE - https://github.com/xnuinside/simple-ddl-parser/issues/125
#. LOCATION now works correctly with double quote strings

**v0.26.0**
Improvements:


#. Added more explicit debug message on Statement errors - https://github.com/xnuinside/simple-ddl-parser/issues/116
#. Added support for "USING INDEX TABLESPACE" statement in ALTER - https://github.com/xnuinside/simple-ddl-parser/issues/119
#. Added support for IN statements in CHECKS - https://github.com/xnuinside/simple-ddl-parser/issues/121

New features:


#. Support SparkSQL USING - https://github.com/xnuinside/simple-ddl-parser/issues/117
   Updates initiated by ticket https://github.com/xnuinside/simple-ddl-parser/issues/120:
#. In the parser you can use the argument json_dump=True in the .run() method if you want to get the result in JSON format.


* README updated

Fixes:


#. Added support for PARTITION BY one column without type
#. Alter table add constraint PRIMARY KEY - https://github.com/xnuinside/simple-ddl-parser/issues/119
#. Fix for parsing SET statement - https://github.com/xnuinside/simple-ddl-parser/pull/122
#. Fix for disappeared columns without properties - https://github.com/xnuinside/simple-ddl-parser/issues/123

**v0.25.0**

Fixes:
------


#. Fix for issue with 'at time zone' https://github.com/xnuinside/simple-ddl-parser/issues/112

New features:
-------------


#. Added flag to raise errors if parser cannot parse statement DDLParser(.., silent=False) - https://github.com/xnuinside/simple-ddl-parser/issues/109
#. Added flag DDLParser(.., normalize_names=True) that changes the output of the parser:
   if the flag is True (default 'False'), then all identifiers will be returned without '[', '"' and the other delimiters that are used in different SQL dialects to separate custom names from reserved words & statements.
   For example, if the flag is set to 'True' and you pass this input:

.. code-block:: sql


       CREATE TABLE [dbo].[TO_Requests](
           [Request_ID] [int] IDENTITY(1,1) NOT NULL,
           [user_id] [int]

In output you will have names like 'dbo' and 'TO_Requests', not '[dbo]' and '[TO_Requests]'.

**v0.24.2**

Fixes:
------


#. Fix for the issue: https://github.com/xnuinside/simple-ddl-parser/issues/108 (reserved words can be used as table name after '.')

**v0.24.1**

Fixes:
------

HQL:
^^^^


#. fields_terminated_by now parses a comma as ``"','"``, not as ``''`` as it did previously.

Common:
^^^^^^^


#. Added an 'if_not_exists' field to the output, making it possible to re-create the DDL 1-to-1 from the metadata.

**v0.24.0**

Fixes:
------

HQL:
^^^^


#. More than 2 TBLPROPERTIES are now parsed correctly - https://github.com/xnuinside/simple-ddl-parser/pull/104

Common:
^^^^^^^


#. 'set' in lower case is now also parsed validly.
#. Names like 'schema', 'database', 'table' can now be used as names in CREATE DATABASE | SCHEMA | TABLESPACE | DOMAIN | TYPE statements and after INDEX and CONSTRAINT.
#. Creation of empty tables is also parsed correctly (like CREATE TABLE table;).

New Statements Support:
-----------------------

HQL:
^^^^


#. Added support for CLUSTERED BY - https://github.com/xnuinside/simple-ddl-parser/issues/103
#. Added support for  INTO ... BUCKETS
#. CREATE REMOTE DATABASE | SCHEMA

**v0.23.0**

Big refactoring: less code complexity & increase code coverage. Radon added to pre-commit hooks.

Fixes:
^^^^^^


#. Fix for issue with ALTER UNIQUE - https://github.com/xnuinside/simple-ddl-parser/issues/101

New Features
^^^^^^^^^^^^


#. SQL comment strings from the DDL are now parsed into the "comments" key in the output.

PostgreSQL:


#. Added support for ALTER TABLE ONLY | ALTER TABLE IF EXISTS

**v0.22.5**

Fixes:
^^^^^^


#. Fix for issue with '<' - https://github.com/xnuinside/simple-ddl-parser/issues/89

**v0.22.4**

Fixes:
^^^^^^

BigQuery:
^^^^^^^^^


#. Fixed issue with parsing schemas with a project in the name.
#. Added support for multiple OPTION() statements.

**v0.22.3**

Fixes:
^^^^^^

BigQuery:
^^^^^^^^^


#. CREATE TABLE statements with a 'project_id' in a format like project.dataset.table_name are now parsed validly.
   'project' was added to the output.
   Also added support for the project.dataset.name format in CREATE SCHEMA and ALTER statements.

**v0.22.2**

Fixes:
^^^^^^


#. Fix for the issue: https://github.com/xnuinside/simple-ddl-parser/issues/94 (column name starts with CREATE)

**v0.22.1**

New Features:
^^^^^^^^^^^^^

BigQuery:
---------


#. Added support for OPTION for full CREATE TABLE statement & column definition

Improvements:
-------------


#. CLUSTERED BY can be used without ()

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/xnuinside/simple-ddl-parser",
    "name": "simple-ddl-parser",
    "maintainer": null,
    "docs_url": null,
    "requires_python": "<4.0,>=3.6",
    "maintainer_email": null,
    "keywords": null,
    "author": "Iuliia Volkova",
    "author_email": "xnuinside@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/44/00/3cb18b3af32e39ded7b4270a475b42075ea2937354bbeca4025c3bd03d2e/simple_ddl_parser-1.1.0.tar.gz",
    "platform": null,
    "description": "\nSimple DDL Parser\n-----------------\n\n\n.. image:: https://img.shields.io/pypi/v/simple-ddl-parser\n   :target: https://img.shields.io/pypi/v/simple-ddl-parser\n   :alt: badge1\n \n.. image:: https://img.shields.io/pypi/l/simple-ddl-parser\n   :target: https://img.shields.io/pypi/l/simple-ddl-parser\n   :alt: badge2\n \n.. image:: https://img.shields.io/pypi/pyversions/simple-ddl-parser\n   :target: https://img.shields.io/pypi/pyversions/simple-ddl-parser\n   :alt: badge3\n \n.. image:: https://github.com/xnuinside/simple-ddl-parser/actions/workflows/main.yml/badge.svg\n   :target: https://github.com/xnuinside/simple-ddl-parser/actions/workflows/main.yml/badge.svg\n   :alt: workflow\n\n\nBuild with ply (lex & yacc in python). A lot of samples in 'tests/.\n\nIs it Stable?\n^^^^^^^^^^^^^\n\nYes, library already has about 9000+ downloads per day  - https://pypistats.org/packages/simple-ddl-parser..\n\nAs maintainer, I guarantee that any backward incompatible changes will not be done in patch or minor version. But! Pay attention that sometimes output in keywords can be changed in minor version because of fixing wrong behaviour in past.\n\nUpdates in version 1.x\n^^^^^^^^^^^^^^^^^^^^^^\n\nThe full list of updates can be found in the Changelog below (at the end of README).\n\nVersion 1.0.0 was released due to significant changes in the output structure and a stricter approach regarding the scope of the produced output. Now, you must provide the argument 'output_mode=name_of_your_dialect' if you wish to see arguments/properties specific to a particular dialect\n\nHow does it work?\n^^^^^^^^^^^^^^^^^\n\nParser supports: \n\n\n* SQL\n* HQL (Hive)\n* MSSQL dialec\n* Oracle dialect\n* MySQL dialect\n* PostgreSQL dialect\n* BigQuery\n* Redshift\n* Snowflake\n* SparkSQL\n* IBM DB2 dialect\n\nYou can check dialects sections after ``Supported Statements`` section to get more information that statements from dialects already supported by parser. If you need to add more statements or new dialects - feel free to open the issue. \n\nFeel free to open Issue with DDL sample\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nPay attentions that I'm adding functional tests for all supported statement, so if you see that your statement is failed and you didn't see it in the test 99,9% that I did n't have sample with such SQL statement - so feel free to open the issue and I will add support for it. \n\n**If you need some statement, that not supported by parser yet**\\ : please provide DDL example & information about that is it SQL dialect or DB.\n\nTypes that are used in your DB does not matter, so parser must also work successfully to any DDL for SQL DB. Parser is NOT case sensitive, it did not expect that all queries will be in upper case or lower case. So you can write statements like this:\n\n.. code-block:: sql\n\n\n       Alter Table Persons ADD CONSTRAINT CHK_PersonAge CHECK (Age>=18 AND City='Sandnes');\n\nIt will be parsed as is without errors.\n\nIf you have samples that cause an error - please open the issue (but don't forget to add ddl example), I will be glad to fix it.\n\nA lot of statements and output result you can find in tests on the github - https://github.com/xnuinside/simple-ddl-parser/tree/main/tests .\n\nHow to install\n^^^^^^^^^^^^^^\n\n.. 
code-block:: bash\n\n\n       pip install simple-ddl-parser\n\nHow to use\n----------\n\nExtract additional information from HQL (& other dialects)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nIn some dialects like HQL there is a lot of additional information about table like, fore example, is it external table, STORED AS, location & etc. This property will be always empty in 'classic' SQL DB like PostgreSQL or MySQL and this is the reason, why by default this information are 'hidden'.\nAlso some fields hidden in HQL, because they are simple not exists in HIVE, for example 'deferrable_initially'\nTo get this 'hql' specific details about table in output please use 'output_mode' argument in run() method.\n\nexample:\n\n.. code-block:: python\n\n\n       ddl = \"\"\"\n       CREATE TABLE IF NOT EXISTS default.salesorderdetail(\n           SalesOrderID int,\n           ProductID int,\n           OrderQty int,\n           LineTotal decimal\n           )\n       PARTITIONED BY (batch_id int, batch_id2 string, batch_32 some_type)\n       LOCATION 's3://datalake/table_name/v1'\n       ROW FORMAT DELIMITED\n           FIELDS TERMINATED BY ','\n           COLLECTION ITEMS TERMINATED BY '\\002'\n           MAP KEYS TERMINATED BY '\\003'\n       STORED AS TEXTFILE\n       \"\"\"\n\n       result = DDLParser(ddl).run(output_mode=\"hql\")\n       print(result)\n\nAnd you will get output with additional keys 'stored_as', 'location', 'external', etc.\n\n.. code-block:: python\n\n\n       # additional keys examples\n     {\n       ...,\n       'location': \"'s3://datalake/table_name/v1'\",\n       'map_keys_terminated_by': \"'\\\\003'\",\n       'partitioned_by': [{'name': 'batch_id', 'size': None, 'type': 'int'},\n                           {'name': 'batch_id2', 'size': None, 'type': 'string'},\n                           {'name': 'batch_32', 'size': None, 'type': 'some_type'}],\n       'primary_key': [],\n       'row_format': 'DELIMITED',\n       'schema': 'default',\n       'stored_as': 'TEXTFILE',\n       ... \n     }\n\nIf you run parser with command line add flag '-o=hql' or '--output-mode=hql' to get the same result.\n\nPossible output_modes: ['redshift', 'spark_sql', 'mysql', 'bigquery', 'mssql', 'databricks', 'sqlite', 'vertics', 'ibm_db2', 'postgres', 'oracle', 'hql', 'snowflake', 'sql']\n\nFrom python code\n^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n       from simple_ddl_parser import DDLParser\n\n\n       parse_results = DDLParser(\"\"\"create table dev.data_sync_history(\n           data_sync_id bigint not null,\n           sync_count bigint not null,\n           sync_mark timestamp  not  null,\n           sync_start timestamp  not null,\n           sync_end timestamp  not null,\n           message varchar(2000) null,\n           primary key (data_sync_id, sync_start)\n       ); \"\"\").run()\n\n       print(parse_results)\n\nTo parse from file\n^^^^^^^^^^^^^^^^^^\n\n.. code-block:: python\n\n\n       from simple_ddl_parser import parse_from_file\n\n       result = parse_from_file('tests/sql/test_one_statement.sql')\n       print(result)\n\nFrom command line\n^^^^^^^^^^^^^^^^^\n\nsimple-ddl-parser is installed to environment as command **sdp**\n\n.. code-block:: bash\n\n\n       sdp path_to_ddl_file\n\n       # for example:\n\n       sdp tests/sql/test_two_tables.sql\n\nYou will see the output in **schemas** folder in file with name **test_two_tables_schema.json**\n\nIf you want to have also output in console - use **-v** flag for verbose.\n\n.. 
code-block:: bash\n\n\n       sdp tests/sql/test_two_tables.sql -v\n\nIf you don't want to dump schema in file and just print result to the console, use **--no-dump** flag:\n\n.. code-block:: bash\n\n\n       sdp tests/sql/test_two_tables.sql --no-dump\n\nYou can provide target path where you want to dump result with argument **-t**\\ , **--target**\\ :\n\n.. code-block:: bash\n\n\n       sdp tests/sql/test_two_tables.sql -t dump_results/\n\nGet Output in JSON\n^^^^^^^^^^^^^^^^^^\n\nIf you want to get output in JSON in stdout you can use argument **json_dump=True** in method **.run()** for this\n\n.. code-block:: python\n\n       from simple_ddl_parser import DDLParser\n\n\n       parse_results = DDLParser(\"\"\"create table dev.data_sync_history(\n           data_sync_id bigint not null,\n           sync_count bigint not null,\n       ); \"\"\").run(json_dump=True)\n\n       print(parse_results)\n\nOutput will be:\n\n.. code-block:: json\n\n   [{\"columns\": [{\"name\": \"data_sync_id\", \"type\": \"bigint\", \"size\": null, \"references\": null, \"unique\": false, \"nullable\": false, \"default\": null, \"check\": null}, {\"name\": \"sync_count\", \"type\": \"bigint\", \"size\": null, \"references\": null, \"unique\": false, \"nullable\": false, \"default\": null, \"check\": null}], \"primary_key\": [], \"alter\": {}, \"checks\": [], \"index\": [], \"partitioned_by\": [], \"tablespace\": null, \"schema\": \"dev\", \"table_name\": \"data_sync_history\"}]\n\nMore details\n^^^^^^^^^^^^\n\n``DDLParser(ddl).run()``\n.run() method contains several arguments, that impact changing output result. As you can saw upper exists argument ``output_mode`` that allow you to set dialect and get more fields in output relative to chosen dialect, for example 'hql'. Possible output_modes: ['redshift', 'spark_sql', 'mysql', 'bigquery', 'mssql', 'databricks', 'sqlite', 'vertics', 'ibm_db2', 'postgres', 'oracle', 'hql', 'snowflake', 'sql']\n\nAlso in .run() method exists argument ``group_by_type`` (by default: False). By default output of parser looks like a List with Dicts where each dict == one entity from ddl (table, sequence, type, etc). And to understand that is current entity you need to check Dict like: if 'table_name' in dict - this is a table, if 'type_name' - this is a type & etc.\n\nTo make work little bit easy you can set group_by_type=True and you will get output already sorted by types, like:\n\n.. code-block:: python\n\n\n       { \n           'tables': [all_pasrsed_tables], \n           'sequences': [all_pasrsed_sequences], \n           'types': [all_pasrsed_types], \n           'domains': [all_pasrsed_domains],\n           ...\n       }\n\nFor example:\n\n.. 
code-block:: python\n\n\n       ddl = \"\"\"\n       CREATE TYPE \"schema--notification\".\"ContentType\" AS\n           ENUM ('TEXT','MARKDOWN','HTML');\n           CREATE TABLE \"schema--notification\".\"notification\" (\n               content_type \"schema--notification\".\"ContentType\"\n           );\n       CREATE SEQUENCE dev.incremental_ids\n           INCREMENT 10\n           START 0\n           MINVALUE 0\n           MAXVALUE 9223372036854775807\n           CACHE 1;\n       \"\"\"\n\n       result = DDLParser(ddl).run(group_by_type=True)\n\n       # result will be:\n\n       {'sequences': [{'cache': 1,\n                       'increment': 10,\n                       'maxvalue': 9223372036854775807,\n                       'minvalue': 0,\n                       'schema': 'dev',\n                       'sequence_name': 'incremental_ids',\n                       'start': 0}],\n       'tables': [{'alter': {},\n                   'checks': [],\n                   'columns': [{'check': None,\n                               'default': None,\n                               'name': 'content_type',\n                               'nullable': True,\n                               'references': None,\n                               'size': None,\n                               'type': '\"schema--notification\".\"ContentType\"',\n                               'unique': False}],\n                   'index': [],\n                   'partitioned_by': [],\n                   'primary_key': [],\n                   'schema': '\"schema--notification\"',\n                   'table_name': '\"notification\"'}],\n       'types': [{'base_type': 'ENUM',\n                   'properties': {'values': [\"'TEXT'\", \"'MARKDOWN'\", \"'HTML'\"]},\n                   'schema': '\"schema--notification\"',\n                   'type_name': '\"ContentType\"'}]}\n\nALTER statements\n^^^^^^^^^^^^^^^^\n\nRight now added support only for ALTER statements with FOREIGEIN key\n\nFor example, if in your ddl after table definitions (create table statements) you have ALTER table statements like this:\n\n.. code-block:: sql\n\n\n   ALTER TABLE \"material_attachments\" ADD FOREIGN KEY (\"material_id\", \"material_title\") REFERENCES \"materials\" (\"id\", \"title\");\n\nThis statements will be parsed and information about them putted inside 'alter' key in table's dict.\nFor example, please check alter statement tests - **tests/test_alter_statements.py**\n\nMore examples & tests\n^^^^^^^^^^^^^^^^^^^^^\n\nYou can find in **tests/** folder.\n\nDump result in json\n^^^^^^^^^^^^^^^^^^^\n\nTo dump result in json use argument .run(dump=True)\n\nYou also can provide a path where you want to have a dumps with schema with argument .run(dump_path='folder_that_use_for_dumps/')\n\nRaise error if DDL cannot be parsed by Parser\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nBy default Parser does not raise the error if some statement cannot be parsed - and just skip & produce empty output.\n\nTo change this behavior you can pass 'silent=False' argumen to main parser class, like:\n\n.. 
code-block::\n\n   DDLParser(.., silent=False)\n\n\nNormalize names\n^^^^^^^^^^^^^^^\n\nUse DDLParser(.., normalize_names=True)flag that change output of parser:\nIf flag is True (default 'False') then all identifiers will be returned without '[', '\"' and other delimiters that used in different SQL dialects to separate custom names from reserved words & statements.\nFor example, if flag set 'True' and you pass this input: \n\nCREATE TABLE [dbo].\\ `TO_Requests <[Request_ID] [int] IDENTITY(1,1>`_ NOT NULL,\n    [user_id] [int]\n\nIn output you will have names like 'dbo' and 'TO_Requests', not '[dbo]' and '[TO_Requests]'.\n\nSupported Statements\n--------------------\n\n\n* \n  CREATE [OR REPLACE] TABLE [ IF NOT EXISTS ] + columns definition, columns attributes: column name + type + type size(for example, varchar(255)), UNIQUE, PRIMARY KEY, DEFAULT, CHECK, NULL/NOT NULL, REFERENCES, ON DELETE, ON UPDATE,  NOT DEFERRABLE, DEFERRABLE INITIALLY, GENERATED ALWAYS, STORED, COLLATE\n\n* \n  STATEMENTS: PRIMARY KEY, CHECK, FOREIGN KEY in table definitions (in create table();)\n\n* \n  ALTER TABLE STATEMENTS: ADD CHECK (with CONSTRAINT), ADD FOREIGN KEY (with CONSTRAINT), ADD UNIQUE, ADD DEFAULT FOR, ALTER TABLE ONLY, ALTER TABLE IF EXISTS; ALTER .. PRIMARY KEY; ALTER .. USING INDEX TABLESPACE; ALTER .. ADD; ALTER .. MODIFY; ALTER .. ALTER COLUMN; etc\n\n* \n  PARTITION BY statement\n\n* \n  CREATE SEQUENCE with words: INCREMENT [BY], START [WITH], MINVALUE, MAXVALUE, CACHE\n\n* \n  CREATE TYPE statement:  AS TABLE, AS ENUM, AS OBJECT, INTERNALLENGTH, INPUT, OUTPUT\n\n* \n  LIKE statement (in this and only in this case to output will be added 'like' keyword with information about table from that we did like - 'like': {'schema': None, 'table_name': 'Old_Users'}).\n\n* \n  TABLESPACE statement\n\n* \n  COMMENT ON statement\n\n* \n  CREATE SCHEMA [IF NOT EXISTS] ... [AUTHORIZATION] ...\n\n* \n  CREATE DOMAIN [AS]\n\n* \n  CREATE [SMALLFILE | BIGFILE] [TEMPORARY] TABLESPACE statement\n\n* \n  CREATE DATABASE + Properties parsing\n\nSparkSQL Dialect statements\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n* USING\n\nHQL Dialect statements\n^^^^^^^^^^^^^^^^^^^^^^\n\n\n* PARTITIONED BY statement\n* ROW FORMAT, ROW FORMAT SERDE\n* WITH SERDEPROPERTIES (\"input.regex\" =  \"..some regex..\")\n* STORED AS (AVRO, PARQUET, etc), STORED AS INPUTFORMAT, OUTPUTFORMAT\n* COMMENT\n* LOCATION\n* FIELDS TERMINATED BY, LINES TERMINATED BY, COLLECTION ITEMS TERMINATED BY, MAP KEYS TERMINATED BY\n* TBLPROPERTIES ('parquet.compression'='SNAPPY' & etc.)\n* SKEWED BY\n* CLUSTERED BY \n\nMySQL\n^^^^^\n\n\n* ON UPDATE in column without reference \n\nMSSQL\n~~~~~\n\n\n* CONSTRAINT [CLUSTERED]... PRIMARY KEY\n* CONSTRAINT ... WITH statement\n* PERIOD FOR SYSTEM_TIME in CREATE TABLE statement\n* ON [PRIMARY] after CREATE TABLE statement (sample in test files test_mssql_specific.py)\n* WITH statement for TABLE properties\n* TEXTIMAGE_ON statement\n* DEFAULT NEXT VALUE FOR in COLUMN DEFAULT\n\nMSSQL / MySQL/ Oracle\n^^^^^^^^^^^^^^^^^^^^^\n\n\n* type IDENTITY statement\n* FOREIGN KEY REFERENCES statement\n* 'max' specifier in column size\n* CONSTRAINT ... UNIQUE, CONSTRAINT ... CHECK, CONSTRAINT ... FOREIGN KEY, CONSTRAINT ... PRIMARY KEY\n* CREATE CLUSTERED INDEX\n* CREATE TABLE (...) 
ORGANIZATION INDEX \n\nOracle\n^^^^^^\n\n\n* ENCRYPT column property [+ NO SALT, SALT, USING]\n* STORAGE column property\n\nPotgreSQL\n^^^^^^^^^\n\n\n* INHERITS table statement - https://postgrespro.ru/docs/postgresql/14/ddl-inherit \n\nAWS Redshift Dialect statements\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n* ENCODE column property\n* SORTKEY, DISTSTYLE, DISTKEY, ENCODE table properties\n* \n  CREATE TEMP / TEMPORARY TABLE\n\n* \n  syntax like with LIKE statement:\n\n  ``create temp table tempevent(like event);``\n\nSnowflake Dialect statements\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\n* CREATE .. CLONE statements for table, database and schema\n* CREATE TABLE [or REPLACE] [ TRANSIENT | TEMPORARY ] .. CLUSTER BY ..\n* CONSTRAINT .. [NOT] ENFORCED \n* COMMENT = in CREATE TABLE & CREATE SCHEMA statements\n* WITH MASKING POLICY\n* WITH TAG, including multiple tags in the same statement.\n* DATA_RETENTION_TIME_IN_DAYS\n* MAX_DATA_EXTENSION_TIME_IN_DAYS\n* CHANGE_TRACKING\n\nBigQuery\n^^^^^^^^\n\n\n* OPTION in CREATE SCHEMA statement\n* OPTION in CREATE TABLE statement\n* OPTION in column definition statement\n\nParser settings\n^^^^^^^^^^^^^^^\n\nLogging\n~~~~~~~\n\n\n#. Logging to file\n\nTo get logging output to file you should provide to Parser 'log_file' argument with path or file name:\n\n.. code-block:: console\n\n\n       DDLParser(ddl, log_file='parser221.log').run(group_by_type=True)\n\n\n#. Logging level\n\nTo set logging level you should provide argument 'log_level'\n\n.. code-block:: console\n\n\n       DDLParser(ddl, log_level=logging.INFO).run(group_by_type=True)\n\nThanks for involving & contributions\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nMost biggest 'Thanks' ever goes for contributions in parser:\nhttps://github.com/dmaresma\nhttps://github.com/cfhowes\nhttps://github.com/swiatek25\nhttps://github.com/slurpyb\nhttps://github.com/PBalsdon\n\nBig thanks for the involving & contribution with test cases with DDL samples & opening issues goes to:\n\n\n* https://github.com/kukigai , \n* https://github.com/kliushnichenko ,\n* https://github.com/geob3d\n\nfor help with debugging & testing support for BigQuery dialect DDLs:\n\n\n* https://github.com/ankitdata ,\n* https://github.com/kalyan939\n\nChangelog\n---------\n\n**v1.1.0**\n\nImprovements\n^^^^^^^^^^^^\n\nMySQL:\n\n\n#. Added support for INDEX statement inside table definition\n#. Added support for MySQL INVISIBLE/VISIBLE statement - https://github.com/xnuinside/simple-ddl-parser/issues/243\n\nSnowflake:\n\n\n#. Added support for cluster by statement before columns definition - https://github.com/xnuinside/simple-ddl-parser/issues/234\n\n**v1.0.4**\n\nImprovements\n^^^^^^^^^^^^\n\n\n#. Support functions with schema prefix in ``DEFAULT`` and ``CHECK`` statements. https://github.com/xnuinside/simple-ddl-parser/issues/240\n   ### Fixes\n#. Fix for REFERENCES NOT NULL - https://github.com/xnuinside/simple-ddl-parser/issues/239\n#. Fix for snowflake stage name location format bug fix - https://github.com/xnuinside/simple-ddl-parser/pull/241\n\n**v1.0.3**\n\nImprovements\n^^^^^^^^^^^^\n\n\n#. Fixed bug with ``CREATE OR REPLACE SCHEMA``.\n#. Added support of create empty tables without columns CREATE TABLE tablename (); (valid syntax in SQL)\n\nSnowflake\n^^^^^^^^^\n\n\n#. Fixed bug with snowflake ``stage_`` fileformat option value equal a single string as ``FIELD_OPTIONALLY_ENCLOSED_BY = '\\\"'``\\ , ``FIELD_DELIMITER = '|'``\n#. improve snowflake fileformat key equals value into dict. 
type.\n\n**v1.0.2**\n\nImprovements\n^^^^^^^^^^^^\n\n\n#. Fixed bug with places first table property value in 'authorization' key. Now it is used real property name.\n#. Fixed typo on Databricks dialect\n#. improved equals symbols support within COMMENT statement.\n#. turn regexp into functions\n\nMySQL Improvements\n^^^^^^^^^^^^^^^^^^\n\n\n#. UNSIGNED property after int parsed validly now\n\nSnowflake\n^^^^^^^^^\n\n\n#. Snowflake TAG now available on SCHEMA definitions.\n\n**v1.0.1**\n\nMinor Fixes\n^^^^^^^^^^^\n\n\n#. When using ``normalize_names=True`` do not remove ``[]`` from types like ``decimal(21)[]``.\n#. When using ``normalize_names=True`` ensure that ``\"complex\".\"type\"`` style names convert to ``complex.type``.\n\n**v1.0.0**\nIn output structure was done important changes that can in theory breaks code.\n\nImportant changes\n^^^^^^^^^^^^^^^^^\n\n\n#. Important change: \n\nall custom table properties that are defined after column definition in 'CREATE TABLE' statement and relative to only one dialect (only for SparkSQL, or HQL,etc), for example, like here:\nhttps://github.com/xnuinside/simple-ddl-parser/blob/main/tests/dialects/test_snowflake.py#L767  or https://github.com/xnuinside/simple-ddl-parser/blob/main/tests/dialects/test_spark_sql.py#L133 will be saved now in property ``table_properties`` as dict.\nPreviously they was placed on same level of table output as ``columns``\\ , ``alter``\\ , etc. Now, they grouped and moved to key ``table_properties``.\n\n\n#. \n   Formatting parser result now represented by 2 classes - Output & TableData, that makes it more strict and readable.\n\n#. \n   The output mode now functions more strictly. If you want to obtain output fields specific to a certain dialect, \n   use output_mode='snowflake' for Snowflake or output_mode='hql' for HQL, etc. \n   Previously, some keys appeared in the result without being filtered by dialect. \n   For example, if 'CLUSTER BY' was in the DDL, it would show up in the 'cluster_by' field regardless of the output mode. \n   However, now all fields that only work in certain dialects and are not part of the basic SQL notation will only be shown \n   if you choose the correct output_mode.\n\nNew Dialects support\n^^^^^^^^^^^^^^^^^^^^\n\n\n#. Added as possible output_modes new Dialects: \n\n\n* Databricks SQL like 'databricks', \n* Vertica as 'vertica', \n* SqliteFields as 'sqlite',\n* PostgreSQL as 'postgres'\n\nFull list of supported dialects you can find in dict - ``supported_dialects``\\ :\n\n``from simple_ddl_parser import supported_dialects``\n\nCurrently supported: ['redshift', 'spark_sql', 'mysql', 'bigquery', 'mssql', 'databricks', 'sqlite', 'vertics', 'ibm_db2', 'postgres', 'oracle', 'hql', 'snowflake', 'sql']\n\nIf you don't see dialect that you want to use - open issue with description and links to Database docs or use one of existed dialects.\n\nSnowflake updates:\n^^^^^^^^^^^^^^^^^^\n\n\n#. For some reasons, 'CLONE' statement in SNOWFLAKE was parsed into 'like' key in output. Now it was changed to 'clone' - inner structure of output stay the same as previously.\n\nMySQL updates:\n^^^^^^^^^^^^^^\n\n\n#. Engine statement now parsed correctly. Previously, output was always '='.\n\nBigQuery updates:\n^^^^^^^^^^^^^^^^^\n\n\n#. Word 'schema' totally removed from output. ``Dataset`` used instead of ``schema`` in BigQuery dialect.\n\n**v0.32.1**\n\nMinor Fixes\n^^^^^^^^^^^\n\n\n#. Removed debug print\n\n**v0.32.0**\n\nImprovements\n^^^^^^^^^^^^\n\n\n#. 
**v0.32.0**

Improvements
^^^^^^^^^^^^


#. Added support for several ALTER statements (ADD, DROP, RENAME, etc.) - https://github.com/xnuinside/simple-ddl-parser/issues/215
   Several keys were added to the 'alter' output:

   #. 'dropped_columns' - stores information about columns that were in the table but were later dropped by ALTER
   #. 'renamed_columns' - stores information about columns that were renamed
   #. 'modified_columns' - tracks ALTER COLUMN changes to defaults, datatype, etc. This key stores the previous column states.

Fixes
^^^^^


#. Include source column names in FOREIGN KEY references. Fix for: https://github.com/xnuinside/simple-ddl-parser/issues/196
#. ALTER statements are now parsed correctly even if names & schemas are written differently in the ``create table`` statement and the alter.
   For example, if in create table you used quotes like "schema_name"."table_name", but the alter used schema_name.table_name - previously this didn't work, but now the parser understands that it is the same table.

**v0.31.3**

Improvements
^^^^^^^^^^^^

Snowflake update:
~~~~~~~~~~~~~~~~~


#. Added support for Snowflake Virtual Column definitions in the External Column ``AS ()`` statement - https://github.com/xnuinside/simple-ddl-parser/issues/218
#. Enforced support for Snowflake _FILE_FORMAT options in External Column DDL statements - https://github.com/xnuinside/simple-ddl-parser/issues/221

Others
~~~~~~


#. Support for the KEY statement in CREATE TABLE statements. KEY statements now create INDEX entries in the parser output.

**v0.31.2**

Improvements
^^^^^^^^^^^^

Snowflake update:
~~~~~~~~~~~~~~~~~


#. Added support for Snowflake AUTOINCREMENT | IDENTITY column definitions with the optional ``ORDER|NOORDER`` parameter - https://github.com/xnuinside/simple-ddl-parser/issues/213

Common
~~~~~~


#. Added the 'encoding' param to the parse_from_file function - https://github.com/xnuinside/simple-ddl-parser/issues/142.
   The default encoding is utf-8 (see the sketch after the v0.30.0 notes below).

**v0.31.1**

Improvements
^^^^^^^^^^^^

Snowflake update:
~~~~~~~~~~~~~~~~~


#. Support multiple tag definitions in a single ``WITH TAG`` statement.
#. Added support for Snowflake double single quotes - https://github.com/xnuinside/simple-ddl-parser/issues/208

**v0.31.0**

Fixes:
^^^^^^


#. Moved the inline flag in regexp (issue with Python 3.11) - https://github.com/xnuinside/simple-ddl-parser/pull/200
   Fix for: https://github.com/xnuinside/simple-ddl-parser/issues/199

Improvements:
^^^^^^^^^^^^^


#. Added ``Snowflake Table DDL support of WITH MASKING POLICY column definition`` - https://github.com/xnuinside/simple-ddl-parser/issues/201

Updates:
^^^^^^^^


#. All deps updated to the latest versions.

**v0.30.0**

Fixes:
^^^^^^


#. IDENTITY is now parsed normally as a separate column property. Issue: https://github.com/xnuinside/simple-ddl-parser/issues/184

New Features:
^^^^^^^^^^^^^


#. 
   The IN TABLESPACE IBM DB2 statement is now parsed into the 'tablespace' key. Issue: https://github.com/xnuinside/simple-ddl-parser/issues/194.
   INDEX IN is also parsed to the 'index_in' key.
   Added support for the ORGANIZE BY statement.

#. 
   Added support for the PostgreSQL INHERITS statement. Issue: https://github.com/xnuinside/simple-ddl-parser/issues/191
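A minimal sketch of the ``parse_from_file`` options mentioned in these notes - the 'encoding' param from v0.31.2 above and the 'parser_settings' pass-through described under v0.28.0 further below. The file name is a hypothetical placeholder:

.. code-block:: python

       import logging

       from simple_ddl_parser import parse_from_file

       # 'schema.ddl' is an example path; 'encoding' defaults to utf-8, and
       # 'parser_settings' forwards its keys as DDLParser arguments.
       results = parse_from_file(
           "schema.ddl",
           encoding="utf-8",
           parser_settings={"log_level": logging.WARNING},
       )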
**v0.29.1**

Important updates:
^^^^^^^^^^^^^^^^^^


#. Python 3.6 is deprecated in tests and by default; try to move to Python 3.7, but better to 3.8, because 3.7 will be deprecated in 2023.

Fixes
^^^^^


#. Fix for https://github.com/xnuinside/simple-ddl-parser/issues/177

Improvements
^^^^^^^^^^^^


#. Added support for the Oracle 2-component size for types, like '30 CHAR'. From https://github.com/xnuinside/simple-ddl-parser/issues/176

**v0.29.0**

Fixes
^^^^^


#. The AUTOINCREMENT statement is now parsed validly, the same way as AUTO_INCREMENT, and shows up in the output as the 'autoincrement' property of the column.
   Fix for: https://github.com/xnuinside/simple-ddl-parser/issues/170
#. Fixed the "TypeError: argument of type 'NoneType' is not iterable" issue on some foreign keys - https://github.com/xnuinside/simple-ddl-parser/issues/148

New Features
^^^^^^^^^^^^


#. Support for non-numeric column type parameters - https://github.com/xnuinside/simple-ddl-parser/issues/171
   They show up in the column attribute 'type_parameters'.

**v0.28.1**

Improvements:


#. Lines starting with an INSERT INTO statement are now successfully ignored by the parser (so you can keep them in the DDL - they will simply be skipped).

Fixes:


#. Important fix for multiline comments.

**v0.28.0**

Important Changes (Pay attention):


#. Because AUTO_INCREMENT is now parsed as a separate column property, the previous output changed.
   Previously it was parsed as part of the type, like 'INT AUTO_INCREMENT'.
   Now the type will be only 'INT', and in the column properties you will see 'autoincrement': True (see the sketch after the v0.26.5 notes below).

Amazing innovation:


#. It is weird to write this in a changelog, but only in version 0.28.0 did I realize that floats were not supported by the parser - this is now fixed.
   Thanks for the sample in the issue: https://github.com/xnuinside/simple-ddl-parser/issues/163

Improvements:

MariaDB:


#. Added support for MariaDB AUTO_INCREMENT (from the DDL here - https://github.com/xnuinside/simple-ddl-parser/issues/144).
   If a column is auto-incremented, it is indicated as 'autoincrement': True in the column definition.

Common:


#. Added parsing for multiline comments in DDL with ``/* */`` syntax.
#. Comments from DDL are now all placed in the 'comments' key if you use the ``group_by_type=`` arg in the parser.
#. Added the argument 'parser_settings={}' (dict type) to the method parse_from_file() - this way you can pass any arguments that you want to DDLParser (& that are supported by it).
   So, if you want to set log_level=logging.WARNING for the parser - just use it as:
   parse_from_file('path_to_file', parser_settings={'log_level': logging.WARNING}). For issue: https://github.com/xnuinside/simple-ddl-parser/issues/160

**v0.27.0**

Fixes:


#. Fixed parsing CHECKS with IN statements - https://github.com/xnuinside/simple-ddl-parser/issues/150
#. @# symbols added to the ID token - (partially) https://github.com/xnuinside/simple-ddl-parser/issues/146

Improvements:


#. Added support for '*' in the size column (Oracle dialect) - https://github.com/xnuinside/simple-ddl-parser/issues/151
#. Added the arg 'debug' to the parser; it works the same way as 'silent' - to get clearer error output.

New features:


#. Added support for Oracle 'ORGANIZATION INDEX'.
#. Added support for SparkSQL PARTITION BY with a procedure call - https://github.com/xnuinside/simple-ddl-parser/issues/154
#. Added support for the MySQL DEFAULT CHARSET statement - https://github.com/xnuinside/simple-ddl-parser/issues/153

**v0.26.5**

Fixes:


#. Parsetab included in builds.
#. Added an additional argument log_file='path_to_file' to enable logging to a file with the provided name.
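A minimal sketch of the AUTO_INCREMENT output change described under v0.28.0 above. The exact output shape hinted at in the comments is illustrative:

.. code-block:: python

       from simple_ddl_parser import DDLParser

       ddl = "CREATE TABLE users (id INT AUTO_INCREMENT, name VARCHAR(100));"
       result = DDLParser(ddl).run()

       # Since v0.28.0 the type comes back as plain 'INT' and the auto-increment
       # flag is a separate column property, e.g. 'autoincrement': True.
       print(result[0]["columns"][0])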
**v0.26.4**


#. Bugfix for CREATE OR REPLACE with additional keys like transient/temporary: https://github.com/xnuinside/simple-ddl-parser/issues/133

**v0.26.3**

Improvements:


#. Added support for OR REPLACE in CREATE TABLE: https://github.com/xnuinside/simple-ddl-parser/issues/131
#. Added support for AUTO INCREMENT in columns: https://github.com/xnuinside/simple-ddl-parser/issues/130

**v0.26.2**

Fixes:


#. Fixed a huge bug with incorrect parsing of lines containing 'USE' & 'GO' strings.
#. Fixed parsing of CREATE SCHEMA for Snowflake & Oracle DDLs.

Improvements:


#. Added the COMMENT statement for CREATE TABLE DDL (for Snowflake dialect support).
#. Added the COMMENT statement for CREATE SCHEMA DDL (for Snowflake dialect support).

**v0.26.1**

Fixes:


#. Support multiple SERDEPROPERTIES - https://github.com/xnuinside/simple-ddl-parser/issues/126
#. Fix for the issue with LOCATION and TBLPROPERTIES clauses in CREATE TABLE LIKE - https://github.com/xnuinside/simple-ddl-parser/issues/125
#. LOCATION now works correctly with double-quoted strings.

**v0.26.0**

Improvements:


#. Added a more explicit debug message on statement errors - https://github.com/xnuinside/simple-ddl-parser/issues/116
#. Added support for the "USING INDEX TABLESPACE" statement in ALTER - https://github.com/xnuinside/simple-ddl-parser/issues/119
#. Added support for IN statements in CHECKS - https://github.com/xnuinside/simple-ddl-parser/issues/121

New features:


#. Support SparkSQL USING - https://github.com/xnuinside/simple-ddl-parser/issues/117
   Updates initiated by ticket https://github.com/xnuinside/simple-ddl-parser/issues/120:
#. In the parser you can use the argument json_dump=True in the method .run() if you want to get the result in JSON format.


* README updated

Fixes:


#. Added support for PARTITION BY one column without a type.
#. ALTER TABLE ADD CONSTRAINT PRIMARY KEY - https://github.com/xnuinside/simple-ddl-parser/issues/119
#. Fix for parsing the SET statement - https://github.com/xnuinside/simple-ddl-parser/pull/122
#. Fix for columns without properties disappearing - https://github.com/xnuinside/simple-ddl-parser/issues/123

**v0.25.0**

Fixes:
------


#. Fix for the issue with 'at time zone' - https://github.com/xnuinside/simple-ddl-parser/issues/112

New features:
-------------


#. Added a flag to raise errors if the parser cannot parse a statement: DDLParser(.., silent=False) - https://github.com/xnuinside/simple-ddl-parser/issues/109
#. Added a flag DDLParser(.., normalize_names=True) that changes the output of the parser:
   if the flag is True (default 'False'), then all identifiers will be returned without '[', '"' and other delimiters that are used in different SQL dialects to separate custom names from reserved words & statements.
   For example, if the flag is set to 'True' and you pass this input:

.. code-block:: sql

       CREATE TABLE [dbo].[TO_Requests](
           [Request_ID] [int] IDENTITY(1,1) NOT NULL,
           [user_id] [int]

In the output you will have names like 'dbo' and 'TO_Requests', not '[dbo]' and '[TO_Requests]'.
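A minimal sketch of the ``normalize_names`` flag in use with the sample above; passing output_mode='mssql' for this T-SQL-style DDL is an assumption for illustration:

.. code-block:: python

       from simple_ddl_parser import DDLParser

       ddl = """
       CREATE TABLE [dbo].[TO_Requests](
           [Request_ID] [int] IDENTITY(1,1) NOT NULL,
           [user_id] [int]
       );
       """

       # With normalize_names=True the [ ] delimiters are stripped, so the
       # parsed schema and table names come back as 'dbo' and 'TO_Requests'.
       result = DDLParser(ddl, normalize_names=True).run(output_mode="mssql")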
**v0.24.2**

Fixes:
------


#. Fix for the issue: https://github.com/xnuinside/simple-ddl-parser/issues/108 (reserved words can be used as table names after '.')

**v0.24.1**

Fixes:
------

HQL:
^^^^


#. fields_terminated_by now parses ``,`` as ``','``\ , not as ``''`` as previously.

Common:
^^^^^^^


#. Added an 'if_not_exists' field to the output, making it possible to re-create the DDL 1-to-1 from the metadata.

**v0.24.0**

Fixes:
------

HQL:
^^^^


#. More than 2 tblproperties are now parsed correctly - https://github.com/xnuinside/simple-ddl-parser/pull/104

Common:
^^^^^^^


#. 'set' in lower case is now also parsed validly.
#. Names like 'schema', 'database', 'table' can now be used as names in CREATE DATABASE | SCHEMA | TABLESPACE | DOMAIN | TYPE statements and after INDEX and CONSTRAINT.
#. Creation of empty tables is also parsed correctly (like CREATE TABLE table;).

New Statements Support:
-----------------------

HQL:
^^^^


#. Added support for CLUSTERED BY - https://github.com/xnuinside/simple-ddl-parser/issues/103
#. Added support for INTO ... BUCKETS
#. CREATE REMOTE DATABASE | SCHEMA

**v0.23.0**

Big refactoring: less code complexity & increased code coverage. Radon added to pre-commit hooks.

Fixes:
^^^^^^


#. Fix for the issue with ALTER UNIQUE - https://github.com/xnuinside/simple-ddl-parser/issues/101

New Features
^^^^^^^^^^^^


#. SQL comment strings from DDL are now parsed to the "comments" key in the output.

PostgreSQL:


#. Added support for ALTER TABLE ONLY | ALTER TABLE IF EXISTS

**v0.22.5**

Fixes:
^^^^^^


#. Fix for the issue with '<' - https://github.com/xnuinside/simple-ddl-parser/issues/89

**v0.22.4**

Fixes:
^^^^^^

BigQuery:
^^^^^^^^^


#. Fixed an issue with parsing schemas with a project in the name.
#. Added support for multiple OPTION() statements.

**v0.22.3**

Fixes:
^^^^^^

BigQuery:
^^^^^^^^^


#. CREATE TABLE statements with a 'project_id' in a format like project.dataset.table_name are now parsed validly.
   'project' is added to the output.
   Also added support for the project.dataset.name format in CREATE SCHEMA and ALTER statements.

**v0.22.2**

Fixes:
^^^^^^


#. Fix for the issue: https://github.com/xnuinside/simple-ddl-parser/issues/94 (column name starts with CREATE)

**v0.22.1**

New Features:
^^^^^^^^^^^^^

BigQuery:
---------


#. Added support for OPTION in the full CREATE TABLE statement & in column definitions.

Improvements:
-------------


#. CLUSTER BY can be used without ().
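To round off the BigQuery entries above, a minimal sketch combining the project.dataset.table naming from v0.22.3 with table-level and column-level OPTIONS() clauses from v0.22.x. The project/dataset names are hypothetical and the DDL sample follows BigQuery's OPTIONS() syntax; whether every key shown here lands in the output is not guaranteed:

.. code-block:: python

       from simple_ddl_parser import DDLParser

       ddl = """
       CREATE TABLE my_project.my_dataset.names (
           name STRING OPTIONS (description="a name")
       ) OPTIONS (description="names table");
       """

       # 'project' is expected in the output alongside the dataset and table name.
       result = DDLParser(ddl).run(output_mode="bigquery", group_by_type=True)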
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Simple DDL Parser to parse SQL & dialects like HQL, TSQL (MSSQL), Oracle, AWS Redshift, Snowflake, MySQL, PostgreSQL, etc ddl files to json/python dict with full information about columns: types, defaults, primary keys, etc.; sequences, alters, custom types & other entities from ddl.",
    "version": "1.1.0",
    "project_urls": {
        "Homepage": "https://github.com/xnuinside/simple-ddl-parser",
        "Repository": "https://github.com/xnuinside/simple-ddl-parser"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "9954507a6e46977b7eef8bbf3c3dabe8c0de136687ab2947e4f0800a63a587ea",
                "md5": "927016c97391af5bbc13d3f4dfab72c6",
                "sha256": "11cda34ae10b200bb940c701f12bb858c8f7098e23ca7902fe975d09587168a4"
            },
            "downloads": -1,
            "filename": "simple_ddl_parser-1.1.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "927016c97391af5bbc13d3f4dfab72c6",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": "<4.0,>=3.6",
            "size": 75297,
            "upload_time": "2024-04-21T18:11:23",
            "upload_time_iso_8601": "2024-04-21T18:11:23.386332Z",
            "url": "https://files.pythonhosted.org/packages/99/54/507a6e46977b7eef8bbf3c3dabe8c0de136687ab2947e4f0800a63a587ea/simple_ddl_parser-1.1.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "44003cb18b3af32e39ded7b4270a475b42075ea2937354bbeca4025c3bd03d2e",
                "md5": "1a96e01a17dff2e811e450cae07ed146",
                "sha256": "0c9bcc463fc05b773dbc469e0f1af9cd78b8fe59742eb9fbac9005b9150fe850"
            },
            "downloads": -1,
            "filename": "simple_ddl_parser-1.1.0.tar.gz",
            "has_sig": false,
            "md5_digest": "1a96e01a17dff2e811e450cae07ed146",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": "<4.0,>=3.6",
            "size": 77598,
            "upload_time": "2024-04-21T18:11:25",
            "upload_time_iso_8601": "2024-04-21T18:11:25.971326Z",
            "url": "https://files.pythonhosted.org/packages/44/00/3cb18b3af32e39ded7b4270a475b42075ea2937354bbeca4025c3bd03d2e/simple_ddl_parser-1.1.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-21 18:11:25",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "xnuinside",
    "github_project": "simple-ddl-parser",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "simple-ddl-parser"
}
        
Elapsed time: 0.24668s