sql-metadata 2.8.0

Home page: https://github.com/macbre/sql-metadata
Summary: Uses tokenized query returned by python-sqlparse and generates query metadata
Upload time: 2023-04-20 17:44:40
Author: Maciej Brencz
Requires Python: >=3.7.2,<4.0.0
License: MIT

# sql-metadata

[![PyPI](https://img.shields.io/pypi/v/sql_metadata.svg)](https://pypi.python.org/pypi/sql_metadata)
[![Tests](https://github.com/macbre/sql-metadata/actions/workflows/python-ci.yml/badge.svg)](https://github.com/macbre/sql-metadata/actions/workflows/python-ci.yml)
[![Coverage Status](https://coveralls.io/repos/github/macbre/sql-metadata/badge.svg?branch=master&1)](https://coveralls.io/github/macbre/sql-metadata?branch=master)
<a href="https://github.com/psf/black"><img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>
[![Maintenance](https://img.shields.io/badge/maintained%3F-yes-green.svg)](https://github.com/macbre/sql-metadata/graphs/commit-activity)
[![Downloads](https://pepy.tech/badge/sql-metadata/month)](https://pepy.tech/project/sql-metadata)

Uses tokenized query returned by [`python-sqlparse`](https://github.com/andialbrecht/sqlparse) and generates query metadata.

**Extracts column names and tables** used by the query.
Automatically performs **column alias resolution**, **sub-query alias resolution**, and **table alias resolution**.

Also provides a helper for **normalization of SQL queries**.

Supported query syntax:

* MySQL
* PostgreSQL
* Sqlite
* MSSQL
* [Apache Hive](https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML)

(note that the listed backends can differ quite substantially, but they should all work for the query types supported by `sql-metadata`)

You can test the capabilities of `sql-metadata` with an interactive demo: [https://sql-app.infocruncher.com/](https://sql-app.infocruncher.com/)

## Usage

```bash
pip install sql-metadata
```
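
Once installed, `Parser` is the main entry point. As a quick sanity check (the same queries are covered in detail in the sections below):

```python
from sql_metadata import Parser

parser = Parser("SELECT test, id FROM foo, bar")

# columns and tables referenced by the query
parser.columns
# ['test', 'id']
parser.tables
# ['foo', 'bar']
```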

### Extracting raw sql-metadata tokens

```python
from sql_metadata import Parser

# extract raw sql-metadata tokens
Parser("SELECT * FROM foo").tokens
# ['SELECT', '*', 'FROM', 'foo']
```

### Extracting columns from query

```python
from sql_metadata import Parser

# get columns from query - for more examples see `tests/test_getting_columns.py`
Parser("SELECT test, id FROM foo, bar").columns
# ['test', 'id']

Parser("INSERT /* VoteHelper::addVote xxx */  INTO `page_vote` (article_id,user_id,`time`) VALUES ('442001','27574631','20180228130846')").columns
# ['article_id', 'user_id', 'time']

parser = Parser("SELECT a.* FROM product_a.users AS a JOIN product_b.users AS b ON a.ip_address = b.ip_address")

# note that aliases are auto-resolved
parser.columns
# ['product_a.*', 'product_a.users.ip_address', 'product_b.users.ip_address']

# note that you can also extract columns together with their place in the query,
# which returns a dict with lists divided into select, where, order_by, group_by, join, insert and update
parser.columns_dict
# {'select': ['product_a.users.*'], 'join': ['product_a.users.ip_address', 'product_b.users.ip_address']}
```

### Extracting columns aliases from query

```python
from sql_metadata import Parser
parser = Parser("SELECT a, (b + c - u) as alias1, custome_func(d) alias2 from aa, bb order by alias1")

# note that the columns list does not contain the column aliases
parser.columns
# ["a", "b", "c", "u", "d"]

# but you can still extract the alias names
parser.columns_aliases_names
# ["alias1", "alias2"]

# aliases are resolved to the columns which they refer to
parser.columns_aliases
# {"alias1": ["b", "c", "u"], "alias2": "d"}

# you can also extract aliases grouped by the section of the query in which they are used
parser.columns_aliases_dict
# {"order_by": ["alias1"], "select": ["alias1", "alias2"]}

# the same applies to aliases used in query sections when you extract columns_dict
# here only the alias is used in ORDER BY, but it is resolved to the actual columns
assert parser.columns_dict == {'order_by': ['b', 'c', 'u'],
                               'select': ['a', 'b', 'c', 'u', 'd']}
```

### Extracting tables from query

```python
from sql_metadata import Parser

# get tables from query - for more examples see `tests/test_getting_tables.py`
Parser("SELECT a.* FROM product_a.users AS a JOIN product_b.users AS b ON a.ip_address = b.ip_address").tables
# ['product_a.users', 'product_b.users']

Parser("SELECT test, id FROM foo, bar").tables
# ['foo', 'bar']

# you can also extract aliases of the tables as a dictionary
parser = Parser("SELECT f.test FROM foo AS f")

# get table aliases
parser.tables_aliases
# {'f': 'foo'}

# note that aliases are auto-resolved for columns
parser.columns
# ["foo.test"]
```

### Extracting values from insert query
```python
from sql_metadata import Parser

parser = Parser(
    "INSERT /* VoteHelper::addVote xxx */  INTO `page_vote` (article_id,user_id,`time`) " 
    "VALUES ('442001','27574631','20180228130846')"
)
# extract values from query
parser.values
# ["442001", "27574631", "20180228130846"]

# extract a dictionary with column-value pairs
parser.values_dict
#{"article_id": "442001", "user_id": "27574631", "time": "20180228130846"}

# if column names are not set, placeholders are added automatically
parser = Parser(
    "INSERT IGNORE INTO `table` VALUES (9, 2.15, '123', '2017-01-01');"
)
parser.values
# [9, 2.15, "123", "2017-01-01"]

parser.values_dict
#{"column_1": 9, "column_2": 2.15, "column_3": "123", "column_4": "2017-01-01"}
```


### Extracting limit and offset
```python
from sql_metadata import Parser

Parser('SELECT foo_limit FROM bar_offset LIMIT 50 OFFSET 1000').limit_and_offset
# (50, 1000)

Parser('SELECT foo_limit FROM bar_offset limit 2000,50').limit_and_offset
# (50, 2000)
```

### Extracting with names

```python
from sql_metadata import Parser

parser = Parser(
    """
WITH
    database1.tableFromWith AS (SELECT aa.* FROM table3 as aa 
                                left join table4 on aa.col1=table4.col2),
    test as (SELECT * from table3)
SELECT
  "xxxxx"
FROM
  database1.tableFromWith alias
LEFT JOIN database2.table2 ON ("tt"."ttt"."fff" = "xx"."xxx")
"""
)

# get names / aliases of WITH statements
parser.with_names
# ["database1.tableFromWith", "test"]

# get definition of with queries
parser.with_queries
# {"database1.tableFromWith": "SELECT aa.* FROM table3 as aa left join table4 on aa.col1=table4.col2"
#  "test": "SELECT * from table3"}

# note that names of WITH statements do not appear in tables
parser.tables
# ["table3", "table4", "database2.table2"]
```

### Extracting sub-queries

```python
from sql_metadata import Parser

parser = Parser(
"""
SELECT COUNT(1) FROM
(SELECT std.task_id FROM some_task_detail std WHERE std.STATUS = 1) a
JOIN (SELECT st.task_id FROM some_task st WHERE task_type_id = 80) b
ON a.task_id = b.task_id;
"""
)

# get sub-queries dictionary
parser.subqueries
# {"a": "SELECT std.task_id FROM some_task_detail std WHERE std.STATUS = 1",
#  "b": "SELECT st.task_id FROM some_task st WHERE task_type_id = 80"}


# get names/ aliases of sub-queries / derived tables
parser.subqueries_names
# ["a", "b"]

# note that columns coming from sub-queries are resolved to real columns
parser.columns
#["some_task_detail.task_id", "some_task_detail.STATUS", "some_task.task_id", 
# "task_type_id"]

# the same applies to columns_dict; note that the join columns are resolved
parser.columns_dict
#{'join': ['some_task_detail.task_id', 'some_task.task_id'],
# 'select': ['some_task_detail.task_id', 'some_task.task_id'],
# 'where': ['some_task_detail.STATUS', 'task_type_id']}

```

See the `tests` files for more examples of slightly more complex queries.

### Queries normalization and comments extraction

```python
from sql_metadata import Parser
parser = Parser('SELECT /* Test */ foo FROM bar WHERE id in (1, 2, 56)')

# generalize query
parser.generalize
# 'SELECT foo FROM bar WHERE id in (XYZ)'

# remove comments
parser.without_comments
# 'SELECT foo FROM bar WHERE id in (1, 2, 56)'

# extract comments
parser.comments
# ['/* Test */']
```

See the `test/test_normalization.py` file for more examples of slightly more complex queries.

## Migrating from `sql_metadata` 1.x

The `sql_metadata.compat` module has been implemented to make the transition to sql-metadata v2.0 smoother.

You can use it by simply changing the imports in your code from:

```python
from sql_metadata import get_query_columns, get_query_tables
```

into:

```python
from sql_metadata.compat import get_query_columns, get_query_tables
```

The following functions from the old API are available in the `sql_metadata.compat` module:

* `generalize_sql`
* `get_query_columns` (since #131, column aliases ARE NOT returned by this function)
* `get_query_limit_and_offset`
* `get_query_tables`
* `get_query_tokens`
* `preprocess_query`
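
A minimal sketch of the compat API in use; the results below assume the wrappers return the same values as the equivalent `Parser` properties shown earlier:

```python
from sql_metadata.compat import get_query_columns, get_query_tables

# equivalent to Parser("...").columns / .tables from the examples above
get_query_columns("SELECT test, id FROM foo, bar")
# ['test', 'id']

get_query_tables("SELECT test, id FROM foo, bar")
# ['foo', 'bar']
```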

## Authors and contributors

Created and maintained by [@macbre](https://github.com/macbre) with great contributions from [@collerek](https://github.com/collerek) and others.

* aborecki (https://github.com/aborecki)
* collerek (https://github.com/collerek)
* dylanhogg (https://github.com/dylanhogg)
* macbre (https://github.com/macbre)

## Stargazers over time

[![Stargazers over time](https://starchart.cc/macbre/sql-metadata.svg)](https://starchart.cc/macbre/sql-metadata)