SPyQL
=====
SQL with Python in the middle
.. image:: https://img.shields.io/pypi/v/spyql.svg
:target: https://pypi.org/project/spyql/
:alt: https://pypi.python.org/pypi/spyql
.. image:: https://readthedocs.org/projects/spyql/badge/?version=latest
:target: https://spyql.readthedocs.io/en/latest/
:alt: https://spyql.readthedocs.io/en/latest/?version=latest
.. image:: https://codecov.io/gh/dcmoura/spyql/branch/master/graph/badge.svg?token=5C7I7LG814
:target: https://codecov.io/gh/dcmoura/spyql
:alt: codecov
.. image:: https://pepy.tech/badge/spyql
:target: https://pepy.tech/project/spyql
:alt: downloads
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
:alt: code style: black
.. image:: https://img.shields.io/badge/License-MIT-yellow.svg
:target: https://opensource.org/licenses/MIT
:alt: license: MIT
About
-----
.. intro_start
SPyQL is a query language that combines:
* the simplicity and structure of SQL;
* with the power and readability of Python.
.. code-block:: sql
SELECT
date.fromtimestamp(.purchase_ts) AS purchase_date,
.price * .quantity AS total
FROM json
WHERE .department.upper() == 'IT'
ORDER BY 2 DESC
TO csv
SQL provides the structure of the query, while Python is used to define expressions, bringing along a vast ecosystem of packages.
SPyQL is fast and memory efficient. Take a look at the `benchmarks with GB-size JSON data <https://colab.research.google.com/github/dcmoura/spyql/blob/master/notebooks/json_benchmark.ipynb>`_.
SPyQL CLI
^^^^^^^^^
SPyQL offers a command-line interface that allows running SPyQL queries on top of text data (e.g. CSV, JSON). Data can come from files but also from data streams, such as Kafka, or from databases such as PostgreSQL. Basically, data can come from any command that outputs text :-). Moreover, data can be generated by a Python expression! And since SPyQL also writes to different formats, it makes converting between data formats easy.
Take a look at the Command line examples to see how to query Parquet, process API calls, traverse directories of zipped JSONs, convert CSV to JSON, and import JSON/CSV data into SQL databases, among many other things.
See also:
* `Tutorial (v0.8) <https://danielcmoura.com/blog/2022/spyql-cell-towers/>`_
* `Demo video (v0.4) <https://vimeo.com/danielcmoura/spyqldemo>`_
SPyQL Module
^^^^^^^^^^^^
SPyQL is also available as a Python module. In addition to the CLI features, you can also:
* query variables (e.g. lists of dicts);
* get results into in-memory data structures.
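For illustration, a query over a list of dicts such as ``SELECT .name, .score * 2 AS double FROM data WHERE .score > 0`` computes the equivalent of the plain-Python sketch below. This is a sketch of the query semantics only, not the SPyQL module API; the ``data`` variable and its fields are made up for the example.

```python
# Pure-Python sketch of what a SPyQL query over a variable computes.
# Equivalent query: SELECT .name, .score * 2 AS double
#                   FROM data
#                   WHERE .score > 0
data = [
    {"name": "ana", "score": 10},
    {"name": "bruno", "score": -3},
    {"name": "carla", "score": 7},
]

result = [
    {"name": row["name"], "double": row["score"] * 2}  # SELECT clause
    for row in data                                    # FROM clause
    if row["score"] > 0                                # WHERE clause
]
```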
Principles
^^^^^^^^^^
We aim for SPyQL to be:
* **Simple**\ : simple to use with a straightforward implementation;
* **Familiar**\ : you should feel at home if you are acquainted with SQL and Python;
* **Light**\ : small memory footprint that allows you to process large data that fits into your machine;
* **Useful**\ : it should make your life easier, filling a gap in the ecosystem.
.. intro_end
Distinctive features of SPyQL
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Row order guarantee
* Natural window for aggregations
* No distinction between aggregate and window functions
* IMPORT clause
* Natural support for lists, sets, dictionaries, objects, etc.
* 1-liner by design
* Multiple data formats supported
Testimonials
------------
|
"I'm very impressed - this is some very neat pragmatic software design."
Simon Willison, Creator of Datasette, co-creator of Django
|
"I love this tool! I use it every day"...
Alin Panaitiu, Creator of Lunar
|
"Brilliant tool, thanks a lot for creating it and for the example here!"
Greg Sadetsky, Co-founder and CTO at Decibel Ads
|
Documentation
--------------
The official documentation of SPyQL can be found at: `<https://spyql.readthedocs.io/>`_.
Installation
------------
The easiest way to install SPyQL is with pip:
.. code-block:: sh
pip install spyql
Hello world
-----------
.. hello_start
To test your installation run in the terminal:
.. code-block:: sh
spyql "SELECT 'Hello world' as Message TO pretty"
Output:
.. code-block::
Message
-----------
Hello world
You can try replacing the output format with JSON or CSV, and adding more columns, e.g. run in the terminal:
.. code-block:: sh
spyql "SELECT 'Hello world' as message, 1+2 as three TO json"
Output:
.. code-block:: json
{"message": "Hello world", "three": 3}
.. hello_end
.. recipes_start
Example queries
---------------
You can run the following example queries in the terminal:
``spyql "the_query" < a_data_file``
Example data files are not provided in most cases.
Query a CSV (and print a pretty table)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: sql
SELECT a_col_name, 'positive' if int(col2) >= 0 else 'negative' AS sign
FROM csv
TO pretty
Convert CSV to a flat JSON
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: sql
SELECT * FROM csv TO json
Convert from CSV to a hierarchical JSON
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: sql
SELECT {'client': {'id': col1, 'name': col2}, 'price': 120.40} AS json
FROM csv TO json
or
.. code-block:: sql
SELECT {'id': col1, 'name': col2} AS client, 120.40 AS price
FROM csv TO json
JSON to CSV, filtering out NULLs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: sql
SELECT .client.id AS id, .client.name AS name, .price
FROM json
WHERE .client.name is not NULL
TO csv
Explode JSON to CSV
^^^^^^^^^^^^^^^^^^^
.. code-block:: sql
SELECT .invoice_num AS id, .items.name AS name, .items.price AS price
FROM json
EXPLODE .items
TO csv
Sample input:
.. code-block:: json
{"invoice_num" : 1028, "items": [{"name": "tomatoes", "price": 1.5}, {"name": "bananas", "price": 2.0}]}
{"invoice_num" : 1029, "items": [{"name": "peaches", "price": 3.12}]}
Output:
.. code-block::
id, name, price
1028, tomatoes, 1.5
1028, bananas, 2.0
1029, peaches, 3.12
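The ``EXPLODE`` clause emits one output row per element of the listed array, replicating the remaining fields. Its behavior on the sample input above can be sketched in plain Python (an illustration of the semantics, not SPyQL's implementation):

```python
# Sketch of EXPLODE semantics: one output row per element of .items
rows = [
    {"invoice_num": 1028, "items": [{"name": "tomatoes", "price": 1.5},
                                    {"name": "bananas", "price": 2.0}]},
    {"invoice_num": 1029, "items": [{"name": "peaches", "price": 3.12}]},
]

exploded = [
    {"id": row["invoice_num"], "name": item["name"], "price": item["price"]}
    for row in rows
    for item in row["items"]  # EXPLODE .items: fan out the array
]
```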
Python iterator/list/comprehension to JSON
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: sql
SELECT 10 * cos(col1 * ((pi * 4) / 90))
FROM range(80)
TO json
or
.. code-block:: sql
SELECT col1
FROM [10 * cos(i * ((pi * 4) / 90)) for i in range(80)]
TO json
Importing python modules
^^^^^^^^^^^^^^^^^^^^^^^^
Here we import ``hashlib`` from the Python standard library to calculate an MD5 hash of each input line.
Since ``hashlib`` ships with Python, no extra installation is required.
.. code-block:: sql
IMPORT hashlib as hl
SELECT hl.md5(col1.encode('utf-8')).hexdigest()
FROM text
Getting the top 5 records
^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: sql
SELECT int(score) AS score, player_name
FROM csv
ORDER BY 1 DESC NULLS LAST, score_date
LIMIT 5
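``NULLS LAST`` keeps rows with a NULL sort key at the end even though the sort is descending. In Python terms, that ordering can be sketched with a composite sort key (an illustration only; the row data here is made up):

```python
# Sketch of ORDER BY 1 DESC NULLS LAST, score_date / LIMIT 5
rows = [
    {"score": 42, "player_name": "ana", "score_date": "2022-01-02"},
    {"score": None, "player_name": "bruno", "score_date": "2022-01-01"},
    {"score": 99, "player_name": "carla", "score_date": "2022-01-03"},
]

top = sorted(
    rows,
    key=lambda r: (r["score"] is None,     # NULLS LAST: nulls sort after values
                   -(r["score"] or 0),     # score DESC (negate for descending)
                   r["score_date"]),       # tie-break: score_date ASC
)[:5]                                      # LIMIT 5
```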
Aggregations
^^^^^^^^^^^^
Totals by player, alphabetically ordered.
.. code-block:: sql
SELECT .player_name, sum_agg(.score) AS total_score
FROM json
GROUP BY 1
ORDER BY 1
Partial aggregations
^^^^^^^^^^^^^^^^^^^^
Calculating the cumulative sum of a variable using the ``PARTIALS`` modifier. Also demoing the ``lag`` aggregator.
.. code-block:: sql
SELECT PARTIALS
.new_entries,
sum_agg(.new_entries) AS cum_new_entries,
lag(.new_entries) AS prev_entries
FROM json
TO json
Sample input:
.. code-block:: json
{"new_entries" : 10}
{"new_entries" : 5}
{"new_entries" : 25}
{"new_entries" : null}
{}
{"new_entries" : 100}
Output:
.. code-block:: json
{"new_entries" : 10, "cum_new_entries" : 10, "prev_entries": null}
{"new_entries" : 5, "cum_new_entries" : 15, "prev_entries": 10}
{"new_entries" : 25, "cum_new_entries" : 40, "prev_entries": 5}
{"new_entries" : null, "cum_new_entries" : 40, "prev_entries": 25}
{"new_entries" : null, "cum_new_entries" : 40, "prev_entries": null}
{"new_entries" : 100, "cum_new_entries" : 140, "prev_entries": null}
If ``PARTIALS`` were omitted, the result would be a single row equivalent to the last output row.
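The running sum and lag above can be reproduced in plain Python. This sketch mirrors the sample output; note how null or missing entries leave the cumulative sum unchanged:

```python
# Sketch of PARTIALS semantics: emit the aggregation state after every row
inputs = [
    {"new_entries": 10},
    {"new_entries": 5},
    {"new_entries": 25},
    {"new_entries": None},
    {},
    {"new_entries": 100},
]

out = []
cum = 0
prev = None
for row in inputs:
    value = row.get("new_entries")  # missing key behaves like null
    if value is not None:
        cum += value                # sum_agg skips nulls
    out.append({"new_entries": value,
                "cum_new_entries": cum,
                "prev_entries": prev})
    prev = value                    # lag: value from the previous row
```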
Distinct rows
^^^^^^^^^^^^^
.. code-block:: sql
SELECT DISTINCT *
FROM csv
Command line examples
---------------------
To run the following examples, type ``Ctrl-x Ctrl-e`` in your terminal. This will open your default editor (emacs/vim). Paste the code of one of the examples, save, and exit.
Queries on Parquet with directories
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Here, ``find`` traverses a directory and executes ``parquet-tools`` on each Parquet file, dumping it in JSON format. ``jq -c`` makes sure that the output has one JSON object per line before handing over to spyql. This is far from an efficient way to query Parquet files, but it can be a handy option if you need to do a quick inspection.
.. code-block:: sh
find /the/directory -name "*.parquet" -exec parquet-tools cat --json {} \; |
jq -c |
spyql "
SELECT .a_field, .a_num_field * 2 + 1
FROM json
"
Querying multiple json.gz files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: sh
gzcat *.json.gz |
jq -c |
spyql "
SELECT .a_field, .a_num_field * 2 + 1
FROM json
"
Querying YAML / XML / TOML files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`yq <https://kislyuk.github.io/yq/#>`_ converts YAML, XML and TOML files to JSON, making it easy to query any of these with spyql.
.. code-block:: sh
cat file.yaml | yq -c | spyql "SELECT .a_field FROM json"
.. code-block:: sh
cat file.xml | xq -c | spyql "SELECT .a_field FROM json"
.. code-block:: sh
cat file.toml | tomlq -c | spyql "SELECT .a_field FROM json"
Kafka to PostgreSQL pipeline
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Read data from a Kafka topic and write to the PostgreSQL table ``customer``.
.. code-block:: sh
kafkacat -b the.broker.com -t the.topic |
spyql -Otable=customer -Ochunk_size=1 --unbuffered "
SELECT
.customer.id AS id,
.customer.name AS name
FROM json
TO sql
" |
psql -U an_user_name -h a.host.com a_database_name
Monitoring statistics in Kafka
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Read data from a Kafka topic, continuously calculating statistics.
.. code-block:: sh
kafkacat -b the.broker.com -t the.topic |
spyql --unbuffered "
SELECT PARTIALS
count_agg(*) AS running_count,
sum_agg(value) AS running_sum,
min_agg(value) AS min_so_far,
value AS current_value
FROM json
TO csv
"
Sub-queries (piping)
^^^^^^^^^^^^^^^^^^^^
A special file format (spy) is used to efficiently pipe data between queries.
.. code-block:: sh
cat a_file.json |
spyql "
SELECT ' '.join([.first_name, .middle_name, .last_name]) AS full_name
FROM json
TO spy" |
spyql "SELECT full_name, full_name.upper() FROM spy"
(Equi) Joins
^^^^^^^^^^^^^
It is possible to perform simple (LEFT) JOIN operations based on dictionary lookups.
Given `numbers.json`:
.. code-block:: json
{
"1": "One",
"2": "Two",
"3": "Three"
}
Query:
.. code-block:: sh
spyql -Jnums=numbers.json "
SELECT nums[col1] as res
FROM [3,4,1,1]
TO json"
Output:
.. code-block:: json
{"res": "Three"}
{"res": null}
{"res": "One"}
{"res": "One"}
If you want an INNER JOIN instead of a LEFT JOIN, you can add a condition to the WHERE clause, e.g.:
.. code-block:: sql
SELECT nums[col1] as res
FROM [3,4,1,1]
WHERE col1 in nums
TO json
Output:
.. code-block:: json
{"res": "Three"}
{"res": "One"}
{"res": "One"}
Queries over APIs
^^^^^^^^^^^^^^^^^
.. code-block:: sh
curl https://reqres.in/api/users?page=2 |
spyql "
SELECT
.data.email AS email,
'Dear {}, thank you for being a great customer!'.format(.data.first_name) AS msg
FROM json
EXPLODE .data
TO json
"
Plotting to the terminal
^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: sh
spyql "
SELECT col1
FROM [10 * cos(i * ((pi * 4) / 90)) for i in range(80)]
TO plot
"
Plotting with `matplotcli <https://github.com/dcmoura/matplotcli>`_
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: sh
spyql "
SELECT col1 AS y
FROM [10 * cos(i * ((pi * 4) / 90)) for i in range(80)]
TO json
" | plt "plot(y)"
.. image:: imgs/matplotcli_demo1.png
:width: 600
:alt: matplotcli demo
.. recipes_end
----
This package was created with `Cookiecutter <https://github.com/audreyr/cookiecutter>`_ and the ``audreyr/cookiecutter-pypackage`` `project template <https://github.com/audreyr/cookiecutter-pypackage>`_.