Name: stream-sqlite
Version: 0.0.41
Home page: https://github.com/uktrade/stream-sqlite
Summary: Python function to extract all the rows from a SQLite database file concurrently with iterating over its bytes, without needing random access to the file
Author: Department for International Trade
Requires Python: >=3.5.0
Upload time: 2022-12-28 19:07:34
            # stream-sqlite [![CircleCI](https://circleci.com/gh/uktrade/stream-sqlite.svg?style=shield)](https://circleci.com/gh/uktrade/stream-sqlite) [![Test Coverage](https://api.codeclimate.com/v1/badges/b665c7634e8194fe6878/test_coverage)](https://codeclimate.com/github/uktrade/stream-sqlite/test_coverage)

Python function to extract all the rows from a SQLite database file concurrently with iterating over its bytes, without needing random access to the file.

Note that the [SQLite file format](https://www.sqlite.org/fileformat.html) is not designed to be streamed; the data is arranged in _pages_ of a fixed number of bytes, and the information to identify a page often comes _after_ the page in the stream (sometimes a great deal after). Therefore, pages are buffered in memory until they can be identified.
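
As a point of reference for that page structure, the page size itself is declared up front: the first 100 bytes of a SQLite file are the database header, and bytes 16-17 hold the page size as a 2-byte big-endian integer (the special value 1 meaning 65536). A minimal sketch of reading it, separate from anything stream-sqlite does internally:

```python
import struct

def page_size_from_header(header: bytes) -> int:
    # The first 100 bytes of a SQLite file are the database header,
    # starting with the magic string b'SQLite format 3\x00'
    if header[:16] != b'SQLite format 3\x00':
        raise ValueError('Not a SQLite 3 file')
    # Offset 16: the page size as a 2-byte big-endian unsigned integer,
    # where the special value 1 means a page size of 65536
    (page_size,) = struct.unpack('>H', header[16:18])
    return 65536 if page_size == 1 else page_size
```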


## Installation

```bash
pip install stream-sqlite
```


## Usage

```python
from stream_sqlite import stream_sqlite
import httpx

# Iterable that yields the bytes of a SQLite file
def sqlite_bytes():
    with httpx.stream('GET', 'http://www.parlgov.org/static/stable/2020/parlgov-stable.db') as r:
        yield from r.iter_bytes(chunk_size=65_536)

# If there is a single table in the file, there will be exactly one iteration of the outer loop.
# If there are multiple tables, each can appear multiple times.
for table_name, pragma_table_info, rows in stream_sqlite(sqlite_bytes(), max_buffer_size=1_048_576):
    for row in rows:
        print(row)
```
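
The byte source does not have to be HTTP: `stream_sqlite` accepts any iterable of `bytes` instances. A sketch that streams a local file in chunks, where `example.db` is a hypothetical path:

```python
from stream_sqlite import stream_sqlite

def local_sqlite_bytes(path, chunk_size=65_536):
    # Yield the file in fixed-size chunks rather than loading it whole
    with open(path, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

for table_name, pragma_table_info, rows in stream_sqlite(local_sqlite_bytes('example.db'), max_buffer_size=1_048_576):
    for row in rows:
        print(table_name, row)
```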


## Recommendations

If you have control over the SQLite file, `VACUUM;` should be run on it before streaming. In addition to minimising the size of the file, `VACUUM;` arranges the pages in a way that often reduces the buffering required when streaming. This is especially true if the file was the target of intermingled `INSERT`s and `DELETE`s over multiple tables.
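
For example, with the standard-library `sqlite3` module (`example.db` is a hypothetical path; `VACUUM` must run outside a transaction, hence autocommit mode):

```python
import sqlite3

conn = sqlite3.connect('example.db', isolation_level=None)  # autocommit
conn.execute('VACUUM;')  # rewrites the file compactly
conn.close()
```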

Also, indexes are not used when extracting rows while streaming. If streaming is the only use of the SQLite file, and you have control over it, the indexes should be removed, and `VACUUM;` then run.
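
A sketch of removing the indexes and then vacuuming with `sqlite3`, listing index names from `sqlite_master` (automatically created indexes, whose names begin with `sqlite_`, cannot be dropped and are skipped):

```python
import sqlite3

conn = sqlite3.connect('example.db', isolation_level=None)  # autocommit
index_names = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'index' AND name NOT LIKE 'sqlite_%'",
).fetchall()
for (name,) in index_names:
    conn.execute(f'DROP INDEX "{name}"')
conn.execute('VACUUM;')
conn.close()
```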

Some tests suggest that if the file is written in autovacuum mode, i.e. `PRAGMA auto_vacuum = FULL;`, then the pages are arranged in a way that reduces the buffering required when streaming. Your mileage may vary.
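
If trying this on an existing file, note that `auto_vacuum` is a property of the database file itself, and changing it only takes effect after a subsequent `VACUUM;`:

```python
import sqlite3

conn = sqlite3.connect('example.db', isolation_level=None)  # autocommit
conn.execute('PRAGMA auto_vacuum = FULL;')
conn.execute('VACUUM;')  # the changed auto_vacuum setting takes effect here
conn.close()
```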

            
