scriptdb 1.0.6

* Summary: Simple SQLite sync/async wrapper with migration support for use in ad-hoc scripts
* Author: ScriptDB Contributors
* Repository: https://github.com/MihanEntalpo/ScriptDB
* Requires Python: >=3.8
* License: MIT
* Keywords: sqlite, migrations, asyncio, database
* Uploaded: 2025-09-04 08:25:20
# ScriptDB

ScriptDB is a tiny wrapper around SQLite with built‑in migration support that
can be used asynchronously or synchronously. It is designed for small
integration scripts and ETL jobs where an external database would be
unnecessary, and it aims to provide a pleasant developer experience while
keeping the API minimal.

## Features

* **Async and sync** – choose between the async [`aiosqlite`](https://github.com/omnilib/aiosqlite)
  backend or the synchronous stdlib `sqlite3` backend.
* **Migrations** – declare migrations as SQL snippet(s) or Python callables and
  let ScriptDB apply each of them exactly once (a callable-based sketch follows below).
* **Lightweight** – no server to run and no complicated setup; perfect for
  throw‑away scripts or small tools.
* **WAL by default** – connections use SQLite's write-ahead logging mode;
  disable with `use_wal=False` if rollback journals are required.

Composite primary keys are not supported; each table must have a single-column primary key.
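
The quick starts below only demonstrate SQL-based migrations. The Python-callable
form mentioned in the feature list is not shown in this README; the following is a
hypothetical sketch, assuming a migration entry can carry a callable under a key
such as `"func"` that receives the open database (check the library source for the
actual key name and signature):

```python
from scriptdb import SyncBaseDB

class MyDB(SyncBaseDB):
    def migrations(self):
        return [
            {
                "name": "create_settings",
                "sql": "CREATE TABLE settings(key TEXT PRIMARY KEY, value TEXT)",
            },
            {
                "name": "seed_defaults",
                # Hypothetical: the "func" key and the db argument passed to the
                # callable are assumptions, not documented in this README.
                "func": lambda db: db.execute(
                    "INSERT INTO settings(key, value) VALUES(?, ?)",
                    ("theme", "dark"),
                ),
            },
        ]
```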

## Requirements

ScriptDB requires SQLite version **3.21.0** or newer. This covers SQLite
releases bundled with modern Python builds, so no additional dependency is
needed on most systems.
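
If you are unsure which SQLite your interpreter ships with, you can check the
bundled version from the standard library (a quick stdlib check, independent of
ScriptDB):

```python
import sqlite3

# Version of the SQLite library bundled with this Python build.
print(sqlite3.sqlite_version)  # e.g. "3.45.1"

if sqlite3.sqlite_version_info < (3, 21, 0):
    raise RuntimeError("SQLite 3.21.0+ is required by ScriptDB")
```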

## Installation

To use the synchronous implementation:

```bash
pip install scriptdb
```

To use the asynchronous version (installs aiosqlite):

```bash
pip install scriptdb[async]
```

## Sync or Async

Both the asynchronous and synchronous interfaces expose the same API.
The only difference is whether methods are coroutines (`AsyncBaseDB` and
`AsyncCacheDB`) or regular blocking functions (`SyncBaseDB` and
`SyncCacheDB`). Import `AsyncBaseDB`/`AsyncCacheDB` from `scriptdb.asyncdb` for
asynchronous usage or `SyncBaseDB`/`SyncCacheDB` from `scriptdb.syncdb` for
synchronous usage. For convenience, each module also exposes `BaseDB` and
`CacheDB` aliases pointing to the respective implementations.
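
For quick reference, the imports described above look like this:

```python
# Asynchronous API (requires the "async" extra, which installs aiosqlite)
from scriptdb.asyncdb import AsyncBaseDB, AsyncCacheDB

# Synchronous API (uses only the stdlib sqlite3 module)
from scriptdb.syncdb import SyncBaseDB, SyncCacheDB

# Each module also exposes BaseDB/CacheDB aliases for its own implementation
from scriptdb.syncdb import BaseDB, CacheDB  # the synchronous classes here
```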

## Asynchronous quick start

Create a subclass of `AsyncBaseDB` and provide a list of migrations:

```python
from scriptdb import AsyncBaseDB

class MyDB(AsyncBaseDB):
    def migrations(self):
        return [
            {
                "name": "create_links",
                "sql": """
                    CREATE TABLE links(
                        resource_id INTEGER PRIMARY KEY,
                        referrer_url TEXT,
                        url TEXT,
                        status INTEGER,
                        progress INTEGER,
                        is_done INTEGER,
                        content BLOB
                    )
                """,
            },
            {
                "name": "add_created_idx",
                # run multiple statements sequentially
                "sqls": [
                    "ALTER TABLE links ADD COLUMN created_at TEXT",  # new column
                    "CREATE INDEX idx_links_created_at ON links(created_at)",  # index
                ],
            },
        ]

async def main():
    async with MyDB.open("app.db") as db:  # WAL journaling is enabled by default
        await db.execute(
            "INSERT INTO links(url, status, progress, is_done) VALUES(?,?,?,?)",
            ("https://example.com/data", 0, 0, 0),
        )
        row = await db.query_one("SELECT url FROM links")
        print(row["url"])  # -> https://example.com/data

    # Manual open/close without a context manager
    db = await MyDB.open("app.db")
    try:
        await db.execute(
            "INSERT INTO links(url, status, progress, is_done) VALUES(?,?,?,?)",
            ("https://example.com/other", 0, 0, 0),
        )
    finally:
        await db.close()

    # Daemonize the aiosqlite worker thread to avoid hanging on exit
    async with MyDB.open("app.db", daemonize_thread=True) as db:
        await db.execute("SELECT 1")

    db = await MyDB.open("app.db", daemonize_thread=True)
    try:
        await db.execute("SELECT 1")
    finally:
        await db.close()
```

Always close the database connection with `close()` or use the `async with`
context manager as shown above. If you call `MyDB.open()` without a context
manager, remember to `await db.close()` when finished. Leaving a database open
may keep background tasks alive and prevent your application from exiting
cleanly.

### Daemonizable aiosqlite

`aiosqlite` runs a worker thread to execute SQLite operations. There has been an
ongoing debate in the `aiosqlite` project about whether this thread should be a
daemon. A non-daemon worker can keep the Python process alive even after all
tasks have finished. ScriptDB ships with an internal `daemonizable_aiosqlite`
module that wraps `aiosqlite.connect` and allows the worker thread to be marked
as a daemon.

The test suite in this repository relies on this module; without it, lingering
threads would prevent tests from completing. To enable daemon mode in your
application, pass `daemonize_thread=True` when opening the database as shown
above. Use this option only if your program hangs on exit, as daemon threads can
be terminated abruptly, potentially losing in-flight work.

## Synchronous quick start

Create a subclass of `SyncBaseDB` for blocking use:

```python
from scriptdb import SyncBaseDB

class MyDB(SyncBaseDB):
    def migrations(self):
        return [
            {
                "name": "create_links",
                "sql": """
                    CREATE TABLE links(
                        resource_id INTEGER PRIMARY KEY,
                        referrer_url TEXT,
                        url TEXT,
                        status INTEGER,
                        progress INTEGER,
                        is_done INTEGER,
                        content BLOB
                    )
                """,
            },
            {
                "name": "add_created_idx",
                "sqls": [
                    "ALTER TABLE links ADD COLUMN created_at TEXT",  # new column
                    "CREATE INDEX idx_links_created_at ON links(created_at)",  # index
                ],
            },
        ]

with MyDB.open("app.db") as db:  # WAL journaling is enabled by default
    db.execute(
        "INSERT INTO links(url, status, progress, is_done) VALUES(?,?,?,?)",
        ("https://example.com/data", 0, 0, 0),
    )
    row = db.query_one("SELECT url FROM links")
    print(row["url"])  # -> https://example.com/data

# Manual open/close without a context manager
db = MyDB.open("app.db")
try:
    db.execute(
        "INSERT INTO links(url, status, progress, is_done) VALUES(?,?,?,?)",
        ("https://example.com/other", 0, 0, 0),
    )
finally:
    db.close()
```

Always close the database connection with `close()` or use the `with`
context manager as shown above. Leaving a database open may keep background
tasks alive and prevent your application from exiting cleanly.

## Usage examples

The `AsyncBaseDB` API supports migrations and offers helpers for common operations
and background tasks:

```python
from scriptdb import AsyncBaseDB, run_every_seconds, run_every_queries

class MyDB(AsyncBaseDB):
    def migrations(self):
        return [
            {
                "name": "init",
                "sql": """
                    CREATE TABLE links(
                        resource_id INTEGER PRIMARY KEY,
                        referrer_url TEXT,
                        url TEXT,
                        status INTEGER,
                        progress INTEGER,
                        is_done INTEGER,
                        content BLOB
                    )
                """,
            },
            {"name": "idx_status", "sql": "CREATE INDEX idx_links_status ON links(status)"},
            {"name": "create_meta", "sql": "CREATE TABLE meta(key TEXT PRIMARY KEY, value TEXT)"},
        ]

    # Periodically remove finished links
    @run_every_seconds(60)
    async def cleanup(self):
        await self.execute("DELETE FROM links WHERE is_done = 1")

    # Write a checkpoint every 100 executed queries
    @run_every_queries(100)
    async def checkpoint(self):
        await self.execute("PRAGMA wal_checkpoint")

async def main():
    async with MyDB.open("app.db") as db:  # pass use_wal=False to disable WAL

        # Insert many links at once
        await db.execute_many(
            "INSERT INTO links(url) VALUES(?)",
            [("https://a",), ("https://b",), ("https://c",)],
        )

        # Fetch all URLs
        rows = await db.query_many("SELECT url FROM links")
        print([r["url"] for r in rows])

        # Stream links one by one
        async for row in db.query_many_gen("SELECT url FROM links"):
            print(row["url"])
```

### Helper methods

`AsyncBaseDB` and `SyncBaseDB` include convenience helpers for common insert,
update and delete operations:

```python
# Insert one record and get its primary key
pk = await db.insert_one("links", {"url": "https://a"})

# Insert many records
await db.insert_many("links", [{"url": "https://b"}, {"url": "https://c"}])

# Upsert a single record
await db.upsert_one("links", {"resource_id": pk, "status": 200})

# Upsert many records
await db.upsert_many(
    "links",
    [
        {"resource_id": 1, "status": 200},
        {"resource_id": 2, "status": 404},
    ],
)

# Update selected columns in a record
await db.update_one("links", pk, {"progress": 50})

# Delete records
await db.delete_one("links", pk)
await db.delete_many("links", "status = ?", (404,))
```

### Query helpers

The library also offers helpers for common read patterns:

```python
# Get a single value
count = await db.query_scalar("SELECT COUNT(*) FROM links")

# Get a list from the first column of each row
ids = await db.query_column("SELECT resource_id FROM links ORDER BY resource_id")

# Build dictionaries from rows
# Use primary key automatically
records = await db.query_dict("SELECT * FROM links")

# Explicit column names for key and value
urls = await db.query_dict(
    "SELECT resource_id, url FROM links", key="resource_id", value="url"
)

# Callables for custom key and value
status_by_url = await db.query_dict(
    "SELECT * FROM links",
    key=lambda r: r["url"],
    value=lambda r: r["status"],
)
```

## Useful implementations

### CacheDB

`AsyncCacheDB` and `SyncCacheDB` provide a simple key‑value store with optional
expiration.

```python
from scriptdb import AsyncCacheDB

async def main():
    async with AsyncCacheDB.open("cache.db") as cache:
        await cache.set("answer", b"42", expire_sec=60)
        if await cache.is_set("answer"):
            print("cached!")
        print(await cache.get("answer"))  # b"42"
```

```python
from scriptdb import SyncCacheDB

with SyncCacheDB.open("cache.db") as cache:
    cache.set("answer", b"42", expire_sec=60)
    if cache.is_set("answer"):
        print("cached!")
    print(cache.get("answer"))  # b"42"
```

A value without `expire_sec` will be kept indefinitely. Use `is_set` to check for
keys without retrieving their values. To easily cache function results, use the
`cache` decorator method from a cache instance:

```python
import asyncio
from scriptdb import AsyncCacheDB

async def main():
    async with AsyncCacheDB.open("cache.db") as cache:

        @cache.cache(expire_sec=30)
        async def slow():
            await asyncio.sleep(1)
            return 1

        await slow()
```

Subsequent calls within 30 seconds will return the cached result without
executing the function. You can supply `key_func` to control how the cache key
is generated.
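
The exact `key_func` contract is not spelled out in this README; the following is
a hypothetical sketch, assuming the callable receives the decorated function's
arguments and returns the cache key:

```python
import asyncio
from scriptdb import AsyncCacheDB

async def main():
    async with AsyncCacheDB.open("cache.db") as cache:

        # Hypothetical: the arguments passed to key_func and the expected return
        # type are assumptions; consult the library source for the real contract.
        @cache.cache(expire_sec=30, key_func=lambda user_id: f"profile:{user_id}")
        async def load_profile(user_id):
            await asyncio.sleep(1)  # stand-in for a slow lookup
            return {"id": user_id}

        await load_profile(42)  # computed and cached under "profile:42"
        await load_profile(42)  # served from cache within 30 seconds
```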

## Simple DDL query builder

If you often forget SQLite syntax, ScriptDB includes a small helper to build
`CREATE TABLE`, `ALTER TABLE`, and `DROP TABLE` statements programmatically.
Unlike raw SQL, there is no syntax to memorize: just type `Builder.` and let your IDE suggest the available operations.

> **Warning**
> The builder quotes identifiers to mitigate SQL injection, but options that
> accept raw SQL snippets (such as `check=` expressions) are not sanitized.
> Never pass untrusted user data to these parameters.

```python
from scriptdb import Builder

# The following examples build SQL strings; they do not execute them.

# Build query to create a table with several columns
create_sql = (
    Builder.create_table("users")
    .primary_key("id", int)
    .add_field("username", str, not_null=True)
    .add_field("email", str)
    .done()
)

# Build query to add new columns
add_cols_sql = (
    Builder.alter_table("users")
    .add_column("age", int, default=0)
    .add_column("created_at", int)
    .done()
)

# Build query to remove an old column
drop_col_sql = Builder.alter_table("users").drop_column("email").done()

# Build query to create an index
index_sql = Builder.create_index("idx_users_username", "users", on="username")

# Build query to drop the table when finished
drop_sql = Builder.drop_table("users").done()
```

These builders are convenient for defining migrations in `*BaseDB` subclasses:

```python
from scriptdb import SyncBaseDB, Builder

class MyDB(SyncBaseDB):
    def migrations(self):
        return [
            {
                "name": "create_users",
                "sql": (
                    Builder.create_table("users")
                    .primary_key("id", int)
                    .add_field("username", str, not_null=True)
                    .add_field("email", str)
                    .done()
                ),
            },
            {
                "name": "add_fields",
                "sql": (
                    Builder.alter_table("users")
                    .add_column("age", int, default=0)
                    .add_column("created_at", int)
                    .done()
                ),
            },
            {
                "name": "remove_email",
                "sql": (
                    Builder.alter_table("users")
                    .drop_column("email")
                    .done()
                ),
            },
            {
                "name": "index_username",
                "sql": Builder.create_index("idx_users_username", "users", on="username"),
            },
            {
                "name": "drop_users",
                "sql": Builder.drop_table("users").done(),
            },
        ]
```

## Running tests

```bash
make test
```

Run linters and type checks with:

```bash
make lint
```

## Contributing

Issues and pull requests are welcome. Please run the tests before submitting
changes.

## License

This project is licensed under the terms of the MIT license. See
[LICENSE](https://github.com/MihanEntalpo/ScriptDB/blob/main/LICENSE) for details.

## Development

Clone the repository and create a virtual environment:

```bash
git clone https://github.com/MihanEntalpo/ScriptDB.git
cd ScriptDB
python -m venv venv
source venv/bin/activate
pip install -e .[async,test]
```

If you don't have `make`, install it first, e.g. `sudo apt-get install make`.

Before committing, ensure code passes the linters, type checks, and tests with coverage:

```bash
make lint
make test
```

## AI Usage disclaimer

* The package was initially created with the help of OpenAI Codex.
* All algorithms, functionality, and logic were devised by a human.
* The human supervised and reviewed every function and method generated by Codex.
* Some parts were manually corrected, as it is often difficult to obtain sane edits from AI.
* Although some code was made by an LLM, this is not vibe-coding; you can trust this code as if I had written it myself.


            
