sdblpy

Name: sdblpy
Version: 0.3.0
Summary: Lite SurrealDB client that only supports websocket raw queries and async pooled connections
Author: Maxwell Flitton <maxwellflitton@gmail.com>
License: MIT License
Keywords: surrealdb, lite
Upload time: 2024-10-03 23:51:52
Homepage: https://github.com/maxwellflitton/surreal-lite-py
# surreal-lite-py
An unofficial Python API for SurrealDB that has only one dependency (websockets) and a very simple interface. One interface is a blocking, isolated query interface; the other is an async connection pool interface.

## Contents in order of appearance

- [Installation](#installation)
- [Async Connection Pool Interface](#async-connection-pool-interface)
- [Basic Blocking Interface](#basic-blocking-interface)
- [Basic Async Interface](#basic-async-interface)
- [Migrations via command line](#migrations-via-command-line)
- [Run SQL scripts via command line](#run-sql-scripts-via-command-line)
- [Command line parameters](#command-line-parameters)
- [Migrations via python code](#migrations-via-python-code)
- [Future Plans](#future-plans)

## Installation
You can install the package using the following command:
```bash
pip install sdblpy
```

## Async Connection Pool Interface
You can spin up an async connection pool and make requests using the code below:
```python
import asyncio

from sblpy.pool.connection_pool import execute_pooled_query, client_pool, shutdown_pool
from sblpy.query import Query


async def main():
    # Create a pool of 5 clients
    asyncio.create_task(client_pool(
        host="localhost",
        port=8000,
        user="root",
        password="root",
        namespace="default", # if not provided the "default" namespace is used
        database="default", # if not provided the "default" database is used
        number_of_clients=5, # if not provided 5 clients are created
        max_size=2**20 # if not provided the max size is 2**20 (1MB)
    ))

    # make 400 requests
    for _ in range(100):
        _ = await execute_pooled_query(Query("CREATE user:tobie SET name = 'Tobie';"))
        _ = await execute_pooled_query(Query("CREATE user:jaime SET name = 'Jaime';"))
        response = await execute_pooled_query(Query("SELECT * FROM user;"))
        print(response)
        _ = await execute_pooled_query(Query("DELETE user;"))

    # Shutdown the pool    
    await shutdown_pool(number_of_clients=5)

if __name__ == "__main__":
    asyncio.run(main())
```

Here we pass in a `Query` object that defines the query, along with any params if they are also passed into the `Query` constructor. If you print the result you will see that the response is raw; the integration tests show how to parse it using `response["result"][0]["result"]`. We keep it raw because we do not want any serialization errors happening inside the connection pool, and it leaves you in control of how you handle the response, which also helps isolate your code against future breaking changes. It must also be noted that the connections in the connection pool cannot be reconfigured. Therefore, if you set a large `max_size` parameter for the connections, that memory will be allocated for each connection for the lifetime of the connection pool. If you are expecting a one-off large query, it might be better to use the basic blocking or async interface, as those connections are discarded after use.
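For instance, here is a minimal sketch of unpacking the raw response, assuming the `response["result"][0]["result"]` shape described above and the `user` records created in the previous example (the `fetch_user_names` helper is purely illustrative):

```python
from sblpy.pool.connection_pool import execute_pooled_query
from sblpy.query import Query


async def fetch_user_names() -> list:
    """Assumes a client_pool task is already running, as in the example above."""
    response = await execute_pooled_query(Query("SELECT * FROM user;"))
    # the pool hands back the raw SurrealDB payload, so we unpack the rows ourselves
    rows = response["result"][0]["result"]
    return [row["name"] for row in rows]
```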

## Basic Blocking Interface
We can create a basic blocking interface using the code below:
```python
from sblpy.connection import SurrealSyncConnection

connection = SurrealSyncConnection(
            host="localhost",
            port=8000,              # set to 443 if using an encrypted connection
            user="root",
            password="root",
            namespace="default",    # if not provided the "default" namespace is used
            database="default",     # if not provided the "default" database is used
            max_size=2**20,         # if not provided the max size is 2**20 (1MB),
            encrypted=False         # default is False, please ensure that the server
                                    # supports encryption with SSL certificates before setting to True
        )

_ = connection.query("CREATE user:tobie SET name = 'Tobie';")
_ = connection.query("CREATE user:jaime SET name = 'Jaime';")
outcome = connection.query("SELECT * FROM user;")
print(outcome)
```

Here you will see that the response handling is a lot smoother. This is because, if there are any errors or issues with parsing, we can throw them directly, as the connection is going to close anyway once it goes out of scope. The Python garbage collector will take care of cleaning up the connection, but this will be delayed. If you want to ensure that the connection is closed promptly, you can call `connection.socket.close()`.
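
For example, a minimal sketch of closing the connection explicitly once you are done with it, using the `connection.socket.close()` call mentioned above:

```python
from sblpy.connection import SurrealSyncConnection

connection = SurrealSyncConnection(host="localhost", port=8000, user="root", password="root")
try:
    print(connection.query("SELECT * FROM user;"))
finally:
    # close the underlying websocket explicitly rather than waiting for garbage collection
    connection.socket.close()
```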

We can also use a context manager for the blocking interface, as seen below:

```python
from sblpy.connection import SurrealSyncConnection

with SurrealSyncConnection(
            host="localhost",
            port=8000,              # set to 443 if using an encrypted connection
            user="root",
            password="root",
            namespace="default",    # if not provided the "default" namespace is used
            database="default",     # if not provided the "default" database is used
            max_size=2**20,         # if not provided the max size is 2**20 (1MB)
            encrypted=False         # default is False, please ensure that the server
                                    # supports encryption with SSL certificates before setting to True
        ) as connection:
    conn.query("CREATE user:tobie SET name = 'Tobie';")
    conn.query("CREATE user:jaime SET name = 'Jaime';")
    outcome = conn.query("SELECT * FROM user;")
```

## Basic Async Interface

We can create a one-off async connection with the following code:

```python
import asyncio

from sblpy.async_connection import AsyncSurrealConnection


async def main():
    con = AsyncSurrealConnection(
        "localhost",
        8000,                   # set to 443 if using an encrypted connection
        "root",
        "root",
        namespace="default",    # if not provided the "default" namespace is used
        database="default",     # if not provided the "default" database is used
        max_size=2**20,         # if not provided the max size is 2**20 (1MB)
        encrypted=False         # default is False, please ensure that the server
                                # supports encryption with SSL certificates before setting to True
    )
    await con.query("CREATE user:tobie SET name = 'Tobie';")
    await con.query("CREATE user:jaime SET name = 'Jaime';")

    outcome = await con.query("SELECT * FROM user;")
    print(outcome)


if __name__ == "__main__":
    asyncio.run(main())
```

## Migrations via command line

You can run migrations via the command line. First, we must set up the migrations folder with the following command:

```bash
sdblpy migrations create
```

This creates the following folder structure in the current working directory:

```
└── surreal_migrations
    ├── down
    │   └── 1.sql
    └── up
        └── 1.sql

```

If we run the same `sdblpy migrations create` command again, we get another migration file numbered 2, as seen below:

```
└── surreal_migrations
    ├── down
    │   ├── 1.sql
    │   └── 2.sql
    └── up
        ├── 1.sql
        └── 2.sql
```

We can now write some simple migrations in the SQL scripts:

```sql
-- surreal_migrations/up/1.sql
CREATE user:tobie SET name = 'Tobie';
```

```sql
-- surreal_migrations/down/1.sql
DELETE user:tobie;
```

```sql
-- surreal_migrations/up/2.sql
CREATE user:jaime SET name = 'Jaime';
```

```sql
-- surreal_migrations/down/2.sql
DELETE user:jaime;
```

Before we run any migrations, we must ensure that the database is running, and we must also check the migration version of the database. We can do this with the following command:

```bash
sdblpy migrations version -ho localhost -p 8000 -u root -pw root -ns default -d default
```

And this gives us the following output:

```
Current version: 0
```

We can see that we are at version `0`. If we refer to the [command line parameters section](#command-line-parameters), we can see that we passed in all default values, so the bare `sdblpy migrations version` command will also work if your server is running with the default values.

We can now run all the migrations with the following command:

```bash
sdblpy migrations run
```

Running the `sdblpy migrations version` command again will give us the following output:

```
Current version: 2
```

Here we can see that our migrations have run successfully: `sdblpy migrations run` gets the current version of the database and runs all migrations with a version greater than it. We can also decrement the version by one with the following command:

```bash
sdblpy migrations down
```

Our version is now down to `1` if we run the `sdblpy migrations version` command again. We can bump up the version of our database by one with the following command:

```bash
sdblpy migrations up
```

Our version is now up to `2` if we run the `sdblpy migrations version` command again. But let's double-check that the migrations have actually run by running SQL scripts via the command line.

## Run SQL scripts via command line

We can run SQL scripts against the database using the command line. First, let's create a simple SQL script called `main.sql` in our current working directory:

```sql
-- main.sql
SELECT * FROM user;
```
If our database has the migrations from the previous section applied, then we should see both users come back from the table. We can run the SQL script with the following command:

```bash
sdblpy run sql -f main.sql
```

Provided that the database is running with the default parameters (otherwise you will have to add them as additional arguments after the `sdblpy run sql` command), we should get the following output:

```
[{'id': 'user:jaime', 'name': 'Jaime'}, {'id': 'user:tobie', 'name': 'Tobie'}]
```

What happens here is that the SQL script is run against the database and the response is printed to the console. This is a very simple way to run SQL scripts against the database.

## Command line parameters

Below are the command line parameters that can be passed to the `sdblpy` command:

| Argument            | Flags               | Required  | Default      | Description                                           |
|---------------------|---------------------|-----------|--------------|-------------------------------------------------------|
| `command`           |                     | Yes       |              | The main command (e.g., 'migrations', 'run').          |
| `subcommand`        |                     | Yes       |              | The subcommand (e.g., 'up', 'down', 'create', 'run', 'version'). |
| `--host`            | `-ho`, `--host`     | No        | `localhost`  | The database host.                                    |
| `--port`            | `-p`, `--port`      | No        | `8000`       | The database port.                                    |
| `--user`            | `-u`, `--user`      | No        | `root`       | The database user.                                    |
| `--password`        | `-pw`, `--password` | No        | `root`       | The database password.                                |
| `--namespace`       | `-ns`, `--namespace`| No        | `default`    | The database namespace.                               |
| `--database`        | `-d`, `--database`  | No        | `default`    | The database name.                                    |
| `--file`            | `-f`, `--file`      | No        | `main.sql`   | Pointer to SQL file.                                  |


## Migrations via python code

We can define and run migrations directly in Python code. Migrations can come from a string, a list of strings, or a file. Below is an example of how we can construct a migration:

```python
from sblpy.migrations.migrations import Migration


migration_one = Migration.from_docstring("""
            CREATE user:tobie SET name = 'Tobie';
            CREATE user:jaime SET name = 'Jaime';
        """)

migration_two = Migration.from_list([
            "CREATE user:tobie SET name = 'Tobie';",
            "CREATE user:jaime SET name = 'Jaime'"
        ])

migration_three = Migration.from_file("./some/path/to/file.sql")
```

Once we have these migrations, we need a runner that gets the version of the database and then performs migration operations. These operations use the exact same code as the command line interface. Below is an example of how we can run migrations:

```python
from sblpy.connection import SurrealSyncConnection
from sblpy.migrations.migrations import Migration
from sblpy.migrations.runner import MigrationRunner
from sblpy.migrations.db_processes import get_latest_version

# define the connection used to run the migrations
connection = SurrealSyncConnection(
    host="localhost",
    port=8000,
    user="root",
    password="root"
)

# define the migrations and the order of the migrations 
up_migrations = [
    Migration.from_docstring("""CREATE user:tobie SET name = 'Tobie';"""), # version 1
    Migration.from_docstring("""CREATE user:jaime SET name = 'Jaime';""") # version 2
]

down_migrations = [
    Migration.from_docstring("""DELETE user:tobie;"""), # version 1
    Migration.from_docstring("""DELETE user:jaime;""") # version 2
]

# define the migration runner
runner = MigrationRunner(
    up_migrations=up_migrations,
    down_migrations=down_migrations,
    connection=connection
)

# run the migrations
runner.run()

# decrement the version by one
runner.decrement()

# increment the version by one
runner.increment()

# get the latest version of the database
latest_version: int = get_latest_version(
    connection.connection.host,
    connection.connection.port,
    connection.connection.user,
    connection.connection.password
)
```

And with this we can run migrations directly in our Python code. For instance, it is a good idea to run migrations in the `setUp` and `tearDown` methods of a test class when building your own unit tests, as sketched below. You can also run your own migrations in the `main` function of your application before the server starts. This ensures that the database is in the correct state before the server starts, without having to run the migrations manually in a separate terminal or init pod.
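
Here is a minimal sketch of that testing pattern, assuming the `MigrationRunner` API shown above and a SurrealDB instance running locally with the default credentials (the table contents and the shape of the assertion are illustrative only):

```python
import unittest

from sblpy.connection import SurrealSyncConnection
from sblpy.migrations.migrations import Migration
from sblpy.migrations.runner import MigrationRunner


class UserTableTest(unittest.TestCase):

    def setUp(self):
        self.connection = SurrealSyncConnection(
            host="localhost", port=8000, user="root", password="root"
        )
        self.runner = MigrationRunner(
            up_migrations=[Migration.from_docstring("CREATE user:tobie SET name = 'Tobie';")],
            down_migrations=[Migration.from_docstring("DELETE user:tobie;")],
            connection=self.connection
        )
        # bring the database up to the latest version before each test
        self.runner.run()

    def tearDown(self):
        # roll the single migration back so each test starts from a clean slate
        self.runner.decrement()

    def test_user_exists(self):
        # assumes the blocking interface returns the parsed rows for the query
        outcome = self.connection.query("SELECT * FROM user;")
        self.assertEqual(1, len(outcome))


if __name__ == "__main__":
    unittest.main()
```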

## Future Plans

There isn't much; this is just a super simple API. The fewer moving parts, the less can go wrong. I want to keep the dependencies to a minimum and the codebase as simple as possible. However, I do want to add the following features:

- [ ] Schema Introspection
- [ ] Connection Pool Monitoring
- [ ] `Model` class for ORM like functionality
- [ ] Query Builder
- [ ] Query Execution Time Logging
- [ ] Pagination Support for Large Datasets
- [ ] Auto-reconnect for Long-Lived Connections
- [ ] Connection Retry Mechanism
- [ ] Params testing and Documentation
- [ ] CBOR data serialization
- [ ] Native SurrealDB data types
- [ ] Local Key value cache

If you want to contribute to this project, feel free to reach out on the Python Discord channel for SurrealDB.

            
