# SQLMemo: Memoizing SQL-backed Computation Cache
[![PyPI - Version](https://img.shields.io/pypi/v/sqlmemo)](https://pypi.org/project/sqlmemo/) [![GitHub branch status](https://img.shields.io/github/checks-status/gavento/sqlmemo/main)](https://github.com/gavento/sqlmemo/actions/workflows/ci.yml) [![codecov](https://codecov.io/gh/gavento/sqlmemo/graph/badge.svg?token=AtpsvIAOnt)](https://codecov.io/gh/gavento/sqlmemo)
SQLMemo is a powerful and flexible persistent memoization cache for Python, primarily intended for small- to medium-scale scientific computing, and backed by an SQL database. It provides a simple yet robust way to cache function results, improving performance for expensive computations and enabling efficient data reuse across multiple runs.
## Key Features
- SQL-backed storage (SQLite by default, supports other databases)
- Rich argument signature support (primitives, dicts, lists, sets, tuples, dataclasses, good handling of `*args` and `**kwargs`)
- Flexible return value storage (anything pickle-able, with optional dill support)
- Optional argument storage (only hashes by default), optional JSONified argument and/or result storage (e.g. for programmatic or external SQL access to the records), all with filtering and transformation options (see the API below)
- Databases stable between Python versions (tested with Python 3.10-3.13)
- Thread-safe, can be used from multiple processes and machines
- Optional exception storage and optional re-raising of stored exceptions
- Simplicity, only a single dependency (SQLAlchemy), well-tested and properly typed
## Quick Start
Install SQLMemo using pip:
```bash
pip install sqlmemo
```
Here's a basic example of how to use SQLMemo:
```python
from sqlmemo import SQLMemo
import time
# Any URI supported by SQLAlchemy, uses SQLite by default
# Multiple functions can be cached in the same database
@SQLMemo("./cache.sqlite")
def slow_function(x, y=1.0, *args):
    time.sleep(1.0)
    return (x, y, *args)
# First call: takes about 1 second
result1 = slow_function("a", "b", "c")
print(result1) # ('a', 'b', 'c')
# Second call: returns immediately from cache
result2 = slow_function("a", "b", "c")
print(result2) # ('a', 'b', 'c')
# Access the SQLMemo instance itself
cache = slow_function._sqlmemo
# Check cache statistics
print(cache.stats) # InstanceStats(hits=1, misses=1, errors=0)
# Get database statistics
print(cache.get_db_stats()) # DBStats(records=1, records_done=1, records_running=0, records_error=0)
# Trim the database to keep only the most recent 10 records for this function
cache.trim(keep_records=10)
# Get the database entry for the call above
record = cache.get_record(x="a", y="b", args=("c",))
print(record) # SQLMemoRecord(id=1, func_name='slow_function', args_hash='...', ...
```
## Motivation
As a scientist and programmer, I often found myself needing to cache expensive computations across multiple runs of my experiments. While many simple caching solutions exist, they often fell short in one way or another. Some weren't thread-safe, others didn't scale well, used slow or less-portable storage (e.g. JSON files or `shelve`), and many used naive hashing methods for arguments.
I created SQLMemo to address these issues and provide a robust, flexible caching solution for scientific computing, though it may well serve general caching needs in other Python code. It's designed to be simple to use, yet powerful enough to handle complex caching scenarios. By open-sourcing SQLMemo, I hope to provide a trustworthy and efficient tool that other researchers and developers can rely on in their own projects.
## SQLMemo may not be the right tool for you
- ... if you cache many small and fast calls and need a very high-throughput cache
- ... if you need to integrate it with other job-scheduling systems or workflows (though it can be made to work for distributed computation, these systems usually have their own caching logic)
- ... if you need to cache very large artifacts (say 10 MB or more, though local SQLite is surprisingly good at this) or to store external files (e.g. trained ML models)
## API Reference
### `sqlmemo.SQLMemo`
Construct a new SQLMemo cache, which stores the pickled result of the function calls in a SQL database, indexed by a hash of the function arguments and the function name.
#### Parameters
- db (str | Path | Engine | None): The database URL, path, or SQLAlchemy Engine object. If None, an in-memory temporary DB is used. If a Path or a str without a scheme is provided, SQLite is assumed.
- func_name (str | None): The name of the function. If None, the name will be inferred from the qualified name of the function passed to the decorator.
- table_name (str | None): The name of the database table to store the memoized results. Defaults to "sqlmemo_data".
- store_args_pickle (bool | Iterable[str] | Callable): Whether and how to store pickled function arguments in the cache (see below).
- store_args_json (bool | Iterable[str] | Callable): Whether and how to store JSON-encoded function arguments in the cache (see below).
- store_value_json (bool | Callable): Whether and how to store JSON-encoded function return value in the cache (see below).
- store_exceptions (bool | Iterable[Type] | Callable[[Exception], bool]): Which exceptions to store from the wrapped function when called (see below).
- reraise_exceptions (bool | Iterable[Type] | Callable[[Exception], bool]): Which exceptions to reraise from previously recorded exceptions in the cache (see below).
- hash_factory (Callable): The hash function to use for hashing the function arguments. SHA256 by default.
- use_dill (bool): Whether to use dill instead of pickle for serialization. Dill can serialize more types (lambdas, local classes, etc.) but is slower, and serialization of complex objects may be less stable across code changes and Python versions.
- apply_default_args (bool): Whether to apply default arguments to the function call before hashing and storage (True by default). True is the more defensive setting: the hash then reflects the actual default values, so it is sensitive e.g. to changed argument defaults or newly added arguments. It also allows, for example, adding a default value to an argument that was previously always called with that value explicitly.
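For illustration, a decorator combining several of these parameters might look like the following sketch (the function, table, and database names are made up for this example):

```python
from sqlmemo import SQLMemo

# A sketch combining several constructor parameters; the function, table,
# and database names here are hypothetical.
@SQLMemo(
    "./cache.sqlite",
    func_name="experiments.run_trial",  # override the name inferred from the function
    table_name="experiment_cache",      # instead of the default "sqlmemo_data"
    store_args_json=True,               # additionally keep JSON copies of the arguments
    store_exceptions=(ValueError,),     # record ValueErrors raised by the function
)
def run_trial(seed, config=None):
    ...
```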
#### Storing arguments and return values
While the cache always stores the pickled result of the function call, it can also optionally store the function arguments as JSON and/or pickle, and the return value as JSON. Each of the three optional fields can be further customized to:

- Store only a subset of arguments
- Transform the arguments or the return value before storage, e.g. to extract only selected arguments or derived features into a more accessible format (JSON)
The values of the parameters can be:
- True: store the value as is.
- False: do not store the value.
- Callable: transform the value using the callable before storage. The callable is called with exactly the same arguments as the wrapped function for argument transformation, and with the return value (single argument) for the return value transformation.
- Iterable[str]: store only the values of the arguments with the given names (not applicable for `store_value_json`).
The JSON versions of the arguments and return values are useful e.g. for accessing the records with an SQL query, such as `SELECT * FROM sqlmemo_data WHERE args_json->>'x' = 'value'`.
Note that the stored argument values are NEVER used to match future calls, only the stored hashes.
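For example, the following sketch (with a hypothetical `fetch_stats` function) keeps only the `user` argument as JSON and stores a summarized return value:

```python
from sqlmemo import SQLMemo

@SQLMemo(
    "./cache.sqlite",
    store_args_json=("user",),                # keep only the `user` argument as JSON
    store_value_json=lambda rows: len(rows),  # store a summary instead of the full value
)
def fetch_stats(user, start_date, end_date):
    ...  # some expensive query returning a list of rows
```

The stored JSON can then be queried directly, e.g. `SELECT * FROM sqlmemo_data WHERE args_json->>'user' = 'alice'`.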
#### Storing and reraising exceptions
Any exception thrown by the wrapped function can be optionally stored in the cache, and optionally reraised on a subsequent call.
These features are controlled by the `store_exceptions` and `reraise_exceptions` parameters and act independently: storing an exception does not mean it will be reraised, and vice versa. When a stored exception is found but does not match `reraise_exceptions`, the computation is repeated and the new value (or exception, with `store_exceptions` enabled) overwrites the old record.
The values of the parameters can be:
- True: store/reraise all exceptions.
- False: do not store/reraise any exceptions.
- Iterable[Type]: store/reraise exceptions that are subclasses of any of the given types. Example: `store_exceptions=(ValueError, TypeError)`.
- Callable[[Exception], bool]: store/reraise exceptions for which the callable returns True. Example: `store_exceptions=lambda e: isinstance(e, ValueError)`.
Note that exceptions raised by the wrapped function itself are always propagated to the caller, regardless of `store_exceptions` or `reraise_exceptions`.
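A sketch of both parameters in combination (the function here is hypothetical):

```python
from sqlmemo import SQLMemo

@SQLMemo(
    "./cache.sqlite",
    store_exceptions=(ValueError, TimeoutError),  # record these when raised
    reraise_exceptions=(ValueError,),             # replay only stored ValueErrors
)
def unstable_fetch(url):
    ...

# A raised ValueError is stored and propagated; a later identical call
# reraises it from the cache. A stored TimeoutError is not reraised,
# so the call is retried and the record overwritten with the new outcome.
```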
#### Database
If no database URL is provided to the constructor, it can be set later using `SQLMemo.set_engine(...)`. This is useful e.g. when the database location depends on runtime configuration.
The connection to the database and any database structures are created lazily on the first call to the wrapper. If you want to trigger the creation of the database structures (e.g. tables) before the first call, you can use `SQLMemo.get_db_stats()`.
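A minimal sketch of this deferred configuration, assuming `set_engine` accepts the same values as the `db` constructor parameter:

```python
from sqlmemo import SQLMemo

@SQLMemo()  # no database configured yet
def compute(x):
    return x * 2

# Later, e.g. after reading a config file, point the cache at a database.
# Assumption: set_engine accepts the same values as the `db` parameter.
compute._sqlmemo.set_engine("./results/cache.sqlite")

# Optionally create the tables up front, before the first memoized call
compute._sqlmemo.get_db_stats()
```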
#### Other
The instance is fully thread-safe after construction; all user-facing functions use an internal mutex. The wrapper can be called from multiple threads at once, and the wrapped function may call itself recursively.
The instance provides two sets of statistics:
- `stats`: statistics of this instance of SQLMemo, including the number of hits, misses, and errors.
- `get_db_stats()`: statistics of the records in the database associated with this function.
You can access the SQLMemo instance as the `_sqlmemo` attribute of the wrapped function. Example:
```python
@SQLMemo()
def my_func(x):
    ...

my_func._sqlmemo.stats  # InstanceStats(hits=0, misses=0, errors=0)
```
### `sqlmemo.SQLMemo` methods and properties
- `get_record(*args, **kwargs) -> SQLMemoRecord | None`:
  - Returns the database record for the given arguments, if it exists. Provides read-only access to the database, including checking whether a record exists.
- `stats` property:
  - Returns the `InstanceStats` for this instance.
- `get_db_stats() -> DBStats`:
  - Returns statistics for the stored records of this function.
  - Requires a database query.
- `trim(keep_records: int = 0) -> None`:
  - Trims the DB to keep only the specified number of newest records for this function.
  - By default, deletes all records for this function.
- `__repr__() -> str`:
  - Returns a string representation of the SQLMemo instance.
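For example, `get_record` can check for a cached result without running the function; this sketch reuses `slow_function` and its cache from the Quick Start above:

```python
# Reusing `slow_function` from the Quick Start
cache = slow_function._sqlmemo

# Peek into the cache without triggering a computation
record = cache.get_record(x="a", y="b", args=("c",))
if record is None:
    print("not cached yet")
else:
    print(record.func_name, record.args_hash)

# Delete all cached records for this function (keep_records defaults to 0)
cache.trim()
```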
## Performance Considerations
- Use SQLite for small to medium-sized caches - it is surprisingly fast even with larger binary blobs (tens of MBs or more)
- Consider using a full-fledged database for larger datasets or concurrent access
- Be mindful of serialization overhead for large objects
- Compression can help with performance and database size
  - For SQLite, consider using [sqlite-zstd](https://github.com/phiresky/sqlite-zstd) (per-row compression of selected columns with shared compression dictionaries) or [sqlite_zstd_vfs](https://github.com/mlin/sqlite_zstd_vfs) (VFS-level compression)
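For concurrent or larger-scale use, any SQLAlchemy engine can be passed as the `db` parameter; a sketch (the URL, driver, and credentials below are placeholders):

```python
from sqlalchemy import create_engine
from sqlmemo import SQLMemo

# Placeholder URL; any SQLAlchemy-supported database works
engine = create_engine("postgresql+psycopg2://user:password@dbhost/experiments")

@SQLMemo(engine)  # the `db` parameter also accepts an Engine
def heavy_computation(params):
    ...
```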
## Roadmap and potential extensions
- Support method calls (currently the `self` argument needs to be hashable)
- Handle concurrent calls with the same arguments (currently all such calls execute the function independently; waiting and timeout logic is needed)
- Consider adding automatic trimming of old records to reduce database size (possibly with more retention policies)
- Compression of stored values (not very efficient for small data, full-DB compression should be better when available)
- Support wrapping async functions (doable, unclear if needed)
## Contributing
Contributions to SQLMemo are welcome! Please submit issues and pull requests on the project's [GitHub repository](https://github.com/gavento/sqlmemo).
## Authors
- Tomáš Gavenčiak ([@gavento](https://github.com/gavento), [ACS Research](https://acsresearch.org/))
SQLMemo is released under the MIT License. See the LICENSE file for details.
## Changelog
- 0.3.1: Fixed superfluous pytest import, CI and metadata improvements.
- 0.3.0: Refactoring into a public version. Minor API changes, renamed to sqlmemo (from sqlcache), improved testing, tooling, and documentation, `apply_default_args=True` by default.
- 0.2.0: (internal) Refactoring and renaming. DB format change, changed argument storage logic, improved testing, stats, strip async stubs.
- 0.1.0: (internal) Initial release. Exception storage and reraising, JSON arguments and return values, support concurrent and recursive calls.