# AsyncPGX
**High-performance asynchronous PostgreSQL wrapper with Redis-like API**
AsyncPGX is a Python library that provides a Redis-like interface for PostgreSQL, built on top of `asyncpg` for maximum performance. It offers familiar key-value operations while leveraging PostgreSQL's reliability and ACID properties.
## Features
- 🚀 **High Performance**: Built on `asyncpg` with minimal overhead
- 🔄 **Fully Async**: Native `asyncio` support with connection pooling
- 🎯 **Redis-like API**: Familiar interface for easy adoption
- ⏰ **TTL Support**: Automatic key expiration with background cleanup
- 🔢 **Atomic Operations**: Thread-safe increment/decrement operations
- 🔍 **Pattern Matching**: Glob-style key pattern matching
- 📊 **Multiple Data Types**: Support for strings, numbers, JSON, and Python objects
- 🛡️ **Production Ready**: Comprehensive error handling and connection management
## Installation
```bash
pip install asyncpgx
```
### Requirements
- Python 3.8+
- PostgreSQL 10+
- asyncpg 0.29.0+
## Quick Start
```python
import asyncio
from asyncpgx import AsyncPostgresClient

async def main():
    # Connect to PostgreSQL
    db = AsyncPostgresClient("postgresql://user:password@localhost:5432/dbname")

    # Basic operations
    await db.set("user:1", {"name": "Alice", "age": 30})
    user = await db.get("user:1")
    print(user)  # {'name': 'Alice', 'age': 30}

    # TTL operations
    await db.set("session:abc", "session_data", expire=3600)  # 1 hour TTL
    ttl = await db.ttl("session:abc")
    print(f"Session expires in {ttl} seconds")

    # Atomic operations
    await db.incr("page_views", 1)
    views = await db.get("page_views")
    print(f"Page views: {views}")

    # Pattern matching
    await db.set("user:1:profile", "profile_data")
    await db.set("user:2:profile", "profile_data")
    user_keys = await db.keys("user:*:profile")
    print(user_keys)  # ['user:1:profile', 'user:2:profile']

    # Cleanup
    await db.disconnect()

asyncio.run(main())
```
## API Reference
### Connection
```python
db = AsyncPostgresClient(
    url="postgresql://user:password@host:port/database",
    min_connections=5,     # Minimum connections in pool
    max_connections=20,    # Maximum connections in pool
    command_timeout=60.0,  # Command timeout in seconds
    auto_cleanup=True,     # Enable automatic TTL cleanup
    cleanup_interval=300,  # Cleanup interval in seconds
)

# Manual connection management
await db.connect()
await db.disconnect()

# Or use a context manager (recommended)
async with AsyncPostgresClient(url) as db:
    await db.set("key", "value")
```
### Core Operations
#### `set(key, value, expire=None)`
Store a key-value pair with optional TTL.
```python
# Basic set
await db.set("name", "Alice")
# With TTL (expires in 60 seconds)
await db.set("temp_key", "temp_value", expire=60)
# Different data types
await db.set("user", {"id": 1, "name": "Alice"})
await db.set("numbers", [1, 2, 3, 4, 5])
await db.set("flag", True)
```
#### `get(key)`
Retrieve a value by key.
```python
value = await db.get("name") # Returns "Alice" or None
user = await db.get("user") # Returns dict or None
```
#### `delete(*keys)`
Delete one or more keys.
```python
# Delete single key
deleted = await db.delete("name") # Returns 1 if deleted, 0 if not found
# Delete multiple keys
deleted = await db.delete("key1", "key2", "key3") # Returns count of deleted keys
```
#### `exists(key)`
Check if a key exists.
```python
if await db.exists("user:1"):
    print("User exists")
```
#### `keys(pattern="*")`
List keys matching a glob pattern.
```python
all_keys = await db.keys() # All keys
user_keys = await db.keys("user:*") # Keys starting with "user:"
profiles = await db.keys("*:profile") # Keys ending with ":profile"
sessions = await db.keys("session:??") # Sessions with 2-char IDs
```
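To give an intuition for how these glob patterns behave, here is a minimal sketch of one plausible translation from glob syntax to a SQL `LIKE` pattern (`*` → `%`, `?` → `_`). The `glob_to_like` helper is hypothetical and purely illustrative; it is not part of the AsyncPGX API, and the library's internals may differ.

```python
def glob_to_like(pattern: str) -> str:
    """Translate a Redis-style glob pattern to a SQL LIKE pattern.

    Hypothetical helper for illustration, not AsyncPGX internals.
    """
    out = []
    for ch in pattern:
        if ch == "*":
            out.append("%")        # glob * matches any run of characters
        elif ch == "?":
            out.append("_")        # glob ? matches exactly one character
        elif ch in ("%", "_", "\\"):
            out.append("\\" + ch)  # escape LIKE metacharacters appearing literally
        else:
            out.append(ch)
    return "".join(out)

print(glob_to_like("user:*:profile"))  # user:%:profile
print(glob_to_like("session:??"))      # session:__
```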
### TTL Operations
#### `expire(key, seconds)`
Set TTL on an existing key.
```python
# Set 1 hour TTL
success = await db.expire("user:1", 3600)
```
#### `ttl(key)`
Get remaining TTL for a key.
```python
ttl = await db.ttl("user:1")
# Returns: remaining seconds, -1 if no TTL, -2 if key doesn't exist
```
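Because `ttl()` overloads its return value with the sentinels `-1` and `-2`, calling code should branch on them explicitly. A small illustrative helper (not part of the AsyncPGX API) that interprets the documented return values:

```python
def describe_ttl(ttl: int) -> str:
    """Interpret the sentinel values documented for ttl().

    Illustrative helper only, not an AsyncPGX API.
    """
    if ttl == -2:
        return "key does not exist"
    if ttl == -1:
        return "key has no TTL"
    return f"expires in {ttl} seconds"

print(describe_ttl(3600))  # expires in 3600 seconds
print(describe_ttl(-1))    # key has no TTL
print(describe_ttl(-2))    # key does not exist
```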
### Numeric Operations
#### `incr(key, amount=1)`
Increment a numeric value.
```python
# Increment by 1 (default)
new_value = await db.incr("counter")
# Increment by custom amount
new_value = await db.incr("score", 10)
# Works with string numbers too
await db.set("string_num", "42")
result = await db.incr("string_num", 8) # Returns 50
```
#### `decr(key, amount=1)`
Decrement a numeric value.
```python
new_value = await db.decr("lives") # Decrement by 1
new_value = await db.decr("score", 5) # Decrement by 5
```
### Utility Operations
#### `flushall()`
Remove all keys from storage.
```python
success = await db.flushall() # Returns True if successful
```
## Data Types
AsyncPGX automatically handles serialization for various Python data types:
- **Strings**: Stored as UTF-8 text
- **Integers**: Stored as text
- **Floats**: Stored with full precision
- **Booleans**: Stored as "True"/"False"
- **Lists/Tuples**: JSON serialization
- **Dictionaries**: JSON serialization
- **Other Objects**: Pickle serialization (fallback)
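The dispatch rules above can be sketched as a tagged round-trip. The `(tag, payload)` format here is an assumption for illustration, not AsyncPGX's actual storage format; note that JSON serialization turns tuples into lists, which matches the behavior implied by the list above.

```python
import json
import pickle

def serialize(value):
    """Type-aware serialization following the rules above (illustrative format)."""
    if isinstance(value, bool):            # check bool before int: bool subclasses int
        return ("bool", str(value))
    if isinstance(value, str):
        return ("str", value)
    if isinstance(value, int):
        return ("int", str(value))
    if isinstance(value, float):
        return ("float", repr(value))      # repr preserves full precision
    if isinstance(value, (list, tuple, dict)):
        return ("json", json.dumps(value))
    return ("pickle", pickle.dumps(value)) # fallback for arbitrary objects

def deserialize(tag, payload):
    if tag == "bool":
        return payload == "True"
    if tag == "str":
        return payload
    if tag == "int":
        return int(payload)
    if tag == "float":
        return float(payload)
    if tag == "json":
        return json.loads(payload)
    return pickle.loads(payload)

print(deserialize(*serialize({"id": 1, "name": "Alice"})))  # {'id': 1, 'name': 'Alice'}
```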
## Performance
AsyncPGX is designed for high performance:
- **Connection Pooling**: Efficient connection reuse
- **Optimized Queries**: Minimal SQL overhead
- **Batch Operations**: Support for concurrent operations
- **Efficient Serialization**: Type-aware serialization
- **Index Optimization**: Automatic indexing for TTL and pattern matching
### Benchmarks
Typical performance on modern hardware:
- **SET operations**: 10,000+ ops/sec
- **GET operations**: 15,000+ ops/sec
- **Pattern matching**: Optimized with PostgreSQL indexes
- **TTL cleanup**: Background processing with minimal impact
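Throughput numbers like these depend heavily on hardware and network latency, so it is worth measuring on your own setup. A generic timing harness (not part of AsyncPGX) that works with any async callable; against a real client you might pass, say, `lambda: db.set("bench", "v")`, while a no-op coroutine stands in here:

```python
import asyncio
import time

async def ops_per_sec(op, n=1000):
    """Measure sequential throughput of an async operation in ops/sec."""
    start = time.perf_counter()
    for _ in range(n):
        await op()
    elapsed = time.perf_counter() - start
    return n / elapsed

async def noop():
    pass

rate = asyncio.run(ops_per_sec(noop))
print(f"{rate:,.0f} ops/sec")
```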
## Error Handling
```python
try:
    await db.set("key", "value")
except ConnectionError:
    print("Database connection failed")
except ValueError as e:
    print(f"Invalid operation: {e}")
```
Common exceptions:
- `ConnectionError`: Database connection issues
- `ValueError`: Invalid data types or operations
- `RuntimeError`: Schema or configuration errors
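Since `ConnectionError` typically signals a transient failure, a common pattern is to retry with exponential backoff. The wrapper below is a generic asyncio sketch, not an AsyncPGX feature; a flaky stand-in coroutine demonstrates it in place of a real `db` call.

```python
import asyncio

async def with_retry(op, attempts=3, base_delay=0.1):
    """Retry an async operation on ConnectionError with exponential backoff.

    Generic pattern; wrap AsyncPGX calls such as ``lambda: db.set("k", "v")``.
    """
    for attempt in range(attempts):
        try:
            return await op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: propagate the last failure
            await asyncio.sleep(base_delay * 2 ** attempt)

# Demonstrate with a stand-in that fails twice, then succeeds.
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(asyncio.run(with_retry(flaky)))  # ok
```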
## Advanced Usage
### Custom Connection Settings
```python
db = AsyncPostgresClient(
    url="postgresql://user:password@host:port/database",
    min_connections=10,
    max_connections=100,
    command_timeout=30.0,
    auto_cleanup=True,
    cleanup_interval=60,  # Clean expired keys every minute
)
```
### Concurrent Operations
```python
# Concurrent writes
tasks = [
    db.set(f"key:{i}", f"value:{i}")
    for i in range(100)
]
await asyncio.gather(*tasks)

# Concurrent reads
tasks = [
    db.get(f"key:{i}")
    for i in range(100)
]
results = await asyncio.gather(*tasks)
```
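When firing very large bursts like this, it can help to cap how many operations are in flight at once so the connection pool (bounded by `max_connections`) is not oversubscribed. This is a generic asyncio pattern, not an AsyncPGX API; the `double` coroutine below is a stand-in for a real `db` call.

```python
import asyncio

async def bounded_gather(coro_fns, limit=10):
    """Run coroutine factories with at most `limit` executing concurrently."""
    sem = asyncio.Semaphore(limit)

    async def run(fn):
        async with sem:  # acquire a slot before starting the operation
            return await fn()

    return await asyncio.gather(*(run(fn) for fn in coro_fns))

async def demo():
    async def double(i):
        await asyncio.sleep(0)  # stand-in for e.g. db.set(f"key:{i}", i)
        return i * 2
    return await bounded_gather([lambda i=i: double(i) for i in range(5)], limit=2)

print(asyncio.run(demo()))  # [0, 2, 4, 6, 8]
```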
### Context Manager Usage
```python
# Automatic connection management
async with AsyncPostgresClient(url) as db:
    await db.set("key", "value")
    value = await db.get("key")
# Connection automatically closed
```
## Examples
See the `examples/` directory for complete usage examples:
- `basic_usage.py`: Basic operations and data types
- `advanced_usage.py`: Concurrent operations, error handling, and TTL
## Database Setup
Create a PostgreSQL database and ensure the connection URL is correct:
```sql
CREATE DATABASE asyncpgx_test;
```
The library will automatically create the required tables and indexes.
## License
MIT License - see LICENSE file for details.
## Contributing
Contributions are welcome! Please read our contributing guidelines and submit pull requests.
## Support
For issues and questions:
- GitHub Issues: [Report bugs](https://github.com/asyncpgx/asyncpgx/issues)
- Documentation: [Read the docs](https://asyncpgx.readthedocs.io/)