| Field | Value |
|---|---|
| Name | quez |
| Version | 1.1.1 |
| Summary | Pluggable, compressed in-memory queues for both sync and asyncio applications. |
| Home page | https://sr.ht/~cwt/quez/ |
| Repository | https://hg.sr.ht/~cwt/quez |
| GitHub mirror | https://github.com/cwt/quez |
| Author | Chaiwat Suttipongsakul |
| Maintainer | None |
| Requires Python | <4.0,>=3.10 |
| License | MIT |
| Keywords | None |
| Docs URL | None |
| Requirements | None recorded |
| Upload time | 2025-08-08 05:30:10 |
# Quez
**Quez** is a high-performance, memory-efficient library providing pluggable, compressed queues and deques for buffering data in both synchronous and asynchronous Python applications.
This library excels at managing large volumes of in-memory data, making it perfect for streaming data pipelines, logging systems, or high-throughput servers. It transparently compresses objects as they enter the data structure and decompresses them upon retrieval, slashing the memory footprint of in-flight data while maintaining a simple, familiar interface.
### **Key Features**
* **Flexible Data Structures**: Provides both FIFO (Queue) and Deque (double-ended queue) implementations to support a variety of access patterns.
* **Dual Sync and Async Interfaces**: Offers thread-safe `quez.CompressedQueue` and `quez.CompressedDeque` for multi-threaded applications, alongside `quez.AsyncCompressedQueue` and `quez.AsyncCompressedDeque` for asyncio.
* **Pluggable Compression Strategies**: Includes built-in support for `zlib` (default), `bz2`, and `lzma`, with optional `zstd` and `lzo`. The flexible architecture lets you plug in custom compression, serialization, or encryption algorithms.
* **Real-Time Observability**: Track performance with the `.stats` property, which reports item count, raw and compressed data sizes, and live compression ratio.
* **Optimized for Performance**: In the asyncio versions, CPU-intensive compression and decompression tasks run in a background thread pool, keeping the event loop responsive.
* **Memory Efficiency**: Handles large, temporary data bursts without excessive memory usage, preventing swapping and performance degradation.
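The thread-pool offloading used by the asyncio variants follows the standard `run_in_executor` pattern. A minimal stdlib sketch of that pattern (illustrative only, not quez's actual internals, which may differ):

```python
import asyncio
import zlib

async def compress_off_loop(data: bytes) -> bytes:
    # Run CPU-bound compression in the default thread pool so the
    # event loop stays free to service other coroutines meanwhile.
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, zlib.compress, data)

async def main() -> None:
    payload = b"hello " * 1000
    blob = await compress_off_loop(payload)
    assert zlib.decompress(blob) == payload
    print(len(payload), "->", len(blob))

asyncio.run(main())
```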
## Installation
You can install the core library from PyPI:

```shell
pip install quez
```

To enable optional, high-performance compression backends, you can install them as extras. For example, to install with zstd support:

```shell
pip install quez[zstd]
```

Or install with all optional compressors:

```shell
pip install quez[all]
```

Available extras:
* `zstd`: Enables the `ZstdCompressor`.
* `lzo`: Enables the `LzoCompressor`.
## **Quick Start**
Here's a quick example of using `CompressedQueue` to compress and store a random string:
```pycon
>>> import random
>>> import string
>>> from quez import CompressedQueue
>>> data = ''.join(random.choices(string.ascii_letters + string.digits, k=100)) * 10
>>> len(data)
1000
>>> q = CompressedQueue() # Initialize the Queue with default ZlibCompressor
>>> q.put(data)
>>> q.stats
{'count': 1, 'raw_size_bytes': 1018, 'compressed_size_bytes': 131, 'compression_ratio_pct': 87.13163064833006}
>>> data == q.get()
True
>>> q.stats
{'count': 0, 'raw_size_bytes': 0, 'compressed_size_bytes': 0, 'compression_ratio_pct': None}
```
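The `compression_ratio_pct` reported above is the percentage of raw bytes saved by compression, which can be reproduced from the two size fields (assuming that definition, which matches the numbers shown):

```python
raw, compressed = 1018, 131  # sizes reported by q.stats above

# Percentage of raw bytes saved by compression.
ratio_pct = (1 - compressed / raw) * 100
print(round(ratio_pct, 2))  # 87.13
```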
## Usage
### Synchronous Queue
Use `CompressedQueue` in standard multi-threaded Python applications.
```python
from quez import CompressedQueue
from quez.compressors import LzmaCompressor

# Use a different compressor for higher compression
q = CompressedQueue(compressor=LzmaCompressor())

# The API is the same as the standard queue.Queue
q.put({"data": "some important data"})
item = q.get()
q.task_done()
q.join()
```
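Because the API matches `queue.Queue`, the usual producer/consumer threading pattern applies unchanged. This sketch uses the stdlib `queue.Queue` as a stand-in so it runs without quez installed; given the matching API, `CompressedQueue(...)` should slot in identically:

```python
import queue
import threading

# Stand-in for CompressedQueue; put/get/task_done/join are the same API.
q = queue.Queue()

def worker() -> None:
    while True:
        item = q.get()
        if item is None:  # Sentinel value: stop the worker.
            q.task_done()
            break
        # ... process item here ...
        q.task_done()

t = threading.Thread(target=worker)
t.start()

for i in range(5):
    q.put({"job": i})
q.put(None)  # Signal the worker to exit.

q.join()  # Blocks until every queued item is marked done.
t.join()
```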
### Asynchronous Queue
Use `AsyncCompressedQueue` in asyncio applications. The API mirrors `asyncio.Queue`.
```python
import asyncio

from quez import AsyncCompressedQueue
from quez.compressors import ZstdCompressor  # Requires `pip install quez[zstd]`

async def main():
    # Using the high-speed Zstd compressor
    q = AsyncCompressedQueue(compressor=ZstdCompressor())

    await q.put({"request_id": "abc-123", "payload": "..."})
    item = await q.get()
    q.task_done()
    await q.join()
    print(item)

asyncio.run(main())
```
### Synchronous & Asynchronous Deque
For use cases requiring efficient appends and pops from both ends (LIFO and FIFO), use `CompressedDeque` and `AsyncCompressedDeque`. Their interfaces are similar to `collections.deque`.
**Synchronous Deque (CompressedDeque)**
```python
from quez import CompressedDeque

# Deques support adding/removing from both ends
d = CompressedDeque(maxsize=5)

d.append("item-at-right")     # Add to the right
d.appendleft("item-at-left")  # Add to the left

# Items are still compressed
print(d.stats)

# Retrieve from both ends
print(d.popleft())  # "item-at-left"
print(d.pop())      # "item-at-right"
```
**Asynchronous Deque (AsyncCompressedDeque)**
```python
import asyncio
from quez import AsyncCompressedDeque
async def main():
    d = AsyncCompressedDeque(maxsize=5)

    await d.append("item-at-right")
    await d.appendleft("item-at-left")

    print(d.stats)

    print(await d.popleft())  # "item-at-left"
    print(await d.pop())      # "item-at-right"

asyncio.run(main())
```
### Extensibility
You can easily provide your own custom serializers or compressors. Any object that conforms to the `Serializer` or `Compressor` protocol can be used.
**Example: Custom JSON Serializer**
```python
import json
from quez import CompressedQueue
class JsonSerializer:
    def dumps(self, obj):
        # Serialize to JSON and encode to bytes
        return json.dumps(obj).encode('utf-8')

    def loads(self, data):
        # Decode from bytes and parse JSON
        return json.loads(data.decode('utf-8'))

# Now, use it with the queue
json_queue = CompressedQueue(serializer=JsonSerializer())

json_queue.put({"message": "hello world"})
data = json_queue.get()
print(data)  # {'message': 'hello world'}
```
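A custom compressor works the same way. Assuming the `Compressor` protocol expects bytes-in/bytes-out `compress` and `decompress` methods (an assumption here; check `quez.compressors` for the exact protocol), a hypothetical compressor pinning zlib to its maximum level might look like:

```python
import zlib

class Zlib9Compressor:
    # Hypothetical compressor pinning zlib to its maximum compression
    # level; assumes the protocol is compress(bytes) -> bytes and
    # decompress(bytes) -> bytes.
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data, level=9)

    def decompress(self, data: bytes) -> bytes:
        return zlib.decompress(data)

# Round-trip sanity check on the compressor itself
c = Zlib9Compressor()
payload = b"payload " * 100
assert c.decompress(c.compress(payload)) == payload
```

It would then be passed as `CompressedQueue(compressor=Zlib9Compressor())`, mirroring the serializer example above.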
## A Note on Performance & Overhead
**Compression Overhead:** Keep in mind that compression algorithms have overhead. For very small or highly random data payloads (e.g., under 100-200 bytes), the compressed output might occasionally be slightly larger than the original. The memory-saving benefits of quez are most significant when dealing with larger objects or data with repeating patterns.
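The effect is easy to demonstrate with the stdlib `zlib` (the default backend) directly:

```python
import os
import zlib

# A small, random payload: random bytes are incompressible, and the
# zlib header/checksum add overhead, so the output comes out larger.
small = os.urandom(100)
print(len(small), len(zlib.compress(small)))

# A larger payload with repeating patterns compresses well.
big = b"GET /api/v1/items HTTP/1.1\r\n" * 200
print(len(big), len(zlib.compress(big)))
```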