fastpgcache

- **Name:** fastpgcache
- **Version:** 0.1.8
- **Summary:** A fast Redis-like caching library using PostgreSQL with UNLOGGED tables for high performance
- **Upload time:** 2025-10-26 02:31:12
- **Requires Python:** >=3.7
- **License:** MIT
- **Keywords:** postgresql, cache, redis, database, caching, fast, performance
# FastPgCache 🐘⚡

A **fast, Redis-like caching library** built on PostgreSQL UNLOGGED tables for high performance. Get Redis-style caching without running extra infrastructure!

## Why FastPgCache?

- **🚀 Fast** - Uses PostgreSQL UNLOGGED tables for Redis-like performance
- **⚡ CuckooFilter** - Lightning-fast negative lookups (10-1000x speedup for missing keys)
- **📦 Batch Operations** - `set_many()` for high-throughput bulk inserts
- **⏰ TTL Support** - Automatic expiry like Redis SET with EX
- **🔄 Redis-like API** - Familiar methods: `set()`, `get()`, `delete()`, `exists()`, `ttl()`
- **🎯 Simple** - One less service to manage
- **💪 ACID** - Get caching within PostgreSQL transactions
- **📦 JSON Support** - Automatic JSON serialization/deserialization
- **🔐 Databricks Integration** - Simplified API with automatic token rotation
- **🔒 User Isolation** - Automatic per-user cache isolation (no race conditions!)

## 🚀 Performance: UNLOGGED vs Regular Tables

FastPgCache uses PostgreSQL **UNLOGGED tables** for dramatically better performance. Here are real-world benchmarks from Databricks PostgreSQL:

### Load Test Results (10 threads, 100 ops each)

| Metric | UNLOGGED Table | Regular Table | Improvement |
|--------|----------------|---------------|-------------|
| **Throughput** | **553 ops/sec** | 496 ops/sec | **+11.5%** |
| **SET Mean** | **7.58 ms** | 12.17 ms | **37% faster** |
| **SET P95** | **10.71 ms** | 17.97 ms | **40% faster** |
| **SET P99** | **14.65 ms** | 21.67 ms | **32% faster** |
| **GET Mean** | **7.60 ms** | 8.04 ms | **5% faster** |
| **GET P95** | **10.74 ms** | 12.09 ms | **11% faster** |

**Key Takeaway:** UNLOGGED tables provide **37% faster writes** and **11.5% higher throughput**, making them ideal for caching workloads.

### What are UNLOGGED Tables?

UNLOGGED tables are a PostgreSQL feature that:
- ✅ **Skip write-ahead logging (WAL)** - Much faster writes
- ✅ **Perfect for cache** - Temporary data that can be regenerated
- ✅ **Still ACID** - Transaction support within PostgreSQL
- ⚠️ **Data lost on crash** - Acceptable for cache (not for permanent data)

Learn more: [PostgreSQL UNLOGGED Tables](https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-UNLOGGED)


## 🔒 User Isolation

**By default, each user gets isolated cache** - all users share the same table, but rows are filtered by `user_id`:

**How it works:**
```sql
-- Table structure (UNLOGGED for performance!)
CREATE UNLOGGED TABLE public.cache (
    user_id TEXT NOT NULL,
    key TEXT NOT NULL,
    value TEXT NOT NULL,
    ...
    PRIMARY KEY (user_id, key)
);

-- Alice's data: WHERE user_id = 'alice@company.com'
-- Bob's data:   WHERE user_id = 'bob@company.com'
```
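
For illustration, here is what that isolation looks like from the client side - a minimal sketch, assuming two users connect with placeholder credentials and use the Redis-like API shown later in this README:

```python
from fastpgcache import FastPgCache

# Two clients backed by the same cache table; rows are scoped by user_id
alice = FastPgCache(host="your-host", database="your-db",
                    user="alice@company.com", password="alice-password")
bob = FastPgCache(host="your-host", database="your-db",
                  user="bob@company.com", password="bob-password")

alice.set("profile", {"theme": "dark"})
bob.set("profile", {"theme": "light"})

print(alice.get("profile"))  # {'theme': 'dark'}  - only Alice's row
print(bob.get("profile"))    # {'theme': 'light'} - only Bob's row

alice.close()
bob.close()
```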

## Quick Start

### Installation

```bash
pip install fastpgcache
```

Or from source:

```bash
git clone https://github.com/vivian-xie-db/fastpgcache.git
cd fastpgcache
pip install -e .
```

### With Databricks Support

```bash
pip install fastpgcache[databricks]
```

## Usage (Redis-Like Pattern)

> **Important:** Like Redis, there are two distinct roles:
> - **Admin/DBA:** Sets up the cache once (like starting a Redis server)
> - **Regular Users:** Just connect and use (like Redis clients)

### Step 1: Admin Setup (Admin/DBA Only - Once)

**Admin/DBA runs this ONCE to create the UNLOGGED cache table.**

⚠️ Admins must be Databricks users who have been granted PostgreSQL roles and hold the `databricks_superuser` privilege.

![](assets/admin1.png)
![](assets/admin2.png)

#### Option A: Command Line (Quick)

After `pip install fastpgcache`, the admin command is automatically available:

```bash
# Local PostgreSQL
fastpgcache-admin --host localhost --user postgres --password mypass

# With custom schema
fastpgcache-admin --host myhost --user admin --password mypass --schema my_cache

# Databricks (NO password needed - token provider handles authentication)

# Local IDE development (with profile)
fastpgcache-admin \
  --databricks \
  --host myhost.cloud.databricks.com \
  --database databricks_postgres \
  --user admin@company.com \
  --instance-name my_instance \
  --profile DEFAULT

# Online notebook mode (no profile needed)
fastpgcache-admin \
  --databricks \
  --host myhost.cloud.databricks.com \
  --database databricks_postgres \
  --user admin@company.com \
  --instance-name my_instance

# CI/CD with force recreate (no prompts)
fastpgcache-admin --host myhost --user admin --password $DB_PASS --force
```

#### Option B: Python Code (Programmatic)

```python
from fastpgcache import setup_cache

# Local PostgreSQL
setup_cache(
    host="localhost",
    database="postgres",
    user="postgres",
    password="mypassword"
)

# Custom schema
setup_cache(
    host="myhost",
    database="mydb",
    user="admin",
    password="mypass",
    schema="my_cache"
)

# Databricks with token provider
from databricks.sdk import WorkspaceClient
from fastpgcache import DatabricksTokenProvider

# Local IDE development (with profile)
w = WorkspaceClient(profile="DEFAULT")
token_provider = DatabricksTokenProvider(
    workspace_client=w,
    instance_names=["my_instance"],
    refresh_interval=3600,
    auto_refresh=True
)

# OR for online notebook mode (no profile needed)
w = WorkspaceClient()
token_provider = DatabricksTokenProvider(
    workspace_client=w,
    instance_names=["my_instance"],
    refresh_interval=3600,
    auto_refresh=True
)

setup_cache(
    host="myhost.cloud.databricks.com",
    database="databricks_postgres",
    user="admin@company.com",
    token_provider=token_provider
)
```

**This is NOT for regular users! Only admin/DBA/DevOps.**

> **Note:** The `fastpgcache-admin` command creates UNLOGGED tables automatically for optimal performance. You don't need to write any code - just run the command with appropriate credentials.

The command supports these options:
- `--host`: Database host (default: localhost)
- `--database`: Database name (default: postgres)
- `--user`: Admin user with CREATE TABLE permissions
- `--password`: Database password (**ONLY for local PostgreSQL, omit for Databricks**)
- `--schema`: Schema for cache table (default: public)
- `--force`: Force recreate without prompts (for CI/CD)
- `--databricks`: Use Databricks token authentication (no password needed)
- `--instance-name`: Databricks instance name (required with `--databricks`)
- `--profile`: Databricks auth profile (**ONLY for local IDE, omit for online notebooks**)

**Profile usage:**
- ✅ **Local IDE:** Use `--profile DEFAULT` (or your configured profile name)
- ❌ **Online Notebook:** Omit `--profile` (uses runtime credentials automatically)

**When to use `--password`:**
- ✅ Local PostgreSQL: `--password mypass`
- ❌ Databricks: Don't use `--password` (token provider handles it)

### Step 2: Users Connect and Use (✅ Regular Users)

**Users just connect - NO setup() calls needed:**

```python
from fastpgcache import FastPgCache

# Just connect - like Redis!
cache = FastPgCache(
    host="your-host",
    database="your-db",
    user="alice@company.com",
    password="user-password"
)

# Use immediately - no setup needed!
cache.set("session", {"user": "Alice"}, ttl=3600)
user_data = cache.get("session")

# Each user's data is automatically isolated
```

## Databricks Token Authentication

FastPgCache provides a **super-simple API** for Databricks - just set `instance_name` and it auto-configures everything!

### ✅ Simplified API (Recommended)

```python
from fastpgcache import FastPgCache

# Databricks - Local IDE (with profile)
cache = FastPgCache(
    host="my-instance.database.cloud.databricks.com",
    database="databricks_postgres",
    user="user@company.com",
    instance_name="my_instance",  # Auto-enables Databricks mode!
    profile="DEFAULT"  # Optional: for local IDE
)

# Databricks - Online Notebook (no profile)
cache = FastPgCache(
    host="my-instance.database.cloud.databricks.com",
    database="databricks_postgres",
    user="user@company.com",
    instance_name="my_instance"  # Auto-enables Databricks mode!
    # No profile = uses runtime credentials
)

# Regular PostgreSQL (for comparison)
cache = FastPgCache(
    host="localhost",
    database="postgres",
    user="postgres",
    password="mypass"
)
```

### Usage Example

```python 
# 1. Set values with complex data
cache.set("test:key1", "value1", ttl=3600)
cache.set("test:key2", {"name": "Alice", "age": 30})
cache.set("test:key3", [1, 2, 3, 4, 5])
test_data = {
    "user": {
        "id": 123,
        "name": "Alice",
        "tags": ["admin", "developer"],
        "settings": {
            "theme": "dark",
            "notifications": True
        }
    }
}
cache.set("json:complex", test_data)
test_list = [1, 2, 3, {"nested": "value"}, [4, 5, 6]]
cache.set("json:list", test_list)

# 2. Set values with TTL
cache.set("user:123", {"name": "Alice", "role": "admin"}, ttl=3600)
cache.set("user:456", {"name": "Bob", "role": "user"}, ttl=3600)
cache.set("session:abc", {"user_id": 123, "ip": "192.168.1.1"}, ttl=1800)


print("✓ Values set\n")

# 3. Get values
user123 = cache.get("user:123")
print(f"user:123 = {user123}")
session = cache.get("session:abc")
print(f"session:abc = {session}\n")

# 4. Check if key exists
print(f"user:123 exists: {cache.exists('user:123')}")
print(f"user:999 exists: {cache.exists('user:999')}\n")

# 5. Get TTL
ttl = cache.ttl("user:123")
print(f"user:123 expires in {ttl} seconds\n")

# 6. Store value without expiry
cache.set("config:app", {"theme": "dark", "language": "en"})
config_ttl = cache.ttl("config:app")
print(f"config:app TTL: {config_ttl} (-1 = no expiry)\n")

# 7. Manual token refresh (optional - normally automatic;
#    token_provider is the DatabricksTokenProvider created during setup)
new_token = token_provider.refresh_token()
print(f"Token refreshed (length: {len(new_token)})\n")

# 8. Continue using cache - connection will automatically use new token
test_value = cache.get("user:123")
print(f"user:123 = {test_value}")
print("✓ Cache working perfectly with new token\n")

# Close the connection (also stops token auto-refresh)
cache.close()
print("✓ Cache closed and token provider stopped")
```

## API Reference

### FastPgCache

```python
FastPgCache(
    connection_string=None,
    host='localhost',
    port=5432,
    database='postgres',
    user='postgres',
    password='',
    schema='public',
    minconn=1,
    maxconn=10,
    use_cuckoo_filter=True,
    cuckoo_capacity=1000000,
    instance_name=None,
    profile=None
)
```

Initialize the cache client.

**Parameters:**
- `connection_string` (str, optional): PostgreSQL connection string
- `host` (str): Database host (default: 'localhost')
- `port` (int): Database port (default: 5432)
- `database` (str): Database name (default: 'postgres')
- `user` (str): Database user (default: 'postgres')
- `password` (str): Database password (ignored if instance_name is set)
- `schema` (str): PostgreSQL schema name for cache table (default: 'public')
- `minconn` (int): Minimum connections in pool (default: 1)
- `maxconn` (int): Maximum connections in pool (default: 10)
- `use_cuckoo_filter` (bool): Enable CuckooFilter for fast negative lookups (default: True)
- `cuckoo_capacity` (int): CuckooFilter capacity (default: 1,000,000)
- `instance_name` (str, optional): Databricks lakebase instance name - auto-enables Databricks mode!
- `profile` (str, optional): Databricks profile for local IDE (omit for online notebooks)
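
The `connection_string` parameter can replace the individual connection arguments. A minimal sketch, assuming it accepts a standard libpq-style URI (the credentials are placeholders):

```python
from fastpgcache import FastPgCache

# Equivalent to passing host/port/database/user/password separately
cache = FastPgCache(
    connection_string="postgresql://postgres:mypass@localhost:5432/postgres"
)
```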

### Methods

#### set(key, value, ttl=None)

Store a value in the cache.

**Parameters:**
- `key` (str): Cache key
- `value` (str|dict|list): Value to cache (dicts/lists are auto-serialized to JSON)
- `ttl` (int, optional): Time to live in seconds (None = no expiry)

**Returns:** `bool` - True if successful

```python
cache.set("user:123", {"name": "Alice"}, ttl=3600)
```

#### set_many(items, ttl=None)

Store multiple values in a single transaction (much faster for bulk operations).

**Parameters:**
- `items` (dict): Dictionary of key-value pairs to cache
- `ttl` (int, optional): Time to live in seconds (None = no expiry), applies to all items

**Returns:** `int` - Number of items successfully set

**Performance:** 10-30x faster than individual `set()` calls for bulk inserts!

```python
# Bulk insert - single transaction!
items = {
    "user:123": {"name": "Alice"},
    "user:456": {"name": "Bob"},
    "user:789": {"name": "Charlie"}
}
count = cache.set_many(items, ttl=3600)
print(f"Inserted {count} items")
```

#### get(key, parse_json=True)

Retrieve a value from the cache.

**Parameters:**
- `key` (str): Cache key
- `parse_json` (bool): Auto-parse JSON values (default: True)

**Returns:** Value or None if not found/expired

```python
user = cache.get("user:123")
```

#### delete(key)

Delete a cache entry.

**Parameters:**
- `key` (str): Cache key

**Returns:** `bool` - True if deleted, False if not found

```python
cache.delete("user:123")
```

#### exists(key)

Check if a key exists and is not expired.

**Parameters:**
- `key` (str): Cache key

**Returns:** `bool` - True if exists

```python
if cache.exists("user:123"):
    print("Key exists!")
```

#### ttl(key)

Get time to live for a key.

**Parameters:**
- `key` (str): Cache key

**Returns:** `int` - Seconds until expiry, -1 if no expiry, -2 if not found

```python
seconds = cache.ttl("user:123")
```

**TTL Return Values:**
- Positive number: Seconds until expiry
- `-1`: No expiry set (permanent)
- `-2`: Key not found
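
For example, assuming `temp` and `permanent` were just written and `missing` never was:

```python
cache.set("temp", "value", ttl=60)
cache.set("permanent", "value")  # no ttl

print(cache.ttl("temp"))       # ~60 (seconds until expiry)
print(cache.ttl("permanent"))  # -1  (no expiry set)
print(cache.ttl("missing"))    # -2  (key not found)
```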

#### cleanup()

Remove all expired cache entries.

**Returns:** `int` - Number of entries deleted

```python
deleted = cache.cleanup()
```

#### close()

Close all connections in the pool.

```python
cache.close()
```

## ⚡ CuckooFilter: Lightning-Fast Negative Lookups

FastPgCache includes an **optional CuckooFilter** (enabled by default) that provides **10-1000x speedup** for checking keys that don't exist!

### What is a CuckooFilter?

A CuckooFilter is a probabilistic data structure that:
- ✅ **Fast negative lookups** - Instantly know if a key definitely doesn't exist
- ✅ **Memory efficient** - Uses ~1MB per 100K keys
- ✅ **Supports deletion** - Unlike Bloom filters, can remove items
- ⚠️ **Low false positive rate** - <1% chance of false positives (configurable)

### Performance Benefits

```python
# With CuckooFilter (default)
cache = FastPgCache(
    host="localhost",
    database="postgres",
    user="postgres",
    password="mypass",
    use_cuckoo_filter=True  # Default: enabled
)

# Check 10,000 non-existent keys
for i in range(10000):
    cache.get(f"missing_key_{i}")  # ⚡ INSTANT - no DB query!

# With CuckooFilter disabled
cache = FastPgCache(
    host="localhost",
    database="postgres",
    user="postgres",
    password="mypass",
    use_cuckoo_filter=False  # Disabled
)

# Check 10,000 non-existent keys
for i in range(10000):
    cache.get(f"missing_key_{i}")  # 🐌 SLOW - 10,000 DB queries
```

### Real-World Performance

| Operation | With CuckooFilter | Without CuckooFilter | Speedup |
|-----------|------------------|---------------------|---------|
| **Negative lookup (key doesn't exist)** | 0.001 ms | 10 ms | **10,000x** |
| **Positive lookup (key exists)** | 10 ms | 10 ms | Same |
| **Memory usage** | +10 MB (1M keys) | 0 MB | Tradeoff |

### When to Disable CuckooFilter

Disable CuckooFilter if:
- ❌ Your application has **very high cache hit rate** (>95%) - CuckooFilter won't help much
- ❌ You need to **minimize memory usage** - CuckooFilter uses ~10MB per million keys
- ❌ Your cache is **very small** (<1000 keys) - overhead isn't worth it

### Using CuckooFilter Standalone

You can also use CuckooFilter directly as a standalone data structure, separate from FastPgCache:

```python
from fastpgcache import CuckooFilter

# Create a CuckooFilter with capacity for 100,000 items
# ("cf" rather than "filter", which would shadow the Python builtin)
cf = CuckooFilter(capacity=100000)

# Insert items
cf.insert("user:123")
cf.insert("session:abc")
cf.insert("product:456")

# Check if an item might exist (fast!)
if cf.lookup("user:123"):
    print("Item might be in the set")  # True (no false negatives)
else:
    print("Item definitely NOT in the set")

# Check a non-existent item
if cf.lookup("user:999"):
    print("Might exist (false positive)")
else:
    print("Definitely doesn't exist")  # This will print!

# Delete items (unlike Bloom filters!)
cf.delete("user:123")
print(cf.lookup("user:123"))  # False

# Get statistics
stats = cf.stats()
print(f"Items: {stats['size']:,}")
print(f"Load factor: {stats['load_factor']:.2%}")
print(f"False positive rate: {stats['estimated_fpr']:.4%}")
```

**Use cases for standalone CuckooFilter:**
- 🔍 **Deduplication** - Check if you've seen an item before
- 🚫 **Blocklists** - Fast IP/user blocklist checks
- 📊 **Analytics** - Track unique visitors without storing all IDs
- 🔒 **Rate limiting** - Check request frequency
- 🎯 **Recommendation systems** - Filter already-shown items

**Advantages over Bloom filters:**
- ✅ Supports **deletion** (Bloom filters don't!)
- ✅ Better **space efficiency** for same false positive rate
- ✅ Better **cache locality** (fewer memory accesses)

## 📦 Batch Operations: High-Throughput Inserts

FastPgCache provides `set_many()` for **10-30x faster bulk inserts** compared to individual `set()` calls!

### Performance Comparison

```python
# Slow: Individual set() calls
start = time.time()
for i in range(1000):
    cache.set(f"key_{i}", f"value_{i}", ttl=3600)
slow_time = time.time() - start
print(f"Individual set(): {slow_time:.2f}s")  # ~30 seconds (remote DB)

# Fast: Batch set_many() call
start = time.time()
items = {f"key_{i}": f"value_{i}" for i in range(1000)}
cache.set_many(items, ttl=3600)
fast_time = time.time() - start
print(f"set_many(): {fast_time:.2f}s")  # ~1 second (remote DB)
print(f"Speedup: {slow_time / fast_time:.1f}x")  # ~30x faster!
```

### Why is set_many() so Fast?

- **Single transaction** - All inserts in one DB roundtrip
- **Reduced network latency** - One connection instead of 1000
- **Less overhead** - Single commit instead of 1000

### Usage Example

```python
# Prepare bulk data
users = {
    "user:123": {"name": "Alice", "role": "admin"},
    "user:456": {"name": "Bob", "role": "user"},
    "user:789": {"name": "Charlie", "role": "moderator"}
}

# Insert all at once (with TTL)
count = cache.set_many(users, ttl=3600)
print(f"Inserted {count} users in a single transaction!")

# Works with any data type
cache.set_many({
    "config:theme": "dark",
    "config:lang": "en",
    "config:tz": "UTC"
})
```

### When to Use set_many()

- ✅ **Bulk imports** - Loading large datasets into cache (see the chunking sketch after this list)
- ✅ **Initial cache warming** - Pre-populating cache on startup
- ✅ **Batch processing** - Processing records in batches
- ✅ **Remote databases** - Network latency is significant
- ❌ **Single inserts** - Use regular `set()` for simplicity
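
For very large imports it can help to cap the size of each transaction. A minimal chunking helper - a sketch only; the batch size of 1,000 is an arbitrary choice, not a library recommendation:

```python
def set_many_chunked(cache, items, ttl=None, chunk_size=1000):
    """Insert items in fixed-size batches, one set_many() call per batch."""
    keys = list(items)
    total = 0
    for i in range(0, len(keys), chunk_size):
        batch = {k: items[k] for k in keys[i:i + chunk_size]}
        total += cache.set_many(batch, ttl=ttl)
    return total

# Warm the cache with 50,000 entries in 50 transactions
entries = {f"product:{i}": {"id": i} for i in range(50000)}
print(f"Inserted {set_many_chunked(cache, entries, ttl=3600)} items")
```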

## Important Notes

### Cache Persistence

✅ **Cache data PERSISTS when:**
- You close and reopen connections (`cache.close()` then create new `FastPgCache`)
- You restart your application
- Multiple applications connect to the same database

❌ **Cache data is LOST when:**
- PostgreSQL crashes or shuts down uncleanly (UNLOGGED tables are truncated during crash recovery)
- You rerun the admin setup with force recreate (`--force` / `force_recreate=True`)

### Other Notes

- **UNLOGGED Tables** - Data is not crash-safe (lost on database crash). This is by design for cache performance. For durability, you would need to modify the setup SQL to remove `UNLOGGED` (not recommended for cache).
- **First Setup** - An admin runs the setup (`fastpgcache-admin` or `setup_cache()`) once to create the UNLOGGED table and functions. Safe to run multiple times (won't lose data).
- **Cleanup** - Schedule `cache.cleanup()` to remove expired entries (they're auto-removed on access, but cleanup helps with storage); a minimal scheduling sketch follows.
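
One simple way to schedule cleanup - a sketch using a daemon thread; the 5-minute interval is an arbitrary assumption:

```python
import threading

def start_cleanup_loop(cache, interval_seconds=300):
    """Call cache.cleanup() every interval_seconds on a background thread."""
    stop = threading.Event()

    def _loop():
        while not stop.wait(interval_seconds):
            deleted = cache.cleanup()
            print(f"cleanup: removed {deleted} expired entries")

    threading.Thread(target=_loop, daemon=True).start()
    return stop  # call stop.set() to end the loop

stopper = start_cleanup_loop(cache)
```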

### Verifying UNLOGGED Table

To verify your cache table is properly configured as UNLOGGED:

```sql
-- Check table type
SELECT 
    relname as table_name,
    CASE relpersistence
        WHEN 'u' THEN 'UNLOGGED'
        WHEN 'p' THEN 'PERMANENT'
        WHEN 't' THEN 'TEMPORARY'
    END as table_type
FROM pg_class
WHERE relname = 'cache' AND relkind = 'r';
```

You should see `UNLOGGED` as the table_type.
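
The same check from Python - a sketch using psycopg2 directly (connection parameters are placeholders):

```python
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="postgres",
                        user="postgres", password="mypass")
with conn.cursor() as cur:
    cur.execute(
        "SELECT relpersistence FROM pg_class "
        "WHERE relname = 'cache' AND relkind = 'r'"
    )
    row = cur.fetchone()
    # 'u' = UNLOGGED, 'p' = permanent, 't' = temporary
    print("UNLOGGED" if row and row[0] == "u" else row)
conn.close()
```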

## Troubleshooting

**psycopg2 not found:**
```bash
pip install psycopg2-binary
```

### Databricks Token Issues

**Token Refresh Failing:**

1. **Workspace Client Configuration:**
   ```python
   import uuid

   from databricks.sdk import WorkspaceClient

   # Local IDE development (with profile)
   w = WorkspaceClient(profile="DEFAULT")
   
   # OR online notebook mode (no profile needed)
   # w = WorkspaceClient()
   
   # Test credential generation
   cred = w.database.generate_database_credential(
       request_id=str(uuid.uuid4()),
       instance_names=["my_instance"]
   )
   print(f"Token generated: {cred.token[:20]}...")
   ```

2. **Instance Names:**
   ```python
   # Ensure instance name is correct
   token_provider = DatabricksTokenProvider(
       workspace_client=w,
       instance_names=["correct_instance_name"],  # Must match exactly
       ...
   )
   ```

3. **Network Connectivity:**
   - Ensure connection to Databricks workspace
   - Check firewall/proxy settings

**Connection Errors After Token Refresh:**

- Make sure refresh happens before expiry (adjust `refresh_interval`)
- Keep connection pool size reasonable (lower = faster refresh)

## Requirements

- Python 3.7+
- PostgreSQL 9.6+
- psycopg2-binary

## License

MIT License - see LICENSE file for details

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## Support

For issues and questions, please open an issue on GitHub: https://github.com/vivian-xie-db/fastpgcache/issues

## Additional Resources

- [Databricks SDK Documentation](https://databricks-sdk-py.readthedocs.io/)
- [PostgreSQL UNLOGGED Tables](https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-UNLOGGED)
- [PostgreSQL Authentication](https://www.postgresql.org/docs/current/auth-password.html)
- [Load Testing Documentation](examples/README_load_testing.md)
- [Examples Directory](examples/)


            
